From patchwork Mon Jan 25 19:19:06 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044107
From: Pavel Tatashin
Subject: [PATCH v10 01/18] arm64: kexec: make dtb_mem always enabled
Date: Mon, 25 Jan 2021 14:19:06 -0500
Message-Id: <20210125191923.1060122-2-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

Currently, dtb_mem is only available when CONFIG_KEXEC_FILE is enabled,
which forces ugly ifdefs into C files. Make dtb_mem always present;
when it is not used, it is simply left as zero. Also change dtb_mem to
phys_addr_t, as it is a physical address.

Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
 arch/arm64/include/asm/kexec.h    | 4 ++--
 arch/arm64/kernel/machine_kexec.c | 6 +-----
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index d24b527e8c00..61530ec3a9b1 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -90,18 +90,18 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif
 
-#ifdef CONFIG_KEXEC_FILE
 #define ARCH_HAS_KIMAGE_ARCH
 
 struct kimage_arch {
 	void *dtb;
-	unsigned long dtb_mem;
+	phys_addr_t dtb_mem;
 
 	/* Core ELF header buffer */
 	void *elf_headers;
 	unsigned long elf_headers_mem;
 	unsigned long elf_headers_sz;
 };
 
+#ifdef CONFIG_KEXEC_FILE
 extern const struct kexec_file_ops kexec_image_ops;
 
 struct kimage;
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a0b144cfaea7..8096a6aa1d49 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -204,11 +204,7 @@ void machine_kexec(struct kimage *kimage)
 	 * In kexec_file case, the kernel starts directly without purgatory.
 	 */
 	cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start,
-#ifdef CONFIG_KEXEC_FILE
-			 kimage->arch.dtb_mem);
-#else
-			 0);
-#endif
+			 kimage->arch.dtb_mem);
 
 	BUG(); /* Should never get here. */
 }
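For illustration only (how the call site above behaves after the change,
not an additional hunk): both boot paths now go through the same call,
because dtb_mem is guaranteed to exist and is simply 0 when no DTB was
supplied by kexec_file_load():

	/* dtb_mem is 0 for kexec_load(); set by kexec_file_load() otherwise */
	cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start,
			 kimage->arch.dtb_mem);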

From patchwork Mon Jan 25 19:19:07 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044109
From: Pavel Tatashin
Subject: [PATCH v10 02/18] arm64: hibernate: variable pudp is used instead of pd4dp
Date: Mon, 25 Jan 2021 14:19:07 -0500
Message-Id: <20210125191923.1060122-3-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

p4dp should be used where the p4d page is allocated, not pudp. This is
not a functional issue, but it should be fixed for logical correctness.
Fixes: e9f6376858b9 ("arm64: add support for folded p4d page tables")
Signed-off-by: Pavel Tatashin
---
 arch/arm64/kernel/hibernate.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9c9f47e9f7f4..0a54d81c90f9 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -190,10 +190,10 @@ static int trans_pgd_map_page(pgd_t *trans_pgd, void *page,
 	pgdp = pgd_offset_pgd(trans_pgd, dst_addr);
 	if (pgd_none(READ_ONCE(*pgdp))) {
-		pudp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pudp)
+		p4dp = (void *)get_safe_page(GFP_ATOMIC);
+		if (!pgdp)
 			return -ENOMEM;
-		pgd_populate(&init_mm, pgdp, pudp);
+		pgd_populate(&init_mm, pgdp, p4dp);
 	}
 
 	p4dp = p4d_offset(pgdp, dst_addr);

From patchwork Mon Jan 25 19:19:08 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044111
From: Pavel Tatashin
Subject: [PATCH v10 03/18] arm64: hibernate: move page handling function to new trans_pgd.c
Date: Mon, 25 Jan 2021 14:19:08 -0500
Message-Id: <20210125191923.1060122-4-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

Now that we have abstracted the required functions, move them to a new
home. Later, we will generalize these functions so they can be used
outside of hibernation.
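For reference, a minimal usage sketch of the helpers being moved, as
hibernate calls them at this point in the series (illustration only;
dst_addr and page stand for the relocation address and the page that
will hold the relocation code):

	pgd_t *trans_pgd, *tmp_pg_dir;
	void *page = (void *)get_safe_page(GFP_ATOMIC);
	int rc;

	/* Temporary copy of the linear map, used while the real tables
	 * are being overwritten during restore. */
	rc = trans_pgd_create_copy(&tmp_pg_dir, PAGE_OFFSET, PAGE_END);
	if (rc)
		return rc;

	/* Map a single page executable at dst_addr in a fresh table. */
	trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC);
	if (!page || !trans_pgd)
		return -ENOMEM;
	rc = trans_pgd_map_page(trans_pgd, page, dst_addr, PAGE_KERNEL_EXEC);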
Signed-off-by: Pavel Tatashin Reviewed-by: James Morse --- arch/arm64/Kconfig | 4 + arch/arm64/include/asm/trans_pgd.h | 21 +++ arch/arm64/kernel/hibernate.c | 228 +------------------------- arch/arm64/mm/Makefile | 1 + arch/arm64/mm/trans_pgd.c | 250 +++++++++++++++++++++++++++++ 5 files changed, 277 insertions(+), 227 deletions(-) create mode 100644 arch/arm64/include/asm/trans_pgd.h create mode 100644 arch/arm64/mm/trans_pgd.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index f39568b28ec1..fc0ed9d6e011 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1132,6 +1132,10 @@ config CRASH_DUMP For more details see Documentation/admin-guide/kdump/kdump.rst +config TRANS_TABLE + def_bool y + depends on HIBERNATION + config XEN_DOM0 def_bool y depends on XEN diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h new file mode 100644 index 000000000000..23153c13d1ce --- /dev/null +++ b/arch/arm64/include/asm/trans_pgd.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Copyright (c) 2020, Microsoft Corporation. + * Pavel Tatashin + */ + +#ifndef _ASM_TRANS_TABLE_H +#define _ASM_TRANS_TABLE_H + +#include +#include +#include + +int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, + unsigned long end); + +int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, + pgprot_t pgprot); + +#endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 0a54d81c90f9..4a38662f0d90 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -16,7 +16,6 @@ #define pr_fmt(x) "hibernate: " x #include #include -#include #include #include #include @@ -31,13 +30,12 @@ #include #include #include -#include -#include #include #include #include #include #include +#include #include /* @@ -178,54 +176,6 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); -static int trans_pgd_map_page(pgd_t *trans_pgd, void *page, - unsigned long dst_addr, - pgprot_t pgprot) -{ - pgd_t *pgdp; - p4d_t *p4dp; - pud_t *pudp; - pmd_t *pmdp; - pte_t *ptep; - - pgdp = pgd_offset_pgd(trans_pgd, dst_addr); - if (pgd_none(READ_ONCE(*pgdp))) { - p4dp = (void *)get_safe_page(GFP_ATOMIC); - if (!pgdp) - return -ENOMEM; - pgd_populate(&init_mm, pgdp, p4dp); - } - - p4dp = p4d_offset(pgdp, dst_addr); - if (p4d_none(READ_ONCE(*p4dp))) { - pudp = (void *)get_safe_page(GFP_ATOMIC); - if (!pudp) - return -ENOMEM; - p4d_populate(&init_mm, p4dp, pudp); - } - - pudp = pud_offset(p4dp, dst_addr); - if (pud_none(READ_ONCE(*pudp))) { - pmdp = (void *)get_safe_page(GFP_ATOMIC); - if (!pmdp) - return -ENOMEM; - pud_populate(&init_mm, pudp, pmdp); - } - - pmdp = pmd_offset(pudp, dst_addr); - if (pmd_none(READ_ONCE(*pmdp))) { - ptep = (void *)get_safe_page(GFP_ATOMIC); - if (!ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, pmdp, ptep); - } - - ptep = pte_offset_kernel(pmdp, dst_addr); - set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); - - return 0; -} - /* * Copies length bytes, starting at src_start into an new page, * perform cache maintenance, then maps it at the specified address low @@ -462,182 +412,6 @@ int swsusp_arch_suspend(void) return ret; } -static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) -{ - pte_t pte = READ_ONCE(*src_ptep); - - if (pte_valid(pte)) { - /* - * Resume will overwrite areas that may be marked - * read only (code, rodata). 
Clear the RDONLY bit from - * the temporary mappings we use during restore. - */ - set_pte(dst_ptep, pte_mkwrite(pte)); - } else if (debug_pagealloc_enabled() && !pte_none(pte)) { - /* - * debug_pagealloc will removed the PTE_VALID bit if - * the page isn't in use by the resume kernel. It may have - * been in use by the original kernel, in which case we need - * to put it back in our copy to do the restore. - * - * Before marking this entry valid, check the pfn should - * be mapped. - */ - BUG_ON(!pfn_valid(pte_pfn(pte))); - - set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); - } -} - -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) -{ - pte_t *src_ptep; - pte_t *dst_ptep; - unsigned long addr = start; - - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); - if (!dst_ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); - dst_ptep = pte_offset_kernel(dst_pmdp, start); - - src_ptep = pte_offset_kernel(src_pmdp, start); - do { - _copy_pte(dst_ptep, src_ptep, addr); - } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); - - return 0; -} - -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) -{ - pmd_t *src_pmdp; - pmd_t *dst_pmdp; - unsigned long next; - unsigned long addr = start; - - if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pmdp) - return -ENOMEM; - pud_populate(&init_mm, dst_pudp, dst_pmdp); - } - dst_pmdp = pmd_offset(dst_pudp, start); - - src_pmdp = pmd_offset(src_pudp, start); - do { - pmd_t pmd = READ_ONCE(*src_pmdp); - - next = pmd_addr_end(addr, end); - if (pmd_none(pmd)) - continue; - if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) - return -ENOMEM; - } else { - set_pmd(dst_pmdp, - __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); - } - } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); - - return 0; -} - -static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start, - unsigned long end) -{ - pud_t *dst_pudp; - pud_t *src_pudp; - unsigned long next; - unsigned long addr = start; - - if (p4d_none(READ_ONCE(*dst_p4dp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pudp) - return -ENOMEM; - p4d_populate(&init_mm, dst_p4dp, dst_pudp); - } - dst_pudp = pud_offset(dst_p4dp, start); - - src_pudp = pud_offset(src_p4dp, start); - do { - pud_t pud = READ_ONCE(*src_pudp); - - next = pud_addr_end(addr, end); - if (pud_none(pud)) - continue; - if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) - return -ENOMEM; - } else { - set_pud(dst_pudp, - __pud(pud_val(pud) & ~PUD_SECT_RDONLY)); - } - } while (dst_pudp++, src_pudp++, addr = next, addr != end); - - return 0; -} - -static int copy_p4d(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, - unsigned long end) -{ - p4d_t *dst_p4dp; - p4d_t *src_p4dp; - unsigned long next; - unsigned long addr = start; - - dst_p4dp = p4d_offset(dst_pgdp, start); - src_p4dp = p4d_offset(src_pgdp, start); - do { - next = p4d_addr_end(addr, end); - if (p4d_none(READ_ONCE(*src_p4dp))) - continue; - if (copy_pud(dst_p4dp, src_p4dp, addr, next)) - return -ENOMEM; - } while (dst_p4dp++, src_p4dp++, addr = next, addr != end); - - return 0; -} - -static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, - unsigned long end) -{ - unsigned long next; - unsigned long addr = start; - pgd_t *src_pgdp = pgd_offset_k(start); - - dst_pgdp = pgd_offset_pgd(dst_pgdp, start); - do { - next = pgd_addr_end(addr, end); - if 
(pgd_none(READ_ONCE(*src_pgdp))) - continue; - if (copy_p4d(dst_pgdp, src_pgdp, addr, next)) - return -ENOMEM; - } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); - - return 0; -} - -static int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, - unsigned long end) -{ - int rc; - pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); - - if (!trans_pgd) { - pr_err("Failed to allocate memory for temporary page tables.\n"); - return -ENOMEM; - } - - rc = copy_page_tables(trans_pgd, start, end); - if (!rc) - *dst_pgdp = trans_pgd; - - return rc; -} - /* * Setup then Resume from the hibernate image using swsusp_arch_suspend_exit(). * diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 5ead3c3de3b6..77222d92667a 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -6,6 +6,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_PTDUMP_CORE) += ptdump.o obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o +obj-$(CONFIG_TRANS_TABLE) += trans_pgd.o obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ARM64_MTE) += mteswap.o diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c new file mode 100644 index 000000000000..e048d1f5c912 --- /dev/null +++ b/arch/arm64/mm/trans_pgd.c @@ -0,0 +1,250 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Transitional page tables for kexec and hibernate + * + * This file derived from: arch/arm64/kernel/hibernate.c + * + * Copyright (c) 2020, Microsoft Corporation. + * Pavel Tatashin + * + */ + +/* + * Transitional tables are used during system transferring from one world to + * another: such as during hibernate restore, and kexec reboots. During these + * phases one cannot rely on page table not being overwritten. This is because + * hibernate and kexec can overwrite the current page tables during transition. + */ + +#include +#include +#include +#include +#include +#include +#include + +static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) +{ + pte_t pte = READ_ONCE(*src_ptep); + + if (pte_valid(pte)) { + /* + * Resume will overwrite areas that may be marked + * read only (code, rodata). Clear the RDONLY bit from + * the temporary mappings we use during restore. + */ + set_pte(dst_ptep, pte_mkwrite(pte)); + } else if (debug_pagealloc_enabled() && !pte_none(pte)) { + /* + * debug_pagealloc will removed the PTE_VALID bit if + * the page isn't in use by the resume kernel. It may have + * been in use by the original kernel, in which case we need + * to put it back in our copy to do the restore. + * + * Before marking this entry valid, check the pfn should + * be mapped. 
+ */ + BUG_ON(!pfn_valid(pte_pfn(pte))); + + set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); + } +} + +static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, + unsigned long end) +{ + pte_t *src_ptep; + pte_t *dst_ptep; + unsigned long addr = start; + + dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); + if (!dst_ptep) + return -ENOMEM; + pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); + dst_ptep = pte_offset_kernel(dst_pmdp, start); + + src_ptep = pte_offset_kernel(src_pmdp, start); + do { + _copy_pte(dst_ptep, src_ptep, addr); + } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); + + return 0; +} + +static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, + unsigned long end) +{ + pmd_t *src_pmdp; + pmd_t *dst_pmdp; + unsigned long next; + unsigned long addr = start; + + if (pud_none(READ_ONCE(*dst_pudp))) { + dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); + if (!dst_pmdp) + return -ENOMEM; + pud_populate(&init_mm, dst_pudp, dst_pmdp); + } + dst_pmdp = pmd_offset(dst_pudp, start); + + src_pmdp = pmd_offset(src_pudp, start); + do { + pmd_t pmd = READ_ONCE(*src_pmdp); + + next = pmd_addr_end(addr, end); + if (pmd_none(pmd)) + continue; + if (pmd_table(pmd)) { + if (copy_pte(dst_pmdp, src_pmdp, addr, next)) + return -ENOMEM; + } else { + set_pmd(dst_pmdp, + __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); + } + } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); + + return 0; +} + +static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start, + unsigned long end) +{ + pud_t *dst_pudp; + pud_t *src_pudp; + unsigned long next; + unsigned long addr = start; + + if (p4d_none(READ_ONCE(*dst_p4dp))) { + dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); + if (!dst_pudp) + return -ENOMEM; + p4d_populate(&init_mm, dst_p4dp, dst_pudp); + } + dst_pudp = pud_offset(dst_p4dp, start); + + src_pudp = pud_offset(src_p4dp, start); + do { + pud_t pud = READ_ONCE(*src_pudp); + + next = pud_addr_end(addr, end); + if (pud_none(pud)) + continue; + if (pud_table(pud)) { + if (copy_pmd(dst_pudp, src_pudp, addr, next)) + return -ENOMEM; + } else { + set_pud(dst_pudp, + __pud(pud_val(pud) & ~PUD_SECT_RDONLY)); + } + } while (dst_pudp++, src_pudp++, addr = next, addr != end); + + return 0; +} + +static int copy_p4d(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, + unsigned long end) +{ + p4d_t *dst_p4dp; + p4d_t *src_p4dp; + unsigned long next; + unsigned long addr = start; + + dst_p4dp = p4d_offset(dst_pgdp, start); + src_p4dp = p4d_offset(src_pgdp, start); + do { + next = p4d_addr_end(addr, end); + if (p4d_none(READ_ONCE(*src_p4dp))) + continue; + if (copy_pud(dst_p4dp, src_p4dp, addr, next)) + return -ENOMEM; + } while (dst_p4dp++, src_p4dp++, addr = next, addr != end); + + return 0; +} + +static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, + unsigned long end) +{ + unsigned long next; + unsigned long addr = start; + pgd_t *src_pgdp = pgd_offset_k(start); + + dst_pgdp = pgd_offset_pgd(dst_pgdp, start); + do { + next = pgd_addr_end(addr, end); + if (pgd_none(READ_ONCE(*src_pgdp))) + continue; + if (copy_p4d(dst_pgdp, src_pgdp, addr, next)) + return -ENOMEM; + } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); + + return 0; +} + +int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, + unsigned long end) +{ + int rc; + pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); + + if (!trans_pgd) { + pr_err("Failed to allocate memory for temporary page tables.\n"); + return -ENOMEM; + } + + rc = 
copy_page_tables(trans_pgd, start, end); + if (!rc) + *dst_pgdp = trans_pgd; + + return rc; +} + +int trans_pgd_map_page(pgd_t *trans_pgd, void *page, + unsigned long dst_addr, + pgprot_t pgprot) +{ + pgd_t *pgdp; + p4d_t *p4dp; + pud_t *pudp; + pmd_t *pmdp; + pte_t *ptep; + + pgdp = pgd_offset_pgd(trans_pgd, dst_addr); + if (pgd_none(READ_ONCE(*pgdp))) { + p4dp = (void *)get_safe_page(GFP_ATOMIC); + if (!pgdp) + return -ENOMEM; + pgd_populate(&init_mm, pgdp, p4dp); + } + + p4dp = p4d_offset(pgdp, dst_addr); + if (p4d_none(READ_ONCE(*p4dp))) { + pudp = (void *)get_safe_page(GFP_ATOMIC); + if (!pudp) + return -ENOMEM; + p4d_populate(&init_mm, p4dp, pudp); + } + + pudp = pud_offset(p4dp, dst_addr); + if (pud_none(READ_ONCE(*pudp))) { + pmdp = (void *)get_safe_page(GFP_ATOMIC); + if (!pmdp) + return -ENOMEM; + pud_populate(&init_mm, pudp, pmdp); + } + + pmdp = pmd_offset(pudp, dst_addr); + if (pmd_none(READ_ONCE(*pmdp))) { + ptep = (void *)get_safe_page(GFP_ATOMIC); + if (!ptep) + return -ENOMEM; + pmd_populate_kernel(&init_mm, pmdp, ptep); + } + + ptep = pte_offset_kernel(pmdp, dst_addr); + set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); + + return 0; +} From patchwork Mon Jan 25 19:19:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 12044113 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 812D0C433E6 for ; Mon, 25 Jan 2021 19:19:37 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 252D920E65 for ; Mon, 25 Jan 2021 19:19:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 252D920E65 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 565C38D001F; Mon, 25 Jan 2021 14:19:33 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 518198D0001; Mon, 25 Jan 2021 14:19:33 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 40AD68D001F; Mon, 25 Jan 2021 14:19:33 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0111.hostedemail.com [216.40.44.111]) by kanga.kvack.org (Postfix) with ESMTP id 22B1E8D0001 for ; Mon, 25 Jan 2021 14:19:33 -0500 (EST) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id D78B2180AE7F3 for ; Mon, 25 Jan 2021 19:19:32 +0000 (UTC) X-FDA: 77745261384.01.air78_16113c527588 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin01.hostedemail.com (Postfix) with ESMTP id B5666100483E3 for ; Mon, 25 Jan 2021 19:19:32 +0000 (UTC) X-HE-Tag: air78_16113c527588 X-Filterd-Recvd-Size: 8927 Received: from mail-qt1-f179.google.com (mail-qt1-f179.google.com [209.85.160.179]) by imf38.hostedemail.com (Postfix) with ESMTP for ; Mon, 
From: Pavel Tatashin
Subject: [PATCH v10 04/18] arm64: trans_pgd: make trans_pgd_map_page generic
Date: Mon, 25 Jan 2021 14:19:09 -0500
Message-Id: <20210125191923.1060122-5-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

kexec is going to use a different allocator, so make trans_pgd_map_page
accept the allocator as an argument. kexec is also going to use a
different map protection, so pass that via an argument as well.
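Concretely, the hibernate side wraps get_safe_page() in the callback and
passes the gfp mask through the opaque argument (this sketch restates
the hunks below; a kexec caller is expected to plug in its own allocator
later in the series):

	static void *hibernate_page_alloc(void *arg)
	{
		return (void *)get_safe_page((gfp_t)(unsigned long)arg);
	}

	struct trans_pgd_info trans_info = {
		.trans_alloc_page	= hibernate_page_alloc,
		.trans_alloc_arg	= (void *)GFP_ATOMIC,
	};

	rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr,
				PAGE_KERNEL_EXEC);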
Signed-off-by: Pavel Tatashin Reviewed-by: Matthias Brugger --- arch/arm64/include/asm/trans_pgd.h | 19 +++++++++++++++++-- arch/arm64/kernel/hibernate.c | 12 +++++++++++- arch/arm64/mm/trans_pgd.c | 30 ++++++++++++++++++++++-------- 3 files changed, 50 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h index 23153c13d1ce..b46409b25234 100644 --- a/arch/arm64/include/asm/trans_pgd.h +++ b/arch/arm64/include/asm/trans_pgd.h @@ -12,10 +12,25 @@ #include #include +/* + * trans_alloc_page + * - Allocator that should return exactly one zeroed page, if this + * allocator fails, trans_pgd_create_copy() and trans_pgd_map_page() + * return -ENOMEM error. + * + * trans_alloc_arg + * - Passed to trans_alloc_page as an argument + */ + +struct trans_pgd_info { + void * (*trans_alloc_page)(void *arg); + void *trans_alloc_arg; +}; + int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, unsigned long end); -int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, - pgprot_t pgprot); +int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, + void *page, unsigned long dst_addr, pgprot_t pgprot); #endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 4a38662f0d90..c173f280bfea 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -176,6 +176,11 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); +static void *hibernate_page_alloc(void *arg) +{ + return (void *)get_safe_page((gfp_t)(unsigned long)arg); +} + /* * Copies length bytes, starting at src_start into an new page, * perform cache maintenance, then maps it at the specified address low @@ -192,6 +197,11 @@ static int create_safe_exec_page(void *src_start, size_t length, unsigned long dst_addr, phys_addr_t *phys_dst_addr) { + struct trans_pgd_info trans_info = { + .trans_alloc_page = hibernate_page_alloc, + .trans_alloc_arg = (void *)GFP_ATOMIC, + }; + void *page = (void *)get_safe_page(GFP_ATOMIC); pgd_t *trans_pgd; int rc; @@ -206,7 +216,7 @@ static int create_safe_exec_page(void *src_start, size_t length, if (!trans_pgd) return -ENOMEM; - rc = trans_pgd_map_page(trans_pgd, page, dst_addr, + rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr, PAGE_KERNEL_EXEC); if (rc) return rc; diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index e048d1f5c912..f28eceba2242 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -25,6 +25,11 @@ #include #include +static void *trans_alloc(struct trans_pgd_info *info) +{ + return info->trans_alloc_page(info->trans_alloc_arg); +} + static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) { pte_t pte = READ_ONCE(*src_ptep); @@ -201,9 +206,18 @@ int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, return rc; } -int trans_pgd_map_page(pgd_t *trans_pgd, void *page, - unsigned long dst_addr, - pgprot_t pgprot) +/* + * Add map entry to trans_pgd for a base-size page at PTE level. + * info: contains allocator and its argument + * trans_pgd: page table in which new map is added. + * page: page to be mapped. + * dst_addr: new VA address for the page + * pgprot: protection for the page. + * + * Returns 0 on success, and -ENOMEM on failure. 
+ */ +int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, + void *page, unsigned long dst_addr, pgprot_t pgprot) { pgd_t *pgdp; p4d_t *p4dp; @@ -213,7 +227,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, pgdp = pgd_offset_pgd(trans_pgd, dst_addr); if (pgd_none(READ_ONCE(*pgdp))) { - p4dp = (void *)get_safe_page(GFP_ATOMIC); + p4dp = trans_alloc(info); if (!pgdp) return -ENOMEM; pgd_populate(&init_mm, pgdp, p4dp); @@ -221,7 +235,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, p4dp = p4d_offset(pgdp, dst_addr); if (p4d_none(READ_ONCE(*p4dp))) { - pudp = (void *)get_safe_page(GFP_ATOMIC); + pudp = trans_alloc(info); if (!pudp) return -ENOMEM; p4d_populate(&init_mm, p4dp, pudp); @@ -229,7 +243,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, pudp = pud_offset(p4dp, dst_addr); if (pud_none(READ_ONCE(*pudp))) { - pmdp = (void *)get_safe_page(GFP_ATOMIC); + pmdp = trans_alloc(info); if (!pmdp) return -ENOMEM; pud_populate(&init_mm, pudp, pmdp); @@ -237,14 +251,14 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, pmdp = pmd_offset(pudp, dst_addr); if (pmd_none(READ_ONCE(*pmdp))) { - ptep = (void *)get_safe_page(GFP_ATOMIC); + ptep = trans_alloc(info); if (!ptep) return -ENOMEM; pmd_populate_kernel(&init_mm, pmdp, ptep); } ptep = pte_offset_kernel(pmdp, dst_addr); - set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); + set_pte(ptep, pfn_pte(virt_to_pfn(page), pgprot)); return 0; } From patchwork Mon Jan 25 19:19:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 12044115 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53792C433DB for ; Mon, 25 Jan 2021 19:19:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D453B20E65 for ; Mon, 25 Jan 2021 19:19:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D453B20E65 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D7DFF8D0020; Mon, 25 Jan 2021 14:19:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D2D558D0001; Mon, 25 Jan 2021 14:19:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C1AC18D0020; Mon, 25 Jan 2021 14:19:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id AD0068D0001 for ; Mon, 25 Jan 2021 14:19:34 -0500 (EST) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 7B0F8180ACC5A for ; Mon, 25 Jan 2021 19:19:34 +0000 (UTC) X-FDA: 77745261468.21.crow81_200846f27588 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin21.hostedemail.com 
From: Pavel Tatashin
Subject: [PATCH v10 05/18] arm64: trans_pgd: pass allocator trans_pgd_create_copy
Date: Mon, 25 Jan 2021 14:19:10 -0500
Message-Id: <20210125191923.1060122-6-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

Make trans_pgd_create_copy() and its subroutines use the allocator that
is passed as an argument.

Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
 arch/arm64/include/asm/trans_pgd.h |  4 +--
 arch/arm64/kernel/hibernate.c      |  7 ++++-
 arch/arm64/mm/trans_pgd.c          | 49 ++++++++++++++++++------------
 3 files changed, 38 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index b46409b25234..7fbf6a3ccff7 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -27,8 +27,8 @@ struct trans_pgd_info {
 	void *trans_alloc_arg;
 };
 
-int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
-			  unsigned long end);
+int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd,
+			  unsigned long start, unsigned long end);
 
 int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 		       void *page, unsigned long dst_addr, pgprot_t pgprot);
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index c173f280bfea..94fc275cdd21 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -437,13 +437,18 @@ int swsusp_arch_resume(void)
 	phys_addr_t phys_hibernate_exit;
 	void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
 					  void *, phys_addr_t, phys_addr_t);
+	struct trans_pgd_info trans_info = {
+		.trans_alloc_page	= hibernate_page_alloc,
+		.trans_alloc_arg	= (void *)GFP_ATOMIC,
+	};
 
 	/*
 	 * Restoring the memory image will overwrite the ttbr1 page tables.
 	 * Create a second copy of just the linear map, and use this when
 	 * restoring.
*/ - rc = trans_pgd_create_copy(&tmp_pg_dir, PAGE_OFFSET, PAGE_END); + rc = trans_pgd_create_copy(&trans_info, &tmp_pg_dir, PAGE_OFFSET, + PAGE_END); if (rc) return rc; diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index f28eceba2242..47b6b7029907 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -57,14 +57,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) } } -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) +static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp, + pmd_t *src_pmdp, unsigned long start, unsigned long end) { pte_t *src_ptep; pte_t *dst_ptep; unsigned long addr = start; - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); + dst_ptep = trans_alloc(info); if (!dst_ptep) return -ENOMEM; pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); @@ -78,8 +78,8 @@ static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, return 0; } -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) +static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp, + pud_t *src_pudp, unsigned long start, unsigned long end) { pmd_t *src_pmdp; pmd_t *dst_pmdp; @@ -87,7 +87,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, unsigned long addr = start; if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); + dst_pmdp = trans_alloc(info); if (!dst_pmdp) return -ENOMEM; pud_populate(&init_mm, dst_pudp, dst_pmdp); @@ -102,7 +102,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, if (pmd_none(pmd)) continue; if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) + if (copy_pte(info, dst_pmdp, src_pmdp, addr, next)) return -ENOMEM; } else { set_pmd(dst_pmdp, @@ -113,7 +113,8 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, return 0; } -static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start, +static int copy_pud(struct trans_pgd_info *info, p4d_t *dst_p4dp, + p4d_t *src_p4dp, unsigned long start, unsigned long end) { pud_t *dst_pudp; @@ -122,7 +123,7 @@ static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start, unsigned long addr = start; if (p4d_none(READ_ONCE(*dst_p4dp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); + dst_pudp = trans_alloc(info); if (!dst_pudp) return -ENOMEM; p4d_populate(&init_mm, dst_p4dp, dst_pudp); @@ -137,7 +138,7 @@ static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start, if (pud_none(pud)) continue; if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) + if (copy_pmd(info, dst_pudp, src_pudp, addr, next)) return -ENOMEM; } else { set_pud(dst_pudp, @@ -148,7 +149,8 @@ static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start, return 0; } -static int copy_p4d(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, +static int copy_p4d(struct trans_pgd_info *info, pgd_t *dst_pgdp, + pgd_t *src_pgdp, unsigned long start, unsigned long end) { p4d_t *dst_p4dp; @@ -162,15 +164,15 @@ static int copy_p4d(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, next = p4d_addr_end(addr, end); if (p4d_none(READ_ONCE(*src_p4dp))) continue; - if (copy_pud(dst_p4dp, src_p4dp, addr, next)) + if (copy_pud(info, dst_p4dp, src_p4dp, addr, next)) return -ENOMEM; } while (dst_p4dp++, src_p4dp++, addr = next, addr != end); return 0; } -static int copy_page_tables(pgd_t *dst_pgdp, unsigned long 
start,
-			    unsigned long end)
+static int copy_page_tables(struct trans_pgd_info *info, pgd_t *dst_pgdp,
+			    unsigned long start, unsigned long end)
 {
 	unsigned long next;
 	unsigned long addr = start;
@@ -181,25 +183,34 @@ static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start,
 		next = pgd_addr_end(addr, end);
 		if (pgd_none(READ_ONCE(*src_pgdp)))
 			continue;
-		if (copy_p4d(dst_pgdp, src_pgdp, addr, next))
+		if (copy_p4d(info, dst_pgdp, src_pgdp, addr, next))
 			return -ENOMEM;
 	} while (dst_pgdp++, src_pgdp++, addr = next, addr != end);

 	return 0;
 }

-int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
-			  unsigned long end)
+/*
+ * Create trans_pgd and copy linear map.
+ * info:	contains allocator and its argument
+ * dst_pgdp:	new page table that is created, and to which map is copied.
+ * start:	Start of the interval (inclusive).
+ * end:		End of the interval (exclusive).
+ *
+ * Returns 0 on success, and -ENOMEM on failure.
+ */
+int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **dst_pgdp,
+			  unsigned long start, unsigned long end)
 {
 	int rc;
-	pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC);
+	pgd_t *trans_pgd = trans_alloc(info);

 	if (!trans_pgd) {
 		pr_err("Failed to allocate memory for temporary page tables.\n");
 		return -ENOMEM;
 	}

-	rc = copy_page_tables(trans_pgd, start, end);
+	rc = copy_page_tables(info, trans_pgd, start, end);
 	if (!rc)
 		*dst_pgdp = trans_pgd;

From patchwork Mon Jan 25 19:19:11 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044123
From: Pavel Tatashin
Subject: [PATCH v10 06/18] arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions
Date: Mon, 25 Jan 2021 14:19:11 -0500
Message-Id: <20210125191923.1060122-7-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

The trans_pgd_* functions should be independent of the mm context, because
the tables they create are used when no mm context is available, as we are
transitioning between kernels. Simply pass NULL instead of &init_mm to the
*_populate functions.

Signed-off-by: Pavel Tatashin
Acked-by: James Morse
---
 arch/arm64/mm/trans_pgd.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 47b6b7029907..ded8e2ba0308 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -67,7 +67,7 @@ static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp,
 	dst_ptep = trans_alloc(info);
 	if (!dst_ptep)
 		return -ENOMEM;
-	pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep);
+	pmd_populate_kernel(NULL, dst_pmdp, dst_ptep);

 	dst_ptep = pte_offset_kernel(dst_pmdp, start);
 	src_ptep = pte_offset_kernel(src_pmdp, start);
@@ -90,7 +90,7 @@ static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp,
 		dst_pmdp = trans_alloc(info);
 		if (!dst_pmdp)
 			return -ENOMEM;
-		pud_populate(&init_mm, dst_pudp, dst_pmdp);
+		pud_populate(NULL, dst_pudp, dst_pmdp);
 	}
 	dst_pmdp = pmd_offset(dst_pudp, start);
@@ -126,7 +126,7 @@ static int copy_pud(struct trans_pgd_info *info, p4d_t *dst_p4dp,
 		dst_pudp = trans_alloc(info);
 		if (!dst_pudp)
 			return -ENOMEM;
-		p4d_populate(&init_mm, dst_p4dp, dst_pudp);
+		p4d_populate(NULL, dst_p4dp, dst_pudp);
 	}
 	dst_pudp = pud_offset(dst_p4dp, start);
@@ -241,7 +241,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 		p4dp = trans_alloc(info);
 		if (!pgdp)
 			return -ENOMEM;
-		pgd_populate(&init_mm, pgdp, p4dp);
+		pgd_populate(NULL, pgdp, p4dp);
 	}
 	p4dp = p4d_offset(pgdp, dst_addr);
@@ -249,7 +249,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 		pudp = trans_alloc(info);
 		if (!pudp)
 			return -ENOMEM;
-		p4d_populate(&init_mm, p4dp, pudp);
+		p4d_populate(NULL, p4dp, pudp);
 	}
 	pudp = pud_offset(p4dp, dst_addr);
@@ -257,7 +257,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 		pmdp = trans_alloc(info);
 		if (!pmdp)
 			return -ENOMEM;
-		pud_populate(&init_mm, pudp, pmdp);
+		pud_populate(NULL, pudp, pmdp);
 	}
 	pmdp = pmd_offset(pudp, dst_addr);
@@ -265,7 +265,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 		ptep = trans_alloc(info);
 		if (!ptep)
 			return -ENOMEM;
-		pmd_populate_kernel(&init_mm, pmdp, ptep);
+		pmd_populate_kernel(NULL, pmdp, ptep);
 	}
 	ptep = pte_offset_kernel(pmdp, dst_addr);
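The trans_pgd interface that emerges from the preceding patches takes an allocator callback plus an opaque argument, so that hibernate and kexec can each supply pages that are safe from being overwritten during the transition. A minimal sketch of a caller is shown below; the field names trans_alloc_page/trans_alloc_arg and the get_zeroed_page()-based allocator are illustrative assumptions, not part of the patches themselves.

/* Sketch only: a trans_pgd user with a trivial allocator. */
static void *example_trans_alloc(void *arg)
{
	/* Hand out zeroed pages; arg carries the GFP flags in this sketch. */
	return (void *)get_zeroed_page((gfp_t)(unsigned long)arg);
}

static int example_copy_linear_map(pgd_t **dst_pgdp, unsigned long start,
				   unsigned long end)
{
	struct trans_pgd_info info = {
		.trans_alloc_page	= example_trans_alloc,
		.trans_alloc_arg	= (void *)GFP_ATOMIC,
	};

	/* Copy the kernel linear map for [start, end) into fresh tables. */
	return trans_pgd_create_copy(&info, dst_pgdp, start, end);
}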
From patchwork Mon Jan 25 19:19:12 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044117
From: Pavel Tatashin
Subject: [PATCH v10 07/18] arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()
Date: Mon, 25 Jan 2021 14:19:12 -0500
Message-Id: <20210125191923.1060122-8-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

From: James Morse

Because only the idmap sets a non-standard T0SZ, __cpu_set_tcr_t0sz() can
check for platforms that need to do this using __cpu_uses_extended_idmap()
before doing its work.

The idmap is only built with enough levels (and T0SZ bits) to map its
single page.

To allow hibernate, and then kexec, to idmap their single-page copy
routines, __cpu_set_tcr_t0sz() needs to consider additional users, who may
need a different number of levels/T0SZ bits to the idmap. (i.e. VA_BITS
may be enough for the idmap, but not for hibernate/kexec.)

Always read TCR_EL1, and check whether any work needs doing for this
request. __cpu_uses_extended_idmap() remains, as it is used by KVM, whose
idmap is also part of the kernel image.

This mostly affects the cpuidle path, where we now get an extra system
register read.

CC: Lorenzo Pieralisi
CC: Sudeep Holla
Signed-off-by: James Morse
Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/mmu_context.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 0b3079fd28eb..70ce8c1d2b07 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -81,16 +81,15 @@ static inline bool __cpu_uses_extended_idmap_level(void)
 }

 /*
- * Set TCR.T0SZ to its default value (based on VA_BITS)
+ * Ensure TCR.T0SZ is set to the provided value.
  */
 static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 {
-	unsigned long tcr;
+	unsigned long tcr = read_sysreg(tcr_el1);

-	if (!__cpu_uses_extended_idmap())
+	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
 		return;

-	tcr = read_sysreg(tcr_el1);
 	tcr &= ~TCR_T0SZ_MASK;
 	tcr |= t0sz << TCR_T0SZ_OFFSET;
 	write_sysreg(tcr, tcr_el1);
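Since T0SZ encodes the TTBR0 input-address size as 64 minus the number of VA bits (16 for a 48-bit range, 12 for a 52-bit range), callers can now simply request the width they need; the helper becomes a no-op when TCR_EL1.T0SZ already holds that value. A small illustration of the calling pattern follows; names other than __cpu_set_tcr_t0sz are hypothetical.

/* Sketch: request a TTBR0 input-address size of va_bits on the current CPU. */
static inline void example_set_ttbr0_width(unsigned int va_bits)
{
	unsigned long t0sz = 64 - va_bits;	/* e.g. 48-bit VA -> T0SZ = 16 */

	/*
	 * After this patch the helper reads TCR_EL1 itself and returns early
	 * if the T0SZ field already matches, so non-idmap users such as
	 * hibernate and kexec can call it as well.
	 */
	__cpu_set_tcr_t0sz(t0sz);
}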
From patchwork Mon Jan 25 19:19:13 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044121
From: Pavel Tatashin
Subject: [PATCH v10 08/18] arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines
Date: Mon, 25 Jan 2021 14:19:13 -0500
Message-Id: <20210125191923.1060122-9-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

From: James Morse

To resume from hibernate, the contents of memory are restored from the swap
image. This may overwrite any page, including the running kernel and its
page tables.

Hibernate copies the code it uses to do the restore into a single page that
it knows won't be overwritten, and maps it with page tables built from
pages that won't be overwritten.

Today the address it uses for this mapping is arbitrary, but to allow kexec
to reuse this code, it needs to be idmapped. To idmap the page we must
avoid the kernel helpers that have VA_BITS baked in.

Convert create_single_mapping() to take a single PA, and idmap it. The page
tables are built in the reverse order to normal using pfn_pte() to stir in
any bits between 52:48. T0SZ is always increased to cover 48 bits, or 52 if
the copy code has bits 52:48 in its PA.

Signed-off-by: James Morse
[Adopted the original patch from James to trans_pgd interface, so it can be
commonly used by both Kexec and Hibernate. Some minor clean-ups.]
Signed-off-by: Pavel Tatashin
Link: https://lore.kernel.org/linux-arm-kernel/20200115143322.214247-4-james.morse@arm.com/
---
 arch/arm64/include/asm/trans_pgd.h |  3 ++
 arch/arm64/kernel/hibernate.c      | 32 +++++++------------
 arch/arm64/mm/trans_pgd.c          | 49 ++++++++++++++++++++++++++++++
 3 files changed, 63 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index 7fbf6a3ccff7..5d08e5adf3d5 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -33,4 +33,7 @@ int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd,
 int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 		       void *page, unsigned long dst_addr, pgprot_t pgprot);

+int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
+			 unsigned long *t0sz, void *page);
+
 #endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 94fc275cdd21..9df32ba0d574 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -194,7 +194,6 @@ static void *hibernate_page_alloc(void *arg)
  * page system.
  */
 static int create_safe_exec_page(void *src_start, size_t length,
-				 unsigned long dst_addr,
 				 phys_addr_t *phys_dst_addr)
 {
 	struct trans_pgd_info trans_info = {
@@ -203,7 +202,8 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	};

 	void *page = (void *)get_safe_page(GFP_ATOMIC);
-	pgd_t *trans_pgd;
+	phys_addr_t trans_ttbr0;
+	unsigned long t0sz;
 	int rc;

 	if (!page)
@@ -211,13 +211,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	memcpy(page, src_start, length);
 	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
-
-	trans_pgd = (void *)get_safe_page(GFP_ATOMIC);
-	if (!trans_pgd)
-		return -ENOMEM;
-
-	rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr,
-				PAGE_KERNEL_EXEC);
+	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
 	if (rc)
 		return rc;
@@ -230,12 +224,15 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	 * page, but TLBs may contain stale ASID-tagged entries (e.g. for EFI
 	 * runtime services), while for a userspace-driven test_resume cycle it
 	 * points to userspace page tables (and we must point it at a zero page
-	 * ourselves). Elsewhere we only (un)install the idmap with preemption
-	 * disabled, so T0SZ should be as required regardless.
+	 * ourselves).
+	 *
+	 * We change T0SZ as part of installing the idmap. This is undone by
+	 * cpu_uninstall_idmap() in __cpu_suspend_exit().
 	 */
 	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
-	write_sysreg(phys_to_ttbr(virt_to_phys(trans_pgd)), ttbr0_el1);
+	__cpu_set_tcr_t0sz(t0sz);
+	write_sysreg(trans_ttbr0, ttbr0_el1);
 	isb();

 	*phys_dst_addr = virt_to_phys(page);
@@ -434,7 +431,6 @@ int swsusp_arch_resume(void)
 	void *zero_page;
 	size_t exit_size;
 	pgd_t *tmp_pg_dir;
-	phys_addr_t phys_hibernate_exit;
 	void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *, void *,
 					  phys_addr_t, phys_addr_t);
 	struct trans_pgd_info trans_info = {
@@ -462,19 +458,13 @@ int swsusp_arch_resume(void)
 		return -ENOMEM;
 	}

-	/*
-	 * Locate the exit code in the bottom-but-one page, so that *NULL
-	 * still has disastrous affects.
-	 */
-	hibernate_exit = (void *)PAGE_SIZE;
 	exit_size = __hibernate_exit_text_end - __hibernate_exit_text_start;
 	/*
 	 * Copy swsusp_arch_suspend_exit() to a safe page. This will generate
 	 * a new set of ttbr0 page tables and load them.
 	 */
 	rc = create_safe_exec_page(__hibernate_exit_text_start, exit_size,
-				   (unsigned long)hibernate_exit,
-				   &phys_hibernate_exit);
+				   (phys_addr_t *)&hibernate_exit);
 	if (rc) {
 		pr_err("Failed to create safe executable page for hibernate_exit code.\n");
 		return rc;
 	}
@@ -493,7 +483,7 @@ int swsusp_arch_resume(void)
 	 * We can skip this step if we booted at EL1, or are running with VHE.
 	 */
 	if (el2_reset_needed()) {
-		phys_addr_t el2_vectors = phys_hibernate_exit;	/* base */
+		phys_addr_t el2_vectors = (phys_addr_t)hibernate_exit;
 		el2_vectors += hibernate_el2_vectors -
 			       __hibernate_exit_text_start;	/* offset */
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index ded8e2ba0308..527f0a39c3da 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -273,3 +273,52 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,

 	return 0;
 }
+
+/*
+ * The page we want to idmap may be outside the range covered by VA_BITS that
+ * can be built using the kernel's p?d_populate() helpers. As a one off, for a
+ * single page, we build these page tables bottom up and just assume that will
+ * need the maximum T0SZ.
+ *
+ * Returns 0 on success, and -ENOMEM on failure.
+ * On success trans_ttbr0 contains page table with idmapped page, t0sz is set
+ * to maximum T0SZ for this page.
+ */
+int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
+			 unsigned long *t0sz, void *page)
+{
+	phys_addr_t dst_addr = virt_to_phys(page);
+	unsigned long pfn = __phys_to_pfn(dst_addr);
+	int max_msb = (dst_addr & GENMASK(52, 48)) ? 51 : 47;
+	int bits_mapped = PAGE_SHIFT - 4;
+	unsigned long level_mask, prev_level_entry, *levels[4];
+	int this_level, index, level_lsb, level_msb;
+
+	dst_addr &= PAGE_MASK;
+	prev_level_entry = pte_val(pfn_pte(pfn, PAGE_KERNEL_EXEC));
+
+	for (this_level = 3; this_level >= 0; this_level--) {
+		levels[this_level] = trans_alloc(info);
+		if (!levels[this_level])
+			return -ENOMEM;
+
+		level_lsb = ARM64_HW_PGTABLE_LEVEL_SHIFT(this_level);
+		level_msb = min(level_lsb + bits_mapped, max_msb);
+		level_mask = GENMASK_ULL(level_msb, level_lsb);
+
+		index = (dst_addr & level_mask) >> level_lsb;
+		*(levels[this_level] + index) = prev_level_entry;
+
+		pfn = virt_to_pfn(levels[this_level]);
+		prev_level_entry = pte_val(pfn_pte(pfn,
+						   __pgprot(PMD_TYPE_TABLE)));
+
+		if (level_msb == max_msb)
+			break;
+	}
+
+	*trans_ttbr0 = phys_to_ttbr(__pfn_to_phys(pfn));
+	*t0sz = TCR_T0SZ(max_msb + 1);
+
+	return 0;
+}
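Putting the two previous patches together, a caller that needs to run a single relocated page with the MMU enabled ends up with roughly the sequence below. This is a condensed sketch of the create_safe_exec_page() hunk above, with error paths and the allocator left out; it is not a drop-in replacement.

static int example_install_idmapped_page(struct trans_pgd_info *info,
					 void *code_page)
{
	phys_addr_t trans_ttbr0;
	unsigned long t0sz;
	int rc;

	/* Build bottom-up page tables that identity-map code_page. */
	rc = trans_pgd_idmap_page(info, &trans_ttbr0, &t0sz, code_page);
	if (rc)
		return rc;

	/* Install the new TTBR0 tables, widening T0SZ if the page needs it. */
	cpu_set_reserved_ttbr0();
	local_flush_tlb_all();
	__cpu_set_tcr_t0sz(t0sz);
	write_sysreg(trans_ttbr0, ttbr0_el1);
	isb();

	/* code_page can now be entered via its physical address. */
	return 0;
}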
From patchwork Mon Jan 25 19:19:14 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044119
From: Pavel Tatashin
Subject: [PATCH v10 09/18] arm64: kexec: move relocation function setup
Date: Mon, 25 Jan 2021 14:19:14 -0500
Message-Id: <20210125191923.1060122-10-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

Currently, the kernel relocation function is configured in machine_kexec()
at the time of the kexec reboot, using control_code_page. This work is more
logically done at kexec load time, so remove it from reboot time and move
the setup of this function to the newly added machine_kexec_post_load().

Once the MMU is enabled, the kexec control page will contain not just the
relocation code but also a vector table, so add a pointer to the actual
function within this page: arch.kern_reloc. Currently it points to the
beginning of the page; offsets will be added later, when the vector table
is introduced.

Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
 arch/arm64/include/asm/kexec.h    |  1 +
 arch/arm64/kernel/machine_kexec.c | 46 +++++++++++++------------------
 2 files changed, 20 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 61530ec3a9b1..9befcd87e9a8 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -95,6 +95,7 @@ static inline void crash_post_resume(void) {}
 struct kimage_arch {
 	void *dtb;
 	phys_addr_t dtb_mem;
+	phys_addr_t kern_reloc;
 	/* Core ELF header buffer */
 	void *elf_headers;
 	unsigned long elf_headers_mem;
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 8096a6aa1d49..a8aaa6562429 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -42,6 +42,7 @@ static void _kexec_image_info(const char *func, int line,
 	pr_debug("    start:       %lx\n", kimage->start);
 	pr_debug("    head:        %lx\n", kimage->head);
 	pr_debug("    nr_segments: %lu\n", kimage->nr_segments);
+	pr_debug("    kern_reloc: %pa\n", &kimage->arch.kern_reloc);

 	for (i = 0; i < kimage->nr_segments; i++) {
 		pr_debug("      segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n",
@@ -58,6 +59,22 @@ void machine_kexec_cleanup(struct kimage *kimage)
 	/* Empty routine needed to avoid build errors. */
 }

+int machine_kexec_post_load(struct kimage *kimage)
+{
+	void *reloc_code = page_to_virt(kimage->control_code_page);
+
+	memcpy(reloc_code, arm64_relocate_new_kernel,
+	       arm64_relocate_new_kernel_size);
+	kimage->arch.kern_reloc = __pa(reloc_code);
+
+	/* Flush the reloc_code in preparation for its execution. */
+	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
+	flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
+			   arm64_relocate_new_kernel_size);
+
+	return 0;
+}
+
 /**
  * machine_kexec_prepare - Prepare for a kexec reboot.
  *
@@ -143,8 +160,6 @@ static void kexec_segment_flush(const struct kimage *kimage)
  */
 void machine_kexec(struct kimage *kimage)
 {
-	phys_addr_t reboot_code_buffer_phys;
-	void *reboot_code_buffer;
 	bool in_kexec_crash = (kimage == kexec_crash_image);
 	bool stuck_cpus = cpus_are_stuck_in_kernel();

@@ -155,31 +170,8 @@ void machine_kexec(struct kimage *kimage)
 	WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()),
 		"Some CPUs may be stale, kdump will be unreliable.\n");

-	reboot_code_buffer_phys = page_to_phys(kimage->control_code_page);
-	reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
-
 	kexec_image_info(kimage);

-	/*
-	 * Copy arm64_relocate_new_kernel to the reboot_code_buffer for use
-	 * after the kernel is shut down.
-	 */
-	memcpy(reboot_code_buffer, arm64_relocate_new_kernel,
-	       arm64_relocate_new_kernel_size);
-
-	/* Flush the reboot_code_buffer in preparation for its execution. */
-	__flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size);
-
-	/*
-	 * Although we've killed off the secondary CPUs, we don't update
-	 * the online mask if we're handling a crash kernel and consequently
-	 * need to avoid flush_icache_range(), which will attempt to IPI
-	 * the offline CPUs. Therefore, we must use the __* variant here.
-	 */
-	__flush_icache_range((uintptr_t)reboot_code_buffer,
-			     (uintptr_t)reboot_code_buffer +
-			     arm64_relocate_new_kernel_size);
-
 	/* Flush the kimage list and its buffers. */
 	kexec_list_flush(kimage);

@@ -193,7 +185,7 @@ void machine_kexec(struct kimage *kimage)
 	/*
 	 * cpu_soft_restart will shutdown the MMU, disable data caches, then
-	 * transfer control to the kern_reloc which contains a copy of
+	 * transfer control to the kern_reloc which contains a copy of
 	 * the arm64_relocate_new_kernel routine.  arm64_relocate_new_kernel
 	 * uses physical addressing to relocate the new image to its final
 	 * position and transfers control to the image entry point when the
@@ -203,7 +195,7 @@ void machine_kexec(struct kimage *kimage)
 	 * userspace (kexec-tools).
 	 * In kexec_file case, the kernel starts directly without purgatory.
 	 */
-	cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start,
+	cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start,
 			 kimage->arch.dtb_mem);

 	BUG(); /* Should never get here. */
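For context, the generic kexec code provides a machine_kexec_post_load() hook that is called once all segments have been loaded; architectures that do not implement it get a weak no-op. The sketch below shows the shape of that split; it is simplified and is not the exact core code.

/* Weak default in the generic code: architectures may override it. */
int __weak machine_kexec_post_load(struct kimage *image)
{
	return 0;
}

/* Load path (sketch): prepare relocation resources while the MMU is still on. */
static int example_do_kexec_load(struct kimage *image)
{
	int ret;

	/* ... segments are loaded into place here ... */

	ret = machine_kexec_post_load(image);	/* arm64: copy and flush reloc code */
	if (ret)
		return ret;

	/* machine_kexec() later only has to jump to image->arch.kern_reloc. */
	return 0;
}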
From patchwork Mon Jan 25 19:19:15 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044125
From: Pavel Tatashin
Subject: [PATCH v10 10/18] arm64: kexec: call kexec_image_info only once
Date: Mon, 25 Jan 2021 14:19:15 -0500
Message-Id: <20210125191923.1060122-11-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

Currently, kexec_image_info() is called both at load time and again right
before the kernel is kexec'ed. There is no need to do both, so call it only
once, when the segments are loaded and the physical location of the page
with the copy of arm64_relocate_new_kernel is known.

Signed-off-by: Pavel Tatashin
Acked-by: James Morse
---
 arch/arm64/kernel/machine_kexec.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a8aaa6562429..90a335c74442 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -66,6 +66,7 @@ int machine_kexec_post_load(struct kimage *kimage)
 	memcpy(reloc_code, arm64_relocate_new_kernel,
 	       arm64_relocate_new_kernel_size);
 	kimage->arch.kern_reloc = __pa(reloc_code);
+	kexec_image_info(kimage);

 	/* Flush the reloc_code in preparation for its execution. */
 	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
@@ -84,8 +85,6 @@ int machine_kexec_post_load(struct kimage *kimage)
  */
 int machine_kexec_prepare(struct kimage *kimage)
 {
-	kexec_image_info(kimage);
-
 	if (kimage->type != KEXEC_TYPE_CRASH && cpus_are_stuck_in_kernel()) {
 		pr_err("Can't kexec: CPUs are stuck in the kernel.\n");
 		return -EBUSY;
@@ -170,8 +169,6 @@ void machine_kexec(struct kimage *kimage)
 	WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()),
 		"Some CPUs may be stale, kdump will be unreliable.\n");

-	kexec_image_info(kimage);
-
 	/* Flush the kimage list and its buffers. */
 	kexec_list_flush(kimage);
From patchwork Mon Jan 25 19:19:16 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044127
From: Pavel Tatashin
Subject: [PATCH v10 11/18] arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations
Date: Mon, 25 Jan 2021 14:19:16 -0500
Message-Id: <20210125191923.1060122-12-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

In preparation for bigger changes to arm64_relocate_new_kernel that will
enable this function to do an MMU-backed memory copy, do a few clean-ups
and optimizations. These include:

1. Call raw_dcache_line_size() only when relocation is actually going to
   happen; a kdump-type kexec does not need it.
2. copy_page(dest, src, tmps...) increments dest and src by PAGE_SIZE, so
   there is no need to store dest before calling copy_page and increment
   it afterwards. Also, src is not used after the copy, so there is no
   need to preserve it either.
3. For consistency, put a comment on the same line as the instruction when
   it describes the instruction itself.
4. Some comment corrections.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/kernel/relocate_kernel.S | 36 +++++++----------------
 1 file changed, 8 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index 84eec95ec06c..462ffbc37071 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -17,28 +17,24 @@
 /*
  * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
  *
- * The memory that the old kernel occupies may be overwritten when coping the
+ * The memory that the old kernel occupies may be overwritten when copying the
  * new image to its final location. To assure that the
  * arm64_relocate_new_kernel routine which does that copy is not overwritten,
  * all code and data needed by arm64_relocate_new_kernel must be between the
  * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end.  The
  * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
- * control_code_page, a special page which has been set up to be preserved
- * during the copy operation.
+ * safe memory that has been set up to be preserved during the copy operation.
  */
 SYM_CODE_START(arm64_relocate_new_kernel)
-	/* Setup the list loop variables. */
 	mov	x18, x2				/* x18 = dtb address */
 	mov	x17, x1				/* x17 = kimage_start */
 	mov	x16, x0				/* x16 = kimage_head */
-	raw_dcache_line_size x15, x0		/* x15 = dcache line size */
 	mov	x14, xzr			/* x14 = entry ptr */
 	mov	x13, xzr			/* x13 = copy dest */
-
 	/* Check if the new image needs relocation. */
 	tbnz	x16, IND_DONE_BIT, .Ldone
-
+	raw_dcache_line_size x15, x0		/* x15 = dcache line size */
 .Lloop:
 	and	x12, x16, PAGE_MASK		/* x12 = addr */
@@ -57,34 +53,18 @@ SYM_CODE_START(arm64_relocate_new_kernel)
 	b.lo	2b
 	dsb	sy

-	mov x20, x13
-	mov x21, x12
-	copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7
-
-	/* dest += PAGE_SIZE */
-	add	x13, x13, PAGE_SIZE
+	copy_page x13, x12, x0, x1, x2, x3, x4, x5, x6, x7
 	b	.Lnext
-
 .Ltest_indirection:
 	tbz	x16, IND_INDIRECTION_BIT, .Ltest_destination
-
-	/* ptr = addr */
-	mov	x14, x12
+	mov	x14, x12			/* ptr = addr */
 	b	.Lnext
-
 .Ltest_destination:
 	tbz	x16, IND_DESTINATION_BIT, .Lnext
-
-	/* dest = addr */
-	mov	x13, x12
-
+	mov	x13, x12			/* dest = addr */
 .Lnext:
-	/* entry = *ptr++ */
-	ldr	x16, [x14], #8
-
-	/* while (!(entry & DONE)) */
-	tbz	x16, IND_DONE_BIT, .Lloop
-
+	ldr	x16, [x14], #8			/* entry = *ptr++ */
+	tbz	x16, IND_DONE_BIT, .Lloop	/* while (!(entry & DONE)) */
 .Ldone:
 	/* wait for writes from copy_page to finish */
 	dsb	nsh
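For orientation, the list that arm64_relocate_new_kernel walks is the standard kimage entry list built by the generic kexec code: each entry is a page-aligned address tagged with one of the IND_* bits. A C-style sketch of the loop is shown below, with the cache maintenance omitted; the authoritative version is the assembly above.

/* Sketch of the relocation loop implemented by arm64_relocate_new_kernel. */
static void example_relocate(unsigned long head)
{
	unsigned long *ptr = NULL;		/* x14: entry pointer	  */
	unsigned long dest = 0;			/* x13: copy destination  */
	unsigned long entry = head;		/* x16: current entry	  */

	while (!(entry & IND_DONE)) {
		unsigned long addr = entry & PAGE_MASK;		/* x12 */

		if (entry & IND_SOURCE) {
			copy_page((void *)dest, (void *)addr);
			dest += PAGE_SIZE;	/* copy_page advances dest */
		} else if (entry & IND_INDIRECTION) {
			ptr = (unsigned long *)addr;		/* ptr = addr */
		} else if (entry & IND_DESTINATION) {
			dest = addr;				/* dest = addr */
		}

		entry = *ptr++;					/* entry = *ptr++ */
	}
}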
From patchwork Mon Jan 25 19:19:17 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044129
From: Pavel Tatashin
Subject: [PATCH v10 12/18] arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp
Date: Mon, 25 Jan 2021 14:19:17 -0500
Message-Id: <20210125191923.1060122-13-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

x0 will contain the only argument to arm64_relocate_new_kernel; don't use
it as a temporary. Reassign the registers to free up x0, so we do not need
to copy the argument and can use it both at the beginning and at the end of
the function.

Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
 arch/arm64/kernel/relocate_kernel.S | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index 462ffbc37071..b78ea5de97a4 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -34,7 +34,7 @@ SYM_CODE_START(arm64_relocate_new_kernel)
 	mov	x13, xzr			/* x13 = copy dest */
 	/* Check if the new image needs relocation. */
 	tbnz	x16, IND_DONE_BIT, .Ldone
-	raw_dcache_line_size x15, x0		/* x15 = dcache line size */
+	raw_dcache_line_size x15, x1		/* x15 = dcache line size */
 .Lloop:
 	and	x12, x16, PAGE_MASK		/* x12 = addr */
@@ -43,17 +43,17 @@ SYM_CODE_START(arm64_relocate_new_kernel)
 	tbz	x16, IND_SOURCE_BIT, .Ltest_indirection

 	/* Invalidate dest page to PoC. */
-	mov	x0, x13
-	add	x20, x0, #PAGE_SIZE
+	mov	x2, x13
+	add	x20, x2, #PAGE_SIZE
 	sub	x1, x15, #1
-	bic	x0, x0, x1
-2:	dc	ivac, x0
-	add	x0, x0, x15
-	cmp	x0, x20
+	bic	x2, x2, x1
+2:	dc	ivac, x2
+	add	x2, x2, x15
+	cmp	x2, x20
 	b.lo	2b
 	dsb	sy

-	copy_page x13, x12, x0, x1, x2, x3, x4, x5, x6, x7
+	copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8
 	b	.Lnext
 .Ltest_indirection:
 	tbz	x16, IND_INDIRECTION_BIT, .Ltest_destination
Jan 2021 19:19:46 +0000 (UTC) X-HE-Tag: ship87_060a8f127588 X-Filterd-Recvd-Size: 13627 Received: from mail-qv1-f53.google.com (mail-qv1-f53.google.com [209.85.219.53]) by imf48.hostedemail.com (Postfix) with ESMTP for ; Mon, 25 Jan 2021 19:19:45 +0000 (UTC) Received: by mail-qv1-f53.google.com with SMTP id h21so6715693qvb.8 for ; Mon, 25 Jan 2021 11:19:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=ASygAqr57jUZ78nCjocRBoBMKp4WVYWpgcDuzWEf5/U=; b=kwanBJbnq5DMQn3NNdwALjHbGNIQHRgVMzw/Xc+aSak11cDNL99h8N7xtx7c8gngNP e3KAIrRTpyBGeF9fdLyOiJ3hPzW5AZaC88I9XyuG+3a1+/V4dltXX9U7nM7+9pEIB3Me 86UUqy73HTQ2qKV07Y9Mb83Y/TbL1+Ig8KWvxl5bPCfQ5cmcvHkndt40H4FloM3GoHcQ EFYUj2luKiryZhIHw/bvSX5V6SIKGJwx6oKh8SXtCholJ29JzgmOGyKpqHGvtbcO59Cj mW9VwXzYubwJ9ABOmue84kkNoRHqo7cHaIru9BKsgyCA6UBWVqBw9hwfaqR8LI9kWodg fCwg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ASygAqr57jUZ78nCjocRBoBMKp4WVYWpgcDuzWEf5/U=; b=TqGUtyPow6zyZOu3ct6CjqcVlZI4nnWakMl8X4Y6K4gLZ4hJ7TR6iDVjvX3hRVHH06 dHYYCqV93WbaqS/OGW3SDariGW3vCqvo8qQtCST5mBPlg+Cxf4YPwaY03d3btQ8HZzrM cZPgNjkeKfzXtaUE3XHDQb6HwUYQXheIYL09+AYPrSfGQoq1UnRqEKp7GFoyNRIZzrQ1 SGwQWBsAPFoyHq7mC8zpF9LFE0R8ZK8OUYnusLaQNwOsqtcsUtxRH2NsWBN+K98DMFoa gu+9X9fYsZ40R8gNCYZEdTwdgahmQoizXkU7jwY0lij/w6N1fc3mA9YxI928A6BD9Mhl PT9A== X-Gm-Message-State: AOAM531VYcQJOaboFVE0aRr1ZX5HHX+YHBQ9//FDr9B0weDsTT6lOfDg 0Etrqk2xWgGSfkhYSOgS5TEScw== X-Google-Smtp-Source: ABdhPJyrusG+lMDL+aP7VGwdEsWxy6Ayk8cCYPTOqt0xOkOzZ3Z/dofRUCKSU1J661lfcVzfB4nlDQ== X-Received: by 2002:a0c:b99c:: with SMTP id v28mr2278172qvf.12.1611602385413; Mon, 25 Jan 2021 11:19:45 -0800 (PST) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id s6sm9047638qtx.63.2021.01.25.11.19.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Jan 2021 11:19:44 -0800 (PST) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com, tyhicks@linux.microsoft.com Subject: [PATCH v10 13/18] arm64: kexec: add expandable argument to relocation function Date: Mon, 25 Jan 2021 14:19:18 -0500 Message-Id: <20210125191923.1060122-14-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com> References: <20210125191923.1060122-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently, kexec relocation function (arm64_relocate_new_kernel) accepts the following arguments: head: start of array that contains relocation information. entry: entry point for new kernel or purgatory. dtb_mem: first and only argument to entry. 
The number of arguments cannot easily be expanded, because this function is also called from HVC_SOFT_RESTART, which preserves only three arguments (hypervisor ABI). Also, arm64_relocate_new_kernel is written in assembly and is called without a stack, so there is no place from which to move extra arguments into free registers. Soon, we will need to pass more arguments: once we enable the MMU, we will need to pass information about the page tables. Add a new struct, kern_reloc_arg, and place it in a kexec-safe page (i.e. memory that is not overwritten during relocation). Thus, make arm64_relocate_new_kernel take only one argument, which contains all the needed information. Note: another benefit of this approach is that the kernel entry can actually accept up to four arguments (x0-x3), of which only one is currently used. If in the future we need more (for example, to pass the time at which the previous kernel exited, for a precise measurement of the time spent in purgatory), we would not easily be able to do that if arm64_relocate_new_kernel could not accept more arguments. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 18 ++++++++++++++++++ arch/arm64/kernel/asm-offsets.c | 9 +++++++++ arch/arm64/kernel/cpu-reset.S | 11 +++-------- arch/arm64/kernel/cpu-reset.h | 8 +++----- arch/arm64/kernel/machine_kexec.c | 27 +++++++++++++++++++++++++-- arch/arm64/kernel/relocate_kernel.S | 21 ++++++++------------- 6 files changed, 66 insertions(+), 28 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 9befcd87e9a8..990185744148 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,12 +90,30 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +/* + * kern_reloc_arg is passed to kernel relocation function as an argument. + * head kimage->head, allows to traverse through relocation segments. + * entry_addr kimage->start, where to jump from relocation function (new + * kernel, or purgatory entry address). + * kern_arg0 first argument to kernel is its dtb address.
The other + * arguments are currently unused, and must be set to 0 + */ +struct kern_reloc_arg { + phys_addr_t head; + phys_addr_t entry_addr; + phys_addr_t kern_arg0; + phys_addr_t kern_arg1; + phys_addr_t kern_arg2; + phys_addr_t kern_arg3; +}; + #define ARCH_HAS_KIMAGE_ARCH struct kimage_arch { void *dtb; phys_addr_t dtb_mem; phys_addr_t kern_reloc; + phys_addr_t kern_reloc_arg; /* Core ELF header buffer */ void *elf_headers; unsigned long elf_headers_mem; diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 301784463587..6067a288f568 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -23,6 +23,7 @@ #include #include #include +#include int main(void) { @@ -150,6 +151,14 @@ int main(void) DEFINE(PTRAUTH_USER_KEY_APGA, offsetof(struct ptrauth_keys_user, apga)); DEFINE(PTRAUTH_KERNEL_KEY_APIA, offsetof(struct ptrauth_keys_kernel, apia)); BLANK(); +#endif +#ifdef CONFIG_KEXEC_CORE + DEFINE(KEXEC_KRELOC_HEAD, offsetof(struct kern_reloc_arg, head)); + DEFINE(KEXEC_KRELOC_ENTRY_ADDR, offsetof(struct kern_reloc_arg, entry_addr)); + DEFINE(KEXEC_KRELOC_KERN_ARG0, offsetof(struct kern_reloc_arg, kern_arg0)); + DEFINE(KEXEC_KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); + DEFINE(KEXEC_KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); + DEFINE(KEXEC_KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); #endif return 0; } diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S index 37721eb6f9a1..bbf70db43744 100644 --- a/arch/arm64/kernel/cpu-reset.S +++ b/arch/arm64/kernel/cpu-reset.S @@ -16,14 +16,11 @@ .pushsection .idmap.text, "awx" /* - * __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for - * cpu_soft_restart. + * __cpu_soft_restart(el2_switch, entry, arg) - Helper for cpu_soft_restart. * * @el2_switch: Flag to indicate a switch to EL2 is needed. * @entry: Location to jump to for soft reset. - * arg0: First argument passed to @entry. (relocation list) - * arg1: Second argument passed to @entry.(physical kernel entry) - * arg2: Third argument passed to @entry. (physical dtb address) + * arg: Entry argument * * Put the CPU into the same state as it would be if it had been reset, and * branch to what would be the reset vector. 
It must be executed with the @@ -47,9 +44,7 @@ SYM_CODE_START(__cpu_soft_restart) hvc #0 // no return 1: mov x8, x1 // entry - mov x0, x2 // arg0 - mov x1, x3 // arg1 - mov x2, x4 // arg2 + mov x0, x2 // arg br x8 SYM_CODE_END(__cpu_soft_restart) diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h index ed50e9587ad8..7a8720ff186f 100644 --- a/arch/arm64/kernel/cpu-reset.h +++ b/arch/arm64/kernel/cpu-reset.h @@ -11,12 +11,10 @@ #include void __cpu_soft_restart(unsigned long el2_switch, unsigned long entry, - unsigned long arg0, unsigned long arg1, unsigned long arg2); + unsigned long arg); static inline void __noreturn cpu_soft_restart(unsigned long entry, - unsigned long arg0, - unsigned long arg1, - unsigned long arg2) + unsigned long arg) { typeof(__cpu_soft_restart) *restart; @@ -25,7 +23,7 @@ static inline void __noreturn cpu_soft_restart(unsigned long entry, restart = (void *)__pa_symbol(__cpu_soft_restart); cpu_install_idmap(); - restart(el2_switch, entry, arg0, arg1, arg2); + restart(el2_switch, entry, arg); unreachable(); } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 90a335c74442..679db3f1e0c5 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -43,6 +43,7 @@ static void _kexec_image_info(const char *func, int line, pr_debug(" head: %lx\n", kimage->head); pr_debug(" nr_segments: %lu\n", kimage->nr_segments); pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); + pr_debug(" kern_reloc_arg: %pa\n", &kimage->arch.kern_reloc_arg); for (i = 0; i < kimage->nr_segments; i++) { pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", @@ -59,19 +60,42 @@ void machine_kexec_cleanup(struct kimage *kimage) /* Empty routine needed to avoid build errors. */ } +/* Allocates pages for kexec page table */ +static void *kexec_page_alloc(void *arg) +{ + struct kimage *kimage = (struct kimage *)arg; + struct page *page = kimage_alloc_control_pages(kimage, 0); + + if (!page) + return NULL; + + memset(page_address(page), 0, PAGE_SIZE); + + return page_address(page); +} + int machine_kexec_post_load(struct kimage *kimage) { void *reloc_code = page_to_virt(kimage->control_code_page); + struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage); + + if (!kern_reloc_arg) + return -ENOMEM; memcpy(reloc_code, arm64_relocate_new_kernel, arm64_relocate_new_kernel_size); kimage->arch.kern_reloc = __pa(reloc_code); + kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); + kern_reloc_arg->head = kimage->head; + kern_reloc_arg->entry_addr = kimage->start; + kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; kexec_image_info(kimage); /* Flush the reloc_code in preparation for its execution. */ __flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size); flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code + arm64_relocate_new_kernel_size); + __flush_dcache_area(kern_reloc_arg, sizeof(struct kern_reloc_arg)); return 0; } @@ -192,8 +216,7 @@ void machine_kexec(struct kimage *kimage) * userspace (kexec-tools). * In kexec_file case, the kernel starts directly without purgatory. */ - cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start, - kimage->arch.dtb_mem); + cpu_soft_restart(kimage->arch.kern_reloc, kimage->arch.kern_reloc_arg); BUG(); /* Should never get here. 
*/ } diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index b78ea5de97a4..c92228aeddca 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -8,7 +8,7 @@ #include #include - +#include #include #include #include @@ -26,13 +26,8 @@ * safe memory that has been set up to be preserved during the copy operation. */ SYM_CODE_START(arm64_relocate_new_kernel) - /* Setup the list loop variables. */ - mov x18, x2 /* x18 = dtb address */ - mov x17, x1 /* x17 = kimage_start */ - mov x16, x0 /* x16 = kimage_head */ - mov x14, xzr /* x14 = entry ptr */ - mov x13, xzr /* x13 = copy dest */ /* Check if the new image needs relocation. */ + ldr x16, [x0, #KEXEC_KRELOC_HEAD] /* x16 = kimage_head */ tbnz x16, IND_DONE_BIT, .Ldone raw_dcache_line_size x15, x1 /* x15 = dcache line size */ .Lloop: @@ -73,12 +68,12 @@ SYM_CODE_START(arm64_relocate_new_kernel) isb /* Start new image. */ - mov x0, x18 - mov x1, xzr - mov x2, xzr - mov x3, xzr - br x17 - + ldr x4, [x0, #KEXEC_KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + ldr x3, [x0, #KEXEC_KRELOC_KERN_ARG3] + ldr x2, [x0, #KEXEC_KRELOC_KERN_ARG2] + ldr x1, [x0, #KEXEC_KRELOC_KERN_ARG1] + ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0] /* x0 = dtb address */ + br x4 SYM_CODE_END(arm64_relocate_new_kernel) .align 3 /* To keep the 64-bit values below naturally aligned. */ From patchwork Mon Jan 25 19:19:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 12044133 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C8E0C433E9 for ; Mon, 25 Jan 2021 19:20:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A608A21744 for ; Mon, 25 Jan 2021 19:20:06 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A608A21744 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B34B28D0029; Mon, 25 Jan 2021 14:19:48 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A94568D0023; Mon, 25 Jan 2021 14:19:48 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 90FF68D0029; Mon, 25 Jan 2021 14:19:48 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0082.hostedemail.com [216.40.44.82]) by kanga.kvack.org (Postfix) with ESMTP id 7B54B8D0023 for ; Mon, 25 Jan 2021 14:19:48 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 3D3F11EF1 for ; Mon, 25 Jan 2021 19:19:48 +0000 (UTC) X-FDA: 77745262056.22.loaf87_13008e127588 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id 133A118038E67 for ; Mon, 25 Jan 2021 19:19:48 +0000 (UTC) 
X-HE-Tag: loaf87_13008e127588 X-Filterd-Recvd-Size: 9663 Received: from mail-qv1-f53.google.com (mail-qv1-f53.google.com [209.85.219.53]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Mon, 25 Jan 2021 19:19:47 +0000 (UTC) Received: by mail-qv1-f53.google.com with SMTP id es14so2919913qvb.3 for ; Mon, 25 Jan 2021 11:19:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=iM7vyMjvp8HFtXYgFN4DAarGfV3AZjiwX6w+vbwvdyg=; b=cNVMTo7wVOFzkosxcDfMWZ2s1tXW6ofVvoFzZeHrw+FNJleyDydfxbhz3EfNsvpFJ/ 2s2rNi8tlVz1731hYXPPp+grZOrvK99ScwJsmxiKRan0biFSicPEerEV/fP0VD2+s3c0 Q1CpZE03nirV+yKYJKAtAexxmsocy0Px9UBV7F6Y40euZaJWsR3luw71jBO7izoGBRQx 6tTMZCqVvUdXzUTfrLOG5sToHBzDtqGXMuKPbC+C/uKPt/qHUepYF71bZy78jfYg1Yd2 z1w0Pu0vk8UauVFL/7IgUBGL2IigtR4ZgVXHJa5k4bOOK6+3w66dkb66fuid43YHPtow s98g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iM7vyMjvp8HFtXYgFN4DAarGfV3AZjiwX6w+vbwvdyg=; b=AppfbfMZuc7RLpc8rb2gaPV/zqYYsDZAy9TamE/rBEGjbDzOeDfSiQ5OvP4r49SqWq bsIORYl9cwWUPiuMuDeRKpBRQ3flb/Skb7S7VSyFIO8YE7MPCY0NDbGdyjoW8TMTNS21 znRXgVuSMh2PmYKBRBvpkeoun3daO37R1HowUC8wx7Ru3lM995fUruwQhH+ywGqLHDII hpLLoSLf3j+VukCJcPLH2STSDLM5R1TWQf7BfPLZdwNVrKpWJJJKz4sdh0PLHnmvcxX2 2G3rrJHFzqhgxIjpRqy5pXVNRhzW77cDVQNl/o0fNvmNR5E1GUgzjpeuHZd+6djDGBBH qSUQ== X-Gm-Message-State: AOAM531ImR0pdgfD8RpiOrvA2xVO13QLfSWeU5zqk2ofXpysQahnmBW1 b510rsnVd5SOzHWm09UZlxoipA== X-Google-Smtp-Source: ABdhPJxSLliyq8RV2Enm412pyUsXXb9GvTpxItvdOl+cT7BuQwZg2qzmybsh1JfoC5G0oaCSxQ/2hw== X-Received: by 2002:a0c:c38e:: with SMTP id o14mr2264022qvi.29.1611602386886; Mon, 25 Jan 2021 11:19:46 -0800 (PST) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id s6sm9047638qtx.63.2021.01.25.11.19.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Jan 2021 11:19:46 -0800 (PST) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com, tyhicks@linux.microsoft.com Subject: [PATCH v10 14/18] arm64: kexec: use ld script for relocation function Date: Mon, 25 Jan 2021 14:19:19 -0500 Message-Id: <20210125191923.1060122-15-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com> References: <20210125191923.1060122-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently, the relocation code declares start and end variables, which are used to compute its size. A better way to do this is to use an ld script instead, and put the relocation function in its own section. Soon, the relocation function will share the same page with the EL2 vectors, so proper marking is needed.
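To make the approach concrete, here is a minimal C sketch (not part of the patch) of how the copy size and entry-point offset can be derived once the code lives in its own section: the section boundary symbols are the ones introduced by this patch's linker-script change, while the wrapper function name is invented purely for illustration.

    /* Sketch: derive layout from linker-provided section markers. */
    extern const char arm64_relocate_new_kernel[];
    extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[];

    static void kexec_reloc_layout(long *func_offset, long *reloc_size)
    {
            /* Offset of the entry point within the copied section. */
            *func_offset = (unsigned long)arm64_relocate_new_kernel -
                           (unsigned long)__relocate_new_kernel_start;
            /* Number of bytes to copy into the control page. */
            *reloc_size = (unsigned long)__relocate_new_kernel_end -
                          (unsigned long)__relocate_new_kernel_start;
    }

With this, no hand-maintained arm64_relocate_new_kernel_size variable is needed, and the linker can assert that the whole section fits into one control page.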
Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 4 ++++ arch/arm64/include/asm/sections.h | 1 + arch/arm64/kernel/machine_kexec.c | 17 ++++++++--------- arch/arm64/kernel/relocate_kernel.S | 15 ++------------- arch/arm64/kernel/vmlinux.lds.S | 19 +++++++++++++++++++ 5 files changed, 34 insertions(+), 22 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 990185744148..7f4f9abdf049 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,6 +90,10 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +#if defined(CONFIG_KEXEC_CORE) +extern const char arm64_relocate_new_kernel[]; +#endif + /* * kern_reloc_arg is passed to kernel relocation function as an argument. * head kimage->head, allows to traverse through relocation segments. diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h index 8ff579361731..ae873eb22205 100644 --- a/arch/arm64/include/asm/sections.h +++ b/arch/arm64/include/asm/sections.h @@ -19,5 +19,6 @@ extern char __exittext_begin[], __exittext_end[]; extern char __irqentry_text_start[], __irqentry_text_end[]; extern char __mmuoff_data_start[], __mmuoff_data_end[]; extern char __entry_tramp_text_start[], __entry_tramp_text_end[]; +extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[]; #endif /* __ASM_SECTIONS_H */ diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 679db3f1e0c5..361a4d082093 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -20,13 +20,10 @@ #include #include #include +#include #include "cpu-reset.h" -/* Global variables for the arm64_relocate_new_kernel routine. */ -extern const unsigned char arm64_relocate_new_kernel[]; -extern const unsigned long arm64_relocate_new_kernel_size; - /** * kexec_image_info - For debugging output. */ @@ -78,13 +75,15 @@ int machine_kexec_post_load(struct kimage *kimage) { void *reloc_code = page_to_virt(kimage->control_code_page); struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage); + long func_offset, reloc_size; if (!kern_reloc_arg) return -ENOMEM; - memcpy(reloc_code, arm64_relocate_new_kernel, - arm64_relocate_new_kernel_size); - kimage->arch.kern_reloc = __pa(reloc_code); + func_offset = arm64_relocate_new_kernel - __relocate_new_kernel_start; + reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start; + memcpy(reloc_code, __relocate_new_kernel_start, reloc_size); + kimage->arch.kern_reloc = __pa(reloc_code) + func_offset; kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; @@ -92,9 +91,9 @@ int machine_kexec_post_load(struct kimage *kimage) kexec_image_info(kimage); /* Flush the reloc_code in preparation for its execution. 
*/ - __flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size); + __flush_dcache_area(reloc_code, reloc_size); flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code + - arm64_relocate_new_kernel_size); + reloc_size); __flush_dcache_area(kern_reloc_arg, sizeof(struct kern_reloc_arg)); return 0; diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index c92228aeddca..d2a4a0b0d76b 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -14,6 +14,7 @@ #include #include +.pushsection ".kexec_relocate.text", "ax" /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. * @@ -75,16 +76,4 @@ SYM_CODE_START(arm64_relocate_new_kernel) ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0] /* x0 = dtb address */ br x4 SYM_CODE_END(arm64_relocate_new_kernel) - -.align 3 /* To keep the 64-bit values below naturally aligned. */ - -.Lcopy_end: -.org KEXEC_CONTROL_PAGE_SIZE - -/* - * arm64_relocate_new_kernel_size - Number of bytes to copy to the - * control_code_page. - */ -.globl arm64_relocate_new_kernel_size -arm64_relocate_new_kernel_size: - .quad .Lcopy_end - arm64_relocate_new_kernel +.popsection diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index 4c0b0c89ad59..33b0d3c9fd3b 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -82,6 +83,16 @@ jiffies = jiffies_64; #define HIBERNATE_TEXT #endif +#ifdef CONFIG_KEXEC_CORE +#define KEXEC_TEXT \ + . = ALIGN(SZ_4K); \ + __relocate_new_kernel_start = .; \ + *(.kexec_relocate.text) \ + __relocate_new_kernel_end = .; +#else +#define KEXEC_TEXT +#endif + #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 #define TRAMP_TEXT \ . = ALIGN(PAGE_SIZE); \ @@ -142,6 +153,7 @@ SECTIONS HYPERVISOR_TEXT IDMAP_TEXT HIBERNATE_TEXT + KEXEC_TEXT TRAMP_TEXT *(.fixup) *(.gnu.warning) @@ -316,3 +328,10 @@ ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE, * If padding is applied before .head.text, virt<->phys conversions will fail. 
*/ ASSERT(_text == KIMAGE_VADDR, "HEAD is misaligned") + +#ifdef CONFIG_KEXEC_CORE +/* kexec relocation code should fit into one KEXEC_CONTROL_PAGE_SIZE */ +ASSERT(__relocate_new_kernel_end - (__relocate_new_kernel_start & ~(SZ_4K - 1)) + <= SZ_4K, "kexec relocation code is too big or misaligned") +ASSERT(KEXEC_CONTROL_PAGE_SIZE >= SZ_4K, "KEXEC_CONTROL_PAGE_SIZE is brokern") +#endif From patchwork Mon Jan 25 19:19:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 12044135 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4FCFCC4332B for ; Mon, 25 Jan 2021 19:20:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D3B1421D94 for ; Mon, 25 Jan 2021 19:20:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D3B1421D94 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 0360A8D002A; Mon, 25 Jan 2021 14:19:50 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 00DE58D0023; Mon, 25 Jan 2021 14:19:49 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DA2158D002A; Mon, 25 Jan 2021 14:19:49 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0201.hostedemail.com [216.40.44.201]) by kanga.kvack.org (Postfix) with ESMTP id BFCFE8D0023 for ; Mon, 25 Jan 2021 14:19:49 -0500 (EST) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 8770C181AEF0B for ; Mon, 25 Jan 2021 19:19:49 +0000 (UTC) X-FDA: 77745262098.16.top98_36034b327588 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin16.hostedemail.com (Postfix) with ESMTP id 648BB100E690C for ; Mon, 25 Jan 2021 19:19:49 +0000 (UTC) X-HE-Tag: top98_36034b327588 X-Filterd-Recvd-Size: 8749 Received: from mail-qk1-f169.google.com (mail-qk1-f169.google.com [209.85.222.169]) by imf47.hostedemail.com (Postfix) with ESMTP for ; Mon, 25 Jan 2021 19:19:48 +0000 (UTC) Received: by mail-qk1-f169.google.com with SMTP id t63so974726qkc.1 for ; Mon, 25 Jan 2021 11:19:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=lbvVstnavugORmz+7RoYwB1ceirdtrsJhtHKQ5onWhE=; b=IK8GwsbNp05JTiDZeLKhus1Kd2crIvvLrCxhH/bhmegAMs2Z7vCp9d9KDysrBeDIzB KfxvBNXH0aHs05LraLi8jmw16detVwNx9DuZtp/sIo3qMZqxfqpu59gbI149OG92xdpR BrFyLDIenN5JcmuFpWIbMiKc4uzG51cstbavtA7pMHPtalMM29HTadK2mLoOXJWU5BxK 45HH6pKYIKWhfQ7huEmlUwbahY1bfK2R1z7LNKLBw4FcrzBIC5JKeyhBB9/bQJ+AHQBZ MQM5GrrFPz30IMgWgwFEjypPYyxfGfRD+/2oR5QMd8KQG6sPGUEoLJ4oRkCqK0gJSZZK orNw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lbvVstnavugORmz+7RoYwB1ceirdtrsJhtHKQ5onWhE=; b=O99bH1KVhwEkrSdpfB4RA9NfvDINXuZGm64biDW8cGtO3TRTwbtTTBUoKGKVdFlOUL QDZBz+yiSh8rGrSJX1YxCU5UtX1hz8JNw9BlNubvz8pCsXH7qxWdoQryI1HRSOc//hV6 WeIwFtLsICHAXLJoSdzUc3sNn88CSrK4xu6d3kauWjCAdujlnxgx3zTvlyJLr2GCH4b7 Qt5P2w+pjB+c/0qp5CP3N3c+8DwUbQoN4pDuEFNsW2+rGJILiq5Jw/tW9U4PtBEMV08d 1MFOtqTctM9kaLQ3deiTM3BHOrC1LVJ7Ij30nSN1zpnNkYm9+wGjFv1hKJQUpyhbk8OR Q6QQ== X-Gm-Message-State: AOAM530z1LSQREnwxdkcXV/3ikOyznosZeUHq+cqwqAx4DyuvamRlnu4 OHBo6hHqy13ep7myMyjn6/AEMQ== X-Google-Smtp-Source: ABdhPJw6EjzUg6gmYM29zXvOx243XtORadzlnKH8/gi9zPCw6PpLiwryRc9t/9ZWzJPnyqYyMiOLXw== X-Received: by 2002:ae9:dc87:: with SMTP id q129mr2285101qkf.297.1611602388372; Mon, 25 Jan 2021 11:19:48 -0800 (PST) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id s6sm9047638qtx.63.2021.01.25.11.19.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Jan 2021 11:19:47 -0800 (PST) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com, tyhicks@linux.microsoft.com Subject: [PATCH v10 15/18] arm64: kexec: kexec may require EL2 vectors Date: Mon, 25 Jan 2021 14:19:20 -0500 Message-Id: <20210125191923.1060122-16-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com> References: <20210125191923.1060122-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If we have a EL2 mode without VHE, the EL2 vectors are needed in order to switch to EL2 and jump to new world with hypervisor privileges. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 5 +++++ arch/arm64/kernel/asm-offsets.c | 1 + arch/arm64/kernel/machine_kexec.c | 9 +++++++- arch/arm64/kernel/relocate_kernel.S | 35 +++++++++++++++++++++++++++++ 4 files changed, 49 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 7f4f9abdf049..b96d8a6aac80 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -92,6 +92,7 @@ static inline void crash_post_resume(void) {} #if defined(CONFIG_KEXEC_CORE) extern const char arm64_relocate_new_kernel[]; +extern const char arm64_kexec_el2_vectors[]; #endif /* @@ -101,6 +102,9 @@ extern const char arm64_relocate_new_kernel[]; * kernel, or purgatory entry address). * kern_arg0 first argument to kernel is its dtb address. The other * arguments are currently unused, and must be set to 0 + * el2_vector If present means that relocation routine will go to EL1 + * from EL2 to do the copy, and then back to EL2 to do the jump + * to new world. 
*/ struct kern_reloc_arg { phys_addr_t head; @@ -109,6 +113,7 @@ struct kern_reloc_arg { phys_addr_t kern_arg1; phys_addr_t kern_arg2; phys_addr_t kern_arg3; + phys_addr_t el2_vector; }; #define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 6067a288f568..8a9475be1b62 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -159,6 +159,7 @@ int main(void) DEFINE(KEXEC_KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); DEFINE(KEXEC_KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); DEFINE(KEXEC_KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); + DEFINE(KEXEC_KRELOC_EL2_VECTOR, offsetof(struct kern_reloc_arg, el2_vector)); #endif return 0; } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 361a4d082093..41d1e3ca13f8 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -75,19 +75,26 @@ int machine_kexec_post_load(struct kimage *kimage) { void *reloc_code = page_to_virt(kimage->control_code_page); struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage); - long func_offset, reloc_size; + long func_offset, vector_offset, reloc_size; if (!kern_reloc_arg) return -ENOMEM; func_offset = arm64_relocate_new_kernel - __relocate_new_kernel_start; reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start; + vector_offset = arm64_kexec_el2_vectors - __relocate_new_kernel_start; + memcpy(reloc_code, __relocate_new_kernel_start, reloc_size); kimage->arch.kern_reloc = __pa(reloc_code) + func_offset; kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; + + /* Setup vector table only when EL2 is available, but no VHE */ + if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) + kern_reloc_arg->el2_vector = __pa(reloc_code) + vector_offset; + kexec_image_info(kimage); /* Flush the reloc_code in preparation for its execution. */ diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index d2a4a0b0d76b..c6178b1a4e60 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -14,6 +14,17 @@ #include #include +.macro el1_sync_64 + .align 7 + br x4 /* Jump to new world from el2 */ +.endm + +.macro invalid_vector label +\label: + .align 7 + b \label +.endm + .pushsection ".kexec_relocate.text", "ax" /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. @@ -76,4 +87,28 @@ SYM_CODE_START(arm64_relocate_new_kernel) ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0] /* x0 = dtb address */ br x4 SYM_CODE_END(arm64_relocate_new_kernel) + +/* el2 vectors - switch el2 here while we restore the memory image. 
*/ + .align 11 +SYM_CODE_START(arm64_kexec_el2_vectors) + invalid_vector el2_sync_invalid_sp0 /* Synchronous EL2t */ + invalid_vector el2_irq_invalid_sp0 /* IRQ EL2t */ + invalid_vector el2_fiq_invalid_sp0 /* FIQ EL2t */ + invalid_vector el2_error_invalid_sp0 /* Error EL2t */ + + invalid_vector el2_sync_invalid_spx /* Synchronous EL2h */ + invalid_vector el2_irq_invalid_spx /* IRQ EL2h */ + invalid_vector el2_fiq_invalid_spx /* FIQ EL2h */ + invalid_vector el2_error_invalid_spx /* Error EL2h */ + + el1_sync_64 /* Synchronous 64-bit EL1 */ + invalid_vector el1_irq_invalid_64 /* IRQ 64-bit EL1 */ + invalid_vector el1_fiq_invalid_64 /* FIQ 64-bit EL1 */ + invalid_vector el1_error_invalid_64 /* Error 64-bit EL1 */ + + invalid_vector el1_sync_invalid_32 /* Synchronous 32-bit EL1 */ + invalid_vector el1_irq_invalid_32 /* IRQ 32-bit EL1 */ + invalid_vector el1_fiq_invalid_32 /* FIQ 32-bit EL1 */ + invalid_vector el1_error_invalid_32 /* Error 32-bit EL1 */ +SYM_CODE_END(arm64_kexec_el2_vectors) .popsection From patchwork Mon Jan 25 19:19:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 12044137 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 423C8C433DB for ; Mon, 25 Jan 2021 19:20:13 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D3DB220E65 for ; Mon, 25 Jan 2021 19:20:12 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D3DB220E65 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id EB43F8D002B; Mon, 25 Jan 2021 14:19:54 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E68BD8D0023; Mon, 25 Jan 2021 14:19:54 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D53CA8D002B; Mon, 25 Jan 2021 14:19:54 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0092.hostedemail.com [216.40.44.92]) by kanga.kvack.org (Postfix) with ESMTP id BBEC98D0023 for ; Mon, 25 Jan 2021 14:19:54 -0500 (EST) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 7E46B1EF1 for ; Mon, 25 Jan 2021 19:19:54 +0000 (UTC) X-FDA: 77745262308.07.ray48_3f01fd727588 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin07.hostedemail.com (Postfix) with ESMTP id 6257D1803F9A7 for ; Mon, 25 Jan 2021 19:19:54 +0000 (UTC) X-HE-Tag: ray48_3f01fd727588 X-Filterd-Recvd-Size: 9705 Received: from mail-qv1-f51.google.com (mail-qv1-f51.google.com [209.85.219.51]) by imf44.hostedemail.com (Postfix) with ESMTP for ; Mon, 25 Jan 2021 19:19:53 +0000 (UTC) Received: by mail-qv1-f51.google.com with SMTP id w11so766772qvz.12 for ; Mon, 25 Jan 2021 11:19:53 -0800 (PST) DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=kuMkw7AdZybvslqmQltvC+V+qU5cLv31NDZ09K7EBEg=; b=oK64oKKQQBz3eN6h0+/Lx2yuanWsQs+hJCSPz+vyh29OFbeOFLT5Xe23Rr5vqAqqia hDdfTHUDsxKiPIVXSKssSWesBy0Uj4Qrv5NzWGtsu6VF0JIydxN9h37COcwJgenkEPlX dcq43hTSCusUMFWUgStPOoOET5G5vjjE0m/ScUoGS8mk88VVfb5cpkQpKS/hAsZ/9UoD v3LhLDNenT8aU7UnkW00YTObUx9CivESou5K9WMYZNr0kwf7BTb3869wXyfYMYY5rXJK 7qtuD6Bqxy7QIAS+LN3Lh374x141W+2qVud+Q21sZ79atN4ep8n8Gb9f0uzb6Q+S6fDP Omfw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kuMkw7AdZybvslqmQltvC+V+qU5cLv31NDZ09K7EBEg=; b=Jtdc8t7uNagh4YKOdV9OXwbwefcExp+8XKyLlYjf/xYtfIN03Q0NG/th8jJoBUj1+n DoW8r4wEunwwQ0kxWRWZy+vwCzfjs+wn0VzXSp5MLjWGVuPozNcfT6BWNV6tKHDYm7Da bbmVxCgnsJBXYlnujRncNQS5+OHEtwV459on4pFY1Vu2KFsV3JhzekGqjH2IL7O9YBy9 crfMW8vmQLjzl9RKtwdGPwDduJIE1XXHy9uzbA5x+CzhdiSTKkJ2XQ4M7qHq7f7ywaRg 5pQV3NmpQlXPbE2MnVo+z/VZgJcETskwVG2bEEvUlyo0NzmZc350one+wYbUXrtoWPbl nFPw== X-Gm-Message-State: AOAM530BKTdPRI+E8/YqB1ucyK9Ge6KgDR7B3EdNthTrPFqy9vHYopsC eB25Y9xSXSF9pevI8zkAnhiQzA== X-Google-Smtp-Source: ABdhPJy1zPtPtrFKvie4dKIh/MK1P7SzhxNpMb4yTYY+AVvRu6RGBMRFYJn03biHv9W5Chg441JPkw== X-Received: by 2002:a0c:fcca:: with SMTP id i10mr2068628qvq.38.1611602393196; Mon, 25 Jan 2021 11:19:53 -0800 (PST) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id s6sm9047638qtx.63.2021.01.25.11.19.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Jan 2021 11:19:49 -0800 (PST) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com, tyhicks@linux.microsoft.com Subject: [PATCH v10 16/18] arm64: kexec: configure trans_pgd page table for kexec Date: Mon, 25 Jan 2021 14:19:21 -0500 Message-Id: <20210125191923.1060122-17-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com> References: <20210125191923.1060122-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Configure a page table located in kexec-safe memory that has the following mappings: 1. identity mapping for text of relocation function with executable permission. 2. va mappings for all source ranges 3. va mappings for all destination ranges. 
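As a simplified sketch of what the mapping step amounts to (not the patch itself; error handling is trimmed, and the trans_pgd_map_page() helper from earlier in this series is assumed, with the argument order used below), the kimage entry list is walked and every source and destination page is mapped at consecutive virtual addresses:

    /*
     * Sketch: map every source page at consecutive VAs from src_va, and
     * every destination page at consecutive VAs from dst_va, so that the
     * relocation loop becomes a plain memcpy(dst_va, src_va, copy_len).
     * Assumes the usual kexec/trans_pgd headers (linux/kexec.h, asm/trans_pgd.h).
     */
    static int map_reloc_segments(struct kimage *kimage, pgd_t *pgdp,
                                  struct trans_pgd_info *info,
                                  unsigned long src_va, unsigned long dst_va,
                                  unsigned long *copy_len)
    {
            unsigned long *ptr = NULL;
            unsigned long dest = 0, len = 0;
            unsigned long entry, addr;
            int rc;

            for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) {
                    addr = entry & PAGE_MASK;

                    switch (entry & IND_FLAGS) {
                    case IND_DESTINATION:
                            dest = addr;            /* next destination page */
                            break;
                    case IND_INDIRECTION:
                            ptr = __va(addr);       /* continue in next table */
                            break;
                    case IND_SOURCE:
                            rc = trans_pgd_map_page(info, pgdp, __va(addr),
                                                    src_va, PAGE_KERNEL);
                            if (rc)
                                    return rc;
                            rc = trans_pgd_map_page(info, pgdp, __va(dest),
                                                    dst_va, PAGE_KERNEL);
                            if (rc)
                                    return rc;
                            dest += PAGE_SIZE;
                            src_va += PAGE_SIZE;
                            dst_va += PAGE_SIZE;
                            len += PAGE_SIZE;
                            break;
                    }
            }
            *copy_len = len;
            return 0;
    }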
Signed-off-by: Pavel Tatashin Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 12 ++++ arch/arm64/kernel/asm-offsets.c | 6 ++ arch/arm64/kernel/machine_kexec.c | 91 ++++++++++++++++++++++++++++++- 3 files changed, 108 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index b96d8a6aac80..049cde429b1b 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -105,6 +105,12 @@ extern const char arm64_kexec_el2_vectors[]; * el2_vector If present means that relocation routine will go to EL1 * from EL2 to do the copy, and then back to EL2 to do the jump * to new world. + * trans_ttbr0 idmap for relocation function and its argument + * trans_ttbr1 map for source/destination addresses. + * trans_t0sz t0sz for idmap page in trans_ttbr0 + * src_addr start address for source pages. + * dst_addr start address for destination pages. + * copy_len Number of bytes that need to be copied */ struct kern_reloc_arg { phys_addr_t head; @@ -114,6 +120,12 @@ struct kern_reloc_arg { phys_addr_t kern_arg2; phys_addr_t kern_arg3; phys_addr_t el2_vector; + phys_addr_t trans_ttbr0; + phys_addr_t trans_ttbr1; + unsigned long trans_t0sz; + unsigned long src_addr; + unsigned long dst_addr; + unsigned long copy_len; }; #define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 8a9475be1b62..06278611451d 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -160,6 +160,12 @@ int main(void) DEFINE(KEXEC_KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); DEFINE(KEXEC_KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); DEFINE(KEXEC_KRELOC_EL2_VECTOR, offsetof(struct kern_reloc_arg, el2_vector)); + DEFINE(KEXEC_KRELOC_TRANS_TTBR0, offsetof(struct kern_reloc_arg, trans_ttbr0)); + DEFINE(KEXEC_KRELOC_TRANS_TTBR1, offsetof(struct kern_reloc_arg, trans_ttbr1)); + DEFINE(KEXEC_KRELOC_TRANS_T0SZ, offsetof(struct kern_reloc_arg, trans_t0sz)); + DEFINE(KEXEC_KRELOC_SRC_ADDR, offsetof(struct kern_reloc_arg, src_addr)); + DEFINE(KEXEC_KRELOC_DST_ADDR, offsetof(struct kern_reloc_arg, dst_addr)); + DEFINE(KEXEC_KRELOC_COPY_LEN, offsetof(struct kern_reloc_arg, copy_len)); #endif return 0; } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 41d1e3ca13f8..dc1b7e5a54fb 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -21,6 +21,7 @@ #include #include #include +#include #include "cpu-reset.h" @@ -71,11 +72,91 @@ static void *kexec_page_alloc(void *arg) return page_address(page); } +/* + * Map source segments starting from src_va, and map destination + * segments starting from dst_va, and return size of copy in + * *copy_len argument. 
+ * Relocation function essentially needs to do: + * memcpy(dst_va, src_va, copy_len); + */ +static int map_segments(struct kimage *kimage, pgd_t *pgdp, + struct trans_pgd_info *info, + unsigned long src_va, + unsigned long dst_va, + unsigned long *copy_len) +{ + unsigned long *ptr = 0; + unsigned long dest = 0; + unsigned long len = 0; + unsigned long entry, addr; + int rc; + + for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) { + addr = entry & PAGE_MASK; + + switch (entry & IND_FLAGS) { + case IND_DESTINATION: + dest = addr; + break; + case IND_INDIRECTION: + ptr = __va(addr); + if (rc) + return rc; + break; + case IND_SOURCE: + rc = trans_pgd_map_page(info, pgdp, __va(addr), + src_va, PAGE_KERNEL); + if (rc) + return rc; + rc = trans_pgd_map_page(info, pgdp, __va(dest), + dst_va, PAGE_KERNEL); + if (rc) + return rc; + dest += PAGE_SIZE; + src_va += PAGE_SIZE; + dst_va += PAGE_SIZE; + len += PAGE_SIZE; + } + } + *copy_len = len; + + return 0; +} + +static int mmu_relocate_setup(struct kimage *kimage, void *reloc_code, + struct kern_reloc_arg *kern_reloc_arg) +{ + struct trans_pgd_info info = { + .trans_alloc_page = kexec_page_alloc, + .trans_alloc_arg = kimage, + }; + pgd_t *trans_pgd = kexec_page_alloc(kimage); + int rc; + + if (!trans_pgd) + return -ENOMEM; + + /* idmap relocation function */ + rc = trans_pgd_idmap_page(&info, &kern_reloc_arg->trans_ttbr0, + &kern_reloc_arg->trans_t0sz, reloc_code); + if (rc) + return rc; + + kern_reloc_arg->src_addr = _PAGE_OFFSET(VA_BITS_MIN); + kern_reloc_arg->dst_addr = _PAGE_OFFSET(VA_BITS_MIN - 1); + kern_reloc_arg->trans_ttbr1 = phys_to_ttbr(__pa(trans_pgd)); + + rc = map_segments(kimage, trans_pgd, &info, kern_reloc_arg->src_addr, + kern_reloc_arg->dst_addr, &kern_reloc_arg->copy_len); + return rc; +} + int machine_kexec_post_load(struct kimage *kimage) { void *reloc_code = page_to_virt(kimage->control_code_page); struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage); long func_offset, vector_offset, reloc_size; + int rc = 0; if (!kern_reloc_arg) return -ENOMEM; @@ -95,6 +176,14 @@ int machine_kexec_post_load(struct kimage *kimage) if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) kern_reloc_arg->el2_vector = __pa(reloc_code) + vector_offset; + /* + * If relocation is not needed, we do not need to enable MMU in + * relocation routine, therefore do not create page tables for + * scenarios such as crash kernel + */ + if (!(kimage->head & IND_DONE)) + rc = mmu_relocate_setup(kimage, reloc_code, kern_reloc_arg); + kexec_image_info(kimage); /* Flush the reloc_code in preparation for its execution. 
*/ @@ -103,7 +192,7 @@ int machine_kexec_post_load(struct kimage *kimage) reloc_size); __flush_dcache_area(kern_reloc_arg, sizeof(struct kern_reloc_arg)); - return 0; + return rc; } /** From patchwork Mon Jan 25 19:19:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 12044139 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 176EAC43381 for ; Mon, 25 Jan 2021 19:20:16 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8986721D94 for ; Mon, 25 Jan 2021 19:20:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8986721D94 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A23058D002C; Mon, 25 Jan 2021 14:19:56 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9F9A38D0023; Mon, 25 Jan 2021 14:19:56 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8EA958D002C; Mon, 25 Jan 2021 14:19:56 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0151.hostedemail.com [216.40.44.151]) by kanga.kvack.org (Postfix) with ESMTP id 737C88D0023 for ; Mon, 25 Jan 2021 14:19:56 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 35CAE8249980 for ; Mon, 25 Jan 2021 19:19:56 +0000 (UTC) X-FDA: 77745262392.23.skin20_010051c27588 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id 0422337608 for ; Mon, 25 Jan 2021 19:19:55 +0000 (UTC) X-HE-Tag: skin20_010051c27588 X-Filterd-Recvd-Size: 9279 Received: from mail-qt1-f174.google.com (mail-qt1-f174.google.com [209.85.160.174]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Mon, 25 Jan 2021 19:19:55 +0000 (UTC) Received: by mail-qt1-f174.google.com with SMTP id r9so10497070qtp.11 for ; Mon, 25 Jan 2021 11:19:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=RDSr+b90kmy9FHMSi/QfI0SbQbMsMCPJRwK16BQ54DQ=; b=WxdWiV+PelSJ2780GgGioA+Au/FzkI2Ld3y9Qv3Ck/gIiT2w/EXtKoJsE+0QnLkhrF NzgL174DN5cyesZLqlpqi8wPfQWzScrpFqhGV5CV3/CcIi8+lHuuakfvuwiDRZKLYZ3q nxbpY0DJyx2V7mpAlbB6yER6hOLSOQxb0inofimIHO0nE2l6OxdL+p8TooiCMkTBEqgS aKxIWoPPMcArqYT3n2Hllj39Y5pRCseZpf4aEvhKRcGsVkbcHpvyIkHgd/77M21AOhoO jx4xb6tc/azC6m9EMYG0FlqI24127XEIEFtRtuGwkiuvC1Tn85VAmqo8jJxWldh7tVaP NTXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=RDSr+b90kmy9FHMSi/QfI0SbQbMsMCPJRwK16BQ54DQ=; 
b=rxp80GJ0egv5acR2VecSdoKCYba17o5wEuwJ1ICBdNyaGbibV8064AdVE3a0QPvrs9 Zu1WK7jiLngVJNNpq2YDP8GZWloregCKkmhrF/j/qx70iTB5HcNhtZIIuzaijFg1T+8k gkFfwcfJvhZpHthAZkn8i71taO6uktG8sVnOZGZjee6omnfRNYBY8Rx7tKWnp1CtoFDC EhrG4rt6MLhRfu/EBxoUw7+n8Iez1xkWRT6eqvICj+ksBDH1eOTQ3ho3TCdukVhdBfj9 7Lfd4jhsk1Kt9X3iaocp9PMZKYV2XJzFvw/VqmhIzzcWN4NEqKg0LhaYkipzDWRERnt1 qlPg== X-Gm-Message-State: AOAM532Va6h4l/vMBdFX4P1xwU8uBDNZmm0fYE59XgUnXeMlyNoXwXvX Edodk8STSrZD8CqmD3DV4TuYYQ== X-Google-Smtp-Source: ABdhPJxQav/dTh+GHIW6w/wd5sxxZLx8m02Wa9EmWUQyDQjM4DNHDdovsx6PrQGMmiR59tWpk8dAcg== X-Received: by 2002:ac8:5a4b:: with SMTP id o11mr1983211qta.202.1611602394865; Mon, 25 Jan 2021 11:19:54 -0800 (PST) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id s6sm9047638qtx.63.2021.01.25.11.19.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Jan 2021 11:19:54 -0800 (PST) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com, tyhicks@linux.microsoft.com Subject: [PATCH v10 17/18] arm64: kexec: enable MMU during kexec relocation Date: Mon, 25 Jan 2021 14:19:22 -0500 Message-Id: <20210125191923.1060122-18-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com> References: <20210125191923.1060122-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now, that we have transitional page tables configured, temporarily enable MMU to allow faster relocation of segments to final destination. The performance data: for a moderate size kernel + initramfs: 25M the relocation was taking 0.382s, with enabled MMU it now takes 0.019s only or x20 improvement. The time is proportional to the size of relocation, therefore if initramfs is larger, 100M it could take over a second. Signed-off-by: Pavel Tatashin --- arch/arm64/kernel/relocate_kernel.S | 131 ++++++++++++++++++---------- 1 file changed, 87 insertions(+), 44 deletions(-) diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index c6178b1a4e60..9c60981a6911 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -4,6 +4,8 @@ * * Copyright (C) Linaro. * Copyright (C) Huawei Futurewei Technologies. + * Copyright (C) 2020, Microsoft Corporation. + * Pavel Tatashin */ #include @@ -14,6 +16,54 @@ #include #include +.macro tlb_invalidate + dsb sy + dsb ish + tlbi vmalle1 + dsb ish + isb +.endm + +.macro turn_off_mmu tmp1, tmp2 + mrs \tmp1, sctlr_el1 + mov_q \tmp2, SCTLR_ELx_FLAGS + bic \tmp1, \tmp1, \tmp2 + pre_disable_mmu_workaround + msr sctlr_el1, \tmp1 + isb +.endm + +.macro turn_on_mmu tmp1, tmp2 + mrs \tmp1, sctlr_el1 + mov_q \tmp2, SCTLR_ELx_FLAGS + orr \tmp1, \tmp1, \tmp2 + msr sctlr_el1, \tmp1 + ic iallu + dsb nsh + isb +.endm + +/* + * Set ttbr0 and ttbr1, called while MMU is disabled, so no need to temporarily + * set zero_page table. Invalidate TLB after new tables are set. 
+ */ +.macro set_ttbr arg, tmp1, tmp2 + ldr \tmp1, [\arg, #KEXEC_KRELOC_TRANS_TTBR0] + msr ttbr0_el1, \tmp1 + ldr \tmp1, [\arg, #KEXEC_KRELOC_TRANS_TTBR1] + offset_ttbr1 \tmp1, \tmp2 + msr ttbr1_el1, \tmp1 + isb +.endm + +/* Set T0SZ to match the requirements of idmap page */ +.macro set_tcr_t0sz arg, tmp1, tmp2 + ldr \tmp2, [\arg, #KEXEC_KRELOC_TRANS_T0SZ] + mrs \tmp1, tcr_el1 + bfi \tmp1, \tmp2, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH + msr tcr_el1, \tmp1 +.endm + .macro el1_sync_64 .align 7 br x4 /* Jump to new world from el2 */ @@ -36,56 +86,49 @@ * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec * safe memory that has been set up to be preserved during the copy operation. + * + * This function temporarily enables MMU if kernel relocation is needed. + * Also, if we enter this function at EL2 on non-VHE kernel, we temporarily go + * to EL1 to enable MMU, and escalate back to EL2 at the end to do the jump to + * the new kernel. This is determined by presence of el2_vector. */ SYM_CODE_START(arm64_relocate_new_kernel) - /* Check if the new image needs relocation. */ - ldr x16, [x0, #KEXEC_KRELOC_HEAD] /* x16 = kimage_head */ - tbnz x16, IND_DONE_BIT, .Ldone - raw_dcache_line_size x15, x1 /* x15 = dcache line size */ -.Lloop: - and x12, x16, PAGE_MASK /* x12 = addr */ - - /* Test the entry flags. */ -.Ltest_source: - tbz x16, IND_SOURCE_BIT, .Ltest_indirection - - /* Invalidate dest page to PoC. */ - mov x2, x13 - add x20, x2, #PAGE_SIZE - sub x1, x15, #1 - bic x2, x2, x1 -2: dc ivac, x2 - add x2, x2, x15 - cmp x2, x20 - b.lo 2b - dsb sy - - copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8 - b .Lnext -.Ltest_indirection: - tbz x16, IND_INDIRECTION_BIT, .Ltest_destination - mov x14, x12 /* ptr = addr */ - b .Lnext -.Ltest_destination: - tbz x16, IND_DESTINATION_BIT, .Lnext - mov x13, x12 /* dest = addr */ -.Lnext: - ldr x16, [x14], #8 /* entry = *ptr++ */ - tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ -.Ldone: - /* wait for writes from copy_page to finish */ - dsb nsh - ic iallu - dsb nsh - isb - - /* Start new image. */ - ldr x4, [x0, #KEXEC_KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + mov x20, xzr /* x20 will hold vector value */ + ldr x11, [x0, #KEXEC_KRELOC_COPY_LEN] + cbz x11, 5f /* Check if need to relocate */ + ldr x20, [x0, #KEXEC_KRELOC_EL2_VECTOR] + cbz x20, 2f /* need to reduce to EL1? 
*/ + msr vbar_el2, x20 /* el2_vector present, means */ + adr x1, 2f /* we will do copy in el1 but */ + msr elr_el2, x1 /* do final jump from el2 */ + eret /* Reduce to EL1 */ +2: set_tcr_t0sz x0, x1, x2 /* Set t0sz for idmaped page */ + set_ttbr x0, x1, x2 /* Set our page tables */ + tlb_invalidate + ldr x1, [x0, #KEXEC_KRELOC_DST_ADDR]; /* arg is not idmapped so */ + ldr x2, [x0, #KEXEC_KRELOC_SRC_ADDR]; /* read before MMU is on */ + turn_on_mmu x3, x4 /* Turn MMU back on */ + mov x12, x1 /* x12 dst backup */ +3: copy_page x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 + sub x11, x11, #PAGE_SIZE + cbnz x11, 3b /* page copy loop */ + raw_dcache_line_size x2, x3 /* x2 = dcache line size */ + sub x3, x2, #1 /* x3 = dcache_size - 1 */ + bic x12, x12, x3 +4: dc cvau, x12 /* Flush D-cache */ + add x12, x12, x2 + cmp x12, x1 /* Compare to dst + len */ + b.ne 4b /* D-cache flush loop */ + turn_off_mmu x1, x2 /* Turn off MMU */ + tlb_invalidate /* Invalidate TLB */ +5: ldr x4, [x0, #KEXEC_KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ ldr x3, [x0, #KEXEC_KRELOC_KERN_ARG3] ldr x2, [x0, #KEXEC_KRELOC_KERN_ARG2] ldr x1, [x0, #KEXEC_KRELOC_KERN_ARG1] ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0] /* x0 = dtb address */ - br x4 + cbnz x20, 6f /* need to escalate to el2? */ + br x4 /* Jump to new world */ +6: hvc #0 /* enters kexec_el1_sync */ SYM_CODE_END(arm64_relocate_new_kernel) /* el2 vectors - switch el2 here while we restore the memory image. */ From patchwork Mon Jan 25 19:19:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 12044141 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D1DFDC433DB for ; Mon, 25 Jan 2021 19:20:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6709120E65 for ; Mon, 25 Jan 2021 19:20:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6709120E65 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3E0028D002D; Mon, 25 Jan 2021 14:19:58 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 393298D0023; Mon, 25 Jan 2021 14:19:58 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1C3808D002D; Mon, 25 Jan 2021 14:19:58 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0244.hostedemail.com [216.40.44.244]) by kanga.kvack.org (Postfix) with ESMTP id F10898D0023 for ; Mon, 25 Jan 2021 14:19:57 -0500 (EST) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id B2B303622 for ; Mon, 25 Jan 2021 19:19:57 +0000 (UTC) X-FDA: 77745262434.20.smell13_340d4fd27588 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin20.hostedemail.com (Postfix) with ESMTP 
From patchwork Mon Jan 25 19:19:23 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12044141
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org,
    james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com,
    linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com,
    rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com,
    tyhicks@linux.microsoft.com
Subject: [PATCH v10 18/18] arm64: kexec: remove head from relocation argument
Date: Mon, 25 Jan 2021 14:19:23 -0500
Message-Id: <20210125191923.1060122-19-pasha.tatashin@soleen.com>
In-Reply-To: <20210125191923.1060122-1-pasha.tatashin@soleen.com>
References: <20210125191923.1060122-1-pasha.tatashin@soleen.com>

Now that relocation is done using virtual addresses, reloc_arg->head is not
needed anymore.
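For context on what is being dropped: the head field was only consumed by the
relocation assembly's old entry-list walk, removed as "-" lines earlier in the
series, which traversed kimage->head and dispatched on the IND_* flag bits. A
rough, self-contained C rendering of that walk follows; the numeric flag values,
page-sized mask, and helper names are illustrative stand-ins, not the kernel's
actual definitions from <linux/kexec.h>.

#include <stdint.h>
#include <stdio.h>

/* Stand-in flag bits and mask; the kernel's IND_* values and PAGE_MASK may differ. */
#define IND_DESTINATION	(1u << 0)
#define IND_INDIRECTION	(1u << 1)
#define IND_DONE	(1u << 2)
#define IND_SOURCE	(1u << 3)
#define PAGE_SZ		0x1000ull
#define FLAG_MASK	(PAGE_SZ - 1)

typedef uint64_t kimage_entry_t;

/* Mirrors the removed .Lloop/.Ltest_* assembly: walk the tagged entry list
 * starting from head and act on each entry's flag bits. */
static void walk_entries(kimage_entry_t head)
{
	kimage_entry_t entry = head;
	kimage_entry_t *ptr = NULL;
	uint64_t dest = 0;

	while (!(entry & IND_DONE)) {
		uint64_t addr = entry & ~FLAG_MASK;

		if (entry & IND_SOURCE) {
			printf("copy page %#llx -> %#llx\n",
			       (unsigned long long)addr,
			       (unsigned long long)dest);
			dest += PAGE_SZ;	/* copy_page advances dest */
		} else if (entry & IND_INDIRECTION) {
			ptr = (kimage_entry_t *)(uintptr_t)addr; /* ptr = addr */
		} else if (entry & IND_DESTINATION) {
			dest = addr;		/* dest = addr */
		}
		entry = *ptr++;			/* entry = *ptr++ */
	}
}

int main(void)
{
	/* A fake, page-aligned indirection page with one destination entry,
	 * one source entry, and a terminating DONE entry. */
	static _Alignas(0x1000) kimage_entry_t page[3];

	page[0] = 0x80000000ull | IND_DESTINATION;
	page[1] = 0x90000000ull | IND_SOURCE;
	page[2] = IND_DONE;

	walk_entries((kimage_entry_t)(uintptr_t)page | IND_INDIRECTION);
	return 0;
}

With relocation now driven by a linear src_addr/dst_addr/copy_len copy, nothing in
the assembly needs this traversal, which is why the field can go.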
Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/kexec.h    | 2 --
 arch/arm64/kernel/asm-offsets.c   | 1 -
 arch/arm64/kernel/machine_kexec.c | 1 -
 3 files changed, 4 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 049cde429b1b..2fa4109bd582 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -97,7 +97,6 @@ extern const char arm64_kexec_el2_vectors[];

 /*
  * kern_reloc_arg is passed to kernel relocation function as an argument.
- * head		kimage->head, allows to traverse through relocation segments.
  * entry_addr	kimage->start, where to jump from relocation function (new
  *		kernel, or purgatory entry address).
  * kern_arg0	first argument to kernel is its dtb address. The other
@@ -113,7 +112,6 @@ extern const char arm64_kexec_el2_vectors[];
  * copy_len	Number of bytes that need to be copied
  */
 struct kern_reloc_arg {
-	phys_addr_t head;
 	phys_addr_t entry_addr;
 	phys_addr_t kern_arg0;
 	phys_addr_t kern_arg1;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 06278611451d..94f050ad6471 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -153,7 +153,6 @@ int main(void)
   BLANK();
 #endif
 #ifdef CONFIG_KEXEC_CORE
-  DEFINE(KEXEC_KRELOC_HEAD,		offsetof(struct kern_reloc_arg, head));
   DEFINE(KEXEC_KRELOC_ENTRY_ADDR,	offsetof(struct kern_reloc_arg, entry_addr));
   DEFINE(KEXEC_KRELOC_KERN_ARG0,	offsetof(struct kern_reloc_arg, kern_arg0));
   DEFINE(KEXEC_KRELOC_KERN_ARG1,	offsetof(struct kern_reloc_arg, kern_arg1));
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index dc1b7e5a54fb..c2dff232a85b 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -168,7 +168,6 @@ int machine_kexec_post_load(struct kimage *kimage)
 	memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
 	kimage->arch.kern_reloc = __pa(reloc_code) + func_offset;
 	kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg);
-	kern_reloc_arg->head = kimage->head;
 	kern_reloc_arg->entry_addr = kimage->start;
 	kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem;