From patchwork Wed Jan 3 23:16:00 2024
X-Patchwork-Submitter: Bjorn Helgaas
X-Patchwork-Id: 13510659
From: Bjorn Helgaas
To: Catalin Marinas, Will Deacon
Cc: Randy Dunlap, linux-kernel@vger.kernel.org, Bjorn Helgaas,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3/8] arm64: Fix typos
Date: Wed, 3 Jan 2024 17:16:00 -0600
Message-Id: <20240103231605.1801364-4-helgaas@kernel.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240103231605.1801364-1-helgaas@kernel.org>
References: <20240103231605.1801364-1-helgaas@kernel.org>

From: Bjorn Helgaas

Fix typos, most reported by "codespell arch/arm64".  Only touches comments,
no code changes.
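As a rough sketch of how reports like these can be reproduced (assuming
codespell is installed; the bare invocation is the one named above, while
the -w variant is an assumed convenience, not something the series states):

  # List suspect words under arch/arm64 as "file:line: word ==> suggestion"
  codespell arch/arm64

  # Optionally let codespell rewrite the files in place, then review
  codespell -w arch/arm64
  git diff arch/arm64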
Signed-off-by: Bjorn Helgaas
Cc: linux-arm-kernel@lists.infradead.org
Reviewed-by: Randy Dunlap
---
 arch/arm64/Kconfig                  | 2 +-
 arch/arm64/include/asm/assembler.h  | 4 ++--
 arch/arm64/include/asm/cpufeature.h | 4 ++--
 arch/arm64/include/asm/pgtable.h    | 2 +-
 arch/arm64/include/asm/suspend.h    | 2 +-
 arch/arm64/include/asm/traps.h      | 4 ++--
 arch/arm64/kernel/acpi.c            | 2 +-
 arch/arm64/kernel/cpufeature.c      | 6 +++---
 arch/arm64/kernel/entry-common.c    | 2 +-
 arch/arm64/kernel/entry-ftrace.S    | 2 +-
 arch/arm64/kernel/entry.S           | 2 +-
 arch/arm64/kernel/ftrace.c          | 2 +-
 arch/arm64/kernel/machine_kexec.c   | 2 +-
 arch/arm64/kernel/probes/uprobes.c  | 2 +-
 arch/arm64/kernel/sdei.c            | 2 +-
 arch/arm64/kernel/smp.c             | 2 +-
 arch/arm64/kernel/traps.c           | 2 +-
 17 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b071a00425d..1954035737cf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2227,7 +2227,7 @@ config CMDLINE
 	default ""
 	help
 	  Provide a set of default command-line options at build time by
-	  entering them here. As a minimum, you should specify the the
+	  entering them here. As a minimum, you should specify the
 	  root device (e.g. root=/dev/nfs).
 
 choice
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 376a980f2bad..0b2e67fa9a11 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -390,7 +390,7 @@ alternative_endif
 * [start, end) with dcache line size explicitly provided.
 *
 * 	op:		operation passed to dc instruction
- * 	domain:		domain used in dsb instruciton
+ * 	domain:		domain used in dsb instruction
 * 	start:		starting virtual address of the region
 * 	end:		end virtual address of the region
 * 	linesz:		dcache line size
@@ -431,7 +431,7 @@ alternative_endif
 * [start, end)
 *
 * 	op:		operation passed to dc instruction
- * 	domain:		domain used in dsb instruciton
+ * 	domain:		domain used in dsb instruction
 * 	start:		starting virtual address of the region
 * 	end:		end virtual address of the region
 * 	fixup:		optional label to branch to on user fault
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f6d416fe49b0..a0f4010c1e85 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -198,7 +198,7 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 *    registers (e.g, SCTLR, TCR etc.) or patching the kernel via
 *    alternatives. The kernel patching is batched and performed at later
 *    point. The actions are always initiated only after the capability
- *    is finalised. This is usally denoted by "enabling" the capability.
+ *    is finalised. This is usually denoted by "enabling" the capability.
 *    The actions are initiated as follows :
 *	a) Action is triggered on all online CPUs, after the capability is
 *	finalised, invoked within the stop_machine() context from
@@ -250,7 +250,7 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_SCOPE_LOCAL_CPU		((u16)BIT(0))
 #define ARM64_CPUCAP_SCOPE_SYSTEM		((u16)BIT(1))
 /*
- * The capabilitiy is detected on the Boot CPU and is used by kernel
+ * The capability is detected on the Boot CPU and is used by kernel
 * during early boot. i.e, the capability should be "detected" and
 * "enabled" as early as possibly on all booting CPUs.
 */
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b19a8aee684c..25bf7d15a115 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -288,7 +288,7 @@ bool pgattr_change_is_safe(u64 old, u64 new);
 *   1      0 | 1           0          1
 *   1      1 | 0           1          x
 *
- * When hardware DBM is not present, the sofware PTE_DIRTY bit is updated via
+ * When hardware DBM is not present, the software PTE_DIRTY bit is updated via
 * the page fault mechanism. Checking the dirty status of a pte becomes:
 *
 *   PTE_DIRTY || (PTE_WRITE && !PTE_RDONLY)
diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
index 0cde2f473971..e65f33edf9d6 100644
--- a/arch/arm64/include/asm/suspend.h
+++ b/arch/arm64/include/asm/suspend.h
@@ -23,7 +23,7 @@ struct cpu_suspend_ctx {
 * __cpu_suspend_enter()'s caller, and populated by __cpu_suspend_enter().
 * This data must survive until cpu_resume() is called.
 *
- * This struct desribes the size and the layout of the saved cpu state.
+ * This struct describes the size and the layout of the saved cpu state.
 * The layout of the callee_saved_regs is defined by the implementation
 * of __cpu_suspend_enter(), and cpu_resume(). This struct must be passed
 * in by the caller as __cpu_suspend_enter()'s stack-frame is gone once it
diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index eefe766d6161..03084ed290ac 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -52,8 +52,8 @@ static inline int in_entry_text(unsigned long ptr)
 * CPUs with the RAS extensions have an Implementation-Defined-Syndrome bit
 * to indicate whether this ESR has a RAS encoding. CPUs without this feature
 * have a ISS-Valid bit in the same position.
- * If this bit is set, we know its not a RAS SError.
- * If its clear, we need to know if the CPU supports RAS. Uncategorized RAS
+ * If this bit is set, we know it's not a RAS SError.
+ * If it's clear, we need to know if the CPU supports RAS. Uncategorized RAS
 * errors share the same encoding as an all-zeros encoding from a CPU that
 * doesn't support RAS.
 */
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index dba8fcec7f33..7eca4273b415 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -128,7 +128,7 @@ static int __init acpi_fadt_sanity_check(void)
 	/*
 	 * FADT is required on arm64; retrieve it to check its presence
-	 * and carry out revision and ACPI HW reduced compliancy tests
+	 * and carry out revision and ACPI HW reduced compliance tests
 	 */
 	status = acpi_get_table(ACPI_SIG_FADT, 0, &table);
 	if (ACPI_FAILURE(status)) {
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 646591c67e7a..3089526900a8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -748,7 +748,7 @@ static int search_cmp_ftr_reg(const void *id, const void *regp)
 *		entry.
 *
 * returns - Upon success, matching ftr_reg entry for id.
- *         - NULL on failure. It is upto the caller to decide
+ *         - NULL on failure. It is up to the caller to decide
 *	     the impact of a failure.
 */
 static struct arm64_ftr_reg *get_arm64_ftr_reg_nowarn(u32 sys_id)
@@ -874,7 +874,7 @@ static void __init sort_ftr_regs(void)
 /*
 * Initialise the CPU feature register from Boot CPU values.
- * Also initiliases the strict_mask for the register.
+ * Also initialises the strict_mask for the register.
 * Any bits that are not covered by an arm64_ftr_bits entry are considered
 * RES0 for the system-wide value, and must strictly match.
 */
@@ -3108,7 +3108,7 @@ static void verify_local_cpu_caps(u16 scope_mask)
 			/*
 			 * We have to issue cpu_enable() irrespective of
 			 * whether the CPU has it or not, as it is enabeld
-			 * system wide. It is upto the call back to take
+			 * system wide. It is up to the call back to take
 			 * appropriate action on this CPU.
 			 */
 			if (caps->cpu_enable)
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 0fc94207e69a..80b5268578a8 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -660,7 +660,7 @@ static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
 static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
 {
-	/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
+	/* Only watchpoints write FAR_EL1, otherwise it's UNKNOWN */
 	unsigned long far = read_sysreg(far_el1);
 
 	enter_from_user_mode(regs);
diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index f0c16640ef21..e24e7d8f8b61 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -94,7 +94,7 @@ SYM_CODE_START(ftrace_caller)
 	stp	x29, x30, [sp, #FREGS_SIZE]
 	add	x29, sp, #FREGS_SIZE
 
-	/* Prepare arguments for the the tracer func */
+	/* Prepare arguments for the tracer func */
 	sub	x0, x30, #AARCH64_INSN_SIZE	// ip (callsite's BL insn)
 	mov	x1, x9				// parent_ip (callsite's LR)
 	mov	x3, sp				// regs
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index a6030913cd58..00bdd1fa8151 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -547,7 +547,7 @@ SYM_CODE_START_LOCAL(__bad_stack)
 	mrs	x0, tpidrro_el0
 
 	/*
-	 * Store the original GPRs to the new stack. The orginal SP (minus
+	 * Store the original GPRs to the new stack. The original SP (minus
 	 * PT_REGS_SIZE) was stashed in tpidr_el0 by kernel_ventry.
 	 */
 	sub	sp, sp, #PT_REGS_SIZE
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index a650f5e11fc5..6e00b39059ff 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -423,7 +423,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 		return ret;
 
 	/*
-	 * When using mcount, callsites in modules may have been initalized to
+	 * When using mcount, callsites in modules may have been initialized to
 	 * call an arbitrary module PLT (which redirects to the _mcount stub)
 	 * rather than the ftrace PLT we'll use at runtime (which redirects to
 	 * the ftrace trampoline). We can ignore the old PLT when initializing
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 078910db77a4..36721a7e7855 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -296,7 +296,7 @@ void crash_post_resume(void)
 * marked as Reserved as memory was allocated via memblock_reserve().
 *
 * In hibernation, the pages which are Reserved and yet "nosave" are excluded
- * from the hibernation iamge. crash_is_nosave() does thich check for crash
+ * from the hibernation image. crash_is_nosave() does this check for crash
 * dump kernel and will reduce the total size of hibernation image.
 */
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
index d49aef2657cd..5016f7f681c0 100644
--- a/arch/arm64/kernel/probes/uprobes.c
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -122,7 +122,7 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 	struct uprobe_task *utask = current->utask;
 
 	/*
-	 * Task has received a fatal signal, so reset back to probbed
+	 * Task has received a fatal signal, so reset back to probed
 	 * address.
 	 */
 	instruction_pointer_set(regs, utask->vaddr);
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 255d12f881c2..931f317a9ffa 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -206,7 +206,7 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 /*
 * do_sdei_event() returns one of:
 *  SDEI_EV_HANDLED -  success, return to the interrupted context.
- *  SDEI_EV_FAILED  -  failure, return this error code to firmare.
+ *  SDEI_EV_FAILED  -  failure, return this error code to firmware.
 *  virtual-address -  success, return to this address.
 */
 unsigned long __kprobes do_sdei_event(struct pt_regs *regs,
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index defbab84e9e5..8b8e1320033b 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -351,7 +351,7 @@ void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 	/*
 	 * Now that the dying CPU is beyond the point of no return w.r.t.
-	 * in-kernel synchronisation, try to get the firwmare to help us to
+	 * in-kernel synchronisation, try to get the firmware to help us to
 	 * verify that it has really left the kernel before we consider
 	 * clobbering anything it might still be using.
 	 */
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 215e6d7f2df8..e76c71c54c8c 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -897,7 +897,7 @@ void __noreturn panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigne
 	__show_regs(regs);
 
 	/*
-	 * We use nmi_panic to limit the potential for recusive overflows, and
+	 * We use nmi_panic to limit the potential for recursive overflows, and
 	 * to get a better stack trace.
 	 */
 	nmi_panic(NULL, "kernel stack overflow");

From patchwork Wed Jan 3 23:16:02 2024
X-Patchwork-Submitter: Bjorn Helgaas
X-Patchwork-Id: 13510660
From: Bjorn Helgaas
To: Marc Zyngier, Oliver Upton
Cc: Randy Dunlap, linux-kernel@vger.kernel.org, Bjorn Helgaas,
    James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
    Will Deacon, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Subject: [PATCH 5/8] KVM: arm64: Fix typos
Date: Wed, 3 Jan 2024 17:16:02 -0600
Message-Id: <20240103231605.1801364-6-helgaas@kernel.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240103231605.1801364-1-helgaas@kernel.org>
References: <20240103231605.1801364-1-helgaas@kernel.org>

From: Bjorn Helgaas

Fix typos, most reported by "codespell arch/arm64".  Only touches comments,
no code changes.
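For readers who want to try the series locally, a sketch of fetching and
applying it by Message-Id with the b4 tool (assuming b4 is installed; the
mbox filename below is a placeholder, b4 derives the real one from the
thread):

  # Download the whole series from lore.kernel.org by its Message-Id
  b4 am 20240103231605.1801364-1-helgaas@kernel.org

  # Apply the resulting mbox on top of the current branch
  git am ./<generated-name>.mbx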
Signed-off-by: Bjorn Helgaas
Cc: James Morse
Cc: Suzuki K Poulose
Cc: Zenghui Yu
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.linux.dev
Reviewed-by: Randy Dunlap
Reviewed-by: Zenghui Yu
---
 arch/arm64/include/asm/kvm_hyp.h | 2 +-
 arch/arm64/kvm/arch_timer.c      | 2 +-
 arch/arm64/kvm/fpsimd.c          | 2 +-
 arch/arm64/kvm/hyp/nvhe/host.S   | 2 +-
 arch/arm64/kvm/hyp/nvhe/mm.c     | 4 ++--
 arch/arm64/kvm/inject_fault.c    | 2 +-
 arch/arm64/kvm/vgic/vgic-init.c  | 2 +-
 arch/arm64/kvm/vgic/vgic-its.c   | 4 ++--
 8 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 145ce73fc16c..3e2a1ac0c9bb 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -70,7 +70,7 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 /*
 * Without an __arch_swab32(), we fall back to ___constant_swab32(), but the
 * static inline can allow the compiler to out-of-line this. KVM always wants
- * the macro version as its always inlined.
+ * the macro version as it's always inlined.
 */
 #define __kvm_swab32(x) ___constant_swab32(x)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 13ba691b848f..ded8063b8813 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -746,7 +746,7 @@ static void kvm_timer_vcpu_load_nested_switch(struct kvm_vcpu *vcpu,
 		WARN_ON_ONCE(ret);
 
 		/*
-		 * The virtual offset behaviour is "interresting", as it
+		 * The virtual offset behaviour is "interesting", as it
 		 * always applies when HCR_EL2.E2H==0, but only when
 		 * accessed from EL1 when HCR_EL2.E2H==1. So make sure we
 		 * track E2H when putting the HV timer in "direct" mode.
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 8c1d0d4853df..571cf6eef1e1 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -117,7 +117,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 }
 
 /*
- * Called just before entering the guest once we are no longer preemptable
+ * Called just before entering the guest once we are no longer preemptible
 * and interrupts are disabled. If we have managed to run anything using
 * FP while we were preemptible (such as off the back of an interrupt),
 * then neither the host nor the guest own the FP hardware (and it was the
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 7693a6757cd7..135cfb294ee5 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -110,7 +110,7 @@ SYM_FUNC_END(__host_enter)
 *	   u64 elr, u64 par);
 */
SYM_FUNC_START(__hyp_do_panic)
-	/* Prepare and exit to the host's panic funciton. */
+	/* Prepare and exit to the host's panic function. */
 	mov	lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
 		      PSR_MODE_EL1h)
 	msr	spsr_el2, lr
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 65a7a186d7b2..daf91a7989d7 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -155,7 +155,7 @@ int hyp_back_vmemmap(phys_addr_t back)
 		start = hyp_memory[i].base;
 		start = ALIGN_DOWN((u64)hyp_phys_to_page(start), PAGE_SIZE);
 		/*
-		 * The begining of the hyp_vmemmap region for the current
+		 * The beginning of the hyp_vmemmap region for the current
 		 * memblock may already be backed by the page backing the end
 		 * the previous region, so avoid mapping it twice.
 		 */
@@ -408,7 +408,7 @@ static void *admit_host_page(void *arg)
 	return pop_hyp_memcache(host_mc, hyp_phys_to_virt);
 }
 
-/* Refill our local memcache by poping pages from the one provided by the host. */
+/* Refill our local memcache by popping pages from the one provided by the host. */
 int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
 		    struct kvm_hyp_memcache *host_mc)
 {
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 0bd93a5f21ce..a640e839848e 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -134,7 +134,7 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
 	if (vcpu_read_sys_reg(vcpu, TCR_EL1) & TTBCR_EAE) {
 		fsr = DFSR_LPAE | DFSR_FSC_EXTABT_LPAE;
 	} else {
-		/* no need to shuffle FS[4] into DFSR[10] as its 0 */
+		/* no need to shuffle FS[4] into DFSR[10] as it's 0 */
 		fsr = DFSR_FSC_EXTABT_nLPAE;
 	}
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index c8c3cb812783..a0a9badaa91c 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -309,7 +309,7 @@ int vgic_init(struct kvm *kvm)
 		vgic_lpi_translation_cache_init(kvm);
 
 	/*
-	 * If we have GICv4.1 enabled, unconditionnaly request enable the
+	 * If we have GICv4.1 enabled, unconditionally request enable the
 	 * v4 support so that we get HW-accelerated vSGIs. Otherwise, only
 	 * enable it if we present a virtual ITS to the guest.
 	 */
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index 2dad2d095160..39d8c15202e7 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -1337,8 +1337,8 @@ static int vgic_its_cmd_handle_inv(struct kvm *kvm, struct vgic_its *its,
 }
 
 /**
- * vgic_its_invall - invalidate all LPIs targetting a given vcpu
- * @vcpu: the vcpu for which the RD is targetted by an invalidation
+ * vgic_its_invall - invalidate all LPIs targeting a given vcpu
+ * @vcpu: the vcpu for which the RD is targeted by an invalidation
 *
 * Contrary to the INVALL command, this targets a RD instead of a
 * collection, and we don't need to hold the its_lock, since no ITS is