From patchwork Thu Jun 25 13:37:54 2020
From: Salil Mehta
Subject: [PATCH RFC 1/4] arm64: kernel: Handle disabled[(+)present] cpus in MADT/GICC during init
Date: Thu, 25 Jun 2020 14:37:54 +0100
Message-ID: <20200625133757.22332-2-salil.mehta@huawei.com>
In-Reply-To: <20200625133757.22332-1-salil.mehta@huawei.com>
References: <20200625133757.22332-1-salil.mehta@huawei.com>

With ACPI enabled, CPUs are identified by the presence of a GICC entry in the
MADT table. Each GICC entry in the MADT marks a CPU as either enabled or
disabled. At present, disabled CPUs are skipped because physical CPU hotplug
is not supported, and they remain disabled even after the kernel has booted.

To support virtual CPU hotplug (where disabled vCPUs can be hotplugged even
after the kernel has booted), QEMU will populate the MADT table with a GICC
entry for each possible (present+disabled) vCPU. During init, vCPUs are then
identified as either present or disabled. To achieve this, the following
changes are made to the present/possible vCPU handling, with the reasoning
for each:

1. Identify all possible (present+disabled) vCPUs at boot/init time and set
   their present and possible masks. In the existing code, CPUs are marked
   present quite late, within smp_prepare_cpus(), which is called in kernel
   thread context. As long as CPU hotplug is not supported, present CPUs are
   always equal to possible CPUs; with CPU hotplug enabled this assumption no
   longer holds. Hence, present CPUs should be marked while the MADT GICC
   entries are being parsed for each vCPU.

2. Set the possible mask to include disabled vCPUs. This also has to be done
   while parsing the MADT GICC entries, since the disabled-vCPU information
   is only available at this point and, in the hotplug case, possible vCPUs
   are no longer equal to present vCPUs.

3. Store the parsed MADT GICC entry even for disabled vCPUs at init time.
   This is needed because some modules, such as the PMU, register IRQs for
   every possible vCPU during init. Therefore, a valid MADT GICC entry must
   be available for every possible vCPU (see the illustrative sketch below).

4. Refactor the DT/OF path as well, to align it with these init changes in
   support of vCPU hotplug.
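Illustration only, not part of the patch: change 3 above matters because
per-CPU code such as the PMU driver walks every possible CPU at init time and
consumes the stored MADT GICC data. A minimal sketch of that pattern, assuming
the existing acpi_cpu_get_madt_gicc() accessor and <linux/acpi.h> /
<linux/cpumask.h>; the function name is made up for the example:

/* Sketch: an init-time walk over all possible CPUs, PMU-style */
static int __init example_per_cpu_acpi_init(void)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                struct acpi_madt_generic_interrupt *gicc =
                                        acpi_cpu_get_madt_gicc(cpu);
                u32 ppi = gicc->performance_interrupt; /* PMU overflow PPI */

                /*
                 * A real driver would register a per-CPU IRQ here. Without
                 * change 3, this entry would be empty for the disabled
                 * vCPUs even though they are in the possible mask.
                 */
                (void)ppi;
        }

        return 0;
}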
Signed-off-by: Salil Mehta
Signed-off-by: Xiongfeng Wang
---
 arch/arm64/kernel/smp.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index e43a8ff19f0f..51a707928302 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -509,13 +509,12 @@ static int __init smp_cpu_setup(int cpu)
 	if (ops->cpu_init(cpu))
 		return -ENODEV;
 
-	set_cpu_possible(cpu, true);
-
 	return 0;
 }
 
 static bool bootcpu_valid __initdata;
 static unsigned int cpu_count = 1;
+static unsigned int disabled_cpu_count;
 
 #ifdef CONFIG_ACPI
 static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];
@@ -534,10 +533,17 @@ struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
 static void __init
 acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 {
+	unsigned int total_cpu_count = disabled_cpu_count + cpu_count;
 	u64 hwid = processor->arm_mpidr;
 
 	if (!(processor->flags & ACPI_MADT_ENABLED)) {
+#ifndef CONFIG_ACPI_HOTPLUG_CPU
 		pr_debug("skipping disabled CPU entry with 0x%llx MPIDR\n", hwid);
+#else
+		cpu_madt_gicc[total_cpu_count] = *processor;
+		set_cpu_possible(total_cpu_count, true);
+		disabled_cpu_count++;
+#endif
 		return;
 	}
 
@@ -546,7 +552,7 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 		return;
 	}
 
-	if (is_mpidr_duplicate(cpu_count, hwid)) {
+	if (is_mpidr_duplicate(total_cpu_count, hwid)) {
 		pr_err("duplicate CPU MPIDR 0x%llx in MADT\n", hwid);
 		return;
 	}
@@ -567,9 +573,9 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 		return;
 
 	/* map the logical cpu id to cpu MPIDR */
-	cpu_logical_map(cpu_count) = hwid;
+	cpu_logical_map(total_cpu_count) = hwid;
 
-	cpu_madt_gicc[cpu_count] = *processor;
+	cpu_madt_gicc[total_cpu_count] = *processor;
 
 	/*
 	 * Set-up the ACPI parking protocol cpu entries
@@ -580,8 +586,10 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 	 * initialize the cpu if the parking protocol is
 	 * the only available enable method).
 	 */
-	acpi_set_mailbox_entry(cpu_count, processor);
+	acpi_set_mailbox_entry(total_cpu_count, processor);
 
+	set_cpu_possible(total_cpu_count, true);
+	set_cpu_present(total_cpu_count, true);
 	cpu_count++;
 }
 
@@ -614,6 +622,9 @@ static void __init acpi_parse_and_init_cpus(void)
 	acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
 			      acpi_parse_gic_cpu_interface, 0);
 
+	pr_debug("possible cpus(%u) present cpus(%u) disabled cpus(%u)\n",
+		  cpu_count+disabled_cpu_count, cpu_count, disabled_cpu_count);
+
 	/*
 	 * In ACPI, SMP and CPU NUMA information is provided in separate
 	 * static tables, namely the MADT and the SRAT.
@@ -684,6 +695,9 @@ static void __init of_parse_and_init_cpus(void)
 		cpu_logical_map(cpu_count) = hwid;
 
 		early_map_cpu_to_node(cpu_count, of_node_to_nid(dn));
+
+		set_cpu_possible(cpu_count, true);
+		set_cpu_present(cpu_count, true);
 next:
 		cpu_count++;
 	}
@@ -768,7 +782,6 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 		if (err)
 			continue;
 
-		set_cpu_present(cpu, true);
 		numa_store_cpu_info(cpu);
 	}
 }

From patchwork Thu Jun 25 13:37:55 2020
From: Salil Mehta
Subject: [PATCH RFC 2/4] arm64: kernel: Bound the total(present+disabled) cpus with nr_cpu_ids
Date: Thu, 25 Jun 2020 14:37:55 +0100
Message-ID: <20200625133757.22332-3-salil.mehta@huawei.com>
In-Reply-To: <20200625133757.22332-1-salil.mehta@huawei.com>
References: <20200625133757.22332-1-salil.mehta@huawei.com>

Bound the total number of identified CPUs (including disabled CPUs) by the
maximum limit allowed by the kernel. The maximum is either specified via the
'nr_cpus' kernel parameter or fixed at compile time through CONFIG_NR_CPUS.
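Illustration only (the scenario is assumed, not taken from the patch): if the
VMM exposes, say, six possible vCPUs but the kernel is booted with nr_cpus=4,
entries beyond the runtime limit nr_cpu_ids are ignored while parsing and a
single warning is printed later from smp_init_cpus(). The guard the patch adds
per parsed MADT GICC entry amounts to:

        /* Sketch of the bound check, mirroring the hunk below */
        if (total_cpu_count > nr_cpu_ids) {
                cpus_clipped = true;    /* remember to warn once, later */
                return;                 /* drop this GICC entry */
        }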
Signed-off-by: Salil Mehta
Signed-off-by: Xiongfeng Wang
---
 arch/arm64/kernel/smp.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 51a707928302..864ccd3da419 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -513,6 +513,7 @@ static int __init smp_cpu_setup(int cpu)
 }
 
 static bool bootcpu_valid __initdata;
+static bool cpus_clipped __initdata = false;
 static unsigned int cpu_count = 1;
 static unsigned int disabled_cpu_count;
 
@@ -536,6 +537,11 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 	unsigned int total_cpu_count = disabled_cpu_count + cpu_count;
 	u64 hwid = processor->arm_mpidr;
 
+	if (total_cpu_count > nr_cpu_ids) {
+		cpus_clipped = true;
+		return;
+	}
+
 	if (!(processor->flags & ACPI_MADT_ENABLED)) {
 #ifndef CONFIG_ACPI_HOTPLUG_CPU
 		pr_debug("skipping disabled CPU entry with 0x%llx MPIDR\n", hwid);
@@ -569,9 +575,6 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 		return;
 	}
 
-	if (cpu_count >= NR_CPUS)
-		return;
-
 	/* map the logical cpu id to cpu MPIDR */
 	cpu_logical_map(total_cpu_count) = hwid;
 
@@ -688,8 +691,10 @@ static void __init of_parse_and_init_cpus(void)
 			continue;
 		}
 
-		if (cpu_count >= NR_CPUS)
+		if (cpu_count >= NR_CPUS) {
+			cpus_clipped = true;
 			goto next;
+		}
 
 		pr_debug("cpu logical map 0x%llx\n", hwid);
 		cpu_logical_map(cpu_count) = hwid;
@@ -710,6 +715,7 @@ static void __init of_parse_and_init_cpus(void)
  */
 void __init smp_init_cpus(void)
 {
+	unsigned int total_cpu_count = disabled_cpu_count + cpu_count;
 	int i;
 
 	if (acpi_disabled)
@@ -717,9 +723,9 @@ void __init smp_init_cpus(void)
 	else
 		acpi_parse_and_init_cpus();
 
-	if (cpu_count > nr_cpu_ids)
+	if (cpus_clipped)
 		pr_warn("Number of cores (%d) exceeds configured maximum of %u - clipping\n",
-			cpu_count, nr_cpu_ids);
+			total_cpu_count, nr_cpu_ids);
 
 	if (!bootcpu_valid) {
 		pr_err("missing boot CPU MPIDR, not enabling secondaries\n");

From patchwork Thu Jun 25 13:37:56 2020
From: Salil Mehta
Subject: [PATCH RFC 3/4] arm64: kernel: Init cpu operations for all possible vcpus
Date: Thu, 25 Jun 2020 14:37:56 +0100
Message-ID: <20200625133757.22332-4-salil.mehta@huawei.com>
In-Reply-To: <20200625133757.22332-1-salil.mehta@huawei.com>
References: <20200625133757.22332-1-salil.mehta@huawei.com>

Currently, cpu-operations are initialized only for CPUs that already have a
logical cpuid to hwid association established, which happens only for CPUs
present at boot time. To support virtual CPU hotplug, the cpu-operations must
be initialized for all possible (present+disabled) vCPUs. This means the
logical cpuid to hwid/mpidr association might not exist yet (i.e. it may be
INVALID_HWID) during init. Later, when a vCPU is actually hotplugged, a
logical cpuid is allocated and associated with its hwid/mpidr.

This patch does the refactoring needed to support the change described above
(see the worked example below).
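Worked example of the state this patch permits (the numbers are assumed for
illustration, not taken from the patch): a guest booted with two present and
two disabled vCPUs initializes cpu-operations for all four, while only the
booted ones carry a valid MPIDR.

/*
 * Illustrative post-boot state with 2 present + 2 disabled vCPUs:
 *   cpu_possible_mask     = 0-3
 *   cpu_present_mask      = 0-1
 *   cpu_logical_map(0..1) = MPIDR of the booted vCPUs
 *   cpu_logical_map(2..3) = INVALID_HWID (filled in only when the vCPU
 *                           is hotplugged and mapped, see next patch)
 */
static bool __init example_vcpu_is_mapped(int cpu)
{
        return cpu_logical_map(cpu) != INVALID_HWID;
}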
Signed-off-by: Salil Mehta
Signed-off-by: Xiongfeng Wang
---
 arch/arm64/kernel/smp.c | 38 ++++++++++++++++----------------------
 1 file changed, 16 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 864ccd3da419..63f31ea23e55 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -503,13 +503,16 @@ static int __init smp_cpu_setup(int cpu)
 	const struct cpu_operations *ops;
 
 	if (init_cpu_ops(cpu))
-		return -ENODEV;
+		goto out;
 
 	ops = get_cpu_ops(cpu);
 	if (ops->cpu_init(cpu))
-		return -ENODEV;
+		goto out;
 
 	return 0;
+out:
+	cpu_logical_map(cpu) = INVALID_HWID;
+	return -ENODEV;
 }
 
 static bool bootcpu_valid __initdata;
@@ -547,7 +550,8 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 		pr_debug("skipping disabled CPU entry with 0x%llx MPIDR\n", hwid);
 #else
 		cpu_madt_gicc[total_cpu_count] = *processor;
-		set_cpu_possible(total_cpu_count, true);
+		if (!smp_cpu_setup(total_cpu_count))
+			set_cpu_possible(total_cpu_count, true);
 		disabled_cpu_count++;
 #endif
 		return;
@@ -591,8 +595,11 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
 	 */
 	acpi_set_mailbox_entry(total_cpu_count, processor);
 
-	set_cpu_possible(total_cpu_count, true);
-	set_cpu_present(total_cpu_count, true);
+	if (!smp_cpu_setup(total_cpu_count)) {
+		set_cpu_possible(total_cpu_count, true);
+		set_cpu_present(total_cpu_count, true);
+	}
+
 	cpu_count++;
 }
 
@@ -701,8 +708,10 @@ static void __init of_parse_and_init_cpus(void)
 
 		early_map_cpu_to_node(cpu_count, of_node_to_nid(dn));
 
-		set_cpu_possible(cpu_count, true);
-		set_cpu_present(cpu_count, true);
+		if (!smp_cpu_setup(cpu_count)) {
+			set_cpu_possible(cpu_count, true);
+			set_cpu_present(cpu_count, true);
+		}
 next:
 		cpu_count++;
 	}
@@ -716,7 +725,6 @@ static void __init of_parse_and_init_cpus(void)
 void __init smp_init_cpus(void)
 {
 	unsigned int total_cpu_count = disabled_cpu_count + cpu_count;
-	int i;
 
 	if (acpi_disabled)
 		of_parse_and_init_cpus();
@@ -731,20 +739,6 @@ void __init smp_init_cpus(void)
 		pr_err("missing boot CPU MPIDR, not enabling secondaries\n");
 		return;
 	}
-
-	/*
-	 * We need to set the cpu_logical_map entries before enabling
-	 * the cpus so that cpu processor description entries (DT cpu nodes
-	 * and ACPI MADT entries) can be retrieved by matching the cpu hwid
-	 * with entries in cpu_logical_map while initializing the cpus.
-	 * If the cpu set-up fails, invalidate the cpu_logical_map entry.
-	 */
-	for (i = 1; i < nr_cpu_ids; i++) {
-		if (cpu_logical_map(i) != INVALID_HWID) {
-			if (smp_cpu_setup(i))
-				cpu_logical_map(i) = INVALID_HWID;
-		}
-	}
 }
 
 void __init smp_prepare_cpus(unsigned int max_cpus)

From patchwork Thu Jun 25 13:37:57 2020
From: Salil Mehta
Subject: [PATCH RFC 4/4] arm64: kernel: Arch specific ACPI hooks(like logical cpuid<->hwid etc.)
Date: Thu, 25 Jun 2020 14:37:57 +0100
Message-ID: <20200625133757.22332-5-salil.mehta@huawei.com>
In-Reply-To: <20200625133757.22332-1-salil.mehta@huawei.com>
References: <20200625133757.22332-1-salil.mehta@huawei.com>

To support virtual CPU hotplug, some arch-specific hooks must be provided.
These hooks are called by the generic ACPI CPU hotplug framework while
handling a vCPU hot-(un)plug event. The changes required involve:

1. Allocating the logical cpuid corresponding to the hwid/mpidr.
2. Mapping the logical cpuid to the hwid/mpidr and marking it present.
3. Removing the vCPU from the present mask during hot-unplug.
4. For arm64, all possible CPUs are registered within topology_init(). Hence,
   we need to override the weak ACPI call arch_register_cpu() (which returns
   -ENODEV) and return success.
5. The NUMA node mapping set for this vCPU from the SRAT table at init time
   is discarded, as the logical cpu-ids used at that time might not be
   correct. The mapping is set again using the proximity/node info obtained
   by evaluating the _PXM ACPI method.

Note: during hot-unplug of a vCPU, the association between the logical cpuid
and the hwid/mpidr is not unmapped; it remains persistent. A rough sketch of
the expected hot-add flow is shown below.
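For orientation only, a hedged sketch of the expected calling order on a vCPU
hot-add; example_vcpu_hot_add() is a made-up wrapper, not real driver code,
and the ordering is an assumption about the generic ACPI framework:

static int example_vcpu_hot_add(acpi_handle handle, phys_cpuid_t physid,
                                u32 acpi_id)
{
        int cpu, ret;

        /* 1 + 2: allocate a logical cpuid, map it to the MPIDR, mark present */
        ret = acpi_map_cpu(handle, physid, acpi_id, &cpu);
        if (ret)
                return ret;

        /* 5: the NUMA node is (re)set inside acpi_map_cpu() via _PXM */

        /* 4: register the CPU device; overridden below to return success */
        return arch_register_cpu(cpu);
}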
Signed-off-by: Salil Mehta
Signed-off-by: Xiongfeng Wang
---
 arch/arm64/kernel/smp.c | 80 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 63f31ea23e55..f3315840e829 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -528,6 +528,86 @@ struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
 	return &cpu_madt_gicc[cpu];
 }
 
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+int arch_register_cpu(int num)
+{
+	return 0;
+}
+
+static int set_numa_node_for_cpu(acpi_handle handle, int cpu)
+{
+#ifdef CONFIG_ACPI_NUMA
+	int node_id;
+
+	/* will evaluate _PXM */
+	node_id = acpi_get_node(handle);
+	if (node_id != NUMA_NO_NODE)
+		set_cpu_numa_node(cpu, node_id);
+#endif
+	return 0;
+}
+
+static void unset_numa_node_for_cpu(int cpu)
+{
+#ifdef CONFIG_ACPI_NUMA
+	set_cpu_numa_node(cpu, NUMA_NO_NODE);
+#endif
+}
+
+static int allocate_logical_cpuid(u64 physid)
+{
+	int first_invalid_idx = -1;
+	bool first = true;
+	int i;
+
+	for_each_possible_cpu(i) {
+		/*
+		 * logical cpuid<->hwid association remains persistent once
+		 * established
+		 */
+		if (cpu_logical_map(i) == physid)
+			return i;
+
+		if ((cpu_logical_map(i) == INVALID_HWID) && first) {
+			first_invalid_idx = i;
+			first = false;
+		}
+	}
+
+	return first_invalid_idx;
+}
+
+int acpi_unmap_cpu(int cpu)
+{
+	set_cpu_present(cpu, false);
+	unset_numa_node_for_cpu(cpu);
+
+	return 0;
+}
+
+int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
+		 int *cpuid)
+{
+	int cpu;
+
+	cpu = allocate_logical_cpuid(physid);
+	if (cpu < 0) {
+		pr_warn("Unable to map logical cpuid to physid 0x%llx\n",
+			physid);
+		return -ENOSPC;
+	}
+
+	/* map the logical cpu id to cpu MPIDR */
+	cpu_logical_map(cpu) = physid;
+	set_numa_node_for_cpu(handle, cpu);
+
+	set_cpu_present(cpu, true);
+	*cpuid = cpu;
+
+	return 0;
+}
+#endif
+
 /*
  * acpi_map_gic_cpu_interface - parse processor MADT entry
  *