From patchwork Sat Jun 13 21:36:17 2020
X-Patchwork-Submitter: Salil Mehta
X-Patchwork-Id: 11603007
From: Salil Mehta <salil.mehta@huawei.com>
Subject: [PATCH RFC 10/22] arm/cpuhp: Update CPUs AML with cpu-(ctrl)dev change
Date: Sat, 13 Jun 2020 22:36:17 +0100
Message-ID: <20200613213629.21984-11-salil.mehta@huawei.com>
In-Reply-To: <20200613213629.21984-1-salil.mehta@huawei.com>
References: <20200613213629.21984-1-salil.mehta@huawei.com>
List-Id: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, drjones@redhat.com, sudeep.holla@arm.com,
    gshan@redhat.com, mst@redhat.com, jiakernel2@gmail.com, maz@kernel.org,
    zhukeqian1@huawei.com, david@redhat.com, richard.henderson@linaro.org,
    linuxarm@huawei.com, eric.auger@redhat.com, james.morse@arm.com,
    catalin.marinas@arm.com, imammedo@redhat.com,
    Salil Mehta <salil.mehta@huawei.com>, pbonzini@redhat.com,
    mehta.salil.lnk@gmail.com, maran.wilson@oracle.com, will@kernel.org,
    wangxiongfeng2@huawei.com

The CPUs Control device (\\_SB.PCI0) register interface on the x86 arch is
PCI based and IO port based, hence the existing cpus AML code assumes that
the _CRS object evaluates to a system resource which describes an IO port
address. On the ARM arch, however, the CPUs control device (\\_SB.PRES)
register interface is memory-mapped, hence the _CRS object should evaluate
to a system resource which describes a memory-mapped base address.

This change updates the existing interface of the build cpus AML function
to accept both IO and MEMORY region types and to build the _CRS object
accordingly.

NOTE: Besides the above, the CPU scan shall be triggered when OSPM evaluates
the _EVT method of the GED framework, which is covered in a subsequent patch.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
---
 hw/acpi/cpu.c            | 23 ++++++++++++++++-------
 hw/arm/virt-acpi-build.c | 13 ++++++++++++-
 hw/i386/acpi-build.c     |  2 +-
 include/hw/acpi/cpu.h    |  5 +++--
 4 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index 21fe0463b9..867fdd6993 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -335,9 +335,10 @@ const VMStateDescription vmstate_cpu_hotplug = {
 #define CPU_EJECT_EVENT "CEJ0"
 
 void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
-                    hwaddr io_base,
+                    hwaddr mmap_io_base,
                     const char *res_root,
-                    const char *event_handler_method)
+                    const char *event_handler_method,
+                    AmlRegionSpace rs)
 {
     Aml *ifctx;
     Aml *field;
@@ -365,13 +366,19 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
         aml_append(cpu_ctrl_dev, aml_mutex(CPU_LOCK, 0));
 
         crs = aml_resource_template();
-        aml_append(crs, aml_io(AML_DECODE16, io_base, io_base, 1,
+        if (rs == AML_SYSTEM_IO) {
+            aml_append(crs, aml_io(AML_DECODE16, mmap_io_base, mmap_io_base, 1,
                                ACPI_CPU_HOTPLUG_REG_LEN));
+        } else {
+            aml_append(crs, aml_memory32_fixed(mmap_io_base,
+                               ACPI_CPU_HOTPLUG_REG_LEN, AML_READ_WRITE));
+        }
+
         aml_append(cpu_ctrl_dev, aml_name_decl("_CRS", crs));
 
         /* declare CPU hotplug MMIO region with related access fields */
         aml_append(cpu_ctrl_dev,
-            aml_operation_region("PRST", AML_SYSTEM_IO, aml_int(io_base),
+            aml_operation_region("PRST", rs, aml_int(mmap_io_base),
                                  ACPI_CPU_HOTPLUG_REG_LEN));
 
         field = aml_field("PRST", AML_BYTE_ACC, AML_NOLOCK,
@@ -593,9 +600,11 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
     aml_append(sb_scope, cpus_dev);
     aml_append(table, sb_scope);
 
-    method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
-    aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
-    aml_append(table, method);
+    if (event_handler_method) {
+        method = aml_method(event_handler_method, 0, AML_NOTSERIALIZED);
+        aml_append(method, aml_call0("\\_SB.CPUS." CPU_SCAN_METHOD));
+        aml_append(table, method);
+    }
 
     g_free(cphp_res_path);
 }
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index ca31f70f7f..d40540db61 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -731,7 +731,18 @@ build_dsdt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
      * the RTC ACPI device at all when using UEFI.
      */
     scope = aml_scope("\\_SB");
-    acpi_dsdt_add_cpus(scope, vms->smp_cpus);
+    /* if GED is enabled then cpus AML shall be added as part build_cpus_aml */
+    if (vms->acpi_dev) {
+        CPUHotplugFeatures opts = {
+            .acpi_1_compatible = false,
+            .has_legacy_cphp = false
+        };
+
+        build_cpus_aml(scope, ms, opts, memmap[VIRT_CPUHP_ACPI].base,
+                       "\\_SB", NULL, AML_SYSTEM_MEMORY);
+    } else {
+        acpi_dsdt_add_cpus(scope, vms->smp_cpus);
+    }
     acpi_dsdt_add_uart(scope, &memmap[VIRT_UART],
                        (irqmap[VIRT_UART] + ARM_SPI_BASE));
     acpi_dsdt_add_flash(scope, &memmap[VIRT_FLASH]);
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 473cbdfffd..4b224d80a5 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -1668,7 +1668,7 @@ build_dsdt(GArray *table_data, BIOSLinker *linker,
             .acpi_1_compatible = true, .has_legacy_cphp = true
         };
         build_cpus_aml(dsdt, machine, opts, pm->cpu_hp_io_base,
-                       "\\_SB.PCI0", "\\_GPE._E02");
+                       "\\_SB.PCI0", "\\_GPE._E02", AML_SYSTEM_IO);
     }
 
     if (pcms->memhp_io_base && nr_mem) {
diff --git a/include/hw/acpi/cpu.h b/include/hw/acpi/cpu.h
index 62f0278ba2..c3a9981dc3 100644
--- a/include/hw/acpi/cpu.h
+++ b/include/hw/acpi/cpu.h
@@ -53,9 +53,10 @@ typedef struct CPUHotplugFeatures {
 } CPUHotplugFeatures;
 
 void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
-                    hwaddr io_base,
+                    hwaddr mmap_io_base,
                     const char *res_root,
-                    const char *event_handler_method);
+                    const char *event_handler_method,
+                    AmlRegionSpace rs);
 
 void acpi_cpu_ospm_status(CPUHotplugState *cpu_st, ACPIOSTInfoList ***list);
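
Usage sketch (not part of the patch): a minimal example, under stated
assumptions, of how a board model could call the updated build_cpus_aml().
The wrapper name example_add_cpu_hotplug_aml and the cpuhp_base parameter
are illustrative only; the feature flags, the NULL event handler method and
AML_SYSTEM_MEMORY mirror the arm/virt hunk above.

#include "qemu/osdep.h"
#include "hw/boards.h"
#include "hw/acpi/aml-build.h"
#include "hw/acpi/cpu.h"

/* Illustrative helper, not part of this patch. */
static void example_add_cpu_hotplug_aml(Aml *scope, MachineState *ms,
                                        hwaddr cpuhp_base)
{
    CPUHotplugFeatures opts = {
        .acpi_1_compatible = false,   /* mirrors the arm/virt hunk above */
        .has_legacy_cphp = false,     /* no legacy CPU hotplug interface */
    };

    /*
     * AML_SYSTEM_MEMORY makes _CRS describe a Memory32Fixed resource and
     * places the "PRST" operation region in SystemMemory; AML_SYSTEM_IO
     * would keep the x86-style IO port layout. event_handler_method is
     * NULL because, with GED, the CPU scan method is invoked from _EVT
     * (added in a subsequent patch) rather than from a GPE handler such
     * as \_GPE._E02.
     */
    build_cpus_aml(scope, ms, opts, cpuhp_base, "\\_SB", NULL,
                   AML_SYSTEM_MEMORY);
}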