From: Zeng Tao <prime.zeng@hisilicon.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, "Michael S. Tsirkin", qemu-trivial@nongnu.org,
    Shannon Zhao, qemu-arm@nongnu.org, Zeng Tao, Igor Mammedov
Subject: [PATCH] hw/arm/acpi: Pack the SRAT processors structure by node_id ascending order
Date: Tue, 7 Jan 2020 17:18:49 +0800
Message-ID: <1578388729-55540-1-git-send-email-prime.zeng@hisilicon.com>

When booting a guest Linux with the following NUMA configuration:

-numa node,node_id=1,cpus=0-3
-numa node,node_id=0,cpus=4-7

we get the following NUMA topology in the guest:

Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        2
L1d cache:           unknown size
L1i cache:           unknown size
L2 cache:            unknown size
NUMA node0 CPU(s):   0-3
NUMA node1 CPU(s):   4-7

CPUs 0-3 are assigned to NUMA node 1 on the QEMU command line, but they
end up in NUMA node 0 inside the guest.

This happens because the Linux kernel allocates logical NUMA node IDs
in the order in which the ACPI SRAT processor structures are parsed.
Since the structures are currently emitted in CPU number order, CPU 0
is always the first to have its node ID allocated, so its proximity
domain becomes logical node 0 regardless of the node_id requested on
the command line.
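For illustration, here is a minimal sketch of that kernel-side
behaviour, modeled on acpi_map_pxm_to_node() in the kernel's
drivers/acpi/numa.c (simplified; MAX_PXM and the map initialization
here are illustrative, not the kernel's actual definitions):

#define NUMA_NO_NODE  (-1)   /* "unassigned" marker, as in the kernel */
#define MAX_PXM       256    /* illustrative bound on proximity domains */

/* proximity domain -> logical node ID; all unassigned at boot
 * (the range initializer is a GCC extension, as used in the kernel) */
static int pxm_to_node_map[MAX_PXM] = { [0 ... MAX_PXM - 1] = NUMA_NO_NODE };
static int nodes_found;

int acpi_map_pxm_to_node(int pxm)
{
    int node = pxm_to_node_map[pxm];

    /* First time this proximity domain is seen: hand out the next
     * free logical node ID, first come, first served. With SRAT
     * entries emitted in CPU number order, CPU 0's proximity domain
     * is always seen first and therefore always becomes node 0. */
    if (node == NUMA_NO_NODE) {
        node = nodes_found++;
        pxm_to_node_map[pxm] = node;
    }
    return node;
}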
To fix this, pack the SRAT processor structures in NUMA node ID order
instead of the default CPU number order.

Signed-off-by: Zeng Tao <prime.zeng@hisilicon.com>
---
 hw/arm/virt-acpi-build.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index bd5f771..497192b 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -520,7 +520,8 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     AcpiSystemResourceAffinityTable *srat;
     AcpiSratProcessorGiccAffinity *core;
     AcpiSratMemoryAffinity *numamem;
-    int i, srat_start;
+    int i, j, srat_start;
+    uint32_t node_id;
     uint64_t mem_base;
     MachineClass *mc = MACHINE_GET_CLASS(vms);
     MachineState *ms = MACHINE(vms);
@@ -530,13 +531,19 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     srat = acpi_data_push(table_data, sizeof(*srat));
     srat->reserved1 = cpu_to_le32(1);
 
-    for (i = 0; i < cpu_list->len; ++i) {
-        core = acpi_data_push(table_data, sizeof(*core));
-        core->type = ACPI_SRAT_PROCESSOR_GICC;
-        core->length = sizeof(*core);
-        core->proximity = cpu_to_le32(cpu_list->cpus[i].props.node_id);
-        core->acpi_processor_uid = cpu_to_le32(i);
-        core->flags = cpu_to_le32(1);
+    for (i = 0; i < ms->numa_state->num_nodes; ++i) {
+        for (j = 0; j < cpu_list->len; ++j) {
+            node_id = cpu_to_le32(cpu_list->cpus[j].props.node_id);
+            if (node_id != i) {
+                continue;
+            }
+            core = acpi_data_push(table_data, sizeof(*core));
+            core->type = ACPI_SRAT_PROCESSOR_GICC;
+            core->length = sizeof(*core);
+            core->proximity = node_id;
+            core->acpi_processor_uid = cpu_to_le32(j);
+            core->flags = cpu_to_le32(1);
+        }
     }
 
     mem_base = vms->memmap[VIRT_MEM].base;
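With this change, the GICC affinity structures for the example above
are emitted grouped by proximity domain: UIDs 4-7 with proximity 0
first, then UIDs 0-3 with proximity 1. Since the guest kernel now sees
proximity domain 0 first, it is expected to map it to logical node 0,
so the guest should report:

NUMA node0 CPU(s):   4-7
NUMA node1 CPU(s):   0-3

which matches the node assignment given on the QEMU command line.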