From patchwork Fri Aug 12 06:54:04 2016
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 9276523
From: Xiao Guangrong <guangrong.xiao@linux.intel.com>
To: pbonzini@redhat.com, imammedo@redhat.com
Cc: gleb@kernel.org, mtosatti@redhat.com, stefanha@redhat.com, mst@redhat.com,
    rth@twiddle.net, ehabkost@redhat.com, dan.j.williams@intel.com,
    kvm@vger.kernel.org, qemu-devel@nongnu.org,
    Xiao Guangrong <guangrong.xiao@linux.intel.com>
Subject: [PATCH v2 2/8] nvdimm acpi: prebuild nvdimm devices for available slots
Date: Fri, 12 Aug 2016 14:54:04 +0800
Message-Id: <1470984850-66891-3-git-send-email-guangrong.xiao@linux.intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1470984850-66891-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1470984850-66891-1-git-send-email-guangrong.xiao@linux.intel.com>

For each NVDIMM present or intended to be supported by the platform,
platform firmware also exposes an ACPI Namespace Device under the root
device.

So build nvdimm devices for all slots to support vNVDIMM hotplug.

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/nvdimm.c        | 41 ++++++++++++++++++++++++-----------------
 hw/i386/acpi-build.c    |  2 +-
 include/hw/mem/nvdimm.h |  3 ++-
 3 files changed, 27 insertions(+), 19 deletions(-)
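Note on the idea before the diff below: instead of walking the list of
currently plugged NVDIMMs, the SSDT builder now loops over every possible
memory slot and emits one namespace device per slot, so a device already
exists when an NVDIMM is hotplugged later. The following standalone sketch
(not QEMU code) shows that shape. slot_to_handle() assumes the handle is
simply slot + 1 with handle 0 reserved for the root device, mirroring the
convention visible in the diff, and build_slot_device() is a hypothetical
stand-in for the aml_device()/_DSM construction done in hw/acpi/nvdimm.c.

/*
 * Minimal standalone sketch (not QEMU code) of the "prebuild a device
 * per slot" approach.  slot_to_handle() and build_slot_device() are
 * illustrative stand-ins for nvdimm_slot_to_handle() and the AML
 * construction in hw/acpi/nvdimm.c.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assumption: handle 0 is reserved for the root device (as the patch's
 * context lines note), so slot N maps to handle N + 1. */
static uint32_t slot_to_handle(uint32_t slot)
{
    return slot + 1;
}

/* Hypothetical stand-in: the real code builds an AML Device("NVxx")
 * with _ADR set to the handle plus a per-device _DSM method; here we
 * only print what would be generated. */
static void build_slot_device(uint32_t slot, uint32_t handle)
{
    printf("Device NV%02" PRIX32 ": _ADR = %" PRIu32 "\n", slot, handle);
}

/* Mirrors the new nvdimm_build_nvdimm_devices(): iterate over every
 * possible slot, plugged or not, rather than over the list of
 * currently plugged NVDIMMs. */
static void prebuild_nvdimm_devices(uint32_t ram_slots)
{
    uint32_t slot;

    for (slot = 0; slot < ram_slots; slot++) {
        build_slot_device(slot, slot_to_handle(slot));
    }
}

int main(void)
{
    prebuild_nvdimm_devices(4);   /* e.g. a machine with 4 memory slots */
    return 0;
}

As the nvdimm_build_acpi() hunk shows, only the NFIT still depends on which
NVDIMMs are actually plugged (device_list); the SSDT is built whenever
ram_slots is non-zero.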
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 5454c0f..0e2b9f0 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -886,12 +886,11 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
     aml_append(dev, method);
 }
 
-static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
+static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
 {
-    for (; device_list; device_list = device_list->next) {
-        DeviceState *dev = device_list->data;
-        int slot = object_property_get_int(OBJECT(dev), PC_DIMM_SLOT_PROP,
-                                           NULL);
+    uint32_t slot;
+
+    for (slot = 0; slot < ram_slots; slot++) {
         uint32_t handle = nvdimm_slot_to_handle(slot);
         Aml *nvdimm_dev;
 
@@ -912,9 +911,9 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
     }
 }
 
-static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
-                              GArray *table_data, BIOSLinker *linker,
-                              GArray *dsm_dma_arrea)
+static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
+                              BIOSLinker *linker, GArray *dsm_dma_arrea,
+                              uint32_t ram_slots)
 {
     Aml *ssdt, *sb_scope, *dev, *field;
     int mem_addr_offset, nvdimm_ssdt;
@@ -1003,7 +1002,7 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
     /* 0 is reserved for root device. */
     nvdimm_build_device_dsm(dev, 0);
 
-    nvdimm_build_nvdimm_devices(device_list, dev);
+    nvdimm_build_nvdimm_devices(dev, ram_slots);
 
     aml_append(sb_scope, dev);
     aml_append(ssdt, sb_scope);
@@ -1028,17 +1027,25 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
 }
 
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
-                       BIOSLinker *linker, GArray *dsm_dma_arrea)
+                       BIOSLinker *linker, GArray *dsm_dma_arrea,
+                       uint32_t ram_slots)
 {
     GSList *device_list;
 
-    /* no NVDIMM device is plugged. */
     device_list = nvdimm_get_plugged_device_list();
-    if (!device_list) {
-        return;
+
+    /* NVDIMM device is plugged. */
+    if (device_list) {
+        nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
+        g_slist_free(device_list);
+    }
+
+    /*
+     * NVDIMM device is allowed to be plugged only if there is an available
+     * slot.
+     */
+    if (ram_slots) {
+        nvdimm_build_ssdt(table_offsets, table_data, linker, dsm_dma_arrea,
+                          ram_slots);
     }
-    nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
-    nvdimm_build_ssdt(device_list, table_offsets, table_data, linker,
-                      dsm_dma_arrea);
-    g_slist_free(device_list);
 }
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index a26a4bb..b1d0ced 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2712,7 +2712,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
     }
     if (pcms->acpi_nvdimm_state.is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
-                          pcms->acpi_nvdimm_state.dsm_mem);
+                          pcms->acpi_nvdimm_state.dsm_mem, machine->ram_slots);
     }
 
     /* Add tables supplied by user (if any) */
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 1cfe9e0..63a2b20 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -112,5 +112,6 @@ typedef struct AcpiNVDIMMState AcpiNVDIMMState;
 void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
                             FWCfgState *fw_cfg, Object *owner);
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
-                       BIOSLinker *linker, GArray *dsm_dma_arrea);
+                       BIOSLinker *linker, GArray *dsm_dma_arrea,
+                       uint32_t ram_slots);
 #endif