From patchwork Thu Nov  3 03:51:29 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 9410161
From: Xiao Guangrong
To: pbonzini@redhat.com, imammedo@redhat.com
Date: Thu, 3 Nov 2016 11:51:29 +0800
Message-Id: <1478145090-11987-3-git-send-email-guangrong.xiao@linux.intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1478145090-11987-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1478145090-11987-1-git-send-email-guangrong.xiao@linux.intel.com>
Subject: [Qemu-devel] [PATCH v4 2/3] nvdimm acpi: introduce _FIT
Cc: Xiao Guangrong, ehabkost@redhat.com, kvm@vger.kernel.org,
    mst@redhat.com, gleb@kernel.org, mtosatti@redhat.com,
    qemu-devel@nongnu.org, stefanha@redhat.com, dan.j.williams@intel.com,
    rth@twiddle.net

_FIT is required for hotplug support: the guest inquires the updated
device info from it when a hotplug event is received.

As the FIT buffer is not completely mapped into the guest address space,
a new function, Read FIT (UUID 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62,
handle 0x10000, function index 0x1), is reserved by QEMU to read the
FIT buffer piece by piece. The pieces are concatenated before _FIT
returns.

Refer to docs/specs/acpi_nvdimm.txt for the detailed design.

Signed-off-by: Xiao Guangrong
---
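Note: the following is an illustrative model of the read loop described
above, not part of the patch. It sketches how an OSPM-side consumer of
the Read FIT function could assemble the whole FIT, assuming a
hypothetical read_fit() transport wrapper and the 0x100 "FIT changed"
restart rule; all names and signatures here are invented for
illustration.

    #include <stdint.h>
    #include <string.h>

    #define FIT_CHANGED 0x100 /* FIT was updated mid-read: restart */

    /* Hypothetical wrapper: one Read FIT call; fills buf, sets *len to
     * the number of valid bytes, returns the status code. */
    extern uint32_t read_fit(uint32_t offset, uint8_t *buf, uint32_t *len);

    /* Accumulate the whole FIT into 'fit' (assumed large enough),
     * restarting whenever QEMU reports the FIT changed mid-read. */
    static uint32_t fetch_fit(uint8_t *fit, uint32_t *fit_len)
    {
        uint32_t offset = 0, len, status;
        uint8_t page[4096];

        *fit_len = 0;
        for (;;) {
            status = read_fit(offset, page, &len);
            if (status == FIT_CHANGED) { /* hotplug raced with us */
                offset = 0;              /* drop partial data, start over */
                *fit_len = 0;
                continue;
            }
            if (status != 0) {
                return status;           /* hard error */
            }
            if (len == 0) {
                return 0;                /* zero length: all data read */
            }
            memcpy(fit + *fit_len, page, len);
            *fit_len += len;
            offset += len;               /* next offset = offset + length */
        }
    }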
 docs/specs/acpi_nvdimm.txt |  63 ++++++++++++-
 hw/acpi/nvdimm.c           | 225 ++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 271 insertions(+), 17 deletions(-)

diff --git a/docs/specs/acpi_nvdimm.txt b/docs/specs/acpi_nvdimm.txt
index 0fdd251..364e832 100644
--- a/docs/specs/acpi_nvdimm.txt
+++ b/docs/specs/acpi_nvdimm.txt
@@ -65,8 +65,8 @@ _FIT(Firmware Interface Table)
    The detailed definition of the structure can be found at ACPI 6.0: 5.2.25
    NVDIMM Firmware Interface Table (NFIT).
 
-QEMU NVDIMM Implemention
-========================
+QEMU NVDIMM Implementation
+==========================
 QEMU uses 4 bytes IO Port starting from 0x0a18 and a RAM-based memory page
 for NVDIMM ACPI.
 
@@ -82,6 +82,16 @@ Memory:
 ACPI writes _DSM Input Data (based on the offset in the page):
    [0x0 - 0x3]: 4 bytes, NVDIMM Device Handle, 0 is reserved for NVDIMM
                 Root device.
+
+   The handle is completely a QEMU-internal thing; the values in the
+   range [0, 0xFFFF] indicate an nvdimm device (0 means the nvdimm
+   root device named NVDR), other values are reserved for other
+   purposes.
+
+   Current reserved handle:
+   0x10000 is reserved for the QEMU internal DSM function called on
+   the root device.
+
    [0x4 - 0x7]: 4 bytes, Revision ID, that is the Arg1 of _DSM method.
    [0x8 - 0xB]: 4 bytes. Function Index, that is the Arg2 of _DSM method.
    [0xC - 0xFFF]: 4084 bytes, the Arg3 of _DSM method.
@@ -127,6 +137,49 @@ _DSM process diagram:
 | result from the page     |    |              |
 +--------------------------+    +--------------+
 
-_FIT implementation
--------------------
-TODO (will fill it when nvdimm hotplug is introduced)
+QEMU internal use only _DSM function
+------------------------------------
+This function is introduced by QEMU and is used by QEMU internally only.
+
+1) Read FIT
+   UUID 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62 is reserved for the
+   Read_FIT DSM function (a private QEMU function).
+
+   The _FIT method uses the Read_FIT function to fetch the NFIT
+   structures blob from QEMU in page-sized increments, which are then
+   concatenated and returned as the _FIT method result.
+
+   Input parameters:
+   Arg0 - UUID {set to 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62}
+   Arg1 - Revision ID (set to 1)
+   Arg2 - Function Index, 0x1
+   Arg3 - A package containing a buffer whose layout is as follows:
+
+   +----------+--------+--------+-------------------------------------------+
+   |  Field   | Length | Offset |                Description                |
+   +----------+--------+--------+-------------------------------------------+
+   | offset   |   4    |   0    | offset in QEMU's NFIT structures blob to  |
+   |          |        |        | read from                                 |
+   +----------+--------+--------+-------------------------------------------+
+
+   Output:
+   +----------+--------+--------+-------------------------------------------+
+   |  Field   | Length | Offset |                Description                |
+   +----------+--------+--------+-------------------------------------------+
+   |          |        |        | return status codes                       |
+   |          |        |        | 0x100 - error caused by an NFIT update    |
+   | status   |   4    |   0    | while the read by _FIT was not yet        |
+   |          |        |        | completed; other codes follow Chapter 3   |
+   |          |        |        | in DSM Spec Rev1                          |
+   +----------+--------+--------+-------------------------------------------+
+   | length   |   4    |   4    | the size of the FIT data read             |
+   +----------+--------+--------+-------------------------------------------+
+   | fit data | Varies |   8    | FIT data; its size is indicated by the    |
+   |          |        |        | length field above                        |
+   +----------+--------+--------+-------------------------------------------+
+
+   The FIT offset is maintained by the OSPM itself; the current offset
+   plus the length returned by the function is the next offset to read.
+   When all the FIT data has been read out, a zero length is returned.
+
+   If the function returns 0x100, the OSPM should restart reading the
+   FIT (read from offset 0 again).
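For reference, the Output table above corresponds to the following C
view of the buffer returned to the Read_FIT caller (a sketch only; the
struct name is hypothetical and a little-endian guest is assumed):

    #include <stdint.h>

    struct read_fit_out {
        uint32_t status; /* 0 on success, 0x100 if the FIT changed */
        uint32_t length; /* valid bytes in fit[]; 0 means read complete */
        uint8_t  fit[];  /* FIT data, starting at byte offset 8 */
    } __attribute__((packed));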
diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 9fee077..593ac0d 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -484,6 +484,23 @@ typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
 QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
                   offsetof(NvdimmDsmIn, arg3) > 4096);
 
+struct NvdimmFuncReadFITIn {
+    uint32_t offset; /* the offset into the FIT buffer. */
+} QEMU_PACKED;
+typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
+QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
+                  offsetof(NvdimmDsmIn, arg3) > 4096);
+
+struct NvdimmFuncReadFITOut {
+    /* the size of the buffer filled by QEMU. */
+    uint32_t len;
+    uint32_t func_ret_status; /* return status code. */
+    uint32_t length; /* the length of the fit data. */
+    uint8_t fit[0]; /* the FIT data. */
+} QEMU_PACKED;
+typedef struct NvdimmFuncReadFITOut NvdimmFuncReadFITOut;
+QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITOut) > 4096);
+
 static void
 nvdimm_dsm_function0(uint32_t supported_func, hwaddr dsm_mem_addr)
 {
@@ -504,6 +521,77 @@ nvdimm_dsm_no_payload(uint32_t func_ret_status, hwaddr dsm_mem_addr)
     cpu_physical_memory_write(dsm_mem_addr, &out, sizeof(out));
 }
 
+#define NVDIMM_DSM_RET_STATUS_SUCCESS        0      /* Success */
+#define NVDIMM_DSM_RET_STATUS_UNSUPPORT      1      /* Not Supported */
+#define NVDIMM_DSM_RET_STATUS_INVALID        3      /* Invalid Input Parameters */
+#define NVDIMM_DSM_RET_STATUS_FIT_CHANGED    0x100  /* FIT Changed */
+
+#define NVDIMM_QEMU_RSVD_HANDLE_ROOT         0x10000
+
+/* Read FIT data, defined in docs/specs/acpi_nvdimm.txt. */
+static void nvdimm_dsm_func_read_fit(AcpiNVDIMMState *state, NvdimmDsmIn *in,
+                                     hwaddr dsm_mem_addr)
+{
+    NvdimmFitBuffer *fit_buf = &state->fit_buf;
+    NvdimmFuncReadFITIn *read_fit;
+    NvdimmFuncReadFITOut *read_fit_out;
+    GArray *fit;
+    uint32_t read_len = 0, func_ret_status, offset;
+    int size;
+
+    read_fit = (NvdimmFuncReadFITIn *)in->arg3;
+    offset = le32_to_cpu(read_fit->offset);
+
+    fit = fit_buf->fit;
+
+    nvdimm_debug("Read FIT: offset %#x FIT size %#x Dirty %s.\n",
+                 offset, fit->len, fit_buf->dirty ? "Yes" : "No");
+
+    /* This is the first read of the FIT in this pass. */
+    if (!offset) {
+        fit_buf->dirty = false;
+    } else if (fit_buf->dirty) { /* FIT has been changed during RFIT. */
+        func_ret_status = NVDIMM_DSM_RET_STATUS_FIT_CHANGED;
+        goto exit;
+    }
+
+    if (offset > fit->len) {
+        func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
+        goto exit;
+    }
+
+    func_ret_status = NVDIMM_DSM_RET_STATUS_SUCCESS;
+    read_len = MIN(fit->len - offset, 4096 - sizeof(NvdimmFuncReadFITOut));
+
+exit:
+    size = sizeof(NvdimmFuncReadFITOut) + read_len;
+    read_fit_out = g_malloc(size);
+
+    read_fit_out->len = cpu_to_le32(size);
+    read_fit_out->func_ret_status = cpu_to_le32(func_ret_status);
+    read_fit_out->length = cpu_to_le32(read_len);
+    memcpy(read_fit_out->fit, fit->data + offset, read_len);
+
+    cpu_physical_memory_write(dsm_mem_addr, read_fit_out, size);
+
+    g_free(read_fit_out);
+}
+
+static void nvdimm_dsm_reserved_root(AcpiNVDIMMState *state, NvdimmDsmIn *in,
+                                     hwaddr dsm_mem_addr)
+{
+    switch (in->function) {
+    case 0x0:
+        nvdimm_dsm_function0(0x1 | 1 << 1 /* Read FIT */, dsm_mem_addr);
+        return;
+    case 0x1 /* Read FIT */:
+        nvdimm_dsm_func_read_fit(state, in, dsm_mem_addr);
+        return;
+    }
+
+    nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
+}
+
 static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
 {
     /*
@@ -563,7 +651,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
     nvdimm_debug("label_size %#x, max_xfer %#x.\n", label_size, mxfer);
 
-    label_size_out.func_ret_status = cpu_to_le32(0 /* Success */);
+    label_size_out.func_ret_status = cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
     label_size_out.label_size = cpu_to_le32(label_size);
     label_size_out.max_xfer = cpu_to_le32(mxfer);
 
@@ -574,7 +662,7 @@ static void nvdimm_dsm_label_size(NVDIMMDevice *nvdimm, hwaddr dsm_mem_addr)
 static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
                                            uint32_t offset, uint32_t length)
 {
-    uint32_t ret = 3 /* Invalid Input Parameters */;
+    uint32_t ret = NVDIMM_DSM_RET_STATUS_INVALID;
 
     if (offset + length < offset) {
         nvdimm_debug("offset %#x + length %#x is overflow.\n", offset,
@@ -594,7 +682,7 @@ static uint32_t nvdimm_rw_label_data_check(NVDIMMDevice *nvdimm,
         return ret;
     }
 
-    return 0 /* Success */;
+    return NVDIMM_DSM_RET_STATUS_SUCCESS;
 }
 
 /*
@@ -618,7 +706,7 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
     status = nvdimm_rw_label_data_check(nvdimm, get_label_data->offset,
                                         get_label_data->length);
-    if (status != 0 /* Success */) {
+    if (status != NVDIMM_DSM_RET_STATUS_SUCCESS) {
         nvdimm_dsm_no_payload(status, dsm_mem_addr);
         return;
     }
@@ -628,7 +716,8 @@ static void nvdimm_dsm_get_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
     get_label_data_out = g_malloc(size);
 
     get_label_data_out->len = cpu_to_le32(size);
-    get_label_data_out->func_ret_status = cpu_to_le32(0 /* Success */);
+    get_label_data_out->func_ret_status =
+                                cpu_to_le32(NVDIMM_DSM_RET_STATUS_SUCCESS);
 
     nvc->read_label_data(nvdimm, get_label_data_out->out_buf,
                          get_label_data->length, get_label_data->offset);
 
@@ -656,7 +745,7 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
     status = nvdimm_rw_label_data_check(nvdimm, set_label_data->offset,
                                         set_label_data->length);
-    if (status != 0 /* Success */) {
+    if (status != NVDIMM_DSM_RET_STATUS_SUCCESS) {
         nvdimm_dsm_no_payload(status, dsm_mem_addr);
         return;
     }
@@ -666,7 +755,7 @@ static void nvdimm_dsm_set_label_data(NVDIMMDevice *nvdimm, NvdimmDsmIn *in,
     nvc->write_label_data(nvdimm, set_label_data->in_buf,
                           set_label_data->length, set_label_data->offset);
 
-    nvdimm_dsm_no_payload(0 /* Success */, dsm_mem_addr);
+    nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_SUCCESS, dsm_mem_addr);
 }
 
 static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
@@ -717,7 +806,7 @@ static void nvdimm_dsm_device(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
         break;
     }
 
-    nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+    nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
 }
 
 static uint64_t
@@ -730,6 +819,7 @@ nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
 static void
 nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
 {
+    AcpiNVDIMMState *state = opaque;
     NvdimmDsmIn *in;
     hwaddr dsm_mem_addr = val;
 
@@ -753,7 +843,12 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
     if (in->revision != 0x1 /* Currently we only support DSM Spec Rev1. */) {
         nvdimm_debug("Revision %#x is not supported, expect %#x.\n",
                      in->revision, 0x1);
-        nvdimm_dsm_no_payload(1 /* Not Supported */, dsm_mem_addr);
+        nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
+        goto exit;
+    }
+
+    if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
+        nvdimm_dsm_reserved_root(state, in, dsm_mem_addr);
         goto exit;
     }
 
@@ -809,9 +904,13 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
 #define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
 #define NVDIMM_DSM_OUT_BUF "ODAT"
 
+#define NVDIMM_DSM_RFIT_STATUS "RSTA"
+
+#define NVDIMM_QEMU_RSVD_UUID "648B9CF2-CDA1-4312-8AD9-49C4AF32BD62"
+
 static void nvdimm_build_common_dsm(Aml *dev)
 {
-    Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem;
+    Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem, *elsectx2;
     Aml *elsectx, *unsupport, *unpatched, *expected_uuid, *uuid_invalid;
     Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf, *dsm_out_buf_size;
     uint8_t byte_list[1];
@@ -900,9 +999,15 @@ static void nvdimm_build_common_dsm(Aml *dev)
                /* UUID for NVDIMM Root Device */, expected_uuid));
     aml_append(method, ifctx);
     elsectx = aml_else();
-    aml_append(elsectx, aml_store(
+    ifctx = aml_if(aml_equal(handle, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT)));
+    aml_append(ifctx, aml_store(aml_touuid(NVDIMM_QEMU_RSVD_UUID
+               /* UUID for QEMU internal use */), expected_uuid));
+    aml_append(elsectx, ifctx);
+    elsectx2 = aml_else();
+    aml_append(elsectx2, aml_store(
                aml_touuid("4309AC30-0D11-11E4-9191-0800200C9A66")
                /* UUID for NVDIMM Devices */, expected_uuid));
+    aml_append(elsectx, elsectx2);
     aml_append(method, elsectx);
 
     uuid_invalid = aml_lnot(aml_equal(uuid, expected_uuid));
@@ -919,7 +1024,7 @@ static void nvdimm_build_common_dsm(Aml *dev)
     aml_append(unsupport, ifctx);
 
     /* No function is supported yet. */
-    byte_list[0] = 1 /* Not Supported */;
+    byte_list[0] = NVDIMM_DSM_RET_STATUS_UNSUPPORT;
     aml_append(unsupport, aml_return(aml_buffer(1, byte_list)));
     aml_append(method, unsupport);
 
@@ -982,6 +1087,101 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
     aml_append(dev, method);
 }
 
+static void nvdimm_build_fit_method(Aml *dev)
+{
+    Aml *method, *pkg, *buf, *buf_size, *offset, *call_result;
+    Aml *whilectx, *ifcond, *ifctx, *elsectx, *fit;
+
+    buf = aml_local(0);
+    buf_size = aml_local(1);
+    fit = aml_local(2);
+
+    aml_append(dev, aml_name_decl(NVDIMM_DSM_RFIT_STATUS, aml_int(0)));
+
+    /* build the helper function, RFIT. */
+    method = aml_method("RFIT", 1, AML_SERIALIZED);
+    aml_append(method, aml_name_decl("OFST", aml_int(0)));
+
+    /* prepare the input package. */
+    pkg = aml_package(1);
+    aml_append(method, aml_store(aml_arg(0), aml_name("OFST")));
+    aml_append(pkg, aml_name("OFST"));
+
+    /* call the Read_FIT function. */
+    call_result = aml_call5(NVDIMM_COMMON_DSM,
+                            aml_touuid(NVDIMM_QEMU_RSVD_UUID),
+                            aml_int(1) /* Revision 1 */,
+                            aml_int(0x1) /* Read FIT */,
+                            pkg, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT));
+    aml_append(method, aml_store(call_result, buf));
+
+    /* handle the _DSM result. */
+    aml_append(method, aml_create_dword_field(buf,
+               aml_int(0) /* offset at byte 0 */, "STAU"));
+
+    aml_append(method, aml_store(aml_name("STAU"),
+                                 aml_name(NVDIMM_DSM_RFIT_STATUS)));
+
+    /* if something went wrong during _DSM. */
+    ifcond = aml_equal(aml_int(NVDIMM_DSM_RET_STATUS_SUCCESS),
+                       aml_name("STAU"));
+    ifctx = aml_if(aml_lnot(ifcond));
+    aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
+    aml_append(method, ifctx);
+
+    aml_append(method, aml_create_dword_field(buf,
+               aml_int(4) /* offset at byte 4 */, "LENG"));
+    aml_append(method, aml_store(aml_name("LENG"), buf_size));
+
+    /* if we have read to the end of the fit. */
+    ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
+    aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
+    aml_append(method, ifctx);
+
+    /* convert the byte length to the bit length aml_create_field() takes. */
+    aml_append(method, aml_store(aml_shiftleft(buf_size, aml_int(3)),
+                                 buf_size));
+    aml_append(method, aml_create_field(buf,
+               aml_int(8 * BITS_PER_BYTE), /* offset at byte 8. */
+               buf_size, "BUFF"));
+    aml_append(method, aml_return(aml_name("BUFF")));
+    aml_append(dev, method);
+
+    /* build _FIT. */
+    method = aml_method("_FIT", 0, AML_SERIALIZED);
+    offset = aml_local(3);
+
+    aml_append(method, aml_store(aml_buffer(0, NULL), fit));
+    aml_append(method, aml_store(aml_int(0), offset));
+
+    whilectx = aml_while(aml_int(1));
+    aml_append(whilectx, aml_store(aml_call1("RFIT", offset), buf));
+    aml_append(whilectx, aml_store(aml_sizeof(buf), buf_size));
+
+    /*
+     * if the fit buffer was changed during RFIT, read from the
+     * beginning again.
+     */
+    ifctx = aml_if(aml_equal(aml_name(NVDIMM_DSM_RFIT_STATUS),
+                             aml_int(NVDIMM_DSM_RET_STATUS_FIT_CHANGED)));
+    aml_append(ifctx, aml_store(aml_buffer(0, NULL), fit));
+    aml_append(ifctx, aml_store(aml_int(0), offset));
+    aml_append(whilectx, ifctx);
+
+    elsectx = aml_else();
+
+    /* finish the fit read if no data is read out. */
+    ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
+    aml_append(ifctx, aml_return(fit));
+    aml_append(elsectx, ifctx);
+
+    /* update the offset. */
+    aml_append(elsectx, aml_add(offset, buf_size, offset));
+    /* append the data we read out to the fit buffer. */
+    aml_append(elsectx, aml_concatenate(fit, buf, fit));
+    aml_append(whilectx, elsectx);
+    aml_append(method, whilectx);
+
+    aml_append(dev, method);
+}
+
 static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
 {
     uint32_t slot;
@@ -1040,6 +1240,7 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
 
     /* 0 is reserved for root device. */
     nvdimm_build_device_dsm(dev, 0);
+    nvdimm_build_fit_method(dev);
 
     nvdimm_build_nvdimm_devices(dev, ram_slots);
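A minimal host-side model (an illustrative sketch, not QEMU code) of the
dirty-flag protocol implemented by nvdimm_dsm_func_read_fit() above: a
hotplug event marks the FIT dirty, a read starting at offset 0 clears
the flag, and a read that finds the flag set mid-pass gets the "FIT
changed" status, so the guest can never assemble a blob stitched
together from two different FIT versions; all names here are invented:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define FIT_CHANGED 0x100

    struct fit_state {
        bool     dirty; /* set when hotplug rebuilds the FIT */
        uint32_t len;
        uint8_t  data[4096];
    };

    /* Called on NVDIMM hotplug: publish a new FIT, mark it changed. */
    static void fit_update(struct fit_state *s,
                           const uint8_t *fit, uint32_t len)
    {
        memcpy(s->data, fit, len);
        s->len = len;
        s->dirty = true;
    }

    /* One Read FIT step; mirrors the checks at the top of
     * nvdimm_dsm_func_read_fit(). */
    static uint32_t fit_read(struct fit_state *s, uint32_t offset)
    {
        if (offset == 0) {
            s->dirty = false;   /* a fresh pass sees one FIT version */
        } else if (s->dirty) {
            return FIT_CHANGED; /* FIT replaced mid-pass: force restart */
        }
        return 0;               /* caller then copies from data[offset] */
    }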