Message ID | 1478198186-45204-5-git-send-email-guangrong.xiao@linux.intel.com (mailing list archive)
State      | New, archived
On Fri, 4 Nov 2016 02:36:25 +0800 Xiao Guangrong <guangrong.xiao@linux.intel.com> wrote:

> _FIT is required for hotplug support, guest will inquire the
> updated device info from it if a hotplug event is received
>
> As FIT buffer is not completely mapped into guest address space,
> Read_FIT method is introduced to read NFIT structures blob from
> QEMU, The buffer is concatenated before _FIT return
>
> Refer to docs/specs/acpi-nvdimm.txt for detailed design
>
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
>  docs/specs/acpi_nvdimm.txt |  61 ++++++++++++--
>  hw/acpi/nvdimm.c           | 198 ++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 252 insertions(+), 7 deletions(-)
>
> diff --git a/docs/specs/acpi_nvdimm.txt b/docs/specs/acpi_nvdimm.txt
> index 0fdd251..63cb63f 100644
> --- a/docs/specs/acpi_nvdimm.txt
> +++ b/docs/specs/acpi_nvdimm.txt
> @@ -65,8 +65,8 @@ _FIT(Firmware Interface Table)
>  The detailed definition of the structure can be found at ACPI 6.0: 5.2.25
>  NVDIMM Firmware Interface Table (NFIT).
>
> -QEMU NVDIMM Implemention
> -========================
> +QEMU NVDIMM Implementation
> +==========================
>  QEMU uses 4 bytes IO Port starting from 0x0a18 and a RAM-based memory page
>  for NVDIMM ACPI.
>
> @@ -82,6 +82,16 @@ Memory:
>  ACPI writes _DSM Input Data (based on the offset in the page):
>     [0x0 - 0x3]: 4 bytes, NVDIMM Device Handle, 0 is reserved for NVDIMM
>                  Root device.
> +
> +                The handle is completely QEMU internal thing, the values in
> +                range [1, 0xFFFF] indicate nvdimm device. Other values are
> +                reserved for other purposes.
> +
> +                Reserved handles:
> +                0 is reserved for nvdimm root device named NVDR.
> +                0x10000 is reserved for QEMU internal DSM function called on
> +                the root device.
> +
>     [0x4 - 0x7]: 4 bytes, Revision ID, that is the Arg1 of _DSM method.
>     [0x8 - 0xB]: 4 bytes. Function Index, that is the Arg2 of _DSM method.
>     [0xC - 0xFFF]: 4084 bytes, the Arg3 of _DSM method.
> @@ -127,6 +137,47 @@ _DSM process diagram:
>   |    result from the page  |          |              |
>   +--------------------------+          +--------------+
>
> - _FIT implementation
> - -------------------
> - TODO (will fill it when nvdimm hotplug is introduced)
> +QEMU internal use only _DSM function
> +------------------------------------
> +1) Read FIT
> +   _FIT method uses _DSM method to fetch NFIT structures blob from QEMU
> +   in 1 page sized increments which are then concatenated and returned
> +   as _FIT method result.
> +
> +   Input parameters:
> +   Arg0 – UUID {set to 648B9CF2-CDA1-4312-8AD9-49C4AF32BD62}
> +   Arg1 – Revision ID (set to 1)
> +   Arg2 - Function Index, 0x1
> +   Arg3 - A package containing a buffer whose layout is as follows:
> +
> +   +----------+--------+--------+-------------------------------------------+
> +   | Field    | Length | Offset | Description                               |
> +   +----------+--------+--------+-------------------------------------------+
> +   | offset   | 4      | 0      | offset in QEMU's NFIT structures blob to  |
> +   |          |        |        | read from                                 |
> +   +----------+--------+--------+-------------------------------------------+
> +
> +   Output layout in the dsm memory page:
> +   +----------+--------+--------+-------------------------------------------+
> +   | Field    | Length | Offset | Description                               |
> +   +----------+--------+--------+-------------------------------------------+
> +   | length   | 4      | 0      | length of entire returned data            |
> +   |          |        |        | (including the header)                    |
s/the header/this header/
> +   +----------+-----------------+-------------------------------------------+
> +   |          |        |        | return status codes                       |
> +   |          |        |        | 0x0 - success                             |
> +   |          |        |        | 0x100 - error caused by NFIT update while |
> +   | status   | 4      | 4      | read by _FIT wasn't completed, other      |
> +   |          |        |        | codes follow Chapter 3 in DSM Spec Rev1   |
> +   +----------+-----------------+-------------------------------------------+
> +   | fit data | Varies | 8      | contains FIT data, this field is present  |
> +   |          |        |        | if status field is 0;                     |
> +   +----------+--------+--------+-------------------------------------------+
> +
> +   The FIT offset is maintained by the OSPM itself, current offset plus
> +   the size of the fit data returned by the function is the next offset
> +   OSPM should read. When all FIT data has been read out, zero length is
> +   returned.

You mean here the fit data length rather than what is in the 'length' field;
you should talk here in terms of the 'length' field.

> +
> +   If it returns status code 0x100, OSPM should restart to read FIT (read
> +   from offset 0 again).
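
To make the offset/length/status contract above concrete, here is a minimal
C sketch of the read loop an OSPM-side consumer could implement against this
layout. It is only an illustration of the documented protocol, not code from
this patch: dsm_read_fit() is a hypothetical helper standing in for the
actual _DSM evaluation, and conversion of the little-endian fields is
omitted.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* output layout in the DSM memory page, per the table above */
    struct read_fit_out {
        uint32_t len;     /* length of the entire returned data, incl. header */
        uint32_t status;  /* 0x0 success, 0x100 FIT changed while reading     */
        uint8_t  fit[];   /* FIT data, present when status is 0               */
    };

    /*
     * Hypothetical helper: evaluates the Read FIT function (function index
     * 0x1 on the reserved handle) with the given offset and copies the DSM
     * output page into 'page'.  Returns 0 on success, negative on failure.
     */
    int dsm_read_fit(uint32_t offset, void *page, size_t page_size);

    static int read_whole_fit(uint8_t *fit, size_t fit_size)
    {
        uint8_t page[4096];
        const struct read_fit_out *out = (const void *)page;
        uint32_t offset = 0, data_len;
        size_t copied = 0;

        for (;;) {
            if (dsm_read_fit(offset, page, sizeof(page)) < 0) {
                return -1;
            }
            if (out->status == 0x100) { /* FIT changed, restart from 0 */
                offset = 0;
                copied = 0;
                continue;
            }
            if (out->status != 0) {
                return -1;
            }
            data_len = out->len - offsetof(struct read_fit_out, fit);
            if (data_len == 0) {        /* the whole FIT has been read */
                return 0;
            }
            if (copied + data_len > fit_size) {
                return -1;
            }
            memcpy(fit + copied, out->fit, data_len);
            copied += data_len;
            offset += data_len;         /* next offset to request */
        }
    }
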
> diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
> index d9d1ef7..f82ae4a 100644
> --- a/hw/acpi/nvdimm.c
> +++ b/hw/acpi/nvdimm.c
> @@ -478,6 +478,22 @@ typedef struct NvdimmFuncSetLabelDataIn NvdimmFuncSetLabelDataIn;
>  QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncSetLabelDataIn) +
>                    offsetof(NvdimmDsmIn, arg3) > 4096);
>
> +struct NvdimmFuncReadFITIn {
> +    uint32_t offset; /* the offset into FIT buffer. */
> +} QEMU_PACKED;
> +typedef struct NvdimmFuncReadFITIn NvdimmFuncReadFITIn;
> +QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITIn) +
> +                  offsetof(NvdimmDsmIn, arg3) > 4096);
> +
> +struct NvdimmFuncReadFITOut {
> +    /* the size of buffer filled by QEMU. */
> +    uint32_t len;
> +    uint32_t func_ret_status; /* return status code. */
> +    uint8_t fit[0]; /* the FIT data. */
> +} QEMU_PACKED;
> +typedef struct NvdimmFuncReadFITOut NvdimmFuncReadFITOut;
> +QEMU_BUILD_BUG_ON(sizeof(NvdimmFuncReadFITOut) > 4096);
> +
>  static void
>  nvdimm_dsm_function0(uint32_t supported_func, hwaddr dsm_mem_addr)
>  {
> @@ -502,6 +518,73 @@ nvdimm_dsm_no_payload(uint32_t func_ret_status, hwaddr dsm_mem_addr)
>  #define NVDIMM_DSM_RET_STATUS_UNSUPPORT 1 /* Not Supported */
>  #define NVDIMM_DSM_RET_STATUS_NOMEMDEV 2 /* Non-Existing Memory Device */
>  #define NVDIMM_DSM_RET_STATUS_INVALID 3 /* Invalid Input Parameters */
> +#define NVDIMM_DSM_RET_STATUS_FIT_CHANGED 0x100 /* FIT Changed */
> +
> +#define NVDIMM_QEMU_RSVD_HANDLE_ROOT 0x10000
> +
> +/* Read FIT data, defined in docs/specs/acpi_nvdimm.txt. */
> +static void nvdimm_dsm_func_read_fit(AcpiNVDIMMState *state, NvdimmDsmIn *in,
> +                                     hwaddr dsm_mem_addr)
> +{
> +    NvdimmFitBuffer *fit_buf = &state->fit_buf;
> +    NvdimmFuncReadFITIn *read_fit;
> +    NvdimmFuncReadFITOut *read_fit_out;
> +    GArray *fit;
> +    uint32_t read_len = 0, func_ret_status, offset;
> +    int size;
> +
> +    read_fit = (NvdimmFuncReadFITIn *)in->arg3;
> +    offset = le32_to_cpu(read_fit->offset);
> +
> +    fit = fit_buf->fit;
> +
> +    nvdimm_debug("Read FIT: offset %#x FIT size %#x Dirty %s.\n",
> +                 offset, fit->len, fit_buf->dirty ? "Yes" : "No");
> +
> +    /* It is the first time to read FIT. */
> +    if (!offset) {
> +        fit_buf->dirty = false;
> +    } else if (fit_buf->dirty) { /* FIT has been changed during RFIT. */
> +        func_ret_status = NVDIMM_DSM_RET_STATUS_FIT_CHANGED;
> +        goto exit;
> +    }
> +
> +    if (offset > fit->len) {
> +        func_ret_status = NVDIMM_DSM_RET_STATUS_INVALID;
> +        goto exit;
> +    }
> +
> +    func_ret_status = NVDIMM_DSM_RET_STATUS_SUCCESS;
> +    read_len = MIN(fit->len - offset, 4096 - sizeof(NvdimmFuncReadFITOut));
                                         ^^^ should be a macro that's used at the
place where the page is allocated, so that the usage places won't go out of
sync someday.
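
For example, something along these lines (the macro name below is made up;
the point is just that the DSM page allocation, the QEMU_BUILD_BUG_ON()
checks and this MIN() would all share one definition):

    /* hypothetical: single definition of the DSM memory page size */
    #define NVDIMM_DSM_MEMORY_SIZE    4096

        read_len = MIN(fit->len - offset,
                       NVDIMM_DSM_MEMORY_SIZE - sizeof(NvdimmFuncReadFITOut));
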
> +
> +exit:
> +    size = sizeof(NvdimmFuncReadFITOut) + read_len;
> +    read_fit_out = g_malloc(size);
> +
> +    read_fit_out->len = cpu_to_le32(size);
> +    read_fit_out->func_ret_status = cpu_to_le32(func_ret_status);
> +    memcpy(read_fit_out->fit, fit->data + offset, read_len);
> +
> +    cpu_physical_memory_write(dsm_mem_addr, read_fit_out, size);
> +
> +    g_free(read_fit_out);
> +}
> +
> +static void
> +nvdimm_dsm_handle_reserved_root_method(AcpiNVDIMMState *state,
> +                                       NvdimmDsmIn *in, hwaddr dsm_mem_addr)
> +{
> +    switch (in->function) {
> +    case 0x0:
> +        nvdimm_dsm_function0(0x1 | 1 << 1 /* Read FIT */, dsm_mem_addr);
> +        return;
> +    case 0x1 /* Read FIT */:
> +        nvdimm_dsm_func_read_fit(state, in, dsm_mem_addr);
> +        return;
> +    }
> +
> +    nvdimm_dsm_no_payload(NVDIMM_DSM_RET_STATUS_UNSUPPORT, dsm_mem_addr);
> +}
>
>  static void nvdimm_dsm_root(NvdimmDsmIn *in, hwaddr dsm_mem_addr)
>  {
> @@ -730,6 +813,7 @@ nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
>  static void
>  nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
>  {
> +    AcpiNVDIMMState *state = opaque;
>      NvdimmDsmIn *in;
>      hwaddr dsm_mem_addr = val;
>
> @@ -757,6 +841,11 @@ nvdimm_dsm_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
>          goto exit;
>      }
>
> +    if (in->handle == NVDIMM_QEMU_RSVD_HANDLE_ROOT) {
> +        nvdimm_dsm_handle_reserved_root_method(state, in, dsm_mem_addr);
> +        goto exit;
> +    }
> +
>      /* Handle 0 is reserved for NVDIMM Root Device. */
>      if (!in->handle) {
>          nvdimm_dsm_root(in, dsm_mem_addr);
> @@ -809,9 +898,13 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
>  #define NVDIMM_DSM_OUT_BUF_SIZE "RLEN"
>  #define NVDIMM_DSM_OUT_BUF "ODAT"
>
> +#define NVDIMM_DSM_RFIT_STATUS "RSTA"
> +
> +#define NVDIMM_QEMU_RSVD_UUID "648B9CF2-CDA1-4312-8AD9-49C4AF32BD62"
> +
>  static void nvdimm_build_common_dsm(Aml *dev)
>  {
> -    Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem;
> +    Aml *method, *ifctx, *function, *handle, *uuid, *dsm_mem, *elsectx2;
>      Aml *elsectx, *unsupport, *unpatched, *expected_uuid, *uuid_invalid;
>      Aml *pckg, *pckg_index, *pckg_buf, *field, *dsm_out_buf, *dsm_out_buf_size;
>      uint8_t byte_list[1];
> @@ -900,9 +993,15 @@ static void nvdimm_build_common_dsm(Aml *dev)
>          /* UUID for NVDIMM Root Device */, expected_uuid));
>      aml_append(method, ifctx);
>      elsectx = aml_else();
> -    aml_append(elsectx, aml_store(
> +    ifctx = aml_if(aml_equal(handle, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT)));
> +    aml_append(ifctx, aml_store(aml_touuid(NVDIMM_QEMU_RSVD_UUID
> +        /* UUID for QEMU internal use */), expected_uuid));
> +    aml_append(elsectx, ifctx);
> +    elsectx2 = aml_else();
> +    aml_append(elsectx2, aml_store(
>          aml_touuid("4309AC30-0D11-11E4-9191-0800200C9A66")
>          /* UUID for NVDIMM Devices */, expected_uuid));
> +    aml_append(elsectx, elsectx2);
>      aml_append(method, elsectx);
>
>      uuid_invalid = aml_lnot(aml_equal(uuid, expected_uuid));
> @@ -982,6 +1081,100 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
>      aml_append(dev, method);
>  }
>
> +static void nvdimm_build_fit_method(Aml *dev)
> +{
> +    Aml *method, *pkg, *buf, *buf_size, *offset, *call_result;
> +    Aml *whilectx, *ifcond, *ifctx, *elsectx, *fit;
> +
> +    buf = aml_local(0);
> +    buf_size = aml_local(1);
> +    fit = aml_local(2);
> +
> +    aml_append(dev, aml_name_decl(NVDIMM_DSM_RFIT_STATUS, aml_int(0)));
> +
> +    /* build helper function, RFIT. */
> +    method = aml_method("RFIT", 1, AML_SERIALIZED);
> +    aml_append(method, aml_name_decl("OFST", aml_int(0)));
> +
> +    /* prepare input package. */
> +    pkg = aml_package(1);
> +    aml_append(method, aml_store(aml_arg(0), aml_name("OFST")));
> +    aml_append(pkg, aml_name("OFST"));
> +
> +    /* call Read_FIT function. */
> +    call_result = aml_call5(NVDIMM_COMMON_DSM,
> +                            aml_touuid(NVDIMM_QEMU_RSVD_UUID),
> +                            aml_int(1) /* Revision 1 */,
> +                            aml_int(0x1) /* Read FIT */,
> +                            pkg, aml_int(NVDIMM_QEMU_RSVD_HANDLE_ROOT));
> +    aml_append(method, aml_store(call_result, buf));
> +
> +    /* handle _DSM result. */
> +    aml_append(method, aml_create_dword_field(buf,
> +               aml_int(0) /* offset at byte 0 */, "STAU"));
> +
> +    aml_append(method, aml_store(aml_name("STAU"),
> +                                 aml_name(NVDIMM_DSM_RFIT_STATUS)));
> +
> +    /* if something is wrong during _DSM. */
> +    ifcond = aml_equal(aml_int(0 /* Success */), aml_name("STAU"));
> +    ifctx = aml_if(aml_lnot(ifcond));
> +    aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
> +    aml_append(method, ifctx);
> +
> +    aml_append(method, aml_store(aml_sizeof(buf), buf_size));
> +    aml_append(method, aml_subtract(buf_size,
> +               aml_int(4) /* the size of "STAU" */, buf_size));
> +
> +    /* if we read the end of fit. */
> +    ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
> +    aml_append(ifctx, aml_return(aml_buffer(0, NULL)));
> +    aml_append(method, ifctx);
> +
> +    aml_append(method, aml_create_field(buf,
> +               aml_int(4 * BITS_PER_BYTE), /* offset at byte 4.*/
> +               aml_shiftleft(buf_size, aml_int(3)), "BUFF"));
> +    aml_append(method, aml_return(aml_name("BUFF")));
> +    aml_append(dev, method);
> +
> +    /* build _FIT. */
> +    method = aml_method("_FIT", 0, AML_SERIALIZED);
> +    offset = aml_local(3);
> +
> +    aml_append(method, aml_store(aml_buffer(0, NULL), fit));
> +    aml_append(method, aml_store(aml_int(0), offset));
> +
> +    whilectx = aml_while(aml_int(1));
> +    aml_append(whilectx, aml_store(aml_call1("RFIT", offset), buf));
> +    aml_append(whilectx, aml_store(aml_sizeof(buf), buf_size));
> +
> +    /*
> +     * if fit buffer was changed during RFIT, read from the beginning
> +     * again.
> +     */
> +    ifctx = aml_if(aml_equal(aml_name(NVDIMM_DSM_RFIT_STATUS),
> +                             aml_int(NVDIMM_DSM_RET_STATUS_FIT_CHANGED)));
> +    aml_append(ifctx, aml_store(aml_buffer(0, NULL), fit));
> +    aml_append(ifctx, aml_store(aml_int(0), offset));
> +    aml_append(whilectx, ifctx);
> +
> +    elsectx = aml_else();
> +
> +    /* finish fit read if no data is read out. */
> +    ifctx = aml_if(aml_equal(buf_size, aml_int(0)));
> +    aml_append(ifctx, aml_return(fit));
> +    aml_append(elsectx, ifctx);
> +
> +    /* update the offset. */
> +    aml_append(elsectx, aml_add(offset, buf_size, offset));
> +    /* append the data we read out to the fit buffer. */
> +    aml_append(elsectx, aml_concatenate(fit, buf, fit));
> +    aml_append(whilectx, elsectx);
> +    aml_append(method, whilectx);
> +
> +    aml_append(dev, method);
> +}
> +
>  static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
>  {
>      uint32_t slot;
> @@ -1040,6 +1233,7 @@ static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
>
>      /* 0 is reserved for root device. */
>      nvdimm_build_device_dsm(dev, 0);
> +    nvdimm_build_fit_method(dev);
>
>      nvdimm_build_nvdimm_devices(dev, ram_slots);
>