Message ID: 20191129075634.682-1-tao3.xu@intel.com (mailing list archive)
Series: Build ACPI Heterogeneous Memory Attribute Table (HMAT)
Hi Michael,

Could this patch series be queued?
Thank you very much!

Tao

On 11/29/2019 3:56 PM, Xu, Tao3 wrote:
> This series of patches builds the Heterogeneous Memory Attribute Table (HMAT)
> according to the command line. The ACPI HMAT describes the memory attributes,
> such as memory-side cache attributes and bandwidth and latency details,
> related to the Memory Proximity Domain.
> Software is expected to use the HMAT information as a hint for optimization.
>
> In the Linux kernel, the code in drivers/acpi/hmat/hmat.c parses and reports
> the platform's HMAT tables.
>
> The v19 patches link:
> https://patchwork.kernel.org/cover/11265525/
>
> Changelog:
> v20:
> - Use g_assert_true and g_assert_false to replace g_assert
>   (Thomas and Markus)
> - Rename assoc to associativity, update the QAPI description (Markus)
> - Disable cache level 0 in the hmat-cache option (Igor)
> - Keep base and bitmap unchanged when latency or bandwidth is
>   out of range
> - Fix the broken CI case when the user inputs a latency or bandwidth
>   less than required
> v19:
> - Add a description of the machine property 'hmat' in the commit
>   message (Markus)
> - Update the QAPI comments
> - Add a check for no memory-side cache
> - Add some failure cases for hmat-cache when level=0
> v18:
> - Defer patches 01/14~06/14 of v17; use QAPI type uint64 and
>   only nanoseconds for latency (Markus)
> - Rewrap the lines over 80 characters (Igor)
> v17:
> - Add a check that when the user inputs a latency or bandwidth of 0,
>   lb_info_provided should also be 0, because per ACPI 6.3, 5.2.27.4,
>   0 means the corresponding latency or bandwidth information is
>   not provided
> - Fix the infinite loop when node->latency is 0
> - Use NumaHmatCacheOptions to replace HMAT_Cache_Info (Igor)
> - Add a check for unordered cache level input (Igor)
> - Add some failure test cases (Igor)
> v16:
> - Add and use qemu_strtold_finite to parse sizes, supporting full
>   64-bit precision; modify related test cases (Eduardo and Markus)
> - Simplify struct HMAT_LB_Info and related code, unifying latency
>   and bandwidth (Igor)
> - Add a cross-check with hmat_lb data (Igor)
> - Promote fields in Cache Attributes to uint32_t before
>   shifting (Igor)
> - Add a case for building HMAT via QMP (Igor)
> v15:
> - Add a new patch to refactor do_strtosz() (Eduardo)
> - Make tests without breaking CI (Michael)
> v14:
> - Reuse the code of do_strtosz to build qemu_strtotime_ns
>   (Eduardo)
> - Squash patches v13 01/12 and 02/12 together (Daniel and Eduardo)
> - Drop the time unit picosecond (Eric)
> - Use QEMU's ctz64 and clz64 instead of the builtin functions
> v13:
> - Modify some text descriptions
> - Drop the "initiator_valid" field in struct NodeInfo
> - Reuse a GArray to store the raw latency and bandwidth data
> - Calculate the common base unit using a range bitmap
> - Add a patch to calculate the HMAT latency and bandwidth entry list
> - Drop the total_levels option and use a readable cache size
> - Remove an unnecessary header file
> - Use decimal notation with an appropriate suffix for cache size
>
> Liu Jingqi (5):
>   numa: Extend CLI to provide memory latency and bandwidth information
>   numa: Extend CLI to provide memory side cache information
>   hmat acpi: Build Memory Proximity Domain Attributes Structure(s)
>   hmat acpi: Build System Locality Latency and Bandwidth Information
>     Structure(s)
>   hmat acpi: Build Memory Side Cache Information Structure(s)
>
> Tao Xu (3):
>   numa: Extend CLI to provide initiator information for numa nodes
>   tests/numa: Add case for QMP build HMAT
>   tests/bios-tables-test: add test cases for ACPI HMAT
>
>  hw/acpi/Kconfig                       |   7 +-
>  hw/acpi/Makefile.objs                 |   1 +
>  hw/acpi/hmat.c                        | 268 +++++++++++++++++++++++
>  hw/acpi/hmat.h                        |  42 ++++
>  hw/core/machine.c                     |  64 ++++++
>  hw/core/numa.c                        | 297 ++++++++++++++++++++++++++
>  hw/i386/acpi-build.c                  |   5 +
>  include/sysemu/numa.h                 |  63 ++++++
>  qapi/machine.json                     | 180 +++++++++++++++-
>  qemu-options.hx                       |  95 +++++++-
>  tests/bios-tables-test-allowed-diff.h |   8 +
>  tests/bios-tables-test.c              |  44 ++++
>  tests/data/acpi/pc/APIC.acpihmat      |   0
>  tests/data/acpi/pc/DSDT.acpihmat      |   0
>  tests/data/acpi/pc/HMAT.acpihmat      |   0
>  tests/data/acpi/pc/SRAT.acpihmat      |   0
>  tests/data/acpi/q35/APIC.acpihmat     |   0
>  tests/data/acpi/q35/DSDT.acpihmat     |   0
>  tests/data/acpi/q35/HMAT.acpihmat     |   0
>  tests/data/acpi/q35/SRAT.acpihmat     |   0
>  tests/numa-test.c                     | 213 ++++++++++++++++++
>  21 files changed, 1276 insertions(+), 11 deletions(-)
>  create mode 100644 hw/acpi/hmat.c
>  create mode 100644 hw/acpi/hmat.h
>  create mode 100644 tests/data/acpi/pc/APIC.acpihmat
>  create mode 100644 tests/data/acpi/pc/DSDT.acpihmat
>  create mode 100644 tests/data/acpi/pc/HMAT.acpihmat
>  create mode 100644 tests/data/acpi/pc/SRAT.acpihmat
>  create mode 100644 tests/data/acpi/q35/APIC.acpihmat
>  create mode 100644 tests/data/acpi/q35/DSDT.acpihmat
>  create mode 100644 tests/data/acpi/q35/HMAT.acpihmat
>  create mode 100644 tests/data/acpi/q35/SRAT.acpihmat
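To make the cover letter concrete: the series extends the `-machine` and `-numa` CLI so that HMAT latency/bandwidth and memory-side cache attributes can be supplied per proximity domain. The sketch below is an illustrative invocation assembled from the options the series adds to qemu-options.hx (hmat=on, hmat-lb, hmat-cache); the sizes, node layout, and latency/bandwidth values are made-up example numbers, not values taken from the patches.

```shell
# Two NUMA nodes; node 1 has no CPUs, so node 0 is named as its initiator.
# hmat-lb entries describe access latency (ns) and bandwidth per
# initiator/target pair; hmat-cache describes a memory-side cache.
qemu-system-x86_64 \
    -machine hmat=on \
    -m 2G,slots=2,maxmem=4G \
    -smp 2,sockets=2,maxcpus=2 \
    -object memory-backend-ram,size=1G,id=m0 \
    -object memory-backend-ram,size=1G,id=m1 \
    -numa node,nodeid=0,memdev=m0 \
    -numa node,nodeid=1,memdev=m1,initiator=0 \
    -numa cpu,node-id=0,socket-id=0 \
    -numa cpu,node-id=0,socket-id=1 \
    -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=5 \
    -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=200M \
    -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=10 \
    -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=100M \
    -numa hmat-cache,node-id=0,size=10K,level=1,associativity=direct,policy=write-back,line=8 \
    -numa hmat-cache,node-id=1,size=10K,level=1,associativity=direct,policy=write-back,line=8
```

With a kernel that has the drivers/acpi/hmat parser enabled, the resulting table shows up in the guest under /sys/devices/system/node/ as access/initiator attributes and memory-side cache entries.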
On Tue, Dec 03, 2019 at 08:53:30AM +0800, Tao Xu wrote:
> Hi Michael,
>
> Could this patch series be queued?
> Thank you very much!
>
> Tao

QEMU is in freeze, so not yet. Please ping after the release.
On 12/3/2019 1:35 PM, Michael S. Tsirkin wrote:
> On Tue, Dec 03, 2019 at 08:53:30AM +0800, Tao Xu wrote:
>> Hi Michael,
>>
>> Could this patch series be queued?
>> Thank you very much!
>>
>> Tao
>
> QEMU is in freeze, so not yet. Please ping after the release.

OK, thank you!
"Michael S. Tsirkin" <mst@redhat.com> writes:

> On Tue, Dec 03, 2019 at 08:53:30AM +0800, Tao Xu wrote:
>> Hi Michael,
>>
>> Could this patch series be queued?
>> Thank you very much!
>>
>> Tao
>
> QEMU is in freeze, so not yet. Please ping after the release.

Just to avoid confusion: it's Michael's personal preference not to
process patches for the next version during freeze. Other maintainers
do, and that's actually the project's policy:

    Subject: QEMU Summit 2017: minutes
    Message-ID: <CAFEAcA-b9oDkPfZbntWfhWSv1HOnbUf75p_xB_tF74h_NBGPmw@mail.gmail.com>
    https://lists.nongnu.org/archive/html/qemu-devel/2017-11/msg04453.html

    qemu-next:
    * Problem 1: Contributors cannot get patches merged during freeze
      (bad experience)
    [...]
    * Markus Armbruster: Problem 1 is solved if maintainers keep their own
      -next trees
    * Paolo Bonzini: Maintaining -next could slow down or create work for
      -freeze (e.g. who does backports)
    * Action: Maintainers mustn't tell submitters to go away just because
      we're in a release freeze (it's up to them whether they prefer to
      maintain a "-next" tree for their subsystem with patches queued for
      the following release, or track which patches they've accepted
      some other way)
    * We're not going to have an official project-wide "-next" tree, though

Michael, would queuing up patches in a -next branch really be too much
trouble for you?
On Tue, Dec 03, 2019 at 07:00:53AM +0100, Markus Armbruster wrote:
> Just to avoid confusion: it's Michael's personal preference not to
> process patches for the next version during freeze. Other maintainers
> do, and that's actually the project's policy:
>
>     Subject: QEMU Summit 2017: minutes
>     Message-ID: <CAFEAcA-b9oDkPfZbntWfhWSv1HOnbUf75p_xB_tF74h_NBGPmw@mail.gmail.com>
>     https://lists.nongnu.org/archive/html/qemu-devel/2017-11/msg04453.html
> [...]
> Michael, would queuing up patches in a -next branch really be too much
> trouble for you?

Thanks for pointing this out!

I stopped asking for re-posts a while ago. I don't queue patches in a
public tree, but I do review and keep track of pending patches.

I tend to ask contributors to ping because sometimes there's a problem
with a rebase: I drop the patch but forget to tell the contributor. This
tends to happen more with big patch sets posted during freeze, as there's
a rush to merge changes right after it. I usually don't bother people
with this for small patches, though.

I'll try to be clearer in my communication so contributors don't feel
stressed. Would something like:

"I'll queue it for merge after the release. If possible, please ping me
after the release to help make sure it didn't get dropped."

be clearer?

Hopefully the Windows CI efforts will soon bear fruit to the point where
they stress PCI enough to make maintaining -next worth the effort.
On 12/3/2019 2:25 PM, Michael S. Tsirkin wrote:
> On Tue, Dec 03, 2019 at 07:00:53AM +0100, Markus Armbruster wrote:
>> Just to avoid confusion: it's Michael's personal preference not to
>> process patches for the next version during freeze. Other maintainers
>> do, and that's actually the project's policy:
>> [...]
>> Michael, would queuing up patches in a -next branch really be too much
>> trouble for you?
>
> Thanks for pointing this out!
>
> I stopped asking for re-posts a while ago. I don't queue patches in a
> public tree, but I do review and keep track of pending patches.
> [...]
> Would something like:
>
> "I'll queue it for merge after the release. If possible, please ping me
> after the release to help make sure it didn't get dropped."
>
> be clearer?

I see. Thanks for Markus's and Michael's kind responses. I feel happy
rather than stressed in the QEMU community :)
"Michael S. Tsirkin" <mst@redhat.com> writes:

> On Tue, Dec 03, 2019 at 07:00:53AM +0100, Markus Armbruster wrote:
>> [...]
>> Michael, would queuing up patches in a -next branch really be too much
>> trouble for you?
>
> Thanks for pointing this out!
>
> I stopped asking for re-posts a while ago. I don't queue patches in a
> public tree, but I do review and keep track of pending patches.
> [...]
> Would something like:
>
> "I'll queue it for merge after the release. If possible, please ping me
> after the release to help make sure it didn't get dropped."
>
> be clearer?

Yes, that's both clearer and friendlier. Thank you!

> Hopefully the Windows CI efforts will soon bear fruit to the point where
> they stress PCI enough to make maintaining -next worth the effort.

CI++ :)