
[2/3] hw/acpi/aml-build: Fix {socket, cluster, core} IDs in PPTT

Message ID 20220518092141.1050852-3-gshan@redhat.com (mailing list archive)
State New, archived
Series hw/acpi/aml-build: Fix {socket, cluster, core} IDs in PPTT

Commit Message

Gavin Shan May 18, 2022, 9:21 a.m. UTC
The {socket, cluster, core} IDs detected by the Linux guest don't match
what has been provided in the PPTT. The 'ACPI Processor ID valid' flag
is missing from the {socket, cluster, core} nodes. In that case, the
Linux guest takes the offset between the node and the PPTT header as
the corresponding ID, as the following logs show.

  /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64    \
  -accel kvm -machine virt,gic-version=host -cpu host        \
  -smp 8,sockets=2,clusters=2,cores=2,threads=1
    :

  # cd /sys/devices/system/cpu
  # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
    36  36  36  36  36  36  36  36
    336 336 336 336 336 336 336 336
  # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
    56  56  56  56  196 196 196 196
    356 356 356 356 496 496 496 496
  # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
    76  76  136 136 216 216 276 276
    376 376 436 436 516 516 576 576

Fix the issue by setting the 'ACPI Processor ID valid' flag on the
{socket, cluster, core} nodes. With this applied, the IDs are exactly
those provided in the PPTT.

  # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
  0 0 0 0 0 0 0 0
  1 1 1 1 1 1 1 1
  # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
  0 0 0 0 1 1 1 1
  0 0 0 0 1 1 1 1
  # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
  0 0 1 1 0 0 1 1
  0 0 1 1 0 0 1 1

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 hw/acpi/aml-build.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Comments

Igor Mammedov May 26, 2022, 12:25 p.m. UTC | #1
On Wed, 18 May 2022 17:21:40 +0800
Gavin Shan <gshan@redhat.com> wrote:

> The {socket, cluster, core} IDs detected from Linux guest aren't
> matching with what have been provided in PPTT. The flag used for
> 'ACPI Processor ID valid' is missed for {socket, cluster, core}
> nodes.

To permit setting this flag on non-leaf nodes, we have to have
corresponding containers built for them in the DSDT so that the
'ACPI Processor ID' can be matched against the containers' _UIDs.
If we do not build such containers, then setting this flag is
not correct. And I don't recall QEMU building a CPU hierarchy
in the DSDT.

> In this case, Linux guest takes the offset between the
> node and PPTT header as the corresponding IDs, as the following
> logs show.

Perhaps it's the kernel that should be fixed to correctly handle
an unset 'ACPI Processor ID valid' flag.

> 
>   /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64    \
>   -accel kvm -machine virt,gic-version=host -cpu host        \
>   -smp 8,sockets=2,clusters=2,cores=2,threads=1
>     :
> 
>   # cd /sys/devices/system/cpu
>   # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
>     36  36  36  36  36  36  36  36
>     336 336 336 336 336 336 336 336
>   # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
>     56  56  56  56  196 196 196 196
>     356 356 356 356 496 496 496 496
>   # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
>     76  76  136 136 216 216 276 276
>     376 376 436 436 516 516 576 576
> 
> This fixes the issue by setting 'ACPI Processor ID valid' flag for
> {socket, cluster, core} nodes. With this applied, the IDs are exactly
> what have been provided in PPTT.
> 
>   # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
>   0 0 0 0 0 0 0 0
>   1 1 1 1 1 1 1 1
>   # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
>   0 0 0 0 1 1 1 1
>   0 0 0 0 1 1 1 1
>   # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
>   0 0 1 1 0 0 1 1
>   0 0 1 1 0 0 1 1
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  hw/acpi/aml-build.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index e6bfac95c7..89f191fd3b 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -2026,7 +2026,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>              core_id = -1;
>              socket_offset = table_data->len - pptt_start;
>              build_processor_hierarchy_node(table_data,
> -                (1 << 0), /* Physical package */
> +                (1 << 0) | /* Physical package */
> +                (1 << 1),  /* ACPI Processor ID valid */
>                  0, socket_id, NULL, 0);
>          }
>  
> @@ -2037,7 +2038,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>                  core_id = -1;
>                  cluster_offset = table_data->len - pptt_start;
>                  build_processor_hierarchy_node(table_data,
> -                    (0 << 0), /* Not a physical package */
> +                    (0 << 0) | /* Not a physical package */
> +                    (1 << 1),  /* ACPI Processor ID valid */
>                      socket_offset, cluster_id, NULL, 0);
>              }
>          } else {
> @@ -2055,7 +2057,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>                  core_id = cpus->cpus[n].props.core_id;
>                  core_offset = table_data->len - pptt_start;
>                  build_processor_hierarchy_node(table_data,
> -                    (0 << 0), /* Not a physical package */
> +                    (0 << 0) | /* Not a physical package */
> +                    (1 << 1),  /* ACPI Processor ID valid */
>                      cluster_offset, core_id, NULL, 0);
>              }
>
Gavin Shan May 26, 2022, 2:40 p.m. UTC | #2
Hi Igor,

On 5/26/22 8:25 PM, Igor Mammedov wrote:
> On Wed, 18 May 2022 17:21:40 +0800
> Gavin Shan <gshan@redhat.com> wrote:
> 
>> The {socket, cluster, core} IDs detected from Linux guest aren't
>> matching with what have been provided in PPTT. The flag used for
>> 'ACPI Processor ID valid' is missed for {socket, cluster, core}
>> nodes.
> 
> To permit this flag set  on no leaf nodes we have to have
> a corresponding containers built for them in DSDT so that
> 'ACPI Processor ID' could be matched with containers '_UID's.
> If we don not build such containers then setting this flag is
> not correct. And I don't recall QEMU building CPU hierarchy
> in DSDT.
> 

It's true that we don't have containers in the DSDT. In the Linux
implementation, the corresponding IDs are fetched if 'ACPI Processor ID
valid' is set in the PPTT node (entry), without checking the DSDT table.

After rechecking the ACPI specification, I still don't know how a PPTT
entry is linked to the DSDT for _UID. I thought the 'Private Resources'
fields were used for the linking, but that must be wrong, judging from
the PPTT tables on my host. Do you have any idea how a PPTT entry (node)
is linked with one specific device in the DSDT table?

On my host, one of the cluster nodes resides at offset 10B0h and its ID
has been marked as valid. The 'Private Resources' fields point to
type-1 cache structures, which also reside in the PPTT table. The
cluster ID (0x109) doesn't appear in the DSDT table.

[C9Ch 3228   1]                Subtable Type : 01 [Cache Type]
[C9Dh 3229   1]                       Length : 18
[C9Eh 3230   2]                     Reserved : 0000
[CA0h 3232   4]        Flags (decoded below) : 0000005F
                                   Size valid : 1
                         Number of Sets valid : 1
                          Associativity valid : 1
                        Allocation Type valid : 1
                             Cache Type valid : 1
                           Write Policy valid : 0
                              Line Size valid : 1
                               Cache ID valid : 0
                                :
                                :
[CB4h 3252   1]                Subtable Type : 01 [Cache Type]
[CB5h 3253   1]                       Length : 18
[CB6h 3254   2]                     Reserved : 0000
[CB8h 3256   4]        Flags (decoded below) : 0000007F
                                   Size valid : 1
                         Number of Sets valid : 1
                          Associativity valid : 1
                        Allocation Type valid : 1
                             Cache Type valid : 1
                           Write Policy valid : 1
                              Line Size valid : 1
                               Cache ID valid : 0
[CBCh 3260   4]          Next Level of Cache : 00000CCC
                                  :
                                  :
[10B0h 4272   1]                Subtable Type : 00 [Processor Hierarchy Node]
[10B1h 4273   1]                       Length : 1C
[10B2h 4274   2]                     Reserved : 0000
[10B4h 4276   4]        Flags (decoded below) : 00000002
                             Physical package : 0
                      ACPI Processor ID valid : 1
                        Processor is a thread : 0
                               Node is a leaf : 0
                     Identical Implementation : 0
[10B8h 4280   4]                       Parent : 00000C6C
[10BCh 4284   4]            ACPI Processor ID : 00000109
[10C0h 4288   4]      Private Resource Number : 00000002
[10C4h 4292   4]             Private Resource : 00000C9C
[10C8h 4296   4]             Private Resource : 00000CB4

[gwshan@gshan tmp]$ grep -i 109 dsdt.dsl
[gwshan@gshan tmp]$ grep -i 265 dsdt.dsl


>> In this case, Linux guest takes the offset between the
>> node and PPTT header as the corresponding IDs, as the following
>> logs show.
> 
> perhaps it's kernel which should be fixed to handle
> not set 'ACPI Processor ID valid' correctly.
> 

This case is already handled by the kernel. If the 'ACPI Processor ID
valid' flag is missing, the offset between the PPTT node and the header
is taken as the ID. That's correct behaviour, because the 'ACPI
Processor ID valid' flag isn't mandatory for PPTT nodes. The problem is
that the offset isn't the desired ID: it is hard to interpret and
doesn't match the ID (value) we feed into the PPTT table in QEMU.

>>
>>    /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64    \
>>    -accel kvm -machine virt,gic-version=host -cpu host        \
>>    -smp 8,sockets=2,clusters=2,cores=2,threads=1
>>      :
>>
>>    # cd /sys/devices/system/cpu
>>    # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
>>      36  36  36  36  36  36  36  36
>>      336 336 336 336 336 336 336 336
>>    # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
>>      56  56  56  56  196 196 196 196
>>      356 356 356 356 496 496 496 496
>>    # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
>>      76  76  136 136 216 216 276 276
>>      376 376 436 436 516 516 576 576
>>
>> This fixes the issue by setting 'ACPI Processor ID valid' flag for
>> {socket, cluster, core} nodes. With this applied, the IDs are exactly
>> what have been provided in PPTT.
>>
>>    # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
>>    0 0 0 0 0 0 0 0
>>    1 1 1 1 1 1 1 1
>>    # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
>>    0 0 0 0 1 1 1 1
>>    0 0 0 0 1 1 1 1
>>    # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
>>    0 0 1 1 0 0 1 1
>>    0 0 1 1 0 0 1 1
>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   hw/acpi/aml-build.c | 9 ++++++---
>>   1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
>> index e6bfac95c7..89f191fd3b 100644
>> --- a/hw/acpi/aml-build.c
>> +++ b/hw/acpi/aml-build.c
>> @@ -2026,7 +2026,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>>               core_id = -1;
>>               socket_offset = table_data->len - pptt_start;
>>               build_processor_hierarchy_node(table_data,
>> -                (1 << 0), /* Physical package */
>> +                (1 << 0) | /* Physical package */
>> +                (1 << 1),  /* ACPI Processor ID valid */
>>                   0, socket_id, NULL, 0);
>>           }
>>   
>> @@ -2037,7 +2038,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>>                   core_id = -1;
>>                   cluster_offset = table_data->len - pptt_start;
>>                   build_processor_hierarchy_node(table_data,
>> -                    (0 << 0), /* Not a physical package */
>> +                    (0 << 0) | /* Not a physical package */
>> +                    (1 << 1),  /* ACPI Processor ID valid */
>>                       socket_offset, cluster_id, NULL, 0);
>>               }
>>           } else {
>> @@ -2055,7 +2057,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>>                   core_id = cpus->cpus[n].props.core_id;
>>                   core_offset = table_data->len - pptt_start;
>>                   build_processor_hierarchy_node(table_data,
>> -                    (0 << 0), /* Not a physical package */
>> +                    (0 << 0) | /* Not a physical package */
>> +                    (1 << 1),  /* ACPI Processor ID valid */
>>                       cluster_offset, core_id, NULL, 0);
>>               }
>>   
> 

Thanks,
Gavin
Igor Mammedov June 9, 2022, 4 p.m. UTC | #3
On Thu, 26 May 2022 22:40:05 +0800
Gavin Shan <gshan@redhat.com> wrote:

> Hi Igor,
> 
> On 5/26/22 8:25 PM, Igor Mammedov wrote:
> > On Wed, 18 May 2022 17:21:40 +0800
> > Gavin Shan <gshan@redhat.com> wrote:
> >   
> >> The {socket, cluster, core} IDs detected from Linux guest aren't
> >> matching with what have been provided in PPTT. The flag used for
> >> 'ACPI Processor ID valid' is missed for {socket, cluster, core}
> >> nodes.  
> > 
> > To permit this flag set  on no leaf nodes we have to have
> > a corresponding containers built for them in DSDT so that
> > 'ACPI Processor ID' could be matched with containers '_UID's.
> > If we don not build such containers then setting this flag is
> > not correct. And I don't recall QEMU building CPU hierarchy
> > in DSDT.
> >   
> 
> It's true that we don't have containers in DSDT. In Linux implementation,
> the corresponding IDs are fetched if 'ACPI Processor ID valid' is set in
> PPTT node (entry), without checking DSDT table.

Linux can make up container IDs, and that is fine as long as it
does so consistently.

> I don't know how the PPTT entry is linked to DSDT for _UID, after rechecking
> ACPI specification. I was thinking 'Private Resources' fields are used for
> the linking, but I should be wrong after checking PPTT tables on my host.
> I'm not sure if you have idea how PPTT entry (node) is linked with one
> specific device in DSDT table?

from the spec's description of the 'ACPI Processor ID valid' flag:
"
For non-leaf entries in the processor topology, the ACPI
Processor ID entry can relate to a Processor container in the
namespace. The processor container will have a matching ID
value returned through the _UID method. As not every
processor hierarchy node structure in PPTT may have a matching
processor container, this flag indicates whether the ACPI
processor ID points to a valid entry.
"

i.e. it has nothing to do with private resources.
One can set this flag for a container only if there is
a container device in the DSDT with a _UID matching the
'ACPI Processor ID' in the PPTT entry. The other possibility for
setting this flag is that the processor is described in the MADT
(which is unlikely for a container).

> On my host, one of the cluster node resides at offset 10B0h and it's ID
> has been marked as valid. The 'Private Resources' fields point to the
> type-1 cache structures, which resides in PPTT table either. The cluster
> ID ('0x109') isn't appearing in DSDT table.

Looks like they are cheating, or the spec is wrong.

PS:
One of the reasons we added the PPTT table was to avoid building
a CPU topology hierarchy in the DSDT.

> 
> [C9Ch 3228   1]                Subtable Type : 01 [Cache Type]
> [C9Dh 3229   1]                       Length : 18
> [C9Eh 3230   2]                     Reserved : 0000
> [CA0h 3232   4]        Flags (decoded below) : 0000005F
>                                    Size valid : 1
>                          Number of Sets valid : 1
>                           Associativity valid : 1
>                         Allocation Type valid : 1
>                              Cache Type valid : 1
>                            Write Policy valid : 0
>                               Line Size valid : 1
>                                Cache ID valid : 0
>                                 :
>                                 :
> [CB4h 3252   1]                Subtable Type : 01 [Cache Type]
> [CB5h 3253   1]                       Length : 18
> [CB6h 3254   2]                     Reserved : 0000
> [CB8h 3256   4]        Flags (decoded below) : 0000007F
>                                    Size valid : 1
>                          Number of Sets valid : 1
>                           Associativity valid : 1
>                         Allocation Type valid : 1
>                              Cache Type valid : 1
>                            Write Policy valid : 1
>                               Line Size valid : 1
>                                Cache ID valid : 0
> [CBCh 3260   4]          Next Level of Cache : 00000CCC
>                                   :
>                                   :
> [10B0h 4272   1]                Subtable Type : 00 [Processor Hierarchy Node]
> [10B1h 4273   1]                       Length : 1C
> [10B2h 4274   2]                     Reserved : 0000
> [10B4h 4276   4]        Flags (decoded below) : 00000002
>                              Physical package : 0
>                       ACPI Processor ID valid : 1
>                         Processor is a thread : 0
>                                Node is a leaf : 0
>                      Identical Implementation : 0
> [10B8h 4280   4]                       Parent : 00000C6C
> [10BCh 4284   4]            ACPI Processor ID : 00000109
> [10C0h 4288   4]      Private Resource Number : 00000002
> [10C4h 4292   4]             Private Resource : 00000C9C
> [10C8h 4296   4]             Private Resource : 00000CB4
> 
> [gwshan@gshan tmp]$ grep -i 109 dsdt.dsl
> [gwshan@gshan tmp]$ grep -i 265 dsdt.dsl
> 
> 
> >> In this case, Linux guest takes the offset between the
> >> node and PPTT header as the corresponding IDs, as the following
> >> logs show.  
> > 
> > perhaps it's kernel which should be fixed to handle
> > not set 'ACPI Processor ID valid' correctly.
> >   
> 
> This case is already handled by kernel. If 'ACPI Processor ID valid'
> flag is missed, the offset between PPTT node and header is taken as
> the ID. It's correct behaviour because 'ACPI Processor ID valid' flag
> isn't mandatory for PPTT nodes. The problem is that the offset isn't
> the desired ID, which is not understandable and mismatch with the
> ID (value) we feed to PPTT table in QEMU.

As long as Linux can match resources, different IDs are fine.
So if it's not a bug, I'd leave it alone.

> >>    /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64    \
> >>    -accel kvm -machine virt,gic-version=host -cpu host        \
> >>    -smp 8,sockets=2,clusters=2,cores=2,threads=1
> >>      :
> >>
> >>    # cd /sys/devices/system/cpu
> >>    # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
> >>      36  36  36  36  36  36  36  36
> >>      336 336 336 336 336 336 336 336
> >>    # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
> >>      56  56  56  56  196 196 196 196
> >>      356 356 356 356 496 496 496 496
> >>    # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
> >>      76  76  136 136 216 216 276 276
> >>      376 376 436 436 516 516 576 576
> >>
> >> This fixes the issue by setting 'ACPI Processor ID valid' flag for
> >> {socket, cluster, core} nodes. With this applied, the IDs are exactly
> >> what have been provided in PPTT.
> >>
> >>    # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
> >>    0 0 0 0 0 0 0 0
> >>    1 1 1 1 1 1 1 1
> >>    # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
> >>    0 0 0 0 1 1 1 1
> >>    0 0 0 0 1 1 1 1
> >>    # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
> >>    0 0 1 1 0 0 1 1
> >>    0 0 1 1 0 0 1 1

If you look above, Linux generated unique IDs for each node,
while with your patch the IDs are not unique across cluster/core
groups (i.e. you see only 2 distinct cluster IDs while in fact
there are 4 clusters in total).

Whether the IDs in cpu$i/topology should be unique or not, I don't
really know. Try to see how Linux uses those values and whether
they are supposed to be unique.

If you set the flag to valid and provide DSDT containers,
then you will be forced to generate _unique_ _UIDs (within
the same _HID namespace) to conform to the spec.


> >> Signed-off-by: Gavin Shan <gshan@redhat.com>
> >> ---
> >>   hw/acpi/aml-build.c | 9 ++++++---
> >>   1 file changed, 6 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> >> index e6bfac95c7..89f191fd3b 100644
> >> --- a/hw/acpi/aml-build.c
> >> +++ b/hw/acpi/aml-build.c
> >> @@ -2026,7 +2026,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
> >>               core_id = -1;
> >>               socket_offset = table_data->len - pptt_start;
> >>               build_processor_hierarchy_node(table_data,
> >> -                (1 << 0), /* Physical package */
> >> +                (1 << 0) | /* Physical package */
> >> +                (1 << 1),  /* ACPI Processor ID valid */
> >>                   0, socket_id, NULL, 0);
> >>           }
> >>   
> >> @@ -2037,7 +2038,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
> >>                   core_id = -1;
> >>                   cluster_offset = table_data->len - pptt_start;
> >>                   build_processor_hierarchy_node(table_data,
> >> -                    (0 << 0), /* Not a physical package */
> >> +                    (0 << 0) | /* Not a physical package */
> >> +                    (1 << 1),  /* ACPI Processor ID valid */
> >>                       socket_offset, cluster_id, NULL, 0);
> >>               }
> >>           } else {
> >> @@ -2055,7 +2057,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
> >>                   core_id = cpus->cpus[n].props.core_id;
> >>                   core_offset = table_data->len - pptt_start;
> >>                   build_processor_hierarchy_node(table_data,
> >> -                    (0 << 0), /* Not a physical package */
> >> +                    (0 << 0) | /* Not a physical package */
> >> +                    (1 << 1),  /* ACPI Processor ID valid */
> >>                       cluster_offset, core_id, NULL, 0);
> >>               }
> >>     
> >   
> 
> Thanks,
> Gavin
>
Gavin Shan June 13, 2022, 9:11 a.m. UTC | #4
Hi Igor,

On 6/10/22 12:00 AM, Igor Mammedov wrote:
> On Thu, 26 May 2022 22:40:05 +0800
> Gavin Shan <gshan@redhat.com> wrote:
>> On 5/26/22 8:25 PM, Igor Mammedov wrote:
>>> On Wed, 18 May 2022 17:21:40 +0800
>>> Gavin Shan <gshan@redhat.com> wrote:
>>>    
>>>> The {socket, cluster, core} IDs detected from Linux guest aren't
>>>> matching with what have been provided in PPTT. The flag used for
>>>> 'ACPI Processor ID valid' is missed for {socket, cluster, core}
>>>> nodes.
>>>
>>> To permit this flag set  on no leaf nodes we have to have
>>> a corresponding containers built for them in DSDT so that
>>> 'ACPI Processor ID' could be matched with containers '_UID's.
>>> If we don not build such containers then setting this flag is
>>> not correct. And I don't recall QEMU building CPU hierarchy
>>> in DSDT.
>>>    
>>
>> It's true that we don't have containers in DSDT. In Linux implementation,
>> the corresponding IDs are fetched if 'ACPI Processor ID valid' is set in
>> PPTT node (entry), without checking DSDT table.
> 
> linux can makeup container IDs and it is fine as long as it
> does that consistently
> 

Ok. I think it's fine, except that the container IDs aren't readable,
because the offsets of the PPTT entries are taken as the container IDs.

>> I don't know how the PPTT entry is linked to DSDT for _UID, after rechecking
>> ACPI specification. I was thinking 'Private Resources' fields are used for
>> the linking, but I should be wrong after checking PPTT tables on my host.
>> I'm not sure if you have idea how PPTT entry (node) is linked with one
>> specific device in DSDT table?
> 
> from spec description of 'ACPI Processor ID valid' flag:
> "
> For non-leaf entries in the processor topology, the ACPI Pro-
> cessor ID entry can relate to a Processor container in the
> namespace. The processor container will have a matching ID
> value returned through the _UID method. As not every pro-
> cessor hierarchy node structure in PPTT may have a matching
> processor container, this flag indicates whether the ACPI pro-
> cessor ID points to valid entry.
> "
> 
> i.e. nothing to do with private resources
> on can set this flag for a container only if there is
> a container device in DSDT with _UID matching 'ACPI Processor ID'
> in PPTT entry. Other possibility for setting this flag
> is that processor is described in MADT (which is unlikely for
> for a container)
> 

Agreed. I don't think the private resources are relevant to
the IDs. However, I don't understand how the DSDT is linked with
the PPTT with regard to the container IDs.

>> On my host, one of the cluster node resides at offset 10B0h and it's ID
>> has been marked as valid. The 'Private Resources' fields point to the
>> type-1 cache structures, which resides in PPTT table either. The cluster
>> ID ('0x109') isn't appearing in DSDT table.
> 
> looks like they are cheating or spec is wrong
> 
> PS:
> one of the reasons we added PPTT table is to avoid building
> CPU topology hierarchy in DSDT.
> 

Yes, I don't think the spec is clear enough in this regard. Anyway,
I checked the PPTT table on my host, where the container IDs are
specified by PPTT entries without corresponding entries in the DSDT.

>>
>> [C9Ch 3228   1]                Subtable Type : 01 [Cache Type]
>> [C9Dh 3229   1]                       Length : 18
>> [C9Eh 3230   2]                     Reserved : 0000
>> [CA0h 3232   4]        Flags (decoded below) : 0000005F
>>                                     Size valid : 1
>>                           Number of Sets valid : 1
>>                            Associativity valid : 1
>>                          Allocation Type valid : 1
>>                               Cache Type valid : 1
>>                             Write Policy valid : 0
>>                                Line Size valid : 1
>>                                 Cache ID valid : 0
>>                                  :
>>                                  :
>> [CB4h 3252   1]                Subtable Type : 01 [Cache Type]
>> [CB5h 3253   1]                       Length : 18
>> [CB6h 3254   2]                     Reserved : 0000
>> [CB8h 3256   4]        Flags (decoded below) : 0000007F
>>                                     Size valid : 1
>>                           Number of Sets valid : 1
>>                            Associativity valid : 1
>>                          Allocation Type valid : 1
>>                               Cache Type valid : 1
>>                             Write Policy valid : 1
>>                                Line Size valid : 1
>>                                 Cache ID valid : 0
>> [CBCh 3260   4]          Next Level of Cache : 00000CCC
>>                                    :
>>                                    :
>> [10B0h 4272   1]                Subtable Type : 00 [Processor Hierarchy Node]
>> [10B1h 4273   1]                       Length : 1C
>> [10B2h 4274   2]                     Reserved : 0000
>> [10B4h 4276   4]        Flags (decoded below) : 00000002
>>                               Physical package : 0
>>                        ACPI Processor ID valid : 1
>>                          Processor is a thread : 0
>>                                 Node is a leaf : 0
>>                       Identical Implementation : 0
>> [10B8h 4280   4]                       Parent : 00000C6C
>> [10BCh 4284   4]            ACPI Processor ID : 00000109
>> [10C0h 4288   4]      Private Resource Number : 00000002
>> [10C4h 4292   4]             Private Resource : 00000C9C
>> [10C8h 4296   4]             Private Resource : 00000CB4
>>
>> [gwshan@gshan tmp]$ grep -i 109 dsdt.dsl
>> [gwshan@gshan tmp]$ grep -i 265 dsdt.dsl
>>
>>
>>>> In this case, Linux guest takes the offset between the
>>>> node and PPTT header as the corresponding IDs, as the following
>>>> logs show.
>>>
>>> perhaps it's kernel which should be fixed to handle
>>> not set 'ACPI Processor ID valid' correctly.
>>>    
>>
>> This case is already handled by kernel. If 'ACPI Processor ID valid'
>> flag is missed, the offset between PPTT node and header is taken as
>> the ID. It's correct behaviour because 'ACPI Processor ID valid' flag
>> isn't mandatory for PPTT nodes. The problem is that the offset isn't
>> the desired ID, which is not understandable and mismatch with the
>> ID (value) we feed to PPTT table in QEMU.
> 
> as long as linux can match resources different IDs are fine.
> So if it's not a bug, I'd leave it alone.
> 

Ok, so let's drop the series. If needed, we may revisit it
in the future.

>>>>     /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64    \
>>>>     -accel kvm -machine virt,gic-version=host -cpu host        \
>>>>     -smp 8,sockets=2,clusters=2,cores=2,threads=1
>>>>       :
>>>>
>>>>     # cd /sys/devices/system/cpu
>>>>     # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
>>>>       36  36  36  36  36  36  36  36
>>>>       336 336 336 336 336 336 336 336
>>>>     # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
>>>>       56  56  56  56  196 196 196 196
>>>>       356 356 356 356 496 496 496 496
>>>>     # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
>>>>       76  76  136 136 216 216 276 276
>>>>       376 376 436 436 516 516 576 576
>>>>
>>>> This fixes the issue by setting 'ACPI Processor ID valid' flag for
>>>> {socket, cluster, core} nodes. With this applied, the IDs are exactly
>>>> what have been provided in PPTT.
>>>>
>>>>     # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
>>>>     0 0 0 0 0 0 0 0
>>>>     1 1 1 1 1 1 1 1
>>>>     # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
>>>>     0 0 0 0 1 1 1 1
>>>>     0 0 0 0 1 1 1 1
>>>>     # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
>>>>     0 0 1 1 0 0 1 1
>>>>     0 0 1 1 0 0 1 1
> 
> if you look at above, linux generated unique IDs for each node
> while with your patch IDs are not unique for cluster/core groups
> (i.e. you have only 2 clusters while in fact there are 4 total)
> 
> Should ID's in cpu$i/topology be unique or not, I don't really know.
> Try to see how linux uses those values and if they are supposed
> to be unique.
> 
> If you set flag to valid and provide DSDT containers
> then you will be forced to generate _unique_ (within
> the same _HID namespace) _UIDs to conform to the spec.
> 

I don't think differentiated IDs are required, meaning the IDs for
multiple CPUs can be the same. Note that one CPU is identified by
the combination of {socket, cluster, core, thread} IDs, instead of
by a single ID.

According to the spec, the container IDs need to be presented in
the DSDT. However, the spec doesn't clearly state how this should
be done. As I said above, let's drop the improvement in this
series for now; we may revisit it in the future if needed.

> 
>>>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>>>> ---
>>>>    hw/acpi/aml-build.c | 9 ++++++---
>>>>    1 file changed, 6 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
>>>> index e6bfac95c7..89f191fd3b 100644
>>>> --- a/hw/acpi/aml-build.c
>>>> +++ b/hw/acpi/aml-build.c
>>>> @@ -2026,7 +2026,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>>>>                core_id = -1;
>>>>                socket_offset = table_data->len - pptt_start;
>>>>                build_processor_hierarchy_node(table_data,
>>>> -                (1 << 0), /* Physical package */
>>>> +                (1 << 0) | /* Physical package */
>>>> +                (1 << 1),  /* ACPI Processor ID valid */
>>>>                    0, socket_id, NULL, 0);
>>>>            }
>>>>    
>>>> @@ -2037,7 +2038,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>>>>                    core_id = -1;
>>>>                    cluster_offset = table_data->len - pptt_start;
>>>>                    build_processor_hierarchy_node(table_data,
>>>> -                    (0 << 0), /* Not a physical package */
>>>> +                    (0 << 0) | /* Not a physical package */
>>>> +                    (1 << 1),  /* ACPI Processor ID valid */
>>>>                        socket_offset, cluster_id, NULL, 0);
>>>>                }
>>>>            } else {
>>>> @@ -2055,7 +2057,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>>>>                    core_id = cpus->cpus[n].props.core_id;
>>>>                    core_offset = table_data->len - pptt_start;
>>>>                    build_processor_hierarchy_node(table_data,
>>>> -                    (0 << 0), /* Not a physical package */
>>>> +                    (0 << 0) | /* Not a physical package */
>>>> +                    (1 << 1),  /* ACPI Processor ID valid */
>>>>                        cluster_offset, core_id, NULL, 0);
>>>>                }
>>>>      
>>>    

Thanks,
Gavin

Patch

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index e6bfac95c7..89f191fd3b 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -2026,7 +2026,8 @@  void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
             core_id = -1;
             socket_offset = table_data->len - pptt_start;
             build_processor_hierarchy_node(table_data,
-                (1 << 0), /* Physical package */
+                (1 << 0) | /* Physical package */
+                (1 << 1),  /* ACPI Processor ID valid */
                 0, socket_id, NULL, 0);
         }
 
@@ -2037,7 +2038,8 @@  void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
                 core_id = -1;
                 cluster_offset = table_data->len - pptt_start;
                 build_processor_hierarchy_node(table_data,
-                    (0 << 0), /* Not a physical package */
+                    (0 << 0) | /* Not a physical package */
+                    (1 << 1),  /* ACPI Processor ID valid */
                     socket_offset, cluster_id, NULL, 0);
             }
         } else {
@@ -2055,7 +2057,8 @@  void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
                 core_id = cpus->cpus[n].props.core_id;
                 core_offset = table_data->len - pptt_start;
                 build_processor_hierarchy_node(table_data,
-                    (0 << 0), /* Not a physical package */
+                    (0 << 0) | /* Not a physical package */
+                    (1 << 1),  /* ACPI Processor ID valid */
                     cluster_offset, core_id, NULL, 0);
             }