[v2,1/8] x86/CPU/AMD: Save NodeId on AMD-based systems

Message ID 20200903200144.310991-2-Yazen.Ghannam@amd.com (mailing list archive)
State New, archived
Series AMD MCA Address Translation Updates

Commit Message

Yazen Ghannam Sept. 3, 2020, 8:01 p.m. UTC
From: Yazen Ghannam <yazen.ghannam@amd.com>

AMD systems provide a "NodeId" value that represents a global ID
indicating to which "Node" a logical CPU belongs. The "Node" is a
physical structure equivalent to a Die, and it should not be confused
with logical structures like NUMA nodes. Logical nodes can be adjusted
based on firmware or other settings, whereas the physical nodes/dies are
fixed by the hardware topology.

The NodeId value can be used when a physical ID is needed by software.

Save the AMD NodeId to struct cpuinfo_x86. Use the value from CPUID or
MSR as appropriate. Default to phys_proc_id otherwise. Do so for both
AMD and Hygon systems.

Drop the node_id parameter from cacheinfo_*_init_llc_id() as it is no
longer needed.

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Link:
https://lkml.kernel.org/r/20200814191449.183998-2-Yazen.Ghannam@amd.com

v1 -> v2:
* New patch based on review comment to save value to struct cpuinfo_x86.

 arch/x86/include/asm/cacheinfo.h |  4 ++--
 arch/x86/include/asm/processor.h |  1 +
 arch/x86/kernel/cpu/amd.c        | 11 +++++------
 arch/x86/kernel/cpu/cacheinfo.c  |  6 +++---
 arch/x86/kernel/cpu/hygon.c      | 11 +++++------
 5 files changed, 16 insertions(+), 17 deletions(-)

Comments

Borislav Petkov Sept. 9, 2020, 6:06 p.m. UTC | #1
On Thu, Sep 03, 2020 at 08:01:37PM +0000, Yazen Ghannam wrote:
> From: Yazen Ghannam <yazen.ghannam@amd.com>
> 
> AMD systems provide a "NodeId" value that represents a global ID
> indicating to which "Node" a logical CPU belongs. The "Node" is a
> physical structure equivalent to a Die, and it should not be confused
> with logical structures like NUMA nodes.

So we said in Documentation/x86/topology.rst that:

"The kernel does not care about the concept of physical sockets because
a socket has no relevance to software. It's an electromechanical
component."

Now, you're talking, AFAIU, about physical components. Why do you need
them?

What is then:

  - cpuinfo_x86.phys_proc_id:

    The physical ID of the package. This information is retrieved via CPUID
    and deduced from the APIC IDs of the cores in the package.

supposed to mean?

Why is phys_proc_id != node_id?

And so on and so on.

Please get the nomenclature straight first and then we can talk changes.

Thx.
Yazen Ghannam Sept. 9, 2020, 8:17 p.m. UTC | #2
On Wed, Sep 09, 2020 at 08:06:47PM +0200, Borislav Petkov wrote:
> On Thu, Sep 03, 2020 at 08:01:37PM +0000, Yazen Ghannam wrote:
> > From: Yazen Ghannam <yazen.ghannam@amd.com>
> > 
> > AMD systems provide a "NodeId" value that represents a global ID
> > indicating to which "Node" a logical CPU belongs. The "Node" is a
> > physical structure equivalent to a Die, and it should not be confused
> > with logical structures like NUMA nodes.
> 
> So we said in Documentation/x86/topology.rst that:
> 
> "The kernel does not care about the concept of physical sockets because
> a socket has no relevance to software. It's an electromechanical
> component."
> 

Yes, I agree with this.

> Now, you're talking, AFAIU, about physical components. Why do you need
> them?
> 

We need to access specific instances of hardware registers in the
Northbridge or Data Fabric. The code in arch/x86/kernel/amd_nb.c does
this.

> What is then:
> 
>   - cpuinfo_x86.phys_proc_id:
> 
>     The physical ID of the package. This information is retrieved via CPUID
>     and deduced from the APIC IDs of the cores in the package.
> 
> supposed to mean?
> 

Package = Socket, i.e. a field replaceable unit. Socket may not be
useful for software, but I think it helps users identify the hardware.

I think the following could be changed in the documentation:

"In the past a socket always contained a single package (see below), but
with the advent of Multi Chip Modules (MCM) a socket can hold more than one
package."

Replace "package" with "die".

You take multiple dies from the foundry and you "package" them together
into a single unit.

> Why is phys_proc_id != node_id?
> 

They could be equal depending on the system. The values are different on
MCM systems like Bulldozer and Naples though.

The functions and structures in amd_nb.c are indexed by the node_id.
This is done implicitly right now by using amd_get_nb_id()/cpu_llc_id.
But the LLC isn't always equal to the Node/Die like in Naples. So the
patches in this set save and explicitly use the node_id when needed.
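
For illustration, this is roughly what that implicit indexing looks like
today (a sketch only; df_misc_for_cpu() is a made-up helper, while
amd_get_nb_id(), node_to_amd_nb() and struct amd_northbridge are the
existing amd_nb.c interfaces):

#include <asm/amd_nb.h>

/*
 * Look up the NB/DF "misc" PCI function for a given CPU. amd_get_nb_id()
 * is backed by cpu_llc_id today, which matches the Node/Die ID only on
 * some parts; that is the implicit indexing described above.
 */
static struct pci_dev *df_misc_for_cpu(int cpu)
{
	struct amd_northbridge *nb = node_to_amd_nb(amd_get_nb_id(cpu));

	return nb ? nb->misc : NULL;
}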

What do you think?

Thanks,
Yazen
Borislav Petkov Sept. 10, 2020, 10:14 a.m. UTC | #3
On Wed, Sep 09, 2020 at 03:17:55PM -0500, Yazen Ghannam wrote:
> We need to access specific instances of hardware registers in the
> Northbridge or Data Fabric. The code in arch/x86/kernel/amd_nb.c does
> this.

So you don't need the node_id - you need the northbridge/data fabric ID?
I'm guessing NB == DF, i.e., it was NB before Zen and it is DF now.

Yes?

> Package = Socket, i.e. a field replaceable unit. Socket may not be
> useful for software, but I think it helps users identify the hardware.
> 
> I think the following could be changed in the documentation:
> 
> "In the past a socket always contained a single package (see below), but
> with the advent of Multi Chip Modules (MCM) a socket can hold more than one
> package."
> 
> Replace "package" with "die".

So first of all, we have:

"AMD nomenclature for package is 'Node'."

so we either change that because as you explain, node != package on AMD.

What you need is the ID of that northbridge or data fabric instance,
AFAIU.

> You take multiple dies from the foundry and you "package" them together
> into a single unit.

I think you're overloading the word "package" here and that leads to
more confusion. Package in our definition - Linux' - is:

"Packages contain a number of cores plus shared resources, e.g. DRAM
controller, shared caches etc." If you glue several packages together,
you get an MCM.

> They could be equal depending on the system. The values are different on
> MCM systems like Bulldozer and Naples though.
> 
> The functions and structures in amd_nb.c are indexed by the node_id.
> This is done implicitly right now by using amd_get_nb_id()/cpu_llc_id.
> But the LLC isn't always equal to the Node/Die like in Naples. So the
> patches in this set save and explicitly use the node_id when needed.
> 
> What do you think?

Sounds to me that you want to ID that data fabric instance which
logically belongs to one or multiple packages. Or can a DF serve a single
package?

So let's start simple: how does a DF instance map to a logical NUMA
node or package? Can a DF serve multiple packages?

You could use the examples at the end of Documentation/x86/topology.rst
to explain how those things play together. And remember to not think
about the physical aspect of the hardware structure because it doesn't
mean anything to software. All you wanna do is address the proper DF
instance so this needs to be enumerable and properly represented by sw.

Confused?

I am.

:-)
Yazen Ghannam Sept. 14, 2020, 7:20 p.m. UTC | #4
On Thu, Sep 10, 2020 at 12:14:43PM +0200, Borislav Petkov wrote:
> On Wed, Sep 09, 2020 at 03:17:55PM -0500, Yazen Ghannam wrote:
> > We need to access specific instances of hardware registers in the
> > Northbridge or Data Fabric. The code in arch/x86/kernel/amd_nb.c does
> > this.
> 
> So you don't need the node_id - you need the northbridge/data fabric ID?
> I'm guessing NB == DF, i.e., it was NB before Zen and it is DF now.
> 
> Yes?
>

Yes, that's right.

I called it "node_id" based on the AMD documentation and what it's
called today in the Linux code. It's called other things like nb_id and
nid too.

I think we can call it something else to avoid confusion with NUMA nodes
if that'll help.

> > Package = Socket, i.e. a field replaceable unit. Socket may not be
> > useful for software, but I think it helps users identify the hardware.
> > 
> > I think the following could be changed in the documentation:
> > 
> > "In the past a socket always contained a single package (see below), but
> > with the advent of Multi Chip Modules (MCM) a socket can hold more than one
> > package."
> > 
> > Replace "package" with "die".
> 
> So first of all, we have:
> 
> "AMD nomenclature for package is 'Node'."
> 
> so we either change that because as you explain, node != package on AMD.
> 
> What you need is the ID of that northbridge or data fabric instance,
> AFAIU.
> 
> > You take multiple dies from the foundry and you "package" them together
> > into a single unit.
> 
> I think you're overloading the word "package" here and that leads to
> more confusion. Package in our definition - Linux' - is:
> 
> "Packages contain a number of cores plus shared resources, e.g. DRAM
> controller, shared caches etc." If you glue several packages together,
> you get an MCM.
> 

Yes, you're right. The AMD documentation is different, so I'll try to
stick with the Linux documentation and qualify names with "AMD" when
noting the usage by the AMD docs.

> > They could be equal depending on the system. The values are different on
> > MCM systems like Bulldozer and Naples though.
> > 
> > The functions and structures in amd_nb.c are indexed by the node_id.
> > This is done implicitly right now by using amd_get_nb_id()/cpu_llc_id.
> > But the LLC isn't always equal to the Node/Die like in Naples. So the
> > patches in this set save and explicitly use the node_id when needed.
> > 
> > What do you think?
> 
> Sounds to me that you want to ID that data fabric instance which
> logically belongs to one or multiple packages. Or can a DF serve a single
> package?
> 
> So let's start simple: how does a DF instance map to a logical NUMA
> node or package? Can a DF serve multiple packages?
> 

There's one DF/NB per package and it's a fixed value, i.e. it shouldn't
change based on the NUMA configuration.

Here's an example of a 2-socket Naples system with 4 packages per socket,
set up to have 1 NUMA node. The "node_id" value is the AMD NodeId
from CPUID.

CPU=0 phys_proc_id=0 node_id=0 cpu_to_node()=0
CPU=8 phys_proc_id=0 node_id=1 cpu_to_node()=0
CPU=16 phys_proc_id=0 node_id=2 cpu_to_node()=0
CPU=24 phys_proc_id=0 node_id=3 cpu_to_node()=0
CPU=32 phys_proc_id=1 node_id=4 cpu_to_node()=0
CPU=40 phys_proc_id=1 node_id=5 cpu_to_node()=0
CPU=48 phys_proc_id=1 node_id=6 cpu_to_node()=0
CPU=56 phys_proc_id=1 node_id=7 cpu_to_node()=0
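
(A dump like the one above can be produced with a quick debug loop along
these lines; a hypothetical snippet which assumes the c->node_id field
this patch adds:)

	int cpu;

	for_each_online_cpu(cpu) {
		struct cpuinfo_x86 *c = &cpu_data(cpu);

		pr_info("CPU=%d phys_proc_id=%d node_id=%d cpu_to_node()=%d\n",
			cpu, c->phys_proc_id, c->node_id, cpu_to_node(cpu));
	}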

> You could use the examples at the end of Documentation/x86/topology.rst
> to explain how those things play together. And remember to not think
> about the physical aspect of the hardware structure because it doesn't
> mean anything to software. All you wanna do is address the proper DF
> instance so this needs to be enumerable and properly represented by sw.
>

Yeah, I think example 4b works here. The mismatch though is with
phys_proc_id and package on AMD systems. You can see above that
phys_proc_id gives a socket number, and the AMD NodeId gives a package
number.

Should we add a note under cpuinfo_x86.phys_proc_id to make this
distinction?

> Confused?
> 
> I am.
> 
> :-)
>

Yeah, me too. :)

Thanks,
Yazen
Borislav Petkov Sept. 15, 2020, 8:35 a.m. UTC | #5
On Mon, Sep 14, 2020 at 02:20:39PM -0500, Yazen Ghannam wrote:
> Yes, that's right.
> 
> I called it "node_id" based on the AMD documentation and what it's
> called today in the Linux code. It's called other things like nb_id and
> nid too.
> 
> I think we can call it something else to avoid confusion with NUMA nodes
> if that'll help.

Yes, whatever we end up calling it, it should be added to that
documentation file I pointed you at. Because months and years from now,
it'll be the only place we look first, before changing the topology
again.

> Yes, you're right. The AMD documentation is different, so I'll try to
> stick with the Linux documentation and qualify names with "AMD" when
> noting the usage by the AMD docs.

Thanks, yes, because Linux is trying to map its view of the topology to
the vendor's and model all vendors properly, if possible.

> There's one DF/NB per package and it's a fixed value, i.e. it shouldn't
> change based on the NUMA configuration.

Aha, so the NB kinda serves the package and is part of it. That makes a
lot of sense and clears some confusion.

> Here's an example of a 2-socket Naples system with 4 packages per socket,
> set up to have 1 NUMA node. The "node_id" value is the AMD NodeId
> from CPUID.
> 
> CPU=0 phys_proc_id=0 node_id=0 cpu_to_node()=0
> CPU=8 phys_proc_id=0 node_id=1 cpu_to_node()=0
> CPU=16 phys_proc_id=0 node_id=2 cpu_to_node()=0
> CPU=24 phys_proc_id=0 node_id=3 cpu_to_node()=0
> CPU=32 phys_proc_id=1 node_id=4 cpu_to_node()=0
> CPU=40 phys_proc_id=1 node_id=5 cpu_to_node()=0
> CPU=48 phys_proc_id=1 node_id=6 cpu_to_node()=0
> CPU=56 phys_proc_id=1 node_id=7 cpu_to_node()=0

Ok, node_id is the DF instance number in this case.

> Yeah, I think example 4b works here. The mismatch though is with
> phys_proc_id and package on AMD systems. You can see above that
> phys_proc_id gives a socket number, and the AMD NodeId gives a package
> number.

Ok, now looka here:

"  - cpuinfo_x86.logical_proc_id:

    The logical ID of the package. As we do not trust BIOSes to enumerate the
    packages in a consistent way, we introduced the concept of logical package
    ID so we can sanely calculate the number of maximum possible packages in
    the system and have the packages enumerated linearly."

Doesn't that sound like exactly what you need?

Because that DF ID *is* practically the package ID as there's a 1:1
mapping between DF and a package, as you say above.

Right?

Now, it says

[    7.670791] smpboot: Max logical packages: 2

on my Rome box but what you want sounds very much like the logical
package ID and if we define that on AMD to be that and document it this
way, I guess that should work too, provided there are no caveats like
sched is using this info for proper task placement and so on. That would
need code audit, of course...

Thx.
Yazen Ghannam Sept. 16, 2020, 7:51 p.m. UTC | #6
On Tue, Sep 15, 2020 at 10:35:15AM +0200, Borislav Petkov wrote:
...
> > Yeah, I think example 4b works here. The mismatch though is with
> > phys_proc_id and package on AMD systems. You can see above that
> > phys_proc_id gives a socket number, and the AMD NodeId gives a package
> > number.
> 
> Ok, now looka here:
> 
> "  - cpuinfo_x86.logical_proc_id:
> 
>     The logical ID of the package. As we do not trust BIOSes to enumerate the
>     packages in a consistent way, we introduced the concept of logical package
>     ID so we can sanely calculate the number of maximum possible packages in
>     the system and have the packages enumerated linearly."
> 
> Doesn't that sound like exactly what you need?
> 
> Because that DF ID *is* practically the package ID as there's a 1:1
> mapping between DF and a package, as you say above.
> 
> Right?
> 
> Now, it says
> 
> [    7.670791] smpboot: Max logical packages: 2
> 
> on my Rome box but what you want sounds very much like the logical
> package ID and if we define that on AMD to be that and document it this
> way, I guess that should work too, provided there are no caveats like
> sched is using this info for proper task placement and so on. That would
> need code audit, of course...
>

The only use of logical_proc_id seems to be in hswep_uncore_cpu_init().
So I think maybe we can use this.

However, I think there are two issues.

1) The logical_proc_id seems like it should refer to the same type of
structure as phys_proc_id. In our case, this won't be true as
phys_proc_id would refer to the "socket" on AMD and logical_proc_id
would refer to the package/AMD NodeId.

2) The AMD NodeId is read during c_init()/init_amd(), so logical_proc_id
can be set here. But then logical_proc_id will get overwritten later in 
topology_update_package_map(). I don't know if it'd be good to modify
the generic flow to support this vendor-specific behavior.

What do you think?

Thanks,
Yazen
Borislav Petkov Sept. 17, 2020, 10:37 a.m. UTC | #7
On Wed, Sep 16, 2020 at 02:51:52PM -0500, Yazen Ghannam wrote:
> What do you think?

Yeah, forget logical_proc_id - the galactic senate of x86 maintainers
said that we're keeping that for when BIOS vendors f*ck up with the
phys_proc_id enumeration on AMD. Then we'll need that as a workaround.

Look instead at:

struct cpuinfo_x86 {

	...

        u16                     cpu_die_id;
        u16                     logical_die_id;

and

7745f03eb395 ("x86/topology: Add CPUID.1F multi-die/package support")

"Some new systems have multiple software-visible die within each
package."

and you could map the AMD packages to those dies. And if you guys
implement CPUID.1F to enumerate those packages the same way, then all
should just work (famous last words).

Because an Intel die is basically an AMD package consisting of a CCX, caches
and DF.

We would have to update the documentation in the end to denote that but
let's see if this should work for you too first. Because the concepts
sound very similar, if not identical...

Thx.
Yazen Ghannam Sept. 17, 2020, 4:20 p.m. UTC | #8
On Thu, Sep 17, 2020 at 12:37:20PM +0200, Borislav Petkov wrote:
> On Wed, Sep 16, 2020 at 02:51:52PM -0500, Yazen Ghannam wrote:
> > What do you think?
> 
> Yeah, forget logical_proc_id - the galactic senate of x86 maintainers
> said that we're keeping that for when BIOS vendors f*ck up with the
> phys_proc_id enumeration on AMD. Then we'll need that as a workaround.
> 
> Look instead at:
> 
> struct cpuinfo_x86 {
> 
> 	...
> 
>         u16                     cpu_die_id;
>         u16                     logical_die_id;
> 
> and
> 
> 7745f03eb395 ("x86/topology: Add CPUID.1F multi-die/package support")
> 
> "Some new systems have multiple software-visible die within each
> package."
> 
> and you could map the AMD packages to those dies. And if you guys
> implement CPUID.1F to enumerate those packages the same way, then all
> should just work (famous last words).
>
> Because an Intel die is basically an AMD package consisting of a CCX, caches
> and DF.
> 
> We would have to update the documentation in the end to denote that but
> let's see if this should work for you too first. Because the concepts
> sound very similar, if not identical...
>

Yep, we could ask the hardware folks to implement CPUID Leaf 0x1F, but
that'll be in some future products. 

I actually tried using cpu_die_id, but I ran into an issue on newer
systems.

On older systems, there is no CPUID Leaf 0xB or 0x1F, and cpu_die_id
doesn't get explicitly set. So setting cpu_die_id equal to AMD NodeId
would work. But newer systems support CPUID Leaf 0xB, so cpu_die_id
will get explicitly set by detect_extended_topology(). The value set is
different from the AMD NodeId. And at that point I shied away from
doing any override or fixup.

Thanks,
Yazen
Borislav Petkov Sept. 17, 2020, 4:40 p.m. UTC | #9
On Thu, Sep 17, 2020 at 11:20:53AM -0500, Yazen Ghannam wrote:
> But newer systems support CPUID Leaf 0xB, so cpu_die_id will get
> explicitly set by detect_extended_topology(). The value set is
> different from the AMD NodeId. And at that point I shied away from
> doing any override or fixup.

Well, different how? Can you extract the node_id you need
from CPUID(0xb)? If yes, we can do an AMD-specific branch in
detect_extended_topology() but that better be future proof.

IOW, is information from CPUID(0xb) ever going to be needed in the
kernel?

Also, and independently, if its definition does not give you the
node_id you need, then you can just as well overwrite ->cpu_die_id in
detect_extended_topology() because that value - whatever it is, it could
just as well be garbage - is wrong on AMD anyway.

So it would be a fix for the leaf parsing, regardless of whether you
need it or not.

Makes sense?
Yazen Ghannam Sept. 17, 2020, 7:44 p.m. UTC | #10
On Thu, Sep 17, 2020 at 06:40:48PM +0200, Borislav Petkov wrote:
> On Thu, Sep 17, 2020 at 11:20:53AM -0500, Yazen Ghannam wrote:
> > But newer systems support CPUID Leaf 0xB, so cpu_die_id will get
> > explicitly set by detect_extended_topology(). The value set is
> > different from the AMD NodeId. And at that point I shied away from
> > doing any override or fixup.
> 
> Well, different how? Can you extract the node_id you need
> from CPUID(0xb)? If yes, we can do an AMD-specific branch in
> detect_extended_topology() but that better be future proof.
> 
> IOW, is information from CPUID(0xb) ever going to be needed in the
> kernel?
> 
> Also, and independently, if its definition does not give you the
> node_id you need, then you can just as well overwrite ->cpu_die_id in
> detect_extended_topology() because that value - whatever it is, it could
> just as well be garbage - is wrong on AMD anyway.
> 
> So it would be a fix for the leaf parsing, regardless of whether you
> need it or not.
> 
> Makes sense?
>

Yes, I think so. "Die" is not defined in CPUID(0xb), only SMT and Core,
so the cpu_die_id value is not valid. In which case, we can overwrite
it. CPUID(0xb) doesn't have anything equivalent to AMD NodeId. So on
systems with CPUID < 0x1F, we should be okay with using cpu_die_id equal
to AMD NodeId.
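
Roughly along these lines (an illustrative sketch only, mirroring the
existing CPUID 0x8000001e parsing in amd_get_topology(); not the final
patch):

	/*
	 * In amd_get_topology(), when CPUID leaf 0x8000001e is present:
	 * overwrite cpu_die_id with the AMD NodeId from ECX[7:0], since
	 * whatever detect_extended_topology() derived from leaf 0xB is
	 * not a valid die ID on these parts anyway.
	 */
	u32 eax, ebx, ecx, edx;

	cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
	c->cpu_die_id = ecx & 0xff;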

I have an idea on what to do, so I'll send another rev if that's okay.
Do you have any comments on the other patches in the set?

Thanks,
Yazen
Borislav Petkov Sept. 17, 2020, 8:10 p.m. UTC | #11
On Thu, Sep 17, 2020 at 02:44:25PM -0500, Yazen Ghannam wrote:
> Yes, I think so. "Die" is not defined in CPUID(0xb), only SMT and Core,
> so the cpu_die_id value is not valid.

Right.

> In which case, we can overwrite it. CPUID(0xb) doesn't have anything
> equivalent to AMD NodeId. So on systems with CPUID < 0x1F, we should
> be okay with using cpu_die_id equal to AMD NodeId.

Right.

> I have an idea on what to do, so I'll send another rev if that's okay.
> Do you have any comments on the other patches in the set?

Lemme go through them these days.

Thx.

Patch

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index 86b63c7feab7..86b2e0dcc4bf 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -2,7 +2,7 @@ 
 #ifndef _ASM_X86_CACHEINFO_H
 #define _ASM_X86_CACHEINFO_H
 
-void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id);
-void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id);
+void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
+void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 
 #endif /* _ASM_X86_CACHEINFO_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 97143d87994c..a776b7886ec0 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -95,6 +95,7 @@  struct cpuinfo_x86 {
 	/* CPUID returned core id bits: */
 	__u8			x86_coreid_bits;
 	__u8			cu_id;
+	__u8			node_id;
 	/* Max extended CPUID function supported: */
 	__u32			extended_cpuid_level;
 	/* Maximum supported CPUID level, -1=no CPUID: */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index dcc3d943c68f..5eef4cc1e5b7 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -330,7 +330,6 @@  static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
  */
 static void amd_get_topology(struct cpuinfo_x86 *c)
 {
-	u8 node_id;
 	int cpu = smp_processor_id();
 
 	/* get information required for multi-node processors */
@@ -340,7 +339,7 @@  static void amd_get_topology(struct cpuinfo_x86 *c)
 
 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
 
-		node_id  = ecx & 0xff;
+		c->node_id  = ecx & 0xff;
 
 		if (c->x86 == 0x15)
 			c->cu_id = ebx & 0xff;
@@ -360,15 +359,15 @@  static void amd_get_topology(struct cpuinfo_x86 *c)
 		if (!err)
 			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
 
-		cacheinfo_amd_init_llc_id(c, cpu, node_id);
+		cacheinfo_amd_init_llc_id(c, cpu);
 
 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
 		u64 value;
 
 		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		node_id = value & 7;
+		c->node_id = value & 7;
 
-		per_cpu(cpu_llc_id, cpu) = node_id;
+		per_cpu(cpu_llc_id, cpu) = c->node_id;
 	} else
 		return;
 
@@ -393,7 +392,7 @@  static void amd_detect_cmp(struct cpuinfo_x86 *c)
 	/* Convert the initial APIC ID into the socket ID */
 	c->phys_proc_id = c->initial_apicid >> bits;
 	/* use socket ID also for last level cache */
-	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
+	per_cpu(cpu_llc_id, cpu) = c->node_id = c->phys_proc_id;
 }
 
 static void amd_detect_ppin(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 57074cf3ad7c..81dfddae4470 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -646,7 +646,7 @@  static int find_num_cache_leaves(struct cpuinfo_x86 *c)
 	return i;
 }
 
-void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
+void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu)
 {
 	/*
 	 * We may have multiple LLCs if L3 caches exist, so check if we
@@ -657,7 +657,7 @@  void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
 
 	if (c->x86 < 0x17) {
 		/* LLC is at the node level. */
-		per_cpu(cpu_llc_id, cpu) = node_id;
+		per_cpu(cpu_llc_id, cpu) = c->node_id;
 	} else if (c->x86 == 0x17 && c->x86_model <= 0x1F) {
 		/*
 		 * LLC is at the core complex level.
@@ -684,7 +684,7 @@  void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
 	}
 }
 
-void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id)
+void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu)
 {
 	/*
 	 * We may have multiple LLCs if L3 caches exist, so check if we
diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
index ac6c30e5801d..604b0315211a 100644
--- a/arch/x86/kernel/cpu/hygon.c
+++ b/arch/x86/kernel/cpu/hygon.c
@@ -65,7 +65,6 @@  static void hygon_get_topology_early(struct cpuinfo_x86 *c)
  */
 static void hygon_get_topology(struct cpuinfo_x86 *c)
 {
-	u8 node_id;
 	int cpu = smp_processor_id();
 
 	/* get information required for multi-node processors */
@@ -75,7 +74,7 @@  static void hygon_get_topology(struct cpuinfo_x86 *c)
 
 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
 
-		node_id  = ecx & 0xff;
+		c->node_id  = ecx & 0xff;
 
 		c->cpu_core_id = ebx & 0xff;
 
@@ -93,14 +92,14 @@  static void hygon_get_topology(struct cpuinfo_x86 *c)
 		/* Socket ID is ApicId[6] for these processors. */
 		c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
 
-		cacheinfo_hygon_init_llc_id(c, cpu, node_id);
+		cacheinfo_hygon_init_llc_id(c, cpu);
 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
 		u64 value;
 
 		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		node_id = value & 7;
+		c->node_id = value & 7;
 
-		per_cpu(cpu_llc_id, cpu) = node_id;
+		per_cpu(cpu_llc_id, cpu) = c->node_id;
 	} else
 		return;
 
@@ -123,7 +122,7 @@  static void hygon_detect_cmp(struct cpuinfo_x86 *c)
 	/* Convert the initial APIC ID into the socket ID */
 	c->phys_proc_id = c->initial_apicid >> bits;
 	/* use socket ID also for last level cache */
-	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
+	per_cpu(cpu_llc_id, cpu) = c->node_id = c->phys_proc_id;
 }
 
 static void srat_detect_node(struct cpuinfo_x86 *c)