diff mbox series

[v7,01/12] EDAC/amd64: Document heterogeneous enumeration

Message ID 20220203174942.31630-2-nchatrad@amd.com (mailing list archive)
State New, archived
Headers show
Series x86/edac/amd64: Add support for GPU nodes | expand

Commit Message

Naveen Krishna Chatradhi Feb. 3, 2022, 5:49 p.m. UTC
From: Muralidhara M K <muralimk@amd.com>

The Documentation notes have been added in amd64_edac.h and will be
referring to driver-api wherever needed.

Explains how the physical topology is enumerated in the software and
edac module populates the sysfs ABIs.

Signed-off-by: Muralidhara M K <muralimk@amd.com>
Signed-off-by: Naveen Krishna Chatradhi <nchatrad@amd.com>
---
v6->v7:
* New in v7

 Documentation/driver-api/edac.rst |   9 +++
 drivers/edac/amd64_edac.h         | 101 ++++++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)

Comments

Yazen Ghannam Feb. 9, 2022, 10:34 p.m. UTC | #1
On Thu, Feb 03, 2022 at 11:49:31AM -0600, Naveen Krishna Chatradhi wrote:
> From: Muralidhara M K <muralimk@amd.com>
> 
> The Documentation notes have been added in amd64_edac.h and will be
> referring to driver-api wherever needed.

I don't see the comment in amd64_edac.h referring to driver-api/edac.rst. So
I'm not sure what this sentence is saying.

> 
> Explains how the physical topology is enumerated in the software and
> edac module populates the sysfs ABIs.
>

Also, please make sure the message is imperative, e.g "Add...", "Explain...",
etc.
 
> Signed-off-by: Muralidhara M K <muralimk@amd.com>
> Signed-off-by: Naveen Krishna Chatradhi <nchatrad@amd.com>
> ---
> v6->v7:
> * New in v7
> 
>  Documentation/driver-api/edac.rst |   9 +++
>  drivers/edac/amd64_edac.h         | 101 ++++++++++++++++++++++++++++++
>  2 files changed, 110 insertions(+)
> 
> diff --git a/Documentation/driver-api/edac.rst b/Documentation/driver-api/edac.rst
> index b8c742aa0a71..0dd07d0d0e47 100644
> --- a/Documentation/driver-api/edac.rst
> +++ b/Documentation/driver-api/edac.rst
> @@ -106,6 +106,15 @@ will occupy those chip-select rows.
>  This term is avoided because it is unclear when needing to distinguish
>  between chip-select rows and socket sets.
>  
> +* High Bandwidth Memory (HBM)
> +
> +HBM is a new type of memory chip with low power consumption and ultra-wide
> +communication lanes. It uses vertically stacked memory chips (DRAM dies)
> +interconnected by microscopic wires called "through-silicon vias," or TSVs.
> +
> +Several stacks of HBM chips connect to the CPU or GPU through an ultra-fast
> +interconnect called the "interposer", so that HBM's characteristics are
> +nearly indistinguishable from on-chip integrated RAM.
> 

I think this makes sense.
 
>  Memory Controllers
>  ------------------
> diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
> index 6f8147abfa71..6a112270a84b 100644
> --- a/drivers/edac/amd64_edac.h
> +++ b/drivers/edac/amd64_edac.h
> @@ -559,3 +559,104 @@ static inline u32 dct_sel_baseaddr(struct amd64_pvt *pvt)
>  	}
>  	return (pvt)->dct_sel_lo & 0xFFFFF800;
>  }
> +
> +/*
> + * AMD Heterogeneous system support on EDAC subsystem
> + * --------------------------------------------------
> + *
> + * An AMD heterogeneous system is built by connecting the data fabrics of both
> + * CPUs and GPUs via custom xGMI links. So, the Data Fabric on the GPU nodes
> + * can be accessed the same way as the Data Fabric on CPU nodes.
> + *
> + * An Aldebaran GPU has 2 Data Fabrics, and each GPU DF contains four Unified
> + * Memory Controllers (UMCs). Each UMC contains eight channels, and each UMC
> + * channel controls one 128-bit HBM2e (2GB) channel (equivalent to 8 X 2GB
> + * ranks). This creates a total of 4096 bits of DRAM data bus.
> + *
> + * While a UMC is interfacing a 16GB (8H X 2GB DRAM) HBM stack, each UMC channel is

What is "8H"? Is that 8 "high"?

> + * interfacing 2GB of DRAM (represented as a rank).
> + *
> + * Memory controllers on AMD GPU nodes can be represented in EDAC as below:
> + *       GPU DF / GPU Node -> EDAC MC
> + *       GPU UMC           -> EDAC CSROW
> + *       GPU UMC channel   -> EDAC CHANNEL
> + *
> + * E.g.: a heterogeneous system where 1 AMD CPU is connected to 4 Aldebaran GPUs using xGMI.
> + *
> + * AMD GPU Nodes are enumerated in sequential order based on the PCI hierarchy, and the
> + * first GPU node is assumed to have a "Node ID" value after CPU Nodes are fully
> + * populated.
> + *
> + * $ ls /sys/devices/system/edac/mc/
> + *	mc0   - CPU MC node 0
> + *	mc1  |
> + *	mc2  |- GPU card[0] => node 0(mc1), node 1(mc2)
> + *	mc3  |
> + *	mc4  |- GPU card[1] => node 0(mc3), node 1(mc4)
> + *	mc5  |
> + *	mc6  |- GPU card[2] => node 0(mc5), node 1(mc6)
> + *	mc7  |
> + *	mc8  |- GPU card[3] => node 0(mc7), node 1(mc8)
> + *
> + * sysfs entries will be populated as below:
> + *
> + *	CPU			# CPU node
> + *	├── mc 0
> + *
> + *	GPU Nodes are enumerated sequentially after CPU nodes are populated
> + *	GPU card 1		# Each Aldebaran GPU has 2 nodes/mcs
> + *	├── mc 1		# GPU node 0 == mc1, Each MC node has 4 UMCs/CSROWs
> + *	│   ├── csrow 0		# UMC 0
> + *	│   │   ├── channel 0	# Each UMC has 8 channels
> + *	│   │   ├── channel 1   # size of each channel is 2 GB, so each UMC has 16 GB
> + *	│   │   ├── channel 2
> + *	│   │   ├── channel 3
> + *	│   │   ├── channel 4
> + *	│   │   ├── channel 5
> + *	│   │   ├── channel 6
> + *	│   │   ├── channel 7
> + *	│   ├── csrow 1		# UMC 1
> + *	│   │   ├── channel 0
> + *	│   │   ├── ..
> + *	│   │   ├── channel 7
> + *	│   ├── ..		..
> + *	│   ├── csrow 3		# UMC 3
> + *	│   │   ├── channel 0
> + *	│   │   ├── ..
> + *	│   │   ├── channel 7
> + *	│   ├── rank 0
> + *	│   ├── ..		..
> + *	│   ├── rank 31		# total 32 ranks/dimms from 4 UMCs
> + *	├
> + *	├── mc 2		# GPU node 1 == mc2
> + *	│   ├── ..		# each GPU has total 64 GB
> + *
> + *	GPU card 2
> + *	├── mc 3
> + *	│   ├── ..
> + *	├── mc 4
> + *	│   ├── ..
> + *
> + *	GPU card 3
> + *	├── mc 5
> + *	│   ├── ..
> + *	├── mc 6
> + *	│   ├── ..
> + *
> + *	GPU card 4
> + *	├── mc 7
> + *	│   ├── ..
> + *	├── mc 8
> + *	│   ├── ..
> + *
> + *
> + * Heterogeneous hardware details for the above context:
> + * - The CPU UMC (Unified Memory Controller) is mostly the same as the GPU UMC.
> + *   They have chip selects (csrows) and channels. However, the layouts are different
> + *   for performance, physical layout, or other reasons.
> + * - CPU UMCs use 1 channel. So we say UMC = EDAC Channel. This follows the
> + *   marketing speak, e.g. "CPU has X memory channels", etc.
> + * - CPU UMCs use up to 4 chip selects. So we say UMC chip select = EDAC CSROW.
> + * - GPU UMCs use 1 chip select. So we say UMC = EDAC CSROW.
> + * - GPU UMCs use 8 channels. So we say UMC Channel = EDAC Channel.
> + */
> --

This makes sense to me. I'm interested to see if there's any feedback from
others though.

Please fix up the commit message. Otherwise, I think this looks good.

Reviewed-by: Yazen Ghannam <yazen.ghannam@amd.com>

Thanks,
Yazen

Patch

diff --git a/Documentation/driver-api/edac.rst b/Documentation/driver-api/edac.rst
index b8c742aa0a71..0dd07d0d0e47 100644
--- a/Documentation/driver-api/edac.rst
+++ b/Documentation/driver-api/edac.rst
@@ -106,6 +106,15 @@  will occupy those chip-select rows.
 This term is avoided because it is unclear when needing to distinguish
 between chip-select rows and socket sets.
 
+* High Bandwidth Memory (HBM)
+
+HBM is a new type of memory chip with low power consumption and ultra-wide
+communication lanes. It uses vertically stacked memory chips (DRAM dies)
+interconnected by microscopic wires called "through-silicon vias," or TSVs.
+
+Several stacks of HBM chips connect to the CPU or GPU through an ultra-fast
+interconnect called the "interposer", so that HBM's characteristics are
+nearly indistinguishable from on-chip integrated RAM.
 
 Memory Controllers
 ------------------
diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
index 6f8147abfa71..6a112270a84b 100644
--- a/drivers/edac/amd64_edac.h
+++ b/drivers/edac/amd64_edac.h
@@ -559,3 +559,104 @@  static inline u32 dct_sel_baseaddr(struct amd64_pvt *pvt)
 	}
 	return (pvt)->dct_sel_lo & 0xFFFFF800;
 }
+
+/*
+ * AMD Heterogeneous system support on EDAC subsystem
+ * --------------------------------------------------
+ *
+ * An AMD heterogeneous system is built by connecting the data fabrics of both
+ * CPUs and GPUs via custom xGMI links. So, the Data Fabric on the GPU nodes
+ * can be accessed the same way as the Data Fabric on CPU nodes.
+ *
+ * An Aldebaran GPU has 2 Data Fabrics, and each GPU DF contains four Unified
+ * Memory Controllers (UMCs). Each UMC contains eight channels, and each UMC
+ * channel controls one 128-bit HBM2e (2GB) channel (equivalent to 8 X 2GB
+ * ranks). This creates a total of 4096 bits of DRAM data bus.
+ *
+ * While a UMC is interfacing a 16GB (8H X 2GB DRAM) HBM stack, each UMC channel is
+ * interfacing 2GB of DRAM (represented as a rank).
+ *
+ * Memory controllers on AMD GPU nodes can be represented in EDAC as below:
+ *       GPU DF / GPU Node -> EDAC MC
+ *       GPU UMC           -> EDAC CSROW
+ *       GPU UMC channel   -> EDAC CHANNEL
+ *
+ * E.g.: a heterogeneous system where 1 AMD CPU is connected to 4 Aldebaran GPUs using xGMI.
+ *
+ * AMD GPU Nodes are enumerated in sequential order based on the PCI hierarchy, and the
+ * first GPU node is assumed to have a "Node ID" value after CPU Nodes are fully
+ * populated.
+ *
+ * $ ls /sys/devices/system/edac/mc/
+ *	mc0   - CPU MC node 0
+ *	mc1  |
+ *	mc2  |- GPU card[0] => node 0(mc1), node 1(mc2)
+ *	mc3  |
+ *	mc4  |- GPU card[1] => node 0(mc3), node 1(mc4)
+ *	mc5  |
+ *	mc6  |- GPU card[2] => node 0(mc5), node 1(mc6)
+ *	mc7  |
+ *	mc8  |- GPU card[3] => node 0(mc7), node 1(mc8)
+ *
+ * sysfs entries will be populated as below:
+ *
+ *	CPU			# CPU node
+ *	├── mc 0
+ *
+ *	GPU Nodes are enumerated sequentially after CPU nodes are populated
+ *	GPU card 1		# Each Aldebaran GPU has 2 nodes/mcs
+ *	├── mc 1		# GPU node 0 == mc1, Each MC node has 4 UMCs/CSROWs
+ *	│   ├── csrow 0		# UMC 0
+ *	│   │   ├── channel 0	# Each UMC has 8 channels
+ *	│   │   ├── channel 1   # size of each channel is 2 GB, so each UMC has 16 GB
+ *	│   │   ├── channel 2
+ *	│   │   ├── channel 3
+ *	│   │   ├── channel 4
+ *	│   │   ├── channel 5
+ *	│   │   ├── channel 6
+ *	│   │   ├── channel 7
+ *	│   ├── csrow 1		# UMC 1
+ *	│   │   ├── channel 0
+ *	│   │   ├── ..
+ *	│   │   ├── channel 7
+ *	│   ├── ..		..
+ *	│   ├── csrow 3		# UMC 3
+ *	│   │   ├── channel 0
+ *	│   │   ├── ..
+ *	│   │   ├── channel 7
+ *	│   ├── rank 0
+ *	│   ├── ..		..
+ *	│   ├── rank 31		# total 32 ranks/dimms from 4 UMCs
+ *	├
+ *	├── mc 2		# GPU node 1 == mc2
+ *	│   ├── ..		# each GPU has total 64 GB
+ *
+ *	GPU card 2
+ *	├── mc 3
+ *	│   ├── ..
+ *	├── mc 4
+ *	│   ├── ..
+ *
+ *	GPU card 3
+ *	├── mc 5
+ *	│   ├── ..
+ *	├── mc 6
+ *	│   ├── ..
+ *
+ *	GPU card 4
+ *	├── mc 7
+ *	│   ├── ..
+ *	├── mc 8
+ *	│   ├── ..
+ *
+ *
+ * Heterogeneous hardware details for the above context:
+ * - The CPU UMC (Unified Memory Controller) is mostly the same as the GPU UMC.
+ *   They have chip selects (csrows) and channels. However, the layouts are different
+ *   for performance, physical layout, or other reasons.
+ * - CPU UMCs use 1 channel. So we say UMC = EDAC Channel. This follows the
+ *   marketing speak, e.g. "CPU has X memory channels", etc.
+ * - CPU UMCs use up to 4 chip selects. So we say UMC chip select = EDAC CSROW.
+ * - GPU UMCs use 1 chip select. So we say UMC = EDAC CSROW.
+ * - GPU UMCs use 8 channels. So we say UMC Channel = EDAC Channel.
+ */