Message ID: 20171012194856.13844-2-jeremy.linton@arm.com (mailing list archive)
State: Not Applicable, archived
Hi Jeremy, Please see below some suggestions. On 12/10/17 20:48, Jeremy Linton wrote: > ACPI 6.2 adds a new table, which describes how processing units > are related to each other in tree like fashion. Caches are > also sprinkled throughout the tree and describe the properties > of the caches in relation to other caches and processing units. > > Add the code to parse the cache hierarchy and report the total > number of levels of cache for a given core using > acpi_find_last_cache_level() as well as fill out the individual > cores cache information with cache_setup_acpi() once the > cpu_cacheinfo structure has been populated by the arch specific > code. > > Further, report peers in the topology using setup_acpi_cpu_topology() > to report a unique ID for each processing unit at a given level > in the tree. These unique id's can then be used to match related > processing units which exist as threads, COD (clusters > on die), within a given package, etc. > > Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> > --- > drivers/acpi/pptt.c | 485 ++++++++++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 485 insertions(+) > create mode 100644 drivers/acpi/pptt.c > > diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c > new file mode 100644 > index 000000000000..c86715fed4a7 > --- /dev/null > +++ b/drivers/acpi/pptt.c > @@ -0,1 +1,485 @@ > +/* > + * Copyright (C) 2017, ARM > + * > + * This program is free software; you can redistribute it and/or modify it > + * under the terms and conditions of the GNU General Public License, > + * version 2, as published by the Free Software Foundation. > + * > + * This program is distributed in the hope it will be useful, but WITHOUT > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for > + * more details. 
> + * > + * This file implements parsing of Processor Properties Topology Table (PPTT) > + * which is optionally used to describe the processor and cache topology. > + * Due to the relative pointers used throughout the table, this doesn't > + * leverage the existing subtable parsing in the kernel. > + */ > +#define pr_fmt(fmt) "ACPI PPTT: " fmt > + > +#include <linux/acpi.h> > +#include <linux/cacheinfo.h> > +#include <acpi/processor.h> > + > +/* > + * Given the PPTT table, find and verify that the subtable entry > + * is located within the table > + */ > +static struct acpi_subtable_header *fetch_pptt_subtable( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + struct acpi_subtable_header *entry; > + > + /* there isn't a subtable at reference 0 */ > + if (!pptt_ref) > + return NULL; Seeing the usage of pptt_ref to retrieve the subtable, would the following be a more accurate check? if (pptt_ref < sizeof(struct acpi_table_header)) return NULL; > + > + if (pptt_ref + sizeof(struct acpi_subtable_header) > table_hdr->length) > + return NULL; > + > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref); > + > + if (pptt_ref + entry->length > table_hdr->length) > + return NULL; > + > + return entry; > +} > + > +static struct acpi_pptt_processor *fetch_pptt_node( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_processor *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_pptt_cache *fetch_pptt_cache( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_subtable_header *acpi_get_pptt_resource( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node, int resource) > +{ > + u32 ref; > + > + if (resource >= node->number_of_priv_resources) > + return NULL; > + > + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + > + sizeof(u32) * 
resource); > + I think this can be simplified as: ref = *((u32 *)(node + 1) + resource); > + return fetch_pptt_subtable(table_hdr, ref); > +} > + > +/* > + * given a pptt resource, verify that it is a cache node, then walk > + * down each level of caches, counting how many levels are found > + * as well as checking the cache type (icache, dcache, unified). If a > + * level & type match, then we set found, and continue the search. > + * Once the entire cache branch has been walked return its max > + * depth. > + */ > +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, > + int local_level, > + struct acpi_subtable_header *res, > + struct acpi_pptt_cache **found, > + int level, int type) > +{ > + struct acpi_pptt_cache *cache; > + > + if (res->type != ACPI_PPTT_TYPE_CACHE) > + return 0; > + > + cache = (struct acpi_pptt_cache *) res; > + while (cache) { > + local_level++; > + > + if ((local_level == level) && > + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && > + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) { > + if (*found != NULL) > + pr_err("Found duplicate cache level/type unable to determine uniqueness\n"); > + > + pr_debug("Found cache @ level %d\n", level); > + *found = cache; > + /* > + * continue looking at this node's resource list > + * to verify that we don't find a duplicate > + * cache node. > + */ > + } > + cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); > + } > + return local_level; > +} > + > +/* > + * Given a CPU node look for cache levels that exist at this level, and then > + * for each cache node, count how many levels exist below (logically above) it. > + * If a level and type are specified, and we find that level/type, abort > + * processing and return the acpi_pptt_cache structure. 
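The simplification works because `node + 1` advances the pointer by `sizeof(*node)` bytes, landing exactly on the first private resource reference. A standalone sketch of the equivalence, using a mock node layout (the struct below is a stand-in, not the real `struct acpi_pptt_processor`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for the PPTT processor node: the u32 private
 * resource references follow the fixed-size structure directly.
 * This mock layout is an assumption, not the ACPICA definition. */
struct mock_pptt_processor {
	uint32_t flags;
	uint32_t parent;
	uint32_t acpi_processor_id;
	uint32_t number_of_priv_resources;
};

/* Patch form: explicit byte arithmetic from the node base. */
static uint32_t ref_byte_math(const struct mock_pptt_processor *node,
			      int resource)
{
	return *(const uint32_t *)((const uint8_t *)node +
				   sizeof(struct mock_pptt_processor) +
				   sizeof(uint32_t) * resource);
}

/* Suggested form: (node + 1) already points just past the struct, at
 * the first resource reference, so plain pointer indexing works. */
static uint32_t ref_pointer_math(const struct mock_pptt_processor *node,
				 int resource)
{
	return *((const uint32_t *)(node + 1) + resource);
}
```

Both forms compute the same address; the second just lets the compiler do the `sizeof` bookkeeping.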
> + */ > +static struct acpi_pptt_cache *acpi_find_cache_level( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node, > + int *starting_level, int level, int type) > +{ > + struct acpi_subtable_header *res; > + int number_of_levels = *starting_level; > + int resource = 0; > + struct acpi_pptt_cache *ret = NULL; > + int local_level; > + > + /* walk down from the processor node */ > + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) { > + resource++; > + > + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, > + res, &ret, level, type); > + /* > + * we are looking for the max depth. Since its potentially > + * possible for a given node to have resources with differing > + * depths verify that the depth we have found is the largest. > + */ > + if (number_of_levels < local_level) > + number_of_levels = local_level; > + } > + if (number_of_levels > *starting_level) > + *starting_level = number_of_levels; > + > + return ret; > +} > + > +/* > + * given a processor node containing a processing unit, walk into it and count > + * how many levels exist solely for it, and then walk up each level until we hit > + * the root node (ignore the package level because it may be possible to have > + * caches that exist across packages). Count the number of cache levels that > + * exist at each level on the way up. 
> + */ > +static int acpi_process_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node) > +{ > + int total_levels = 0; > + > + do { > + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0); > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while (cpu_node); > + > + return total_levels; > +} > + > +/* determine if the given node is a leaf node */ > +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + u32 node_entry; > + struct acpi_pptt_processor *cpu_node; Can cpu_node be defined inside the loop? It isn't used outside. > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); > + > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { while ((unsigned long) (entry + 1) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + (cpu_node->parent == node_entry)) > + return 0; > + entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length); > + } > + return 1; > +} > + > +/* > + * Find the subtable entry describing the provided processor > + */ > +static struct acpi_pptt_processor *acpi_find_processor_node( > + struct acpi_table_header *table_hdr, > + u32 acpi_cpu_id) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); Can I suggest having two inline functions for this and the above function? 
static inline unsigned long acpi_get_table_end(const struct acpi_table_header *); static inline struct acpi_subtable_header *acpi_get_first_entry(const struct acpi_table_header *); (Feel free to adapt the names of course) > + > + /* find the processor structure associated with this cpuid */ > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { Same as above -> (unsigned long) (entry + 1). > + cpu_node = (struct acpi_pptt_processor *)entry; > + > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + acpi_pptt_leaf_node(table_hdr, cpu_node)) { > + pr_debug("checking phy_cpu_id %d against acpi id %d\n", > + acpi_cpu_id, cpu_node->acpi_processor_id); > + if (acpi_cpu_id == cpu_node->acpi_processor_id) { > + /* found the correct entry */ > + pr_debug("match found!\n"); > + return (struct acpi_pptt_processor *)entry; > + } > + } > + > + if (entry->length == 0) { > + pr_err("Invalid zero length subtable\n"); > + break; > + } > + entry = (struct acpi_subtable_header *) > + ((u8 *)entry + entry->length); I also think it would be nicer to have an inline function for this: static struct acpi_subtable_header *acpi_get_next_entry(const struct acpi_subtable_header *); > + } > + > + return NULL; > +} > + > +/* > + * Given a acpi_pptt_processor node, walk up until we identify the > + * package that the node is associated with or we run out of levels > + * to request. 
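A sketch of what the suggested helpers could look like, using trimmed mock headers in place of the real ACPI structures (names, field sets, and sizes are placeholders, not the ACPICA layouts):

```c
#include <assert.h>
#include <stdint.h>

/* Trimmed stand-ins for struct acpi_table_header and
 * struct acpi_subtable_header. */
struct mock_table_header {
	char signature[4];
	uint32_t length;	/* length of the whole table, in bytes */
};

struct mock_subtable_header {
	uint8_t type;
	uint8_t length;		/* length of this subtable, in bytes */
};

/* End of the table as an address, for loop bounds. */
static inline unsigned long mock_table_end(struct mock_table_header *t)
{
	return (unsigned long)t + t->length;
}

/* Subtables start right after the fixed table header. */
static inline struct mock_subtable_header *mock_first_entry(
	struct mock_table_header *t)
{
	return (struct mock_subtable_header *)((uint8_t *)t + sizeof(*t));
}

/* Each subtable records its own length, so stepping is a byte add. */
static inline struct mock_subtable_header *mock_next_entry(
	struct mock_subtable_header *e)
{
	return (struct mock_subtable_header *)((uint8_t *)e + e->length);
}
```

With these, both walk loops reduce to the same `for (e = first; (unsigned long)(e + 1) < end; e = next(e))` shape, using the `(entry + 1)` bound suggested above.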
> + */ > +static struct acpi_pptt_processor *acpi_find_processor_package_id( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu, > + int level) > +{ > + struct acpi_pptt_processor *prev_node; > + > + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { > + pr_debug("level %d\n", level); > + prev_node = fetch_pptt_node(table_hdr, cpu->parent); > + if (prev_node == NULL) > + break; > + cpu = prev_node; > + level--; > + } > + return cpu; > +} > + > +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id) > +{ > + int number_of_levels = 0; > + struct acpi_pptt_processor *cpu; > + > + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (cpu) > + number_of_levels = acpi_process_node(table_hdr, cpu); > + > + return number_of_levels; > +} > + > +#define ACPI_6_2_CACHE_TYPE_DATA (0x0) > +#define ACPI_6_2_CACHE_TYPE_INSTR (1<<2) > +#define ACPI_6_2_CACHE_TYPE_UNIFIED (1<<3) > +#define ACPI_6_2_CACHE_POLICY_WB (0x0) > +#define ACPI_6_2_CACHE_POLICY_WT (1<<4) > +#define ACPI_6_2_CACHE_READ_ALLOCATE (0x0) > +#define ACPI_6_2_CACHE_WRITE_ALLOCATE (0x01) > +#define ACPI_6_2_CACHE_RW_ALLOCATE (0x02) > + > +static u8 acpi_cache_type(enum cache_type type) > +{ > + switch (type) { > + case CACHE_TYPE_DATA: > + pr_debug("Looking for data cache\n"); > + return ACPI_6_2_CACHE_TYPE_DATA; > + case CACHE_TYPE_INST: > + pr_debug("Looking for instruction cache\n"); > + return ACPI_6_2_CACHE_TYPE_INSTR; > + default: > + pr_debug("Unknown cache type, assume unified\n"); > + case CACHE_TYPE_UNIFIED: > + pr_debug("Looking for unified cache\n"); > + return ACPI_6_2_CACHE_TYPE_UNIFIED; > + } > +} > + > +/* find the ACPI node describing the cache type/level for the given CPU */ > +static struct acpi_pptt_cache *acpi_find_cache_node( > + struct acpi_table_header *table_hdr, u32 acpi_cpu_id, > + enum cache_type type, unsigned int level, > + struct acpi_pptt_processor **node) > +{ > + int total_levels = 0; > + struct 
acpi_pptt_cache *found = NULL; > + struct acpi_pptt_processor *cpu_node; > + u8 acpi_type = acpi_cache_type(type); > + > + pr_debug("Looking for CPU %d's level %d cache type %d\n", > + acpi_cpu_id, level, acpi_type); > + > + cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (!cpu_node) > + return NULL; > + > + do { > + found = acpi_find_cache_level(table_hdr, cpu_node, &total_levels, level, acpi_type); > + *node = cpu_node; > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while ((cpu_node) && (!found)); Why not combine the do...while loop and the previous check in a simple while loop? The same condition should work as-is for a while loop. Cheers,
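The suggested conversion keeps behavior identical because `cpu_node` is known non-NULL on entry (the `!cpu_node` check already returned) and `found` starts out NULL, so the body is guaranteed to run at least once either way. A minimal standalone model of the upward walk (mock types, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Mock stand-in for a PPTT node with a parent link. */
struct mock_node {
	struct mock_node *parent;
	int has_match;
};

/* Shape of the suggested rewrite: a plain while loop with the same
 * condition the do/while used as its exit test. */
static struct mock_node *walk_up(struct mock_node *node)
{
	struct mock_node *found = NULL;

	while (node && !found) {
		if (node->has_match)
			found = node;
		node = node->parent;
	}
	return found;
}
```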
Hi Jeremy, On 12.10.2017 21:48, Jeremy Linton wrote: > ACPI 6.2 adds a new table, which describes how processing units > are related to each other in tree like fashion. Caches are > also sprinkled throughout the tree and describe the properties > of the caches in relation to other caches and processing units. > > Add the code to parse the cache hierarchy and report the total > number of levels of cache for a given core using > acpi_find_last_cache_level() as well as fill out the individual > cores cache information with cache_setup_acpi() once the > cpu_cacheinfo structure has been populated by the arch specific > code. > > Further, report peers in the topology using setup_acpi_cpu_topology() > to report a unique ID for each processing unit at a given level > in the tree. These unique id's can then be used to match related > processing units which exist as threads, COD (clusters > on die), within a given package, etc. > > Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> > --- > drivers/acpi/pptt.c | 485 ++++++++++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 485 insertions(+) > create mode 100644 drivers/acpi/pptt.c > > diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c > new file mode 100644 > index 000000000000..c86715fed4a7 > --- /dev/null > +++ b/drivers/acpi/pptt.c > @@ -0,1 +1,485 @@ > +/* > + * Copyright (C) 2017, ARM > + * > + * This program is free software; you can redistribute it and/or modify it > + * under the terms and conditions of the GNU General Public License, > + * version 2, as published by the Free Software Foundation. > + * > + * This program is distributed in the hope it will be useful, but WITHOUT > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for > + * more details. 
> + * > + * This file implements parsing of Processor Properties Topology Table (PPTT) > + * which is optionally used to describe the processor and cache topology. > + * Due to the relative pointers used throughout the table, this doesn't > + * leverage the existing subtable parsing in the kernel. > + */ > +#define pr_fmt(fmt) "ACPI PPTT: " fmt > + > +#include <linux/acpi.h> > +#include <linux/cacheinfo.h> > +#include <acpi/processor.h> > + > +/* > + * Given the PPTT table, find and verify that the subtable entry > + * is located within the table > + */ > +static struct acpi_subtable_header *fetch_pptt_subtable( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + struct acpi_subtable_header *entry; > + > + /* there isn't a subtable at reference 0 */ > + if (!pptt_ref) > + return NULL; > + > + if (pptt_ref + sizeof(struct acpi_subtable_header) > table_hdr->length) > + return NULL; > + > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref); You can use ACPI_ADD_PTR() here. 
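`ACPI_ADD_PTR()` is ACPICA's typed offset-arithmetic helper: cast the base to a byte pointer, add a byte offset, and cast the result to the requested type. A rough rendering of its effect (the real macro lives in ACPICA's `actypes.h`; the version below is a paraphrased sketch, not the exact definition):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch of ACPI_ADD_PTR(type, base, offset): byte-offset pointer
 * arithmetic with the casts hidden behind one macro. */
#define MOCK_ADD_PTR(type, base, offset) \
	((type *)((uint8_t *)(base) + (size_t)(offset)))
```

Using it, casts like `(struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref)` collapse to a single macro invocation.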
> + > + if (pptt_ref + entry->length > table_hdr->length) > + return NULL; > + > + return entry; > +} > + > +static struct acpi_pptt_processor *fetch_pptt_node( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_processor *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_pptt_cache *fetch_pptt_cache( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_subtable_header *acpi_get_pptt_resource( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node, int resource) > +{ > + u32 ref; > + > + if (resource >= node->number_of_priv_resources) > + return NULL; > + > + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + > + sizeof(u32) * resource); ACPI_ADD_PTR() > + > + return fetch_pptt_subtable(table_hdr, ref); > +} > + > +/* > + * given a pptt resource, verify that it is a cache node, then walk > + * down each level of caches, counting how many levels are found > + * as well as checking the cache type (icache, dcache, unified). If a > + * level & type match, then we set found, and continue the search. > + * Once the entire cache branch has been walked return its max > + * depth. 
> + */ > +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, > + int local_level, > + struct acpi_subtable_header *res, > + struct acpi_pptt_cache **found, > + int level, int type) > +{ > + struct acpi_pptt_cache *cache; > + > + if (res->type != ACPI_PPTT_TYPE_CACHE) > + return 0; > + > + cache = (struct acpi_pptt_cache *) res; > + while (cache) { > + local_level++; > + > + if ((local_level == level) && > + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && > + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) { > + if (*found != NULL) > + pr_err("Found duplicate cache level/type unable to determine uniqueness\n"); > + > + pr_debug("Found cache @ level %d\n", level); > + *found = cache; > + /* > + * continue looking at this node's resource list > + * to verify that we don't find a duplicate > + * cache node. > + */ > + } > + cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); > + } > + return local_level; > +} > + > +/* > + * Given a CPU node look for cache levels that exist at this level, and then > + * for each cache node, count how many levels exist below (logically above) it. > + * If a level and type are specified, and we find that level/type, abort > + * processing and return the acpi_pptt_cache structure. > + */ > +static struct acpi_pptt_cache *acpi_find_cache_level( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node, > + int *starting_level, int level, int type) > +{ > + struct acpi_subtable_header *res; > + int number_of_levels = *starting_level; > + int resource = 0; > + struct acpi_pptt_cache *ret = NULL; > + int local_level; > + > + /* walk down from the processor node */ > + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) { > + resource++; > + > + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, > + res, &ret, level, type); > + /* > + * we are looking for the max depth. 
Since its potentially > + * possible for a given node to have resources with differing > + * depths verify that the depth we have found is the largest. > + */ > + if (number_of_levels < local_level) > + number_of_levels = local_level; > + } > + if (number_of_levels > *starting_level) > + *starting_level = number_of_levels; > + > + return ret; > +} > + > +/* > + * given a processor node containing a processing unit, walk into it and count > + * how many levels exist solely for it, and then walk up each level until we hit > + * the root node (ignore the package level because it may be possible to have > + * caches that exist across packages). Count the number of cache levels that > + * exist at each level on the way up. > + */ > +static int acpi_process_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node) > +{ > + int total_levels = 0; > + > + do { > + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0); > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while (cpu_node); > + > + return total_levels; > +} > + > +/* determine if the given node is a leaf node */ > +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + u32 node_entry; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); ACPI_ADD_PTR() > + > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + (cpu_node->parent == node_entry)) > + return 0; > + entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length); > + } > + return 1; > +} > + > +/* > + * Find the subtable entry 
describing the provided processor > + */ > +static struct acpi_pptt_processor *acpi_find_processor_node( > + struct acpi_table_header *table_hdr, > + u32 acpi_cpu_id) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); ACPI_ADD_PTR() > + > + /* find the processor structure associated with this cpuid */ > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + acpi_pptt_leaf_node(table_hdr, cpu_node)) { > + pr_debug("checking phy_cpu_id %d against acpi id %d\n", > + acpi_cpu_id, cpu_node->acpi_processor_id); > + if (acpi_cpu_id == cpu_node->acpi_processor_id) { > + /* found the correct entry */ > + pr_debug("match found!\n"); > + return (struct acpi_pptt_processor *)entry; > + } > + } > + > + if (entry->length == 0) { > + pr_err("Invalid zero length subtable\n"); > + break; > + } For a better table content validation, this could be done at the beginning of the loop, like that: if (WARN_TAINT(entry->length == 0, TAINT_FIRMWARE_WORKAROUND, "Invalid zero length subtable, bad PPTT table!\n")) break; > + entry = (struct acpi_subtable_header *) > + ((u8 *)entry + entry->length); ACPI_ADD_PTR() > + } > + > + return NULL; > +} > + > +/* > + * Given a acpi_pptt_processor node, walk up until we identify the > + * package that the node is associated with or we run out of levels > + * to request. 
> + */ > +static struct acpi_pptt_processor *acpi_find_processor_package_id( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu, > + int level) > +{ > + struct acpi_pptt_processor *prev_node; > + > + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { > + pr_debug("level %d\n", level); > + prev_node = fetch_pptt_node(table_hdr, cpu->parent); > + if (prev_node == NULL) > + break; > + cpu = prev_node; > + level--; > + } > + return cpu; > +} > + > +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id) > +{ > + int number_of_levels = 0; > + struct acpi_pptt_processor *cpu; > + > + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (cpu) > + number_of_levels = acpi_process_node(table_hdr, cpu); > + > + return number_of_levels; > +} > + Based on ACPI spec 6.2: > +#define ACPI_6_2_CACHE_TYPE_DATA (0x0) > +#define ACPI_6_2_CACHE_TYPE_INSTR (1<<2) > +#define ACPI_6_2_CACHE_TYPE_UNIFIED (1<<3) Bits 3:2: Cache type: 0x0 Data 0x1 Instruction 0x2 or 0x3 indicate a unified cache > +#define ACPI_6_2_CACHE_POLICY_WB (0x0) > +#define ACPI_6_2_CACHE_POLICY_WT (1<<4) > +#define ACPI_6_2_CACHE_READ_ALLOCATE (0x0) > +#define ACPI_6_2_CACHE_WRITE_ALLOCATE (0x01) > +#define ACPI_6_2_CACHE_RW_ALLOCATE (0x02) Bits 1:0: Allocation type 0x0 - Read allocate 0x1 - Write allocate 0x2 or 0x3 indicate Read and Write allocate BTW, why aren't these part of the ACPICA code (actbl1.h header) with ACPI_PPTT prefixes? 
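The key point in the bit layout above is that two encodings (0x2 and 0x3) map to the same meaning for both fields, so mask-and-range tests are safer than equality against a single define like `(1<<3)`. A sketch of the decoding (macro names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Attribute layout of the ACPI 6.2 PPTT cache structure as summarized
 * in the review: bits 1:0 allocation type, bits 3:2 cache type,
 * bit 4 write policy. Names below are stand-ins. */
#define MOCK_ALLOC_MASK		0x03
#define MOCK_TYPE_MASK		0x0c
#define MOCK_TYPE_SHIFT		2

/* 0x2 and 0x3 both encode "unified", so testing equality against a
 * single value would miss the 0x2 encoding. */
static int cache_is_unified(uint8_t attributes)
{
	return ((attributes & MOCK_TYPE_MASK) >> MOCK_TYPE_SHIFT) >= 0x2;
}

/* Likewise 0x2 and 0x3 both mean read *and* write allocate. */
static int cache_allocates_rw(uint8_t attributes)
{
	return (attributes & MOCK_ALLOC_MASK) >= 0x2;
}
```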
> + > +static u8 acpi_cache_type(enum cache_type type) > +{ > + switch (type) { > + case CACHE_TYPE_DATA: > + pr_debug("Looking for data cache\n"); > + return ACPI_6_2_CACHE_TYPE_DATA; > + case CACHE_TYPE_INST: > + pr_debug("Looking for instruction cache\n"); > + return ACPI_6_2_CACHE_TYPE_INSTR; > + default: > + pr_debug("Unknown cache type, assume unified\n"); > + case CACHE_TYPE_UNIFIED: > + pr_debug("Looking for unified cache\n"); > + return ACPI_6_2_CACHE_TYPE_UNIFIED; > + } > +} > + > +/* find the ACPI node describing the cache type/level for the given CPU */ > +static struct acpi_pptt_cache *acpi_find_cache_node( > + struct acpi_table_header *table_hdr, u32 acpi_cpu_id, > + enum cache_type type, unsigned int level, > + struct acpi_pptt_processor **node) > +{ > + int total_levels = 0; > + struct acpi_pptt_cache *found = NULL; > + struct acpi_pptt_processor *cpu_node; > + u8 acpi_type = acpi_cache_type(type); > + > + pr_debug("Looking for CPU %d's level %d cache type %d\n", > + acpi_cpu_id, level, acpi_type); > + > + cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (!cpu_node) > + return NULL; > + > + do { > + found = acpi_find_cache_level(table_hdr, cpu_node, &total_levels, level, acpi_type); Please align line to 80 characters at maximum. 
> + *node = cpu_node; > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while ((cpu_node) && (!found)); > + > + return found; > +} > + > +int acpi_find_last_cache_level(unsigned int cpu) > +{ > + u32 acpi_cpu_id; > + struct acpi_table_header *table; > + int number_of_levels = 0; > + acpi_status status; > + > + pr_debug("Cache Setup find last level cpu=%d\n", cpu); > + > + acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); > + } else { > + number_of_levels = acpi_parse_pptt(table, acpi_cpu_id); > + acpi_put_table(table); > + } > + pr_debug("Cache Setup find last level level=%d\n", number_of_levels); > + > + return number_of_levels; > +} > + > +/* > + * The ACPI spec implies that the fields in the cache structures are used to > + * extend and correct the information probed from the hardware. In the case > + * of arm64 the CCSIDR probing has been removed because it might be incorrect. 
> + */ > +static void update_cache_properties(struct cacheinfo *this_leaf, > + struct acpi_pptt_cache *found_cache, > + struct acpi_pptt_processor *cpu_node) > +{ > + if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) > + this_leaf->size = found_cache->size; > + if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) > + this_leaf->coherency_line_size = found_cache->line_size; > + if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) > + this_leaf->number_of_sets = found_cache->number_of_sets; > + if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) > + this_leaf->ways_of_associativity = found_cache->associativity; > + if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) > + switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) { > + case ACPI_6_2_CACHE_POLICY_WT: > + this_leaf->attributes = CACHE_WRITE_THROUGH; > + break; > + case ACPI_6_2_CACHE_POLICY_WB: > + this_leaf->attributes = CACHE_WRITE_BACK; > + break; > + default: > + pr_err("Unknown ACPI cache policy %d\n", > + found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY); > + } The 'default' case can never happen, please remove dead code. > + if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID) > + switch (found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE) { > + case ACPI_6_2_CACHE_READ_ALLOCATE: > + this_leaf->attributes |= CACHE_READ_ALLOCATE; > + break; > + case ACPI_6_2_CACHE_WRITE_ALLOCATE: > + this_leaf->attributes |= CACHE_WRITE_ALLOCATE; > + break; > + case ACPI_6_2_CACHE_RW_ALLOCATE: > + this_leaf->attributes |= > + CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE; > + break; > + default: > + pr_err("Unknown ACPI cache allocation policy %d\n", > + found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE); > + } Same here if you fix bits definitions. 
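Why the `default` arm is unreachable: `ACPI_PPTT_MASK_WRITE_POLICY` is a one-bit mask, so masking any attribute byte can only ever yield the WB (0x0) or WT (bit 4) encoding. A quick exhaustive check over all attribute bytes (names mirror the patch's defines, prefixed to mark them as stand-ins):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the patch's write-policy values behind a one-bit mask. */
#define MOCK_POLICY_MASK	(1 << 4)
#define MOCK_POLICY_WB		(0x0)
#define MOCK_POLICY_WT		(1 << 4)

/* A switch on (attributes & MOCK_POLICY_MASK) needs no default arm:
 * this predicate holds for every possible attribute byte. */
static int policy_is_wb_or_wt(uint8_t attributes)
{
	uint8_t policy = attributes & MOCK_POLICY_MASK;

	return policy == MOCK_POLICY_WB || policy == MOCK_POLICY_WT;
}
```

The same argument only applies to the allocation-type switch once its defines cover both the 0x2 and 0x3 encodings, as noted above.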
> +} > + > +static void cache_setup_acpi_cpu(struct acpi_table_header *table, > + unsigned int cpu) > +{ > + struct acpi_pptt_cache *found_cache; > + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); > + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > + struct cacheinfo *this_leaf; > + unsigned int index = 0; > + struct acpi_pptt_processor *cpu_node = NULL; > + > + while (index < get_cpu_cacheinfo(cpu)->num_leaves) { > + this_leaf = this_cpu_ci->info_list + index; > + found_cache = acpi_find_cache_node(table, acpi_cpu_id, > + this_leaf->type, > + this_leaf->level, > + &cpu_node); > + pr_debug("found = %p %p\n", found_cache, cpu_node); > + if (found_cache) > + update_cache_properties(this_leaf, > + found_cache, > + cpu_node); > + > + index++; > + } > +} > + > +static int topology_setup_acpi_cpu(struct acpi_table_header *table, > + unsigned int cpu, int level) > +{ > + struct acpi_pptt_processor *cpu_node; > + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > + > + cpu_node = acpi_find_processor_node(table, acpi_cpu_id); > + if (cpu_node) { > + cpu_node = acpi_find_processor_package_id(table, cpu_node, level); > + /* Only the first level has a guaranteed id */ > + if (level == 0) > + return cpu_node->acpi_processor_id; > + return (int)((u8 *)cpu_node - (u8 *)table); > + } > + pr_err_once("PPTT table found, but unable to locate core for %d\n", > + cpu); > + return -ENOENT; > +} > + > +/* > + * simply assign a ACPI cache entry to each known CPU cache entry > + * determining which entries are shared is done later. 
> + */ > +int cache_setup_acpi(unsigned int cpu) > +{ > + struct acpi_table_header *table; > + acpi_status status; > + > + pr_debug("Cache Setup ACPI cpu %d\n", cpu); > + > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); > + return -ENOENT; > + } > + > + cache_setup_acpi_cpu(table, cpu); > + acpi_put_table(table); > + > + return status; > +} > + > +/* > + * Determine a topology unique ID for each thread/core/cluster/socket/etc. > + * This ID can then be used to group peers. > + */ > +int setup_acpi_cpu_topology(unsigned int cpu, int level) > +{ > + struct acpi_table_header *table; > + acpi_status status; > + int retval; > + > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cpu topology may be inaccurate\n"); > + return -ENOENT; > + } > + retval = topology_setup_acpi_cpu(table, cpu, level); > + pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n", > + cpu, level, retval); > + acpi_put_table(table); > + > + return retval; > +} > Thanks, Tomasz
Hi, On 10/13/2017 09:23 AM, tn wrote: > Hi Jeremy, > > On 12.10.2017 21:48, Jeremy Linton wrote: >> ACPI 6.2 adds a new table, which describes how processing units >> are related to each other in tree like fashion. Caches are >> also sprinkled throughout the tree and describe the properties >> of the caches in relation to other caches and processing units. >> >> Add the code to parse the cache hierarchy and report the total >> number of levels of cache for a given core using >> acpi_find_last_cache_level() as well as fill out the individual >> cores cache information with cache_setup_acpi() once the >> cpu_cacheinfo structure has been populated by the arch specific >> code. >> >> Further, report peers in the topology using setup_acpi_cpu_topology() >> to report a unique ID for each processing unit at a given level >> in the tree. These unique id's can then be used to match related >> processing units which exist as threads, COD (clusters >> on die), within a given package, etc. >> >> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> >> --- >> drivers/acpi/pptt.c | 485 >> ++++++++++++++++++++++++++++++++++++++++++++++++++++ >> 1 file changed, 485 insertions(+) >> create mode 100644 drivers/acpi/pptt.c >> >> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c >> new file mode 100644 >> index 000000000000..c86715fed4a7 >> --- /dev/null >> +++ b/drivers/acpi/pptt.c >> @@ -0,1 +1,485 @@ >> +/* >> + * Copyright (C) 2017, ARM >> + * >> + * This program is free software; you can redistribute it and/or >> modify it >> + * under the terms and conditions of the GNU General Public License, >> + * version 2, as published by the Free Software Foundation. >> + * >> + * This program is distributed in the hope it will be useful, but >> WITHOUT >> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or >> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public >> License for >> + * more details. 
>> + * >> + * This file implements parsing of Processor Properties Topology >> Table (PPTT) >> + * which is optionally used to describe the processor and cache >> topology. >> + * Due to the relative pointers used throughout the table, this doesn't >> + * leverage the existing subtable parsing in the kernel. >> + */ >> +#define pr_fmt(fmt) "ACPI PPTT: " fmt >> + >> +#include <linux/acpi.h> >> +#include <linux/cacheinfo.h> >> +#include <acpi/processor.h> >> + >> +/* >> + * Given the PPTT table, find and verify that the subtable entry >> + * is located within the table >> + */ >> +static struct acpi_subtable_header *fetch_pptt_subtable( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + struct acpi_subtable_header *entry; >> + >> + /* there isn't a subtable at reference 0 */ >> + if (!pptt_ref) >> + return NULL; >> + >> + if (pptt_ref + sizeof(struct acpi_subtable_header) > >> table_hdr->length) >> + return NULL; >> + >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref); > > You can use ACPI_ADD_PTR() here. Hmmm, that is a useful macro. 
> >> + >> + if (pptt_ref + entry->length > table_hdr->length) >> + return NULL; >> + >> + return entry; >> +} >> + >> +static struct acpi_pptt_processor *fetch_pptt_node( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + return (struct acpi_pptt_processor >> *)fetch_pptt_subtable(table_hdr, pptt_ref); >> +} >> + >> +static struct acpi_pptt_cache *fetch_pptt_cache( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, >> pptt_ref); >> +} >> + >> +static struct acpi_subtable_header *acpi_get_pptt_resource( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *node, int resource) >> +{ >> + u32 ref; >> + >> + if (resource >= node->number_of_priv_resources) >> + return NULL; >> + >> + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + >> + sizeof(u32) * resource); > > ACPI_ADD_PTR() > >> + >> + return fetch_pptt_subtable(table_hdr, ref); >> +} >> + >> +/* >> + * given a pptt resource, verify that it is a cache node, then walk >> + * down each level of caches, counting how many levels are found >> + * as well as checking the cache type (icache, dcache, unified). If a >> + * level & type match, then we set found, and continue the search. >> + * Once the entire cache branch has been walked return its max >> + * depth. 
>> + */ >> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, >> + int local_level, >> + struct acpi_subtable_header *res, >> + struct acpi_pptt_cache **found, >> + int level, int type) >> +{ >> + struct acpi_pptt_cache *cache; >> + >> + if (res->type != ACPI_PPTT_TYPE_CACHE) >> + return 0; >> + >> + cache = (struct acpi_pptt_cache *) res; >> + while (cache) { >> + local_level++; >> + >> + if ((local_level == level) && >> + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >> + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) { >> + if (*found != NULL) >> + pr_err("Found duplicate cache level/type unable to >> determine uniqueness\n"); >> + >> + pr_debug("Found cache @ level %d\n", level); >> + *found = cache; >> + /* >> + * continue looking at this node's resource list >> + * to verify that we don't find a duplicate >> + * cache node. >> + */ >> + } >> + cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); >> + } >> + return local_level; >> +} >> + >> +/* >> + * Given a CPU node look for cache levels that exist at this level, >> and then >> + * for each cache node, count how many levels exist below (logically >> above) it. >> + * If a level and type are specified, and we find that level/type, abort >> + * processing and return the acpi_pptt_cache structure. >> + */ >> +static struct acpi_pptt_cache *acpi_find_cache_level( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu_node, >> + int *starting_level, int level, int type) >> +{ >> + struct acpi_subtable_header *res; >> + int number_of_levels = *starting_level; >> + int resource = 0; >> + struct acpi_pptt_cache *ret = NULL; >> + int local_level; >> + >> + /* walk down from the processor node */ >> + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, >> resource))) { >> + resource++; >> + >> + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, >> + res, &ret, level, type); >> + /* >> + * we are looking for the max depth. 
Since its potentially >> + * possible for a given node to have resources with differing >> + * depths verify that the depth we have found is the largest. >> + */ >> + if (number_of_levels < local_level) >> + number_of_levels = local_level; >> + } >> + if (number_of_levels > *starting_level) >> + *starting_level = number_of_levels; >> + >> + return ret; >> +} >> + >> +/* >> + * given a processor node containing a processing unit, walk into it >> and count >> + * how many levels exist solely for it, and then walk up each level >> until we hit >> + * the root node (ignore the package level because it may be possible >> to have >> + * caches that exist across packages). Count the number of cache >> levels that >> + * exist at each level on the way up. >> + */ >> +static int acpi_process_node(struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu_node) >> +{ >> + int total_levels = 0; >> + >> + do { >> + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0); >> + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> + } while (cpu_node); >> + >> + return total_levels; >> +} >> + >> +/* determine if the given node is a leaf node */ >> +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *node) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + u32 node_entry; >> + struct acpi_pptt_processor *cpu_node; >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); > > ACPI_ADD_PTR() > >> + >> + while (((unsigned long)entry) + sizeof(struct >> acpi_subtable_header) < table_end) { >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + (cpu_node->parent == node_entry)) >> + return 0; >> + entry = (struct acpi_subtable_header *)((u8 *)entry + >> 
entry->length); >> + } >> + return 1; >> +} >> + >> +/* >> + * Find the subtable entry describing the provided processor >> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_node( >> + struct acpi_table_header *table_hdr, >> + u32 acpi_cpu_id) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + struct acpi_pptt_processor *cpu_node; >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); > > ACPI_ADD_PTR() > >> + >> + /* find the processor structure associated with this cpuid */ >> + while (((unsigned long)entry) + sizeof(struct >> acpi_subtable_header) < table_end) { >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + >> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + acpi_pptt_leaf_node(table_hdr, cpu_node)) { >> + pr_debug("checking phy_cpu_id %d against acpi id %d\n", >> + acpi_cpu_id, cpu_node->acpi_processor_id); >> + if (acpi_cpu_id == cpu_node->acpi_processor_id) { >> + /* found the correct entry */ >> + pr_debug("match found!\n"); >> + return (struct acpi_pptt_processor *)entry; >> + } >> + } >> + >> + if (entry->length == 0) { >> + pr_err("Invalid zero length subtable\n"); >> + break; >> + } > > For a better table content validation, this could be done at the > beginning of the loop, like that: > > if (WARN_TAINT(entry->length == 0, TAINT_FIRMWARE_WORKAROUND, > "Invalid zero length subtable, bad PPTT table!\n")) > break; > > >> + entry = (struct acpi_subtable_header *) >> + ((u8 *)entry + entry->length); > > ACPI_ADD_PTR() > >> + } >> + >> + return NULL; >> +} >> + >> +/* >> + * Given a acpi_pptt_processor node, walk up until we identify the >> + * package that the node is associated with or we run out of levels >> + * to request. 
>> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_package_id( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu, >> + int level) >> +{ >> + struct acpi_pptt_processor *prev_node; >> + >> + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { >> + pr_debug("level %d\n", level); >> + prev_node = fetch_pptt_node(table_hdr, cpu->parent); >> + if (prev_node == NULL) >> + break; >> + cpu = prev_node; >> + level--; >> + } >> + return cpu; >> +} >> + >> +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 >> acpi_cpu_id) >> +{ >> + int number_of_levels = 0; >> + struct acpi_pptt_processor *cpu; >> + >> + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> + if (cpu) >> + number_of_levels = acpi_process_node(table_hdr, cpu); >> + >> + return number_of_levels; >> +} >> + > > Based on ACPI spec 6.2: > >> +#define ACPI_6_2_CACHE_TYPE_DATA (0x0) >> +#define ACPI_6_2_CACHE_TYPE_INSTR (1<<2) >> +#define ACPI_6_2_CACHE_TYPE_UNIFIED (1<<3) > > Bits:3:2: Cache type: > 0x0 Data > 0x1 Instruction > 0x2 or 0x3 Indicate a unified cache Originally I was trying to do something more clever than the switch (given the less than optimal bit definitions), but the result wasn't as clear as the switch, so I just plugged that in but forgot about the 3rd case. > >> +#define ACPI_6_2_CACHE_POLICY_WB (0x0) >> +#define ACPI_6_2_CACHE_POLICY_WT (1<<4) >> +#define ACPI_6_2_CACHE_READ_ALLOCATE (0x0) >> +#define ACPI_6_2_CACHE_WRITE_ALLOCATE (0x01) >> +#define ACPI_6_2_CACHE_RW_ALLOCATE (0x02) > > Bits 1:0: Allocation type > 0x0 - Read allocate > 0x1 - Write allocate > 0x2 or 0x03 indicate Read and Write allocate > > BTW, why these are not part of ACPICA code (actbl1.h header) and have > ACPI_PPTT prefixes? Well I guess they probably should be the only question is how one goes about defining the duplicates.. 
AKA: #define ACPI_PPTT_CACHE_RW_ALLOCATE (0x02) #define ACPI_PPTT_CACHE_RW_ALLOCATE_ALT (0x03) > >> + >> +static u8 acpi_cache_type(enum cache_type type) >> +{ >> + switch (type) { >> + case CACHE_TYPE_DATA: >> + pr_debug("Looking for data cache\n"); >> + return ACPI_6_2_CACHE_TYPE_DATA; >> + case CACHE_TYPE_INST: >> + pr_debug("Looking for instruction cache\n"); >> + return ACPI_6_2_CACHE_TYPE_INSTR; >> + default: >> + pr_debug("Unknown cache type, assume unified\n"); >> + case CACHE_TYPE_UNIFIED: >> + pr_debug("Looking for unified cache\n"); >> + return ACPI_6_2_CACHE_TYPE_UNIFIED; >> + } >> +} >> + >> +/* find the ACPI node describing the cache type/level for the given >> CPU */ >> +static struct acpi_pptt_cache *acpi_find_cache_node( >> + struct acpi_table_header *table_hdr, u32 acpi_cpu_id, >> + enum cache_type type, unsigned int level, >> + struct acpi_pptt_processor **node) >> +{ >> + int total_levels = 0; >> + struct acpi_pptt_cache *found = NULL; >> + struct acpi_pptt_processor *cpu_node; >> + u8 acpi_type = acpi_cache_type(type); >> + >> + pr_debug("Looking for CPU %d's level %d cache type %d\n", >> + acpi_cpu_id, level, acpi_type); >> + >> + cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> + if (!cpu_node) >> + return NULL; >> + >> + do { >> + found = acpi_find_cache_level(table_hdr, cpu_node, >> &total_levels, level, acpi_type); > > Please align line to 80 characters at maximum. 
ok, > >> + *node = cpu_node; >> + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> + } while ((cpu_node) && (!found)); >> + >> + return found; >> +} >> + >> +int acpi_find_last_cache_level(unsigned int cpu) >> +{ >> + u32 acpi_cpu_id; >> + struct acpi_table_header *table; >> + int number_of_levels = 0; >> + acpi_status status; >> + >> + pr_debug("Cache Setup find last level cpu=%d\n", cpu); >> + >> + acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; >> + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); >> + if (ACPI_FAILURE(status)) { >> + pr_err_once("No PPTT table found, cache topology may be >> inaccurate\n"); >> + } else { >> + number_of_levels = acpi_parse_pptt(table, acpi_cpu_id); >> + acpi_put_table(table); >> + } >> + pr_debug("Cache Setup find last level level=%d\n", >> number_of_levels); >> + >> + return number_of_levels; >> +} >> + >> +/* >> + * The ACPI spec implies that the fields in the cache structures are >> used to >> + * extend and correct the information probed from the hardware. In >> the case >> + * of arm64 the CCSIDR probing has been removed because it might be >> incorrect. 
>> + */ >> +static void update_cache_properties(struct cacheinfo *this_leaf, >> + struct acpi_pptt_cache *found_cache, >> + struct acpi_pptt_processor *cpu_node) >> +{ >> + if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) >> + this_leaf->size = found_cache->size; >> + if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) >> + this_leaf->coherency_line_size = found_cache->line_size; >> + if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) >> + this_leaf->number_of_sets = found_cache->number_of_sets; >> + if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) >> + this_leaf->ways_of_associativity = found_cache->associativity; >> + if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) >> + switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) { >> + case ACPI_6_2_CACHE_POLICY_WT: >> + this_leaf->attributes = CACHE_WRITE_THROUGH; >> + break; >> + case ACPI_6_2_CACHE_POLICY_WB: >> + this_leaf->attributes = CACHE_WRITE_BACK; >> + break; >> + default: >> + pr_err("Unknown ACPI cache policy %d\n", >> + found_cache->attributes & >> ACPI_PPTT_MASK_WRITE_POLICY); >> + } > > The 'default' case can never happen, please remove dead code. Ok, > >> + if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID) >> + switch (found_cache->attributes & >> ACPI_PPTT_MASK_ALLOCATION_TYPE) { >> + case ACPI_6_2_CACHE_READ_ALLOCATE: >> + this_leaf->attributes |= CACHE_READ_ALLOCATE; >> + break; >> + case ACPI_6_2_CACHE_WRITE_ALLOCATE: >> + this_leaf->attributes |= CACHE_WRITE_ALLOCATE; >> + break; >> + case ACPI_6_2_CACHE_RW_ALLOCATE: >> + this_leaf->attributes |= >> + CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE; >> + break; >> + default: >> + pr_err("Unknown ACPI cache allocation policy %d\n", >> + found_cache->attributes & >> ACPI_PPTT_MASK_ALLOCATION_TYPE); >> + } > > Same here if you fix bits definitions. 
Sure, > >> +} >> + >> +static void cache_setup_acpi_cpu(struct acpi_table_header *table, >> + unsigned int cpu) >> +{ >> + struct acpi_pptt_cache *found_cache; >> + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); >> + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; >> + struct cacheinfo *this_leaf; >> + unsigned int index = 0; >> + struct acpi_pptt_processor *cpu_node = NULL; >> + >> + while (index < get_cpu_cacheinfo(cpu)->num_leaves) { >> + this_leaf = this_cpu_ci->info_list + index; >> + found_cache = acpi_find_cache_node(table, acpi_cpu_id, >> + this_leaf->type, >> + this_leaf->level, >> + &cpu_node); >> + pr_debug("found = %p %p\n", found_cache, cpu_node); >> + if (found_cache) >> + update_cache_properties(this_leaf, >> + found_cache, >> + cpu_node); >> + >> + index++; >> + } >> +} >> + >> +static int topology_setup_acpi_cpu(struct acpi_table_header *table, >> + unsigned int cpu, int level) >> +{ >> + struct acpi_pptt_processor *cpu_node; >> + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; >> + >> + cpu_node = acpi_find_processor_node(table, acpi_cpu_id); >> + if (cpu_node) { >> + cpu_node = acpi_find_processor_package_id(table, cpu_node, >> level); >> + /* Only the first level has a guaranteed id */ >> + if (level == 0) >> + return cpu_node->acpi_processor_id; >> + return (int)((u8 *)cpu_node - (u8 *)table); >> + } >> + pr_err_once("PPTT table found, but unable to locate core for %d\n", >> + cpu); >> + return -ENOENT; >> +} >> + >> +/* >> + * simply assign a ACPI cache entry to each known CPU cache entry >> + * determining which entries are shared is done later. 
>> + */ >> +int cache_setup_acpi(unsigned int cpu) >> +{ >> + struct acpi_table_header *table; >> + acpi_status status; >> + >> + pr_debug("Cache Setup ACPI cpu %d\n", cpu); >> + >> + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); >> + if (ACPI_FAILURE(status)) { >> + pr_err_once("No PPTT table found, cache topology may be >> inaccurate\n"); >> + return -ENOENT; >> + } >> + >> + cache_setup_acpi_cpu(table, cpu); >> + acpi_put_table(table); >> + >> + return status; >> +} >> + >> +/* >> + * Determine a topology unique ID for each >> thread/core/cluster/socket/etc. >> + * This ID can then be used to group peers. >> + */ >> +int setup_acpi_cpu_topology(unsigned int cpu, int level) >> +{ >> + struct acpi_table_header *table; >> + acpi_status status; >> + int retval; >> + >> + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); >> + if (ACPI_FAILURE(status)) { >> + pr_err_once("No PPTT table found, cpu topology may be >> inaccurate\n"); >> + return -ENOENT; >> + } >> + retval = topology_setup_acpi_cpu(table, cpu, level); >> + pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n", >> + cpu, level, retval); >> + acpi_put_table(table); >> + >> + return retval; >> +} >> > > Thanks, > Tomasz > Thanks for taking the time to look at this.
Hi, Thanks for spending the time to take a look at this. On 10/13/2017 04:56 AM, Julien Thierry wrote: > Hi Jeremy, > > Please see below some suggestions. > > On 12/10/17 20:48, Jeremy Linton wrote: >> ACPI 6.2 adds a new table, which describes how processing units >> are related to each other in tree like fashion. Caches are >> also sprinkled throughout the tree and describe the properties >> of the caches in relation to other caches and processing units. >> >> Add the code to parse the cache hierarchy and report the total >> number of levels of cache for a given core using >> acpi_find_last_cache_level() as well as fill out the individual >> cores cache information with cache_setup_acpi() once the >> cpu_cacheinfo structure has been populated by the arch specific >> code. >> >> Further, report peers in the topology using setup_acpi_cpu_topology() >> to report a unique ID for each processing unit at a given level >> in the tree. These unique id's can then be used to match related >> processing units which exist as threads, COD (clusters >> on die), within a given package, etc. >> >> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> >> --- >> drivers/acpi/pptt.c | 485 >> ++++++++++++++++++++++++++++++++++++++++++++++++++++ >> 1 file changed, 485 insertions(+) >> create mode 100644 drivers/acpi/pptt.c >> >> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c >> new file mode 100644 >> index 000000000000..c86715fed4a7 >> --- /dev/null >> +++ b/drivers/acpi/pptt.c >> @@ -0,1 +1,485 @@ >> +/* >> + * Copyright (C) 2017, ARM >> + * >> + * This program is free software; you can redistribute it and/or >> modify it >> + * under the terms and conditions of the GNU General Public License, >> + * version 2, as published by the Free Software Foundation. >> + * >> + * This program is distributed in the hope it will be useful, but >> WITHOUT >> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or >> + * FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public >> License for >> + * more details. >> + * >> + * This file implements parsing of Processor Properties Topology >> Table (PPTT) >> + * which is optionally used to describe the processor and cache >> topology. >> + * Due to the relative pointers used throughout the table, this doesn't >> + * leverage the existing subtable parsing in the kernel. >> + */ >> +#define pr_fmt(fmt) "ACPI PPTT: " fmt >> + >> +#include <linux/acpi.h> >> +#include <linux/cacheinfo.h> >> +#include <acpi/processor.h> >> + >> +/* >> + * Given the PPTT table, find and verify that the subtable entry >> + * is located within the table >> + */ >> +static struct acpi_subtable_header *fetch_pptt_subtable( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + struct acpi_subtable_header *entry; >> + >> + /* there isn't a subtable at reference 0 */ >> + if (!pptt_ref) >> + return NULL; > > Seeing the usage of pptt_ref to retrieve the subtable, would the > following be a more accurate check? > > if (pptt_ref < sizeof(struct acpi_table_header)) > return NULL; Yes, that makes it better match the comment, and I guess tightens up the sanity checking. The original intention was just to catch null references that were encoded as parent/etc fields. 
> >> + > >> + if (pptt_ref + sizeof(struct acpi_subtable_header) > >> table_hdr->length) >> + return NULL; >> + >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref); >> + >> + if (pptt_ref + entry->length > table_hdr->length) >> + return NULL; >> + >> + return entry; >> +} >> + >> +static struct acpi_pptt_processor *fetch_pptt_node( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + return (struct acpi_pptt_processor >> *)fetch_pptt_subtable(table_hdr, pptt_ref); >> +} >> + >> +static struct acpi_pptt_cache *fetch_pptt_cache( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, >> pptt_ref); >> +} >> + >> +static struct acpi_subtable_header *acpi_get_pptt_resource( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *node, int resource) >> +{ >> + u32 ref; >> + >> + if (resource >= node->number_of_priv_resources) >> + return NULL; >> + >> + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + >> + sizeof(u32) * resource); >> + > > I think this can be simplified as: > > ref = *((u32 *)(node + 1) + resource); I think Tomasz had a better suggestion with regard to ACPI_ADD_PTR() for avoiding the explicit pointer math, although it may not be that clean either because it doesn't fit 1:1 with the macro at the moment, maybe I'm doing it wrong... > >> + return fetch_pptt_subtable(table_hdr, ref); >> +} >> + >> +/* >> + * given a pptt resource, verify that it is a cache node, then walk >> + * down each level of caches, counting how many levels are found >> + * as well as checking the cache type (icache, dcache, unified). If a >> + * level & type match, then we set found, and continue the search. >> + * Once the entire cache branch has been walked return its max >> + * depth. 
>> + */ >> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, >> + int local_level, >> + struct acpi_subtable_header *res, >> + struct acpi_pptt_cache **found, >> + int level, int type) >> +{ >> + struct acpi_pptt_cache *cache; >> + >> + if (res->type != ACPI_PPTT_TYPE_CACHE) >> + return 0; >> + >> + cache = (struct acpi_pptt_cache *) res; >> + while (cache) { >> + local_level++; >> + >> + if ((local_level == level) && >> + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >> + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) { >> + if (*found != NULL) >> + pr_err("Found duplicate cache level/type unable to >> determine uniqueness\n"); >> + >> + pr_debug("Found cache @ level %d\n", level); >> + *found = cache; >> + /* >> + * continue looking at this node's resource list >> + * to verify that we don't find a duplicate >> + * cache node. >> + */ >> + } >> + cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); >> + } >> + return local_level; >> +} >> + >> +/* >> + * Given a CPU node look for cache levels that exist at this level, >> and then >> + * for each cache node, count how many levels exist below (logically >> above) it. >> + * If a level and type are specified, and we find that level/type, abort >> + * processing and return the acpi_pptt_cache structure. >> + */ >> +static struct acpi_pptt_cache *acpi_find_cache_level( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu_node, >> + int *starting_level, int level, int type) >> +{ >> + struct acpi_subtable_header *res; >> + int number_of_levels = *starting_level; >> + int resource = 0; >> + struct acpi_pptt_cache *ret = NULL; >> + int local_level; >> + >> + /* walk down from the processor node */ >> + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, >> resource))) { >> + resource++; >> + >> + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, >> + res, &ret, level, type); >> + /* >> + * we are looking for the max depth. 
Since its potentially >> + * possible for a given node to have resources with differing >> + * depths verify that the depth we have found is the largest. >> + */ >> + if (number_of_levels < local_level) >> + number_of_levels = local_level; >> + } >> + if (number_of_levels > *starting_level) >> + *starting_level = number_of_levels; >> + >> + return ret; >> +} >> + >> +/* >> + * given a processor node containing a processing unit, walk into it >> and count >> + * how many levels exist solely for it, and then walk up each level >> until we hit >> + * the root node (ignore the package level because it may be possible >> to have >> + * caches that exist across packages). Count the number of cache >> levels that >> + * exist at each level on the way up. >> + */ >> +static int acpi_process_node(struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu_node) >> +{ >> + int total_levels = 0; >> + >> + do { >> + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0); >> + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> + } while (cpu_node); >> + >> + return total_levels; >> +} >> + >> +/* determine if the given node is a leaf node */ >> +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *node) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + u32 node_entry; >> + struct acpi_pptt_processor *cpu_node; > > Can cpu_node be defined inside the loop? It isn't used outside. Yes, but i'm not sure that is the style of the acpi code, if you look at scan.c, acpi_ipmi.c maybe others, they seem to be following the "all definitions at the top of the block" form despite having a few loops with variables that are only used in the block. 
> >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); >> + >> + while (((unsigned long)entry) + sizeof(struct >> acpi_subtable_header) < table_end) { > > while ((unsigned long) (entry + 1) < table_end) { > >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + (cpu_node->parent == node_entry)) >> + return 0; >> + entry = (struct acpi_subtable_header *)((u8 *)entry + >> entry->length); >> + } >> + return 1; >> +} >> + >> +/* >> + * Find the subtable entry describing the provided processor >> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_node( >> + struct acpi_table_header *table_hdr, >> + u32 acpi_cpu_id) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + struct acpi_pptt_processor *cpu_node; >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); > > Can I suggest having two inline functions for this and the above function? > > static inline unsigned long acpi_get_table_end(const struct > acpi_table_header *); Which is a bit overkill for an add, let me think about this one. > > static inline struct acpi_subtable_header *acpi_get_first_entry(const > struct acpi_table_header *); This one and the below are really just degenerate cases of fetch_pptt_subtable(). > > (Feel free to adapt the names of course) > >> + >> + /* find the processor structure associated with this cpuid */ >> + while (((unsigned long)entry) + sizeof(struct >> acpi_subtable_header) < table_end) { > > Same as above -> (unsigned long) (entry + 1). 
> > >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + >> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + acpi_pptt_leaf_node(table_hdr, cpu_node)) { >> + pr_debug("checking phy_cpu_id %d against acpi id %d\n", >> + acpi_cpu_id, cpu_node->acpi_processor_id); >> + if (acpi_cpu_id == cpu_node->acpi_processor_id) { >> + /* found the correct entry */ >> + pr_debug("match found!\n"); >> + return (struct acpi_pptt_processor *)entry; >> + } >> + } >> + >> + if (entry->length == 0) { >> + pr_err("Invalid zero length subtable\n"); >> + break; >> + } >> + entry = (struct acpi_subtable_header *) >> + ((u8 *)entry + entry->length); > > > I also think it would be nicer to have an inline function for this: > > static struct acpi_subtable_header *acpi_get_next_entry(const struct > acpi_subtable_header *); Which is just a degenerate case of fetch_pptt_subtable() in both cases after having had the macro in actypes.h pointed out, I think most of this manipulation is going to just get buried behind those macros. > > >> + } >> + >> + return NULL; >> +} >> + >> +/* >> + * Given a acpi_pptt_processor node, walk up until we identify the >> + * package that the node is associated with or we run out of levels >> + * to request. 
>> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_package_id( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu, >> + int level) >> +{ >> + struct acpi_pptt_processor *prev_node; >> + >> + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { >> + pr_debug("level %d\n", level); >> + prev_node = fetch_pptt_node(table_hdr, cpu->parent); >> + if (prev_node == NULL) >> + break; >> + cpu = prev_node; >> + level--; >> + } >> + return cpu; >> +} >> + >> +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 >> acpi_cpu_id) >> +{ >> + int number_of_levels = 0; >> + struct acpi_pptt_processor *cpu; >> + >> + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> + if (cpu) >> + number_of_levels = acpi_process_node(table_hdr, cpu); >> + >> + return number_of_levels; >> +} >> + >> +#define ACPI_6_2_CACHE_TYPE_DATA (0x0) >> +#define ACPI_6_2_CACHE_TYPE_INSTR (1<<2) >> +#define ACPI_6_2_CACHE_TYPE_UNIFIED (1<<3) >> +#define ACPI_6_2_CACHE_POLICY_WB (0x0) >> +#define ACPI_6_2_CACHE_POLICY_WT (1<<4) >> +#define ACPI_6_2_CACHE_READ_ALLOCATE (0x0) >> +#define ACPI_6_2_CACHE_WRITE_ALLOCATE (0x01) >> +#define ACPI_6_2_CACHE_RW_ALLOCATE (0x02) >> + >> +static u8 acpi_cache_type(enum cache_type type) >> +{ >> + switch (type) { >> + case CACHE_TYPE_DATA: >> + pr_debug("Looking for data cache\n"); >> + return ACPI_6_2_CACHE_TYPE_DATA; >> + case CACHE_TYPE_INST: >> + pr_debug("Looking for instruction cache\n"); >> + return ACPI_6_2_CACHE_TYPE_INSTR; >> + default: >> + pr_debug("Unknown cache type, assume unified\n"); >> + case CACHE_TYPE_UNIFIED: >> + pr_debug("Looking for unified cache\n"); >> + return ACPI_6_2_CACHE_TYPE_UNIFIED; >> + } >> +} >> + >> +/* find the ACPI node describing the cache type/level for the given >> CPU */ >> +static struct acpi_pptt_cache *acpi_find_cache_node( >> + struct acpi_table_header *table_hdr, u32 acpi_cpu_id, >> + enum cache_type type, unsigned int level, >> + struct 
acpi_pptt_processor **node) >> +{ >> + int total_levels = 0; >> + struct acpi_pptt_cache *found = NULL; >> + struct acpi_pptt_processor *cpu_node; >> + u8 acpi_type = acpi_cache_type(type); >> + >> + pr_debug("Looking for CPU %d's level %d cache type %d\n", >> + acpi_cpu_id, level, acpi_type); >> + >> + cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> + if (!cpu_node) >> + return NULL; >> + >> + do { >> + found = acpi_find_cache_level(table_hdr, cpu_node, >> &total_levels, level, acpi_type); >> + *node = cpu_node; >> + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> + } while ((cpu_node) && (!found)); > > Why not combine the do...while loop and the previous check in a simple > while loop? The same condition should work as such for a while loop. Ok, sure...
On 12/10/2017 20:48, Jeremy Linton wrote: > ACPI 6.2 adds a new table, which describes how processing units > are related to each other in tree like fashion. Caches are > also sprinkled throughout the tree and describe the properties > of the caches in relation to other caches and processing units. > > Add the code to parse the cache hierarchy and report the total > number of levels of cache for a given core using > acpi_find_last_cache_level() as well as fill out the individual > cores cache information with cache_setup_acpi() once the > cpu_cacheinfo structure has been populated by the arch specific > code. > > Further, report peers in the topology using setup_acpi_cpu_topology() > to report a unique ID for each processing unit at a given level > in the tree. These unique id's can then be used to match related > processing units which exist as threads, COD (clusters > on die), within a given package, etc. > As already commented, there are many lines over 80 characters. And so far I only really looked at cpu topology part. > Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> > --- > drivers/acpi/pptt.c | 485 ++++++++++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 485 insertions(+) > create mode 100644 drivers/acpi/pptt.c > > diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c > new file mode 100644 > index 000000000000..c86715fed4a7 > --- /dev/null > +++ b/drivers/acpi/pptt.c > @@ -0,1 +1,485 @@ > +/* > + * Copyright (C) 2017, ARM > + * > + * This program is free software; you can redistribute it and/or modify it > + * under the terms and conditions of the GNU General Public License, > + * version 2, as published by the Free Software Foundation. > + * > + * This program is distributed in the hope it will be useful, but WITHOUT > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for > + * more details. 
> + * > + * This file implements parsing of Processor Properties Topology Table (PPTT) > + * which is optionally used to describe the processor and cache topology. > + * Due to the relative pointers used throughout the table, this doesn't > + * leverage the existing subtable parsing in the kernel. > + */ > +#define pr_fmt(fmt) "ACPI PPTT: " fmt > + > +#include <linux/acpi.h> > +#include <linux/cacheinfo.h> > +#include <acpi/processor.h> > + > +/* > + * Given the PPTT table, find and verify that the subtable entry > + * is located within the table > + */ > +static struct acpi_subtable_header *fetch_pptt_subtable( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + struct acpi_subtable_header *entry; > + > + /* there isn't a subtable at reference 0 */ > + if (!pptt_ref) > + return NULL; > + > + if (pptt_ref + sizeof(struct acpi_subtable_header) > table_hdr->length) > + return NULL; > + > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref); > + > + if (pptt_ref + entry->length > table_hdr->length) > + return NULL; > + > + return entry; > +} > + > +static struct acpi_pptt_processor *fetch_pptt_node( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_processor *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_pptt_cache *fetch_pptt_cache( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_subtable_header *acpi_get_pptt_resource( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node, int resource) > +{ > + u32 ref; > + > + if (resource >= node->number_of_priv_resources) > + return NULL; > + > + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + > + sizeof(u32) * resource); > + > + return fetch_pptt_subtable(table_hdr, ref); > +} > + > +/* > + * given a pptt resource, verify that it is a cache node, then walk /s/given/Given/ > 
+ * down each level of caches, counting how many levels are found > + * as well as checking the cache type (icache, dcache, unified). If a > + * level & type match, then we set found, and continue the search. > + * Once the entire cache branch has been walked return its max > + * depth. > + */ > +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, > + int local_level, > + struct acpi_subtable_header *res, > + struct acpi_pptt_cache **found, > + int level, int type) > +{ > + struct acpi_pptt_cache *cache; > + > + if (res->type != ACPI_PPTT_TYPE_CACHE) > + return 0; > + > + cache = (struct acpi_pptt_cache *) res; please remove whitespace before res > + while (cache) { > + local_level++; > + > + if ((local_level == level) && > + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && > + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) { > + if (*found != NULL) > + pr_err("Found duplicate cache level/type unable to determine uniqueness\n"); > + > + pr_debug("Found cache @ level %d\n", level); > + *found = cache; > + /* > + * continue looking at this node's resource list > + * to verify that we don't find a duplicate > + * cache node. > + */ > + } > + cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); > + } > + return local_level; > +} > + > +/* > + * Given a CPU node look for cache levels that exist at this level, and then > + * for each cache node, count how many levels exist below (logically above) it. > + * If a level and type are specified, and we find that level/type, abort > + * processing and return the acpi_pptt_cache structure. 
> + */ > +static struct acpi_pptt_cache *acpi_find_cache_level( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node, > + int *starting_level, int level, int type) > +{ > + struct acpi_subtable_header *res; > + int number_of_levels = *starting_level; > + int resource = 0; > + struct acpi_pptt_cache *ret = NULL; > + int local_level; > + > + /* walk down from the processor node */ > + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) { > + resource++; > + > + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, > + res, &ret, level, type); > + /* > + * we are looking for the max depth. Since its potentially > + * possible for a given node to have resources with differing > + * depths verify that the depth we have found is the largest. > + */ > + if (number_of_levels < local_level) > + number_of_levels = local_level; > + } > + if (number_of_levels > *starting_level) > + *starting_level = number_of_levels; > + > + return ret; > +} > + > +/* > + * given a processor node containing a processing unit, walk into it and count > + * how many levels exist solely for it, and then walk up each level until we hit > + * the root node (ignore the package level because it may be possible to have > + * caches that exist across packages). Count the number of cache levels that > + * exist at each level on the way up. 
> + */ > +static int acpi_process_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node) > +{ > + int total_levels = 0; > + > + do { > + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0); > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while (cpu_node); > + > + return total_levels; > +} > + > +/* determine if the given node is a leaf node */ > +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + u32 node_entry; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); > + > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + (cpu_node->parent == node_entry)) > + return 0; > + entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length); > + } > + return 1; > +} > + > +/* > + * Find the subtable entry describing the provided processor > + */ > +static struct acpi_pptt_processor *acpi_find_processor_node( > + struct acpi_table_header *table_hdr, > + u32 acpi_cpu_id) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); > + This is the first time looking at your implementation of the PPTT driver. Has it been mentioned before that it is rather inefficient to re-parse the table for every cpu in the system? Actually within the cpu parsing loop we call acpi_pptt_leaf_node(), which does another table parsing loop. 
However this is simple. I do know the version from wangxiongfeng had a kmalloc per node, which would also be somewhat inefficient. But I worry that this implementation does not scale with larger numbers of CPUs. > + /* find the processor structure associated with this cpuid */ > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + acpi_pptt_leaf_node(table_hdr, cpu_node)) { > + pr_debug("checking phy_cpu_id %d against acpi id %d\n", > + acpi_cpu_id, cpu_node->acpi_processor_id); > + if (acpi_cpu_id == cpu_node->acpi_processor_id) { > + /* found the correct entry */ > + pr_debug("match found!\n"); Do we really need to add 2 debug messages for case of checking for and finding a valid match? > + return (struct acpi_pptt_processor *)entry; > + } > + } > + > + if (entry->length == 0) { > + pr_err("Invalid zero length subtable\n"); > + break; > + } > + entry = (struct acpi_subtable_header *) > + ((u8 *)entry + entry->length); > + } > + > + return NULL; > +} > + > +/* > + * Given a acpi_pptt_processor node, walk up until we identify the > + * package that the node is associated with or we run out of levels > + * to request. > + */ > +static struct acpi_pptt_processor *acpi_find_processor_package_id( One would assume from the name that this function returns an integer value, that being an index for the package for that cpu > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu, Do you think that "cpu_node" would be a better name? 
> + int level) > +{ > + struct acpi_pptt_processor *prev_node; > + > + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { > + pr_debug("level %d\n", level); that's not such a useful message and I can imagine it creates so many prints > + prev_node = fetch_pptt_node(table_hdr, cpu->parent); > + if (prev_node == NULL) > + break; > + cpu = prev_node; > + level--; > + } > + return cpu; > +} > + > +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id) > +{ > + int number_of_levels = 0; > + struct acpi_pptt_processor *cpu; > + > + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (cpu) > + number_of_levels = acpi_process_node(table_hdr, cpu); > + > + return number_of_levels; > +} > + > +#define ACPI_6_2_CACHE_TYPE_DATA (0x0) > +#define ACPI_6_2_CACHE_TYPE_INSTR (1<<2) > +#define ACPI_6_2_CACHE_TYPE_UNIFIED (1<<3) > +#define ACPI_6_2_CACHE_POLICY_WB (0x0) > +#define ACPI_6_2_CACHE_POLICY_WT (1<<4) > +#define ACPI_6_2_CACHE_READ_ALLOCATE (0x0) > +#define ACPI_6_2_CACHE_WRITE_ALLOCATE (0x01) > +#define ACPI_6_2_CACHE_RW_ALLOCATE (0x02) > + > +static u8 acpi_cache_type(enum cache_type type) > +{ > + switch (type) { > + case CACHE_TYPE_DATA: > + pr_debug("Looking for data cache\n"); > + return ACPI_6_2_CACHE_TYPE_DATA; > + case CACHE_TYPE_INST: > + pr_debug("Looking for instruction cache\n"); > + return ACPI_6_2_CACHE_TYPE_INSTR; > + default: > + pr_debug("Unknown cache type, assume unified\n"); > + case CACHE_TYPE_UNIFIED: > + pr_debug("Looking for unified cache\n"); > + return ACPI_6_2_CACHE_TYPE_UNIFIED; > + } > +} > + > +/* find the ACPI node describing the cache type/level for the given CPU */ > +static struct acpi_pptt_cache *acpi_find_cache_node( > + struct acpi_table_header *table_hdr, u32 acpi_cpu_id, > + enum cache_type type, unsigned int level, > + struct acpi_pptt_processor **node) > +{ > + int total_levels = 0; > + struct acpi_pptt_cache *found = NULL; > + struct acpi_pptt_processor *cpu_node; > + u8 
acpi_type = acpi_cache_type(type); > + > + pr_debug("Looking for CPU %d's level %d cache type %d\n", > + acpi_cpu_id, level, acpi_type); > + > + cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (!cpu_node) > + return NULL; > + > + do { > + found = acpi_find_cache_level(table_hdr, cpu_node, &total_levels, level, acpi_type); > + *node = cpu_node; > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while ((cpu_node) && (!found)); > + > + return found; > +} > + > +int acpi_find_last_cache_level(unsigned int cpu) > +{ > + u32 acpi_cpu_id; > + struct acpi_table_header *table; > + int number_of_levels = 0; > + acpi_status status; > + > + pr_debug("Cache Setup find last level cpu=%d\n", cpu); these prints (and others) probably should be verbose level or removed > + > + acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); is this really an error? > + } else { > + number_of_levels = acpi_parse_pptt(table, acpi_cpu_id); > + acpi_put_table(table); > + } > + pr_debug("Cache Setup find last level level=%d\n", number_of_levels); I think that this could be better worded > + > + return number_of_levels; > +} > + > +/* > + * The ACPI spec implies that the fields in the cache structures are used to > + * extend and correct the information probed from the hardware. In the case > + * of arm64 the CCSIDR probing has been removed because it might be incorrect. 
> + */ > +static void update_cache_properties(struct cacheinfo *this_leaf, > + struct acpi_pptt_cache *found_cache, > + struct acpi_pptt_processor *cpu_node) > +{ > + if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) > + this_leaf->size = found_cache->size; > + if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) > + this_leaf->coherency_line_size = found_cache->line_size; > + if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) > + this_leaf->number_of_sets = found_cache->number_of_sets; > + if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) > + this_leaf->ways_of_associativity = found_cache->associativity; > + if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) > + switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) { > + case ACPI_6_2_CACHE_POLICY_WT: > + this_leaf->attributes = CACHE_WRITE_THROUGH; > + break; > + case ACPI_6_2_CACHE_POLICY_WB: > + this_leaf->attributes = CACHE_WRITE_BACK; > + break; > + default: > + pr_err("Unknown ACPI cache policy %d\n", > + found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY); > + } > + if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID) > + switch (found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE) { > + case ACPI_6_2_CACHE_READ_ALLOCATE: > + this_leaf->attributes |= CACHE_READ_ALLOCATE; > + break; > + case ACPI_6_2_CACHE_WRITE_ALLOCATE: > + this_leaf->attributes |= CACHE_WRITE_ALLOCATE; > + break; > + case ACPI_6_2_CACHE_RW_ALLOCATE: > + this_leaf->attributes |= > + CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE; > + break; > + default: > + pr_err("Unknown ACPI cache allocation policy %d\n", > + found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE); > + } > +} > + > +static void cache_setup_acpi_cpu(struct acpi_table_header *table, > + unsigned int cpu) > +{ > + struct acpi_pptt_cache *found_cache; > + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); > + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > + struct cacheinfo *this_leaf; > + unsigned int index = 0; > + 
struct acpi_pptt_processor *cpu_node = NULL; > + > + while (index < get_cpu_cacheinfo(cpu)->num_leaves) { cpu does not change, so, for efficiency, can you use this_cpu_ci->num_leaves? > + this_leaf = this_cpu_ci->info_list + index; > + found_cache = acpi_find_cache_node(table, acpi_cpu_id, > + this_leaf->type, > + this_leaf->level, > + &cpu_node); > + pr_debug("found = %p %p\n", found_cache, cpu_node); I am not sure how useful printing pointers is to the user, even if NULL. These prints are too verbose. > + if (found_cache) > + update_cache_properties(this_leaf, > + found_cache, > + cpu_node); > + > + index++; > + } > +} > + > +static int topology_setup_acpi_cpu(struct acpi_table_header *table, > + unsigned int cpu, int level) > +{ It's not clear what is supposed to be returned from this function, if anything, since its job is seemingly to "setup" > + struct acpi_pptt_processor *cpu_node; > + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > + > + cpu_node = acpi_find_processor_node(table, acpi_cpu_id); > + if (cpu_node) { > + cpu_node = acpi_find_processor_package_id(table, cpu_node, level); > + /* Only the first level has a guaranteed id */ > + if (level == 0) > + return cpu_node->acpi_processor_id; > + return (int)((u8 *)cpu_node - (u8 *)table); Sorry, but I just don't get this. As I understand, our intention is to find the core/cluster/package index in the system for a given cpu, right? If so, how is the distance between the table base and cpu level's node the same as the system index? I would say that this value is unique, but are we expecting sequential system indexing per level? > + } > + pr_err_once("PPTT table found, but unable to locate core for %d\n", the code seems to intermix terms "core" and "cpu" - is this intentional? > + cpu); > + return -ENOENT; > +} > + > +/* > + * simply assign a ACPI cache entry to each known CPU cache entry > + * determining which entries are shared is done later.
> + */ > +int cache_setup_acpi(unsigned int cpu) > +{ > + struct acpi_table_header *table; > + acpi_status status; > + > + pr_debug("Cache Setup ACPI cpu %d\n", cpu); Please don't leave whitespace after "cpu" > + > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); > + return -ENOENT; > + } > + > + cache_setup_acpi_cpu(table, cpu); > + acpi_put_table(table); > + > + return status; > +} > + > +/* > + * Determine a topology unique ID for each thread/core/cluster/socket/etc. > + * This ID can then be used to group peers. > + */ > +int setup_acpi_cpu_topology(unsigned int cpu, int level) I think that you should add a function description to explain what cpu and level are. This function does no setup either. I think that doing a setup would be doing something which has a persistent result. Instead, this function gets the cpu topology index for a certain hierarchy level. > +{ > + struct acpi_table_header *table; > + acpi_status status; > + int retval; > + > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cpu topology may be inaccurate\n"); As before, is this really an error? > + return -ENOENT; > + } > + retval = topology_setup_acpi_cpu(table, cpu, level); > + pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n", > + cpu, level, retval); > + acpi_put_table(table); > + > + return retval; > +} > Thanks, John
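The level-limited parent walk under discussion (acpi_find_processor_package_id()) can be sketched in userspace to show its three stop conditions: the requested number of levels is exhausted, a node flagged as a physical package is reached, or the walk runs out of parents. Everything below is an illustrative mock; the structure and the flag value are assumptions, not the ACPICA definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Mock flag bit; the real ACPI_PPTT_PHYSICAL_PACKAGE lives in ACPICA. */
#define MOCK_PHYSICAL_PACKAGE (1 << 0)

struct mock_node {
	struct mock_node *parent;
	unsigned int flags;
};

/*
 * Walk up at most `level` parents, stopping early at a node marked as
 * a physical package or at the root -- mirroring the loop in the patch.
 */
static struct mock_node *find_package(struct mock_node *node, int level)
{
	struct mock_node *prev_node;

	while (node && level && !(node->flags & MOCK_PHYSICAL_PACKAGE)) {
		prev_node = node->parent;
		if (prev_node == NULL)
			break;
		node = prev_node;
		level--;
	}
	return node;
}
```

With a package -> cluster -> core chain, level 1 from the core yields the cluster, level 2 (or more) yields the package, and asking from the package itself returns it unchanged.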
Hi Jeremy, I did second round of review and have some more comments, please see below: On 12.10.2017 21:48, Jeremy Linton wrote: > ACPI 6.2 adds a new table, which describes how processing units > are related to each other in tree like fashion. Caches are > also sprinkled throughout the tree and describe the properties > of the caches in relation to other caches and processing units. > > Add the code to parse the cache hierarchy and report the total > number of levels of cache for a given core using > acpi_find_last_cache_level() as well as fill out the individual > cores cache information with cache_setup_acpi() once the > cpu_cacheinfo structure has been populated by the arch specific > code. > > Further, report peers in the topology using setup_acpi_cpu_topology() > to report a unique ID for each processing unit at a given level > in the tree. These unique id's can then be used to match related > processing units which exist as threads, COD (clusters > on die), within a given package, etc. > > Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> > --- > drivers/acpi/pptt.c | 485 ++++++++++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 485 insertions(+) > create mode 100644 drivers/acpi/pptt.c > > diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c > new file mode 100644 > index 000000000000..c86715fed4a7 > --- /dev/null > +++ b/drivers/acpi/pptt.c > @@ -0,1 +1,485 @@ > +/* > + * Copyright (C) 2017, ARM > + * > + * This program is free software; you can redistribute it and/or modify it > + * under the terms and conditions of the GNU General Public License, > + * version 2, as published by the Free Software Foundation. > + * > + * This program is distributed in the hope it will be useful, but WITHOUT > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for > + * more details. 
> + * > + * This file implements parsing of Processor Properties Topology Table (PPTT) > + * which is optionally used to describe the processor and cache topology. > + * Due to the relative pointers used throughout the table, this doesn't > + * leverage the existing subtable parsing in the kernel. > + */ > +#define pr_fmt(fmt) "ACPI PPTT: " fmt > + > +#include <linux/acpi.h> > +#include <linux/cacheinfo.h> > +#include <acpi/processor.h> > + > +/* > + * Given the PPTT table, find and verify that the subtable entry > + * is located within the table > + */ > +static struct acpi_subtable_header *fetch_pptt_subtable( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + struct acpi_subtable_header *entry; > + > + /* there isn't a subtable at reference 0 */ > + if (!pptt_ref) > + return NULL; > + > + if (pptt_ref + sizeof(struct acpi_subtable_header) > table_hdr->length) > + return NULL; > + > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref); > + > + if (pptt_ref + entry->length > table_hdr->length) > + return NULL; > + > + return entry; > +} > + > +static struct acpi_pptt_processor *fetch_pptt_node( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_processor *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_pptt_cache *fetch_pptt_cache( > + struct acpi_table_header *table_hdr, u32 pptt_ref) > +{ > + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, pptt_ref); > +} > + > +static struct acpi_subtable_header *acpi_get_pptt_resource( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node, int resource) > +{ > + u32 ref; > + > + if (resource >= node->number_of_priv_resources) > + return NULL; > + > + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + > + sizeof(u32) * resource); > + > + return fetch_pptt_subtable(table_hdr, ref); > +} > + > +/* > + * given a pptt resource, verify that it is a cache node, then walk > + * down each 
level of caches, counting how many levels are found > + * as well as checking the cache type (icache, dcache, unified). If a > + * level & type match, then we set found, and continue the search. > + * Once the entire cache branch has been walked return its max > + * depth. > + */ > +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, > + int local_level, > + struct acpi_subtable_header *res, > + struct acpi_pptt_cache **found, > + int level, int type) > +{ > + struct acpi_pptt_cache *cache; > + > + if (res->type != ACPI_PPTT_TYPE_CACHE) > + return 0; > + > + cache = (struct acpi_pptt_cache *) res; > + while (cache) { > + local_level++; > + > + if ((local_level == level) && > + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && > + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) { Attributes have to be shifted: (cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 > + if (*found != NULL) > + pr_err("Found duplicate cache level/type unable to determine uniqueness\n"); > + > + pr_debug("Found cache @ level %d\n", level); > + *found = cache; > + /* > + * continue looking at this node's resource list > + * to verify that we don't find a duplicate > + * cache node. > + */ > + } > + cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); > + } > + return local_level; > +} > + > +/* > + * Given a CPU node look for cache levels that exist at this level, and then > + * for each cache node, count how many levels exist below (logically above) it. > + * If a level and type are specified, and we find that level/type, abort > + * processing and return the acpi_pptt_cache structure. 
> + */ > +static struct acpi_pptt_cache *acpi_find_cache_level( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node, > + int *starting_level, int level, int type) > +{ > + struct acpi_subtable_header *res; > + int number_of_levels = *starting_level; > + int resource = 0; > + struct acpi_pptt_cache *ret = NULL; > + int local_level; > + > + /* walk down from the processor node */ > + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) { > + resource++; > + > + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, > + res, &ret, level, type); > + /* > + * we are looking for the max depth. Since its potentially > + * possible for a given node to have resources with differing > + * depths verify that the depth we have found is the largest. > + */ > + if (number_of_levels < local_level) > + number_of_levels = local_level; > + } > + if (number_of_levels > *starting_level) > + *starting_level = number_of_levels; > + > + return ret; > +} > + > +/* > + * given a processor node containing a processing unit, walk into it and count > + * how many levels exist solely for it, and then walk up each level until we hit > + * the root node (ignore the package level because it may be possible to have > + * caches that exist across packages). Count the number of cache levels that > + * exist at each level on the way up. 
> + */ > +static int acpi_process_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu_node) > +{ > + int total_levels = 0; > + > + do { > + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0); > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while (cpu_node); > + > + return total_levels; > +} > + > +/* determine if the given node is a leaf node */ > +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + u32 node_entry; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); > + > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + (cpu_node->parent == node_entry)) > + return 0; > + entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length); > + } > + return 1; > +} > + > +/* > + * Find the subtable entry describing the provided processor > + */ > +static struct acpi_pptt_processor *acpi_find_processor_node( > + struct acpi_table_header *table_hdr, > + u32 acpi_cpu_id) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); > + > + /* find the processor structure associated with this cpuid */ > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + 
acpi_pptt_leaf_node(table_hdr, cpu_node)) { > + pr_debug("checking phy_cpu_id %d against acpi id %d\n", > + acpi_cpu_id, cpu_node->acpi_processor_id); > + if (acpi_cpu_id == cpu_node->acpi_processor_id) { > + /* found the correct entry */ > + pr_debug("match found!\n"); > + return (struct acpi_pptt_processor *)entry; > + } > + } > + > + if (entry->length == 0) { > + pr_err("Invalid zero length subtable\n"); > + break; > + } > + entry = (struct acpi_subtable_header *) > + ((u8 *)entry + entry->length); > + } > + > + return NULL; > +} > + > +/* > + * Given a acpi_pptt_processor node, walk up until we identify the > + * package that the node is associated with or we run out of levels > + * to request. > + */ > +static struct acpi_pptt_processor *acpi_find_processor_package_id( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu, > + int level) > +{ > + struct acpi_pptt_processor *prev_node; > + > + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { > + pr_debug("level %d\n", level); > + prev_node = fetch_pptt_node(table_hdr, cpu->parent); > + if (prev_node == NULL) > + break; > + cpu = prev_node; > + level--; > + } > + return cpu; > +} > + > +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id) The function name can be more descriptive. How about: acpi_count_cache_level() ? > +{ > + int number_of_levels = 0; > + struct acpi_pptt_processor *cpu; > + > + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (cpu) > + number_of_levels = acpi_process_node(table_hdr, cpu); > + > + return number_of_levels; > +} It is hard to follow what acpi_find_cache_level() and acpi_pptt_walk_cache() really do. It is because they are trying to do too many things at the same time. IMO, splitting acpi_find_cache_level() logic to: 1. counting the cache levels (max depth) 2. finding the specific cache node makes sense. Also, seems like we can merge acpi_parse_pptt() & acpi_process_node(). 
Here are my suggestions:

static struct acpi_pptt_cache *acpi_pptt_cache_type_level(
	struct acpi_table_header *table_hdr,
	struct acpi_subtable_header *res,
	int *local_level,
	int level, int type)
{
	struct acpi_pptt_cache *cache = (struct acpi_pptt_cache *) res;

	if (res->type != ACPI_PPTT_TYPE_CACHE)
		return NULL;

	while (cache) {
		if ((*local_level == level) &&
		    (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) &&
		    ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 == type)) {
			pr_debug("Found cache @ level %d\n", level);
			return cache;
		}
		cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache);
		(*local_level)++;
	}
	return NULL;
}

static struct acpi_pptt_cache *_acpi_find_cache_node(
	struct acpi_table_header *table_hdr,
	struct acpi_pptt_processor *cpu_node,
	int *local_level, int level, int type)
{
	struct acpi_subtable_header *res;
	struct acpi_pptt_cache *cache_tmp, *cache = NULL;
	int resource = 0;

	/* walk down from the processor node */
	while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) {

		cache_tmp = acpi_pptt_cache_type_level(table_hdr, res,
						       local_level, level, type);
		if (cache_tmp) {
			if (cache)
				pr_err("Found duplicate cache level/type unable to determine uniqueness\n");

			cache = cache_tmp;
		}
		resource++;
	}
	return cache;
}

/* find the ACPI node describing the cache type/level for the given CPU */
static struct acpi_pptt_cache *acpi_find_cache_node(
	struct acpi_table_header *table_hdr, u32 acpi_cpu_id,
	enum cache_type type, unsigned int level,
	struct acpi_pptt_processor **node)
{
	int total_levels = 0;
	struct acpi_pptt_cache *found = NULL;
	struct acpi_pptt_processor *cpu_node;
	u8 acpi_type = acpi_cache_type(type);

	pr_debug("Looking for CPU %d's level %d cache type %d\n",
		 acpi_cpu_id, level, acpi_type);

	cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id);
	if (!cpu_node)
		return NULL;

	do {
		found = _acpi_find_cache_node(table_hdr, cpu_node,
					      &total_levels, level, acpi_type);
		*node = cpu_node;
		cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
	} while ((cpu_node) && (!found));

	return found;
}

static int acpi_pptt_cache_level(struct acpi_table_header *table_hdr,
				 struct acpi_subtable_header *res)
{
	struct acpi_pptt_cache *cache = (struct acpi_pptt_cache *) res;
	int local_level = 1;

	if (res->type != ACPI_PPTT_TYPE_CACHE)
		return 0;

	while ((cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache)))
		local_level++;
	return local_level;
}

static int _acpi_count_cache_level(
	struct acpi_table_header *table_hdr,
	struct acpi_pptt_processor *cpu_node)
{
	struct acpi_subtable_header *res;
	int levels = 0, resource = 0, number_of_levels = 0;

	/* walk down from the processor node */
	while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) {
		levels = acpi_pptt_cache_level(table_hdr, res);

		/*
		 * we are looking for the max depth. Since its potentially
		 * possible for a given node to have resources with differing
		 * depths verify that the depth we have found is the largest.
		 */
		if (levels > number_of_levels)
			number_of_levels = levels;

		resource++;
	}
	return number_of_levels;
}

static int acpi_count_cache_level(struct acpi_table_header *table_hdr,
				  u32 acpi_cpu_id)
{
	int total_levels = 0;
	struct acpi_pptt_processor *cpu_node;

	cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id);
	while (cpu_node) {
		total_levels += _acpi_count_cache_level(table_hdr, cpu_node);
		cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
	}

	return total_levels;
}

Did not compile the code so I may have missed something.

Thanks,
Tomasz
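The proposed split may be easier to see against a runnable miniature. The sketch below models only the shape of the walk with invented toy structures (these are not the ACPICA subtable definitions, and the level bookkeeping is simplified): one helper walks a single next-level chain looking for a level/type match, and the outer routine walks up the parent chain, advancing the base level by the deepest chain seen at each node.

```c
#include <assert.h>
#include <stddef.h>

/* Invented miniature stand-ins for the PPTT structures; the real code
 * operates on acpi_pptt_processor/acpi_pptt_cache subtables instead. */
struct toy_cache {
	int type;			/* 0 = data, 1 = instr, 2 = unified */
	struct toy_cache *next_level;	/* models next_level_of_cache */
};

struct toy_node {
	struct toy_node *parent;	/* child -> parent, as in the PPTT */
	struct toy_cache *res[4];	/* models the private resource list */
	int nres;
};

/* depth of one cache chain (the counting half of the split) */
static int chain_depth(struct toy_cache *c)
{
	int d = 0;

	while (c) {
		d++;
		c = c->next_level;
	}
	return d;
}

/* walk one chain tracking the running level, stop on a level/type
 * match (the lookup half, cf. the proposed acpi_pptt_cache_type_level) */
static struct toy_cache *cache_type_level(struct toy_cache *c,
					  int local_level, int level, int type)
{
	while (c) {
		if (local_level == level && c->type == type)
			return c;
		c = c->next_level;
		local_level++;
	}
	return NULL;
}

/* walk up from the leaf node until the requested cache is found */
static struct toy_cache *find_cache(struct toy_node *leaf, int level, int type)
{
	struct toy_cache *found = NULL;
	int i, depth, max_depth, base = 1;	/* cache levels start at 1 */

	while (leaf && !found) {
		max_depth = 0;
		for (i = 0; i < leaf->nres; i++) {
			if (!found)
				found = cache_type_level(leaf->res[i], base,
							 level, type);
			depth = chain_depth(leaf->res[i]);
			if (depth > max_depth)
				max_depth = depth;
		}
		base += max_depth;	/* parent levels continue the numbering */
		leaf = leaf->parent;
	}
	return found;
}

/* example topology: per-core L1I and L1D -> L2, cluster-shared L3 */
static struct toy_cache l2  = { 2, NULL };
static struct toy_cache l1d = { 0, &l2 };
static struct toy_cache l1i = { 1, NULL };
static struct toy_cache l3  = { 2, NULL };
static struct toy_node cluster = { NULL, { &l3 }, 1 };
static struct toy_node core = { &cluster, { &l1d, &l1i }, 2 };
```

With that topology, find_cache(&core, 3, 2) lands on the cluster's L3; in the real table the level is likewise inferred from the walk rather than stored in the node.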
Hi, On 10/17/2017 08:25 AM, Tomasz Nowicki wrote: > Hi Jeremy, > > I did second round of review and have some more comments, please see below: > > On 12.10.2017 21:48, Jeremy Linton wrote: >> ACPI 6.2 adds a new table, which describes how processing units >> are related to each other in tree like fashion. Caches are >> also sprinkled throughout the tree and describe the properties >> of the caches in relation to other caches and processing units. >> >> Add the code to parse the cache hierarchy and report the total >> number of levels of cache for a given core using >> acpi_find_last_cache_level() as well as fill out the individual >> cores cache information with cache_setup_acpi() once the >> cpu_cacheinfo structure has been populated by the arch specific >> code. >> >> Further, report peers in the topology using setup_acpi_cpu_topology() >> to report a unique ID for each processing unit at a given level >> in the tree. These unique id's can then be used to match related >> processing units which exist as threads, COD (clusters >> on die), within a given package, etc. >> >> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> >> --- >> drivers/acpi/pptt.c | 485 >> ++++++++++++++++++++++++++++++++++++++++++++++++++++ >> 1 file changed, 485 insertions(+) >> create mode 100644 drivers/acpi/pptt.c >> >> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c >> new file mode 100644 >> index 000000000000..c86715fed4a7 >> --- /dev/null >> +++ b/drivers/acpi/pptt.c >> @@ -0,1 +1,485 @@ >> +/* >> + * Copyright (C) 2017, ARM >> + * >> + * This program is free software; you can redistribute it and/or >> modify it >> + * under the terms and conditions of the GNU General Public License, >> + * version 2, as published by the Free Software Foundation. >> + * >> + * This program is distributed in the hope it will be useful, but >> WITHOUT >> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or >> + * FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public >> License for >> + * more details. >> + * >> + * This file implements parsing of Processor Properties Topology >> Table (PPTT) >> + * which is optionally used to describe the processor and cache >> topology. >> + * Due to the relative pointers used throughout the table, this doesn't >> + * leverage the existing subtable parsing in the kernel. >> + */ >> +#define pr_fmt(fmt) "ACPI PPTT: " fmt >> + >> +#include <linux/acpi.h> >> +#include <linux/cacheinfo.h> >> +#include <acpi/processor.h> >> + >> +/* >> + * Given the PPTT table, find and verify that the subtable entry >> + * is located within the table >> + */ >> +static struct acpi_subtable_header *fetch_pptt_subtable( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + struct acpi_subtable_header *entry; >> + >> + /* there isn't a subtable at reference 0 */ >> + if (!pptt_ref) >> + return NULL; >> + >> + if (pptt_ref + sizeof(struct acpi_subtable_header) > >> table_hdr->length) >> + return NULL; >> + >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref); >> + >> + if (pptt_ref + entry->length > table_hdr->length) >> + return NULL; >> + >> + return entry; >> +} >> + >> +static struct acpi_pptt_processor *fetch_pptt_node( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + return (struct acpi_pptt_processor >> *)fetch_pptt_subtable(table_hdr, pptt_ref); >> +} >> + >> +static struct acpi_pptt_cache *fetch_pptt_cache( >> + struct acpi_table_header *table_hdr, u32 pptt_ref) >> +{ >> + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, >> pptt_ref); >> +} >> + >> +static struct acpi_subtable_header *acpi_get_pptt_resource( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *node, int resource) >> +{ >> + u32 ref; >> + >> + if (resource >= node->number_of_priv_resources) >> + return NULL; >> + >> + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + >> + sizeof(u32) * resource); >> + >> + return 
fetch_pptt_subtable(table_hdr, ref); >> +} >> + >> +/* >> + * given a pptt resource, verify that it is a cache node, then walk >> + * down each level of caches, counting how many levels are found >> + * as well as checking the cache type (icache, dcache, unified). If a >> + * level & type match, then we set found, and continue the search. >> + * Once the entire cache branch has been walked return its max >> + * depth. >> + */ >> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, >> + int local_level, >> + struct acpi_subtable_header *res, >> + struct acpi_pptt_cache **found, >> + int level, int type) >> +{ >> + struct acpi_pptt_cache *cache; >> + >> + if (res->type != ACPI_PPTT_TYPE_CACHE) >> + return 0; >> + >> + cache = (struct acpi_pptt_cache *) res; >> + while (cache) { >> + local_level++; >> + >> + if ((local_level == level) && >> + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >> + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) { > > Attributes have to be shifted: > > (cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 Hmmm, I'm not sure that is true, the top level function in this routine convert the "linux" constant to the ACPI version of that constant. In that case the "type" field is pre-shifted, so that it matches the result of just anding against the field... That is unless I messed something up, which I don't see at the moment (and the code of course has been tested with PPTT's from multiple people at this point). > >> + if (*found != NULL) >> + pr_err("Found duplicate cache level/type unable to >> determine uniqueness\n"); >> + >> + pr_debug("Found cache @ level %d\n", level); >> + *found = cache; >> + /* >> + * continue looking at this node's resource list >> + * to verify that we don't find a duplicate >> + * cache node. 
>> + */ >> + } >> + cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); >> + } >> + return local_level; >> +} >> + >> +/* >> + * Given a CPU node look for cache levels that exist at this level, >> and then >> + * for each cache node, count how many levels exist below (logically >> above) it. >> + * If a level and type are specified, and we find that level/type, abort >> + * processing and return the acpi_pptt_cache structure. >> + */ >> +static struct acpi_pptt_cache *acpi_find_cache_level( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu_node, >> + int *starting_level, int level, int type) >> +{ >> + struct acpi_subtable_header *res; >> + int number_of_levels = *starting_level; >> + int resource = 0; >> + struct acpi_pptt_cache *ret = NULL; >> + int local_level; >> + >> + /* walk down from the processor node */ >> + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, >> resource))) { >> + resource++; >> + >> + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, >> + res, &ret, level, type); >> + /* >> + * we are looking for the max depth. Since its potentially >> + * possible for a given node to have resources with differing >> + * depths verify that the depth we have found is the largest. >> + */ >> + if (number_of_levels < local_level) >> + number_of_levels = local_level; >> + } >> + if (number_of_levels > *starting_level) >> + *starting_level = number_of_levels; >> + >> + return ret; >> +} >> + >> +/* >> + * given a processor node containing a processing unit, walk into it >> and count >> + * how many levels exist solely for it, and then walk up each level >> until we hit >> + * the root node (ignore the package level because it may be possible >> to have >> + * caches that exist across packages). Count the number of cache >> levels that >> + * exist at each level on the way up. 
>> + */ >> +static int acpi_process_node(struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu_node) >> +{ >> + int total_levels = 0; >> + >> + do { >> + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0); >> + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> + } while (cpu_node); >> + >> + return total_levels; >> +} >> + >> +/* determine if the given node is a leaf node */ >> +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *node) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + u32 node_entry; >> + struct acpi_pptt_processor *cpu_node; >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); >> + >> + while (((unsigned long)entry) + sizeof(struct >> acpi_subtable_header) < table_end) { >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + (cpu_node->parent == node_entry)) >> + return 0; >> + entry = (struct acpi_subtable_header *)((u8 *)entry + >> entry->length); >> + } >> + return 1; >> +} >> + >> +/* >> + * Find the subtable entry describing the provided processor >> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_node( >> + struct acpi_table_header *table_hdr, >> + u32 acpi_cpu_id) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + struct acpi_pptt_processor *cpu_node; >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); >> + >> + /* find the processor structure associated with this cpuid */ >> + while (((unsigned long)entry) + sizeof(struct >> acpi_subtable_header) < table_end) { >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + >> + if 
((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + acpi_pptt_leaf_node(table_hdr, cpu_node)) { >> + pr_debug("checking phy_cpu_id %d against acpi id %d\n", >> + acpi_cpu_id, cpu_node->acpi_processor_id); >> + if (acpi_cpu_id == cpu_node->acpi_processor_id) { >> + /* found the correct entry */ >> + pr_debug("match found!\n"); >> + return (struct acpi_pptt_processor *)entry; >> + } >> + } >> + >> + if (entry->length == 0) { >> + pr_err("Invalid zero length subtable\n"); >> + break; >> + } >> + entry = (struct acpi_subtable_header *) >> + ((u8 *)entry + entry->length); >> + } >> + >> + return NULL; >> +} >> + >> +/* >> + * Given a acpi_pptt_processor node, walk up until we identify the >> + * package that the node is associated with or we run out of levels >> + * to request. >> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_package_id( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu, >> + int level) >> +{ >> + struct acpi_pptt_processor *prev_node; >> + >> + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { >> + pr_debug("level %d\n", level); >> + prev_node = fetch_pptt_node(table_hdr, cpu->parent); >> + if (prev_node == NULL) >> + break; >> + cpu = prev_node; >> + level--; >> + } >> + return cpu; >> +} >> + >> +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 >> acpi_cpu_id) > > The function name can be more descriptive. How about: > > acpi_count_cache_level() ? The naming has drifted a bit, so yes, that routine is only used by the portion which is determining the number of cache levels for a given PE. > >> +{ >> + int number_of_levels = 0; >> + struct acpi_pptt_processor *cpu; >> + >> + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> + if (cpu) >> + number_of_levels = acpi_process_node(table_hdr, cpu); >> + >> + return number_of_levels; >> +} > > It is hard to follow what acpi_find_cache_level() and > acpi_pptt_walk_cache() really do. 
> It is because they are trying to do
> too many things at the same time. IMO, splitting acpi_find_cache_level()
> logic to:
> 1. counting the cache levels (max depth)
> 2. finding the specific cache node
> makes sense.

I disagree, that routine is shared by the two code paths because its
functionality is 99% duplicated between the two. The difference is
whether it terminates the search at a given level, or continues
searching until it runs out of nodes. The latter case is simply a
degenerate version of the first.

> Also, seems like we can merge acpi_parse_pptt() & acpi_process_node().

That is true, but I fail to see how any of this actually fixes
anything. There are a million ways to do this, including, as pointed
out, building another data structure to simplify the parsing of what is
a table that is less than ideal for runtime parsing (starting with the
direction of the relative pointers, and ending with having to "infer"
information that isn't directly flagged). I actually built a couple of
other versions of this, including a nice cute version which is about
1/8 the size of this one and really easy to understand, but of course
it is recursive...
> > Here are my suggestions: > > > static struct acpi_pptt_cache *acpi_pptt_cache_type_level( > struct acpi_table_header *table_hdr, > struct acpi_subtable_header *res, > int *local_level, > int level, int type) > { > struct acpi_pptt_cache *cache = (struct acpi_pptt_cache *) res; > > if (res->type != ACPI_PPTT_TYPE_CACHE) > return NULL; > > while (cache) { > if ((*local_level == level) && > (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && > ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 == > type)) { > > pr_debug("Found cache @ level %d\n", level); > return cache; > } > cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); > (*local_level)++; > } > return NULL; > } > > static struct acpi_pptt_cache *_acpi_find_cache_node( > struct acpi_table_header *table_hdr, > struct acpi_pptt_processor *cpu_node, > int *local_level, int level, int type) > { > struct acpi_subtable_header *res; > struct acpi_pptt_cache *cache_tmp, *cache = NULL; > int resource = 0; > > /* walk down from the processor node */ > while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, > resource))) { > > cache_tmp = acpi_pptt_cache_type_level(table_hdr, res, > local_level, level, type); > if (cache_tmp) { > if (cache) > pr_err("Found duplicate cache level/type unable to > determine uniqueness\n"); > > cache = cache_tmp; > } > resource++; > } > return cache; > } > > /* find the ACPI node describing the cache type/level for the given CPU */ > static struct acpi_pptt_cache *acpi_find_cache_node( > struct acpi_table_header *table_hdr, u32 acpi_cpu_id, > enum cache_type type, unsigned int level, > struct acpi_pptt_processor **node) > { > int total_levels = 0; > struct acpi_pptt_cache *found = NULL; > struct acpi_pptt_processor *cpu_node; > u8 acpi_type = acpi_cache_type(type); > > pr_debug("Looking for CPU %d's level %d cache type %d\n", > acpi_cpu_id, level, acpi_type); > > cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); > if (!cpu_node) > return NULL; > > do { > found = 
_acpi_find_cache_node(table_hdr, cpu_node, > &total_levels, level, acpi_type); > *node = cpu_node; > cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > } while ((cpu_node) && (!found)); > > return found; > } > > static int acpi_pptt_cache_level(struct acpi_table_header *table_hdr, > struct acpi_subtable_header *res) > { > struct acpi_pptt_cache *cache = (struct acpi_pptt_cache *) res; > int local_level = 1; > > if (res->type != ACPI_PPTT_TYPE_CACHE) > return 0; > > while ((cache = fetch_pptt_cache(table_hdr, > cache->next_level_of_cache))) > local_level++; > return local_level; > } > > static int _acpi_count_cache_level( > struct acpi_table_header *table_hdr, > struct acpi_pptt_processor *cpu_node) > { > struct acpi_subtable_header *res; > int levels = 0, resource = 0, number_of_levels = 0; > > /* walk down from the processor node */ > while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, > resource))) { > levels = acpi_pptt_cache_level(table_hdr, res); > > /* > * we are looking for the max depth. Since its potentially > * possible for a given node to have resources with differing > * depths verify that the depth we have found is the largest. > */ > if (levels > number_of_levels) > number_of_levels = levels; > > resource++; > } > return number_of_levels; > } > > static int acpi_count_cache_level(struct acpi_table_header *table_hdr, > u32 acpi_cpu_id) > { > int total_levels = 0; > struct acpi_pptt_processor *cpu_node; > > cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); > while (cpu_node) { > total_levels += _acpi_count_cache_level(table_hdr, cpu_node); > cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > } > > return total_levels; > } > > > Did not compile the code so I may have missed somthing. > > Thanks, > Tomasz
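The recursive variant Jeremy mentions is not shown in the thread. Purely as an illustration of why recursion collapses this kind of walk, the level count can be expressed as two short recursions, one over the next-level chain and one over the parent pointer (the toy structures below are invented for the sketch, not the real ACPICA subtables):

```c
#include <assert.h>
#include <stddef.h>

/* Invented miniature stand-ins for the PPTT structures */
struct toy_cache {
	struct toy_cache *next_level;	/* models next_level_of_cache */
};

struct toy_node {
	struct toy_node *parent;	/* child -> parent, as in the PPTT */
	struct toy_cache *res[4];	/* models the private resource list */
	int nres;
};

/* depth of one next_level_of_cache chain */
static int chain_depth(const struct toy_cache *c)
{
	return c ? 1 + chain_depth(c->next_level) : 0;
}

/* total cache levels from a leaf up to the root: the per-node max
 * chain depth, summed over the parent chain */
static int count_levels(const struct toy_node *n)
{
	int i, d, max = 0;

	if (!n)
		return 0;
	for (i = 0; i < n->nres; i++) {
		d = chain_depth(n->res[i]);
		if (d > max)
			max = d;
	}
	return max + count_levels(n->parent);
}

/* example topology: per-core L1I and L1D -> L2, cluster-shared L3 */
static struct toy_cache l2  = { NULL };
static struct toy_cache l1d = { &l2 };
static struct toy_cache l1i = { NULL };
static struct toy_cache l3  = { NULL };
static struct toy_node cluster = { NULL, { &l3 }, 1 };
static struct toy_node core = { &cluster, { &l1d, &l1i }, 2 };
```

The trade-off Jeremy alludes to is visible even at toy scale: the recursive form is shorter and easier to follow, but unbounded recursion over firmware-supplied data is generally avoided in the kernel because of the fixed kernel stack.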
Hi Jeremy, On 2017/10/17 23:22, Jeremy Linton wrote: > Hi, > > On 10/17/2017 08:25 AM, Tomasz Nowicki wrote: >> Hi Jeremy, >> >> I did second round of review and have some more comments, please see below: >> >> On 12.10.2017 21:48, Jeremy Linton wrote: > I disagree, that routine is shared by the two code paths because its functionality is 99% duplicated between the two. The difference being whether it terminates the search at a given level, or continues searching until it runs out of nodes. The latter case is simply a degenerate version of the first. > > >> >> Also, seems like we can merge acpi_parse_pptt() & acpi_process_node(). > > That is true, but I fail to see how any of this is actually fixes anything. There are a million ways to do this, including as pointed out by building another data-structure to simplify the parsing what is a table that is less than ideal for runtime parsing (starting with the direction of the relative pointers, and ending with having to "infer" information that isn't directly flagged). I actually built a couple other versions of this, including a nice cute version which is about 1/8 this size of this and really easy to understand but of course is recursive... > > Maybe you can see my version below. It is similar to what you said above. It may give some help. 
https://github.com/fenghusthu/acpi_pptt Thanks Xiongfeng Wang >> >> Here are my suggestions: >> >> >> static struct acpi_pptt_cache *acpi_pptt_cache_type_level( >> struct acpi_table_header *table_hdr, >> struct acpi_subtable_header *res, >> int *local_level, >> int level, int type) >> { >> struct acpi_pptt_cache *cache = (struct acpi_pptt_cache *) res; >> >> if (res->type != ACPI_PPTT_TYPE_CACHE) >> return NULL; >> >> while (cache) { >> if ((*local_level == level) && >> (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >> ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 == type)) { >> >> pr_debug("Found cache @ level %d\n", level); >> return cache; >> } >> cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache); >> (*local_level)++; >> } >> return NULL; >> } >> >> static struct acpi_pptt_cache *_acpi_find_cache_node( >> struct acpi_table_header *table_hdr, >> struct acpi_pptt_processor *cpu_node, >> int *local_level, int level, int type) >> { >> struct acpi_subtable_header *res; >> struct acpi_pptt_cache *cache_tmp, *cache = NULL; >> int resource = 0; >> >> /* walk down from the processor node */ >> while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) { >> >> cache_tmp = acpi_pptt_cache_type_level(table_hdr, res, >> local_level, level, type); >> if (cache_tmp) { >> if (cache) >> pr_err("Found duplicate cache level/type unable to determine uniqueness\n"); >> >> cache = cache_tmp; >> } >> resource++; >> } >> return cache; >> } >> >> /* find the ACPI node describing the cache type/level for the given CPU */ >> static struct acpi_pptt_cache *acpi_find_cache_node( >> struct acpi_table_header *table_hdr, u32 acpi_cpu_id, >> enum cache_type type, unsigned int level, >> struct acpi_pptt_processor **node) >> { >> int total_levels = 0; >> struct acpi_pptt_cache *found = NULL; >> struct acpi_pptt_processor *cpu_node; >> u8 acpi_type = acpi_cache_type(type); >> >> pr_debug("Looking for CPU %d's level %d cache type %d\n", >> acpi_cpu_id, level, 
acpi_type); >> >> cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> if (!cpu_node) >> return NULL; >> >> do { >> found = _acpi_find_cache_node(table_hdr, cpu_node, >> &total_levels, level, acpi_type); >> *node = cpu_node; >> cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> } while ((cpu_node) && (!found)); >> >> return found; >> } >> >> static int acpi_pptt_cache_level(struct acpi_table_header *table_hdr, >> struct acpi_subtable_header *res) >> { >> struct acpi_pptt_cache *cache = (struct acpi_pptt_cache *) res; >> int local_level = 1; >> >> if (res->type != ACPI_PPTT_TYPE_CACHE) >> return 0; >> >> while ((cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache))) >> local_level++; >> return local_level; >> } >> >> static int _acpi_count_cache_level( >> struct acpi_table_header *table_hdr, >> struct acpi_pptt_processor *cpu_node) >> { >> struct acpi_subtable_header *res; >> int levels = 0, resource = 0, number_of_levels = 0; >> >> /* walk down from the processor node */ >> while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) { >> levels = acpi_pptt_cache_level(table_hdr, res); >> >> /* >> * we are looking for the max depth. Since its potentially >> * possible for a given node to have resources with differing >> * depths verify that the depth we have found is the largest. >> */ >> if (levels > number_of_levels) >> number_of_levels = levels; >> >> resource++; >> } >> return number_of_levels; >> } >> >> static int acpi_count_cache_level(struct acpi_table_header *table_hdr, >> u32 acpi_cpu_id) >> { >> int total_levels = 0; >> struct acpi_pptt_processor *cpu_node; >> >> cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> while (cpu_node) { >> total_levels += _acpi_count_cache_level(table_hdr, cpu_node); >> cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> } >> >> return total_levels; >> } >> >> >> Did not compile the code so I may have missed somthing. >> >> Thanks, >> Tomasz > > > . >
Hi, On 17.10.2017 17:22, Jeremy Linton wrote: > Hi, > > On 10/17/2017 08:25 AM, Tomasz Nowicki wrote: >> Hi Jeremy, >> >> I did second round of review and have some more comments, please see >> below: >> >> On 12.10.2017 21:48, Jeremy Linton wrote: >>> ACPI 6.2 adds a new table, which describes how processing units >>> are related to each other in tree like fashion. Caches are >>> also sprinkled throughout the tree and describe the properties >>> of the caches in relation to other caches and processing units. >>> >>> Add the code to parse the cache hierarchy and report the total >>> number of levels of cache for a given core using >>> acpi_find_last_cache_level() as well as fill out the individual >>> cores cache information with cache_setup_acpi() once the >>> cpu_cacheinfo structure has been populated by the arch specific >>> code. >>> >>> Further, report peers in the topology using setup_acpi_cpu_topology() >>> to report a unique ID for each processing unit at a given level >>> in the tree. These unique id's can then be used to match related >>> processing units which exist as threads, COD (clusters >>> on die), within a given package, etc. >>> >>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> >>> --- >>> drivers/acpi/pptt.c | 485 >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> 1 file changed, 485 insertions(+) >>> create mode 100644 drivers/acpi/pptt.c >>> >>> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c >>> new file mode 100644 >>> index 000000000000..c86715fed4a7 >>> --- /dev/null >>> +++ b/drivers/acpi/pptt.c >>> @@ -0,1 +1,485 @@ >>> +/* >>> + * Copyright (C) 2017, ARM >>> + * >>> + * This program is free software; you can redistribute it and/or >>> modify it >>> + * under the terms and conditions of the GNU General Public License, >>> + * version 2, as published by the Free Software Foundation. 
>>> + * >>> + * This program is distributed in the hope it will be useful, but >>> WITHOUT >>> + * ANY WARRANTY; without even the implied warranty of >>> MERCHANTABILITY or >>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public >>> License for >>> + * more details. >>> + * >>> + * This file implements parsing of Processor Properties Topology >>> Table (PPTT) >>> + * which is optionally used to describe the processor and cache >>> topology. >>> + * Due to the relative pointers used throughout the table, this doesn't >>> + * leverage the existing subtable parsing in the kernel. >>> + */ >>> +#define pr_fmt(fmt) "ACPI PPTT: " fmt >>> + >>> +#include <linux/acpi.h> >>> +#include <linux/cacheinfo.h> >>> +#include <acpi/processor.h> >>> + >>> +/* >>> + * Given the PPTT table, find and verify that the subtable entry >>> + * is located within the table >>> + */ >>> +static struct acpi_subtable_header *fetch_pptt_subtable( >>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>> +{ >>> + struct acpi_subtable_header *entry; >>> + >>> + /* there isn't a subtable at reference 0 */ >>> + if (!pptt_ref) >>> + return NULL; >>> + >>> + if (pptt_ref + sizeof(struct acpi_subtable_header) > >>> table_hdr->length) >>> + return NULL; >>> + >>> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >>> pptt_ref); >>> + >>> + if (pptt_ref + entry->length > table_hdr->length) >>> + return NULL; >>> + >>> + return entry; >>> +} >>> + >>> +static struct acpi_pptt_processor *fetch_pptt_node( >>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>> +{ >>> + return (struct acpi_pptt_processor >>> *)fetch_pptt_subtable(table_hdr, pptt_ref); >>> +} >>> + >>> +static struct acpi_pptt_cache *fetch_pptt_cache( >>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>> +{ >>> + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, >>> pptt_ref); >>> +} >>> + >>> +static struct acpi_subtable_header *acpi_get_pptt_resource( >>> + struct acpi_table_header 
*table_hdr, >>> + struct acpi_pptt_processor *node, int resource) >>> +{ >>> + u32 ref; >>> + >>> + if (resource >= node->number_of_priv_resources) >>> + return NULL; >>> + >>> + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + >>> + sizeof(u32) * resource); >>> + >>> + return fetch_pptt_subtable(table_hdr, ref); >>> +} >>> + >>> +/* >>> + * given a pptt resource, verify that it is a cache node, then walk >>> + * down each level of caches, counting how many levels are found >>> + * as well as checking the cache type (icache, dcache, unified). If a >>> + * level & type match, then we set found, and continue the search. >>> + * Once the entire cache branch has been walked return its max >>> + * depth. >>> + */ >>> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, >>> + int local_level, >>> + struct acpi_subtable_header *res, >>> + struct acpi_pptt_cache **found, >>> + int level, int type) >>> +{ >>> + struct acpi_pptt_cache *cache; >>> + >>> + if (res->type != ACPI_PPTT_TYPE_CACHE) >>> + return 0; >>> + >>> + cache = (struct acpi_pptt_cache *) res; >>> + while (cache) { >>> + local_level++; >>> + >>> + if ((local_level == level) && >>> + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >>> + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == >>> type)) { >> >> Attributes have to be shifted: >> >> (cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 > > Hmmm, I'm not sure that is true, the top level function in this routine > convert the "linux" constant to the ACPI version of that constant. In > that case the "type" field is pre-shifted, so that it matches the result > of just anding against the field... That is unless I messed something > up, which I don't see at the moment (and the code of course has been > tested with PPTT's from multiple people at this point). 
For ThunderX2 I got lots of errors in dmesg: Found duplicate cache level/type unable to determine uniqueness So I fixed "type" macros definitions (without shifting) and shift it here which fixes the issue. As you said, it can be pre-shifted as well. > > >> >>> + if (*found != NULL) >>> + pr_err("Found duplicate cache level/type unable to >>> determine uniqueness\n"); >>> + >>> + pr_debug("Found cache @ level %d\n", level); >>> + *found = cache; >>> + /* >>> + * continue looking at this node's resource list >>> + * to verify that we don't find a duplicate >>> + * cache node. >>> + */ >>> + } >>> + cache = fetch_pptt_cache(table_hdr, >>> cache->next_level_of_cache); >>> + } >>> + return local_level; >>> +} >>> + >>> +/* >>> + * Given a CPU node look for cache levels that exist at this level, >>> and then >>> + * for each cache node, count how many levels exist below (logically >>> above) it. >>> + * If a level and type are specified, and we find that level/type, >>> abort >>> + * processing and return the acpi_pptt_cache structure. >>> + */ >>> +static struct acpi_pptt_cache *acpi_find_cache_level( >>> + struct acpi_table_header *table_hdr, >>> + struct acpi_pptt_processor *cpu_node, >>> + int *starting_level, int level, int type) >>> +{ >>> + struct acpi_subtable_header *res; >>> + int number_of_levels = *starting_level; >>> + int resource = 0; >>> + struct acpi_pptt_cache *ret = NULL; >>> + int local_level; >>> + >>> + /* walk down from the processor node */ >>> + while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, >>> resource))) { >>> + resource++; >>> + >>> + local_level = acpi_pptt_walk_cache(table_hdr, *starting_level, >>> + res, &ret, level, type); >>> + /* >>> + * we are looking for the max depth. Since its potentially >>> + * possible for a given node to have resources with differing >>> + * depths verify that the depth we have found is the largest. 
>>> + */ >>> + if (number_of_levels < local_level) >>> + number_of_levels = local_level; >>> + } >>> + if (number_of_levels > *starting_level) >>> + *starting_level = number_of_levels; >>> + >>> + return ret; >>> +} >>> + >>> +/* >>> + * given a processor node containing a processing unit, walk into it >>> and count >>> + * how many levels exist solely for it, and then walk up each level >>> until we hit >>> + * the root node (ignore the package level because it may be >>> possible to have >>> + * caches that exist across packages). Count the number of cache >>> levels that >>> + * exist at each level on the way up. >>> + */ >>> +static int acpi_process_node(struct acpi_table_header *table_hdr, >>> + struct acpi_pptt_processor *cpu_node) >>> +{ >>> + int total_levels = 0; >>> + >>> + do { >>> + acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, >>> 0); >>> + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >>> + } while (cpu_node); >>> + >>> + return total_levels; >>> +} >>> + >>> +/* determine if the given node is a leaf node */ >>> +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, >>> + struct acpi_pptt_processor *node) >>> +{ >>> + struct acpi_subtable_header *entry; >>> + unsigned long table_end; >>> + u32 node_entry; >>> + struct acpi_pptt_processor *cpu_node; >>> + >>> + table_end = (unsigned long)table_hdr + table_hdr->length; >>> + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); >>> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >>> + sizeof(struct acpi_table_pptt)); >>> + >>> + while (((unsigned long)entry) + sizeof(struct >>> acpi_subtable_header) < table_end) { >>> + cpu_node = (struct acpi_pptt_processor *)entry; >>> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >>> + (cpu_node->parent == node_entry)) >>> + return 0; >>> + entry = (struct acpi_subtable_header *)((u8 *)entry + >>> entry->length); >>> + } >>> + return 1; >>> +} >>> + >>> +/* >>> + * Find the subtable entry describing the provided 
processor >>> + */ >>> +static struct acpi_pptt_processor *acpi_find_processor_node( >>> + struct acpi_table_header *table_hdr, >>> + u32 acpi_cpu_id) >>> +{ >>> + struct acpi_subtable_header *entry; >>> + unsigned long table_end; >>> + struct acpi_pptt_processor *cpu_node; >>> + >>> + table_end = (unsigned long)table_hdr + table_hdr->length; >>> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >>> + sizeof(struct acpi_table_pptt)); >>> + >>> + /* find the processor structure associated with this cpuid */ >>> + while (((unsigned long)entry) + sizeof(struct >>> acpi_subtable_header) < table_end) { >>> + cpu_node = (struct acpi_pptt_processor *)entry; >>> + >>> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >>> + acpi_pptt_leaf_node(table_hdr, cpu_node)) { >>> + pr_debug("checking phy_cpu_id %d against acpi id %d\n", >>> + acpi_cpu_id, cpu_node->acpi_processor_id); >>> + if (acpi_cpu_id == cpu_node->acpi_processor_id) { >>> + /* found the correct entry */ >>> + pr_debug("match found!\n"); >>> + return (struct acpi_pptt_processor *)entry; >>> + } >>> + } >>> + >>> + if (entry->length == 0) { >>> + pr_err("Invalid zero length subtable\n"); >>> + break; >>> + } >>> + entry = (struct acpi_subtable_header *) >>> + ((u8 *)entry + entry->length); >>> + } >>> + >>> + return NULL; >>> +} >>> + >>> +/* >>> + * Given a acpi_pptt_processor node, walk up until we identify the >>> + * package that the node is associated with or we run out of levels >>> + * to request. 
>>> + */ >>> +static struct acpi_pptt_processor *acpi_find_processor_package_id( >>> + struct acpi_table_header *table_hdr, >>> + struct acpi_pptt_processor *cpu, >>> + int level) >>> +{ >>> + struct acpi_pptt_processor *prev_node; >>> + >>> + while (cpu && level && !(cpu->flags & >>> ACPI_PPTT_PHYSICAL_PACKAGE)) { >>> + pr_debug("level %d\n", level); >>> + prev_node = fetch_pptt_node(table_hdr, cpu->parent); >>> + if (prev_node == NULL) >>> + break; >>> + cpu = prev_node; >>> + level--; >>> + } >>> + return cpu; >>> +} >>> + >>> +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 >>> acpi_cpu_id) >> >> The function name can be more descriptive. How about: >> >> acpi_count_cache_level() ? > > The naming has drifted a bit, so yes, that routine is only used by the > portion which is determining the number of cache levels for a given PE. > > >> >>> +{ >>> + int number_of_levels = 0; >>> + struct acpi_pptt_processor *cpu; >>> + >>> + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); >>> + if (cpu) >>> + number_of_levels = acpi_process_node(table_hdr, cpu); >>> + >>> + return number_of_levels; >>> +} >> >> It is hard to follow what acpi_find_cache_level() and >> acpi_pptt_walk_cache() really do. It is because they are trying to do >> too many things at the same time. IMO, splitting >> acpi_find_cache_level() logic to: >> 1. counting the cache levels (max depth) >> 2. finding the specific cache node >> makes sense. > > I disagree, that routine is shared by the two code paths because its > functionality is 99% duplicated between the two. The difference being > whether it terminates the search at a given level, or continues > searching until it runs out of nodes. The latter case is simply a > degenerate version of the first. Mostly it is about trade-off between code simplicity and redundancy, I personally prefer the former. It is not the critical issue though. > > >> >> Also, seems like we can merge acpi_parse_pptt() & acpi_process_node(). 
> > That is true, but I fail to see how any of this actually fixes > anything. There are a million ways to do this, including as pointed out > by building another data structure to simplify the parsing of what is a > table that is less than ideal for runtime parsing (starting with the > direction of the relative pointers, and ending with having to "infer" > information that isn't directly flagged). I actually built a couple of > other versions of this, including a nice cute version which is about 1/8 > the size of this and really easy to understand, but of course it is > recursive... I believe this will improve code readability. Obviously, you can disagree with my suggestions. Thanks, Tomasz
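[Editor's note] The recursive variant Jeremy alludes to can be illustrated with a minimal sketch. The structs and names below are hypothetical simplified stand-ins, not the kernel's acpi_pptt_* types, and the model assumes each cache already chains to its next level via a pointer:

```c
#include <stddef.h>

/* Hypothetical simplified model -- NOT the kernel's ACPI structures. */
struct sim_cache {
	struct sim_cache *next_level;	/* next cache level, or NULL */
};

struct sim_node {
	struct sim_node *parent;	/* NULL at the root */
	struct sim_cache *first_cache;	/* first private cache, or NULL */
};

/* depth of one cache chain (L1 -> L2 -> ...) */
static int cache_chain_depth(const struct sim_cache *c)
{
	int depth = 0;

	while (c) {
		depth++;
		c = c->next_level;
	}
	return depth;
}

/*
 * Walk from a leaf processor node up toward the root, taking the
 * maximum cache depth seen at any level -- the "max depth" that
 * acpi_process_node()/acpi_find_cache_level() compute iteratively.
 */
int count_cache_levels(const struct sim_node *node)
{
	int here, above;

	if (!node)
		return 0;

	here = cache_chain_depth(node->first_cache);
	above = count_cache_levels(node->parent);

	return here > above ? here : above;
}
```

A core whose private L1 chains through L2 to a package-owned L3 reports three levels whether the walk starts at the core or the package node sees the L3 directly.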
On 18.10.2017 07:39, Tomasz Nowicki wrote: > Hi, > > On 17.10.2017 17:22, Jeremy Linton wrote: >> Hi, >> >> On 10/17/2017 08:25 AM, Tomasz Nowicki wrote: >>> Hi Jeremy, >>> >>> I did second round of review and have some more comments, please see >>> below: >>> >>> On 12.10.2017 21:48, Jeremy Linton wrote: >>>> ACPI 6.2 adds a new table, which describes how processing units >>>> are related to each other in tree like fashion. Caches are >>>> also sprinkled throughout the tree and describe the properties >>>> of the caches in relation to other caches and processing units. >>>> >>>> Add the code to parse the cache hierarchy and report the total >>>> number of levels of cache for a given core using >>>> acpi_find_last_cache_level() as well as fill out the individual >>>> cores cache information with cache_setup_acpi() once the >>>> cpu_cacheinfo structure has been populated by the arch specific >>>> code. >>>> >>>> Further, report peers in the topology using setup_acpi_cpu_topology() >>>> to report a unique ID for each processing unit at a given level >>>> in the tree. These unique id's can then be used to match related >>>> processing units which exist as threads, COD (clusters >>>> on die), within a given package, etc. >>>> >>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> >>>> --- >>>> drivers/acpi/pptt.c | 485 >>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> 1 file changed, 485 insertions(+) >>>> create mode 100644 drivers/acpi/pptt.c >>>> >>>> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c >>>> new file mode 100644 >>>> index 000000000000..c86715fed4a7 >>>> --- /dev/null >>>> +++ b/drivers/acpi/pptt.c >>>> @@ -0,1 +1,485 @@ >>>> +/* >>>> + * Copyright (C) 2017, ARM >>>> + * >>>> + * This program is free software; you can redistribute it and/or >>>> modify it >>>> + * under the terms and conditions of the GNU General Public License, >>>> + * version 2, as published by the Free Software Foundation. 
>>>> + * >>>> + * This program is distributed in the hope it will be useful, but >>>> WITHOUT >>>> + * ANY WARRANTY; without even the implied warranty of >>>> MERCHANTABILITY or >>>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public >>>> License for >>>> + * more details. >>>> + * >>>> + * This file implements parsing of Processor Properties Topology >>>> Table (PPTT) >>>> + * which is optionally used to describe the processor and cache >>>> topology. >>>> + * Due to the relative pointers used throughout the table, this >>>> doesn't >>>> + * leverage the existing subtable parsing in the kernel. >>>> + */ >>>> +#define pr_fmt(fmt) "ACPI PPTT: " fmt >>>> + >>>> +#include <linux/acpi.h> >>>> +#include <linux/cacheinfo.h> >>>> +#include <acpi/processor.h> >>>> + >>>> +/* >>>> + * Given the PPTT table, find and verify that the subtable entry >>>> + * is located within the table >>>> + */ >>>> +static struct acpi_subtable_header *fetch_pptt_subtable( >>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>> +{ >>>> + struct acpi_subtable_header *entry; >>>> + >>>> + /* there isn't a subtable at reference 0 */ >>>> + if (!pptt_ref) >>>> + return NULL; >>>> + >>>> + if (pptt_ref + sizeof(struct acpi_subtable_header) > >>>> table_hdr->length) >>>> + return NULL; >>>> + >>>> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >>>> pptt_ref); >>>> + >>>> + if (pptt_ref + entry->length > table_hdr->length) >>>> + return NULL; >>>> + >>>> + return entry; >>>> +} >>>> + >>>> +static struct acpi_pptt_processor *fetch_pptt_node( >>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>> +{ >>>> + return (struct acpi_pptt_processor >>>> *)fetch_pptt_subtable(table_hdr, pptt_ref); >>>> +} >>>> + >>>> +static struct acpi_pptt_cache *fetch_pptt_cache( >>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>> +{ >>>> + return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, >>>> pptt_ref); >>>> +} >>>> + >>>> +static struct 
acpi_subtable_header *acpi_get_pptt_resource( >>>> + struct acpi_table_header *table_hdr, >>>> + struct acpi_pptt_processor *node, int resource) >>>> +{ >>>> + u32 ref; >>>> + >>>> + if (resource >= node->number_of_priv_resources) >>>> + return NULL; >>>> + >>>> + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + >>>> + sizeof(u32) * resource); >>>> + >>>> + return fetch_pptt_subtable(table_hdr, ref); >>>> +} >>>> + >>>> +/* >>>> + * given a pptt resource, verify that it is a cache node, then walk >>>> + * down each level of caches, counting how many levels are found >>>> + * as well as checking the cache type (icache, dcache, unified). If a >>>> + * level & type match, then we set found, and continue the search. >>>> + * Once the entire cache branch has been walked return its max >>>> + * depth. >>>> + */ >>>> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, >>>> + int local_level, >>>> + struct acpi_subtable_header *res, >>>> + struct acpi_pptt_cache **found, >>>> + int level, int type) >>>> +{ >>>> + struct acpi_pptt_cache *cache; >>>> + >>>> + if (res->type != ACPI_PPTT_TYPE_CACHE) >>>> + return 0; >>>> + >>>> + cache = (struct acpi_pptt_cache *) res; >>>> + while (cache) { >>>> + local_level++; >>>> + >>>> + if ((local_level == level) && >>>> + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >>>> + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == >>>> type)) { >>> >>> Attributes have to be shifted: >>> >>> (cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 >> >> Hmmm, I'm not sure that is true, the top level function in this >> routine convert the "linux" constant to the ACPI version of that >> constant. In that case the "type" field is pre-shifted, so that it >> matches the result of just anding against the field... That is unless >> I messed something up, which I don't see at the moment (and the code >> of course has been tested with PPTT's from multiple people at this >> point). 
> > For ThunderX2 I got lots of errors in dmesg: > Found duplicate cache level/type unable to determine uniqueness > > So I fixed "type" macros definitions (without shifting) and shift it > here which fixes the issue. As you said, it can be pre-shifted as well. > >> >> >>> >>>> + if (*found != NULL) >>>> + pr_err("Found duplicate cache level/type unable to >>>> determine uniqueness\n");

Actually I still see these error messages in my dmesg. It is because of the following ThunderX2 per-core L1 and L2 cache hierarchy:

Core
------------------
|                |
| L1i -----      |
|         |      |
|         ----L2 |
|         |      |
| L1d -----      |
|                |
------------------

In this case we have two paths which lead to the L2 cache and hit the above case. Is it really an error case?

Thanks,
Tomasz
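[Editor's note] The shifting disagreement comes down to whether the type constants are stored pre-shifted to line up with bits [3:2] of the cache node's attributes byte. A standalone sketch (mask and encodings taken from the patch; the helper names are made up for illustration) shows the two equivalent comparisons, and why only one of the two raw "unified" encodings (0x2 vs 0x3) survives an equality test:

```c
#include <stdint.h>

/*
 * Constants as defined in the patch under review: bits [3:2] of a PPTT
 * cache node's attributes byte hold the cache type, so the mask is 0x0C
 * and the patch's constants are pre-shifted to line up with it.
 */
#define ACPI_PPTT_MASK_CACHE_TYPE	(0x0C)
#define ACPI_6_2_CACHE_TYPE_DATA	(0x0)
#define ACPI_6_2_CACHE_TYPE_INSTR	(1<<2)
#define ACPI_6_2_CACHE_TYPE_UNIFIED	(1<<3)

/* comparison as written in the patch: no runtime shift needed */
static int is_cache_type(uint8_t attributes, uint8_t preshifted_type)
{
	return (attributes & ACPI_PPTT_MASK_CACHE_TYPE) == preshifted_type;
}

/* the equivalent comparison against unshifted encodings (0..3) */
static int is_cache_type_shifted(uint8_t attributes, uint8_t raw_type)
{
	return ((attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2) == raw_type;
}
```

Note that a raw unified encoding of 0x3 (pre-shifted 0x0C) does not equality-match the pre-shifted 0x08 constant, which is the subtlety behind "the unified state has the 3rd bit set for both the 0x3 and 0x2 values" -- a strictly correct unified check would test that bit rather than compare for equality.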
On 10/18/2017 05:24 AM, Tomasz Nowicki wrote: > On 18.10.2017 07:39, Tomasz Nowicki wrote: >> Hi, >> >> On 17.10.2017 17:22, Jeremy Linton wrote: >>> Hi, >>> >>> On 10/17/2017 08:25 AM, Tomasz Nowicki wrote: >>>> Hi Jeremy, >>>> >>>> I did second round of review and have some more comments, please see >>>> below: >>>> >>>> On 12.10.2017 21:48, Jeremy Linton wrote: >>>>> ACPI 6.2 adds a new table, which describes how processing units >>>>> are related to each other in tree like fashion. Caches are >>>>> also sprinkled throughout the tree and describe the properties >>>>> of the caches in relation to other caches and processing units. >>>>> >>>>> Add the code to parse the cache hierarchy and report the total >>>>> number of levels of cache for a given core using >>>>> acpi_find_last_cache_level() as well as fill out the individual >>>>> cores cache information with cache_setup_acpi() once the >>>>> cpu_cacheinfo structure has been populated by the arch specific >>>>> code. >>>>> >>>>> Further, report peers in the topology using setup_acpi_cpu_topology() >>>>> to report a unique ID for each processing unit at a given level >>>>> in the tree. These unique id's can then be used to match related >>>>> processing units which exist as threads, COD (clusters >>>>> on die), within a given package, etc. 
>>>>> >>>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> >>>>> --- >>>>> drivers/acpi/pptt.c | 485 >>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>> 1 file changed, 485 insertions(+) >>>>> create mode 100644 drivers/acpi/pptt.c >>>>> >>>>> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c >>>>> new file mode 100644 >>>>> index 000000000000..c86715fed4a7 >>>>> --- /dev/null >>>>> +++ b/drivers/acpi/pptt.c >>>>> @@ -0,1 +1,485 @@ >>>>> +/* >>>>> + * Copyright (C) 2017, ARM >>>>> + * >>>>> + * This program is free software; you can redistribute it and/or >>>>> modify it >>>>> + * under the terms and conditions of the GNU General Public License, >>>>> + * version 2, as published by the Free Software Foundation. >>>>> + * >>>>> + * This program is distributed in the hope it will be useful, but >>>>> WITHOUT >>>>> + * ANY WARRANTY; without even the implied warranty of >>>>> MERCHANTABILITY or >>>>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public >>>>> License for >>>>> + * more details. >>>>> + * >>>>> + * This file implements parsing of Processor Properties Topology >>>>> Table (PPTT) >>>>> + * which is optionally used to describe the processor and cache >>>>> topology. >>>>> + * Due to the relative pointers used throughout the table, this >>>>> doesn't >>>>> + * leverage the existing subtable parsing in the kernel. 
>>>>> + */ >>>>> +#define pr_fmt(fmt) "ACPI PPTT: " fmt >>>>> + >>>>> +#include <linux/acpi.h> >>>>> +#include <linux/cacheinfo.h> >>>>> +#include <acpi/processor.h> >>>>> + >>>>> +/* >>>>> + * Given the PPTT table, find and verify that the subtable entry >>>>> + * is located within the table >>>>> + */ >>>>> +static struct acpi_subtable_header *fetch_pptt_subtable( >>>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>>> +{ >>>>> + struct acpi_subtable_header *entry; >>>>> + >>>>> + /* there isn't a subtable at reference 0 */ >>>>> + if (!pptt_ref) >>>>> + return NULL; >>>>> + >>>>> + if (pptt_ref + sizeof(struct acpi_subtable_header) > >>>>> table_hdr->length) >>>>> + return NULL; >>>>> + >>>>> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >>>>> pptt_ref); >>>>> + >>>>> + if (pptt_ref + entry->length > table_hdr->length) >>>>> + return NULL; >>>>> + >>>>> + return entry; >>>>> +} >>>>> + >>>>> +static struct acpi_pptt_processor *fetch_pptt_node( >>>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>>> +{ >>>>> + return (struct acpi_pptt_processor >>>>> *)fetch_pptt_subtable(table_hdr, pptt_ref); >>>>> +} >>>>> + >>>>> +static struct acpi_pptt_cache *fetch_pptt_cache( >>>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>>> +{ >>>>> + return (struct acpi_pptt_cache >>>>> *)fetch_pptt_subtable(table_hdr, pptt_ref); >>>>> +} >>>>> + >>>>> +static struct acpi_subtable_header *acpi_get_pptt_resource( >>>>> + struct acpi_table_header *table_hdr, >>>>> + struct acpi_pptt_processor *node, int resource) >>>>> +{ >>>>> + u32 ref; >>>>> + >>>>> + if (resource >= node->number_of_priv_resources) >>>>> + return NULL; >>>>> + >>>>> + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + >>>>> + sizeof(u32) * resource); >>>>> + >>>>> + return fetch_pptt_subtable(table_hdr, ref); >>>>> +} >>>>> + >>>>> +/* >>>>> + * given a pptt resource, verify that it is a cache node, then walk >>>>> + * down each level of caches, counting how 
many levels are found >>>>> + * as well as checking the cache type (icache, dcache, unified). If a >>>>> + * level & type match, then we set found, and continue the search. >>>>> + * Once the entire cache branch has been walked return its max >>>>> + * depth. >>>>> + */ >>>>> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, >>>>> + int local_level, >>>>> + struct acpi_subtable_header *res, >>>>> + struct acpi_pptt_cache **found, >>>>> + int level, int type) >>>>> +{ >>>>> + struct acpi_pptt_cache *cache; >>>>> + >>>>> + if (res->type != ACPI_PPTT_TYPE_CACHE) >>>>> + return 0; >>>>> + >>>>> + cache = (struct acpi_pptt_cache *) res; >>>>> + while (cache) { >>>>> + local_level++; >>>>> + >>>>> + if ((local_level == level) && >>>>> + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >>>>> + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == >>>>> type)) { >>>> >>>> Attributes have to be shifted: >>>> >>>> (cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 >>> >>> Hmmm, I'm not sure that is true, the top level function in this >>> routine convert the "linux" constant to the ACPI version of that >>> constant. In that case the "type" field is pre-shifted, so that it >>> matches the result of just anding against the field... That is unless >>> I messed something up, which I don't see at the moment (and the code >>> of course has been tested with PPTT's from multiple people at this >>> point). >> >> For ThunderX2 I got lots of errors in dmesg: >> Found duplicate cache level/type unable to determine uniqueness >> >> So I fixed "type" macros definitions (without shifting) and shift it >> here which fixes the issue. As you said, it can be pre-shifted as well. Ah, yah right... If you removed the shift per your original comment then it breaks this. 
Yes, and the type definitions for cache type aren't wrong in this version, because the unified state has the 3rd bit set for both the 0x3 and 0x2 values and it's only used to convert from the Linux type to the ACPI type (and not back, because we don't mess with whatever the original "detection" was). I'm not really planning on changing that because I don't think it helps "readability" (and it converts a compile time constant to a runtime shift).

>>
>>>
>>>>
>>>>> + if (*found != NULL)
>>>>> + pr_err("Found duplicate cache level/type unable to
>>>>> determine uniqueness\n");
>
> Actually I still see this error messages in my dmesg. It is because the
> following ThunderX2 per-core L1 and L2 cache hierarchy:
>
> Core
> ------------------
> |                |
> | L1i -----      |
> |         |      |
> |         ----L2 |
> |         |      |
> | L1d -----      |
> |                |
> ------------------
>
> In this case we have two paths which lead to L2 cache and hit above
> case. Is it really error case?

No, but it's not deterministic unless we mark the node, which doesn't solve the problem of a table constructed like

L1i->L2 (unified)
L1d->L2 (unified)

or various other structures which aren't disallowed by the spec and have non-deterministic real world meanings, any more than constructing the table like:

L1i
L1d->L2 (unified)

which I tend to prefer, because with a structuring like that it can be deterministic (and in a way actually represents the non-coherent behavior of (most?) ARM64 cores' i-caches, as could be argued of the first example if the allocation policies are varied between the L2 nodes).

The really ugly bits here happen if you add another layer:

L1i->L2i->L3
L1d-------^

which is why I made that an error message, not including the fact that since the levels aren't tagged, the numbering and meaning isn't clear.

(the L1i in the above example might be better called an L0i to avoid throwing off the rest of the hierarchy numbering, also so it could be ignored).
Summary:

I'm not at all happy with this specification's attempt to leave out pieces of information which would make parsing things more deterministic. In this case I'm happy to demote the message level but not to remove it entirely; I do think the obvious case you list shouldn't be the default one.

Lastly:

I'm assuming the final result is that the table is actually being parsed correctly despite the ugly message?
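[Editor's note] The shared-L2 situation can be modeled in a few lines to show why the message fires even on a well-formed table: both private-resource paths from the core reach the same L2 node, so a per-path walk that compares only (level, type) sees it twice. This is a hypothetical toy model, not the kernel code; in the real table, comparing node identity (the subtable offset) would distinguish a genuinely duplicated cache from one merely reachable along two paths:

```c
#include <stddef.h>

/* Hypothetical toy model -- NOT the kernel's ACPI structures. */
struct sim_cache {
	int level;			/* 1 for L1, 2 for L2, ... */
	struct sim_cache *next_level;	/* next cache level, or NULL */
};

/*
 * Count how many of the per-resource walks visit 'target'.  With the
 * ThunderX2-style layout (L1i -> L2, L1d -> L2) the shared L2 node is
 * reached once per path, which is what trips the duplicate message
 * when only (level, type) are compared instead of node identity.
 */
static int visits(struct sim_cache *resources[], int nr,
		  const struct sim_cache *target)
{
	int i, count = 0;
	const struct sim_cache *c;

	for (i = 0; i < nr; i++)
		for (c = resources[i]; c; c = c->next_level)
			if (c == target)
				count++;
	return count;
}
```

Tracking the visited node pointer (offset) rather than its level/type pair is one way to make the "duplicate" diagnosis deterministic without demoting legitimate shared next-level references.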
On 18.10.2017 19:30, Jeremy Linton wrote: > On 10/18/2017 05:24 AM, Tomasz Nowicki wrote: >> On 18.10.2017 07:39, Tomasz Nowicki wrote: >>> Hi, >>> >>> On 17.10.2017 17:22, Jeremy Linton wrote: >>>> Hi, >>>> >>>> On 10/17/2017 08:25 AM, Tomasz Nowicki wrote: >>>>> Hi Jeremy, >>>>> >>>>> I did second round of review and have some more comments, please >>>>> see below: >>>>> >>>>> On 12.10.2017 21:48, Jeremy Linton wrote: >>>>>> ACPI 6.2 adds a new table, which describes how processing units >>>>>> are related to each other in tree like fashion. Caches are >>>>>> also sprinkled throughout the tree and describe the properties >>>>>> of the caches in relation to other caches and processing units. >>>>>> >>>>>> Add the code to parse the cache hierarchy and report the total >>>>>> number of levels of cache for a given core using >>>>>> acpi_find_last_cache_level() as well as fill out the individual >>>>>> cores cache information with cache_setup_acpi() once the >>>>>> cpu_cacheinfo structure has been populated by the arch specific >>>>>> code. >>>>>> >>>>>> Further, report peers in the topology using setup_acpi_cpu_topology() >>>>>> to report a unique ID for each processing unit at a given level >>>>>> in the tree. These unique id's can then be used to match related >>>>>> processing units which exist as threads, COD (clusters >>>>>> on die), within a given package, etc. 
>>>>>> >>>>>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> >>>>>> --- >>>>>> drivers/acpi/pptt.c | 485 >>>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>>>> 1 file changed, 485 insertions(+) >>>>>> create mode 100644 drivers/acpi/pptt.c >>>>>> >>>>>> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c >>>>>> new file mode 100644 >>>>>> index 000000000000..c86715fed4a7 >>>>>> --- /dev/null >>>>>> +++ b/drivers/acpi/pptt.c >>>>>> @@ -0,1 +1,485 @@ >>>>>> +/* >>>>>> + * Copyright (C) 2017, ARM >>>>>> + * >>>>>> + * This program is free software; you can redistribute it and/or >>>>>> modify it >>>>>> + * under the terms and conditions of the GNU General Public License, >>>>>> + * version 2, as published by the Free Software Foundation. >>>>>> + * >>>>>> + * This program is distributed in the hope it will be useful, but >>>>>> WITHOUT >>>>>> + * ANY WARRANTY; without even the implied warranty of >>>>>> MERCHANTABILITY or >>>>>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public >>>>>> License for >>>>>> + * more details. >>>>>> + * >>>>>> + * This file implements parsing of Processor Properties Topology >>>>>> Table (PPTT) >>>>>> + * which is optionally used to describe the processor and cache >>>>>> topology. >>>>>> + * Due to the relative pointers used throughout the table, this >>>>>> doesn't >>>>>> + * leverage the existing subtable parsing in the kernel. 
>>>>>> + */ >>>>>> +#define pr_fmt(fmt) "ACPI PPTT: " fmt >>>>>> + >>>>>> +#include <linux/acpi.h> >>>>>> +#include <linux/cacheinfo.h> >>>>>> +#include <acpi/processor.h> >>>>>> + >>>>>> +/* >>>>>> + * Given the PPTT table, find and verify that the subtable entry >>>>>> + * is located within the table >>>>>> + */ >>>>>> +static struct acpi_subtable_header *fetch_pptt_subtable( >>>>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>>>> +{ >>>>>> + struct acpi_subtable_header *entry; >>>>>> + >>>>>> + /* there isn't a subtable at reference 0 */ >>>>>> + if (!pptt_ref) >>>>>> + return NULL; >>>>>> + >>>>>> + if (pptt_ref + sizeof(struct acpi_subtable_header) > >>>>>> table_hdr->length) >>>>>> + return NULL; >>>>>> + >>>>>> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >>>>>> pptt_ref); >>>>>> + >>>>>> + if (pptt_ref + entry->length > table_hdr->length) >>>>>> + return NULL; >>>>>> + >>>>>> + return entry; >>>>>> +} >>>>>> + >>>>>> +static struct acpi_pptt_processor *fetch_pptt_node( >>>>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>>>> +{ >>>>>> + return (struct acpi_pptt_processor >>>>>> *)fetch_pptt_subtable(table_hdr, pptt_ref); >>>>>> +} >>>>>> + >>>>>> +static struct acpi_pptt_cache *fetch_pptt_cache( >>>>>> + struct acpi_table_header *table_hdr, u32 pptt_ref) >>>>>> +{ >>>>>> + return (struct acpi_pptt_cache >>>>>> *)fetch_pptt_subtable(table_hdr, pptt_ref); >>>>>> +} >>>>>> + >>>>>> +static struct acpi_subtable_header *acpi_get_pptt_resource( >>>>>> + struct acpi_table_header *table_hdr, >>>>>> + struct acpi_pptt_processor *node, int resource) >>>>>> +{ >>>>>> + u32 ref; >>>>>> + >>>>>> + if (resource >= node->number_of_priv_resources) >>>>>> + return NULL; >>>>>> + >>>>>> + ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) + >>>>>> + sizeof(u32) * resource); >>>>>> + >>>>>> + return fetch_pptt_subtable(table_hdr, ref); >>>>>> +} >>>>>> + >>>>>> +/* >>>>>> + * given a pptt resource, verify that it is a cache 
node, then walk >>>>>> + * down each level of caches, counting how many levels are found >>>>>> + * as well as checking the cache type (icache, dcache, unified). >>>>>> If a >>>>>> + * level & type match, then we set found, and continue the search. >>>>>> + * Once the entire cache branch has been walked return its max >>>>>> + * depth. >>>>>> + */ >>>>>> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr, >>>>>> + int local_level, >>>>>> + struct acpi_subtable_header *res, >>>>>> + struct acpi_pptt_cache **found, >>>>>> + int level, int type) >>>>>> +{ >>>>>> + struct acpi_pptt_cache *cache; >>>>>> + >>>>>> + if (res->type != ACPI_PPTT_TYPE_CACHE) >>>>>> + return 0; >>>>>> + >>>>>> + cache = (struct acpi_pptt_cache *) res; >>>>>> + while (cache) { >>>>>> + local_level++; >>>>>> + >>>>>> + if ((local_level == level) && >>>>>> + (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) && >>>>>> + ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == >>>>>> type)) { >>>>> >>>>> Attributes have to be shifted: >>>>> >>>>> (cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) >> 2 >>>> >>>> Hmmm, I'm not sure that is true, the top level function in this >>>> routine convert the "linux" constant to the ACPI version of that >>>> constant. In that case the "type" field is pre-shifted, so that it >>>> matches the result of just anding against the field... That is >>>> unless I messed something up, which I don't see at the moment (and >>>> the code of course has been tested with PPTT's from multiple people >>>> at this point). >>> >>> For ThunderX2 I got lots of errors in dmesg: >>> Found duplicate cache level/type unable to determine uniqueness >>> >>> So I fixed "type" macros definitions (without shifting) and shift it >>> here which fixes the issue. As you said, it can be pre-shifted as well. > > Ah, yah right... If you removed the shift per your original comment then > it breaks this. 
Yes, and the type definitions for cache type aren't > wrong in this version because the unified state has the 3rd bit set for > both the 0x3 and 0x2 values and its only used to covert from the linux > type to the ACPI type (and not back because we don't mess with whatever > the original "detection" was). I'm not really planning on changing that > because I don't think it helps "readability" (and it converts a compile > time constant to a runtime shift). > >>> >>>> >>>> >>>>> >>>>>> + if (*found != NULL) >>>>>> + pr_err("Found duplicate cache level/type unable >>>>>> to determine uniqueness\n"); >> >> Actually I still see this error messages in my dmesg. It is because >> the following ThunderX2 per-core L1 and L2 cache hierarchy: >> >> Core >> ------------------ >> | | >> | L1i ----- | >> | | | >> | ----L2 | >> | | | >> | L1d ----- | >> | | >> ------------------ >> >> In this case we have two paths which lead to L2 cache and hit above >> case. Is it really error case? > > No, but its not deterministic unless we mark the node, which doesn't > solve the problem of a table constructed like > > L1i->L2 (unified) > L1d->L2 (unified) > > or various other structures which aren't disallowed by the spec and have > non-deterministic real world meanings, anymore than constructing the > table like: > > L1i > Lid->L2(unified) > > which I tend to prefer because with a structuring like that it can be > deterministic (and in a way actually represents the non-coherent > behavior of (most?) ARM64 core's i-caches, as could be argued the first > example if the allocation policies are varied between the L2 nodes). > > The really ugly bits here happen if you add another layer: > > L1i->L2i-L3 > L1d------^ > > which is why I made that an error message, not including the fact that > since the levels aren't tagged the numbering and meaning isn't clear. 
>
> (the L1i in the above example might be better called an L0i to avoid
> throwing off the reset of the hierarchy numbering, also so it could be
> ignored).
>
> Summary:
>
> I'm not at all happy with this specification's attempt to leave out
> pieces of information which make parsing things more deterministic. In
> this case I'm happy to demote the message level, but not remove it
> entirely but I do think the obvious case you list shouldn't be the
> default one.
>
> Lastly:
>
> I'm assuming the final result is that the table is actually being parsed
> correctly despite the ugly message?

Indeed, the ThunderX2 PPTT table is being parsed such that the topology shown in lstopo and lscpu is correct.

Thanks,
Tomasz
On Thu, Oct 12, 2017 at 02:48:50PM -0500, Jeremy Linton wrote: > ACPI 6.2 adds a new table, which describes how processing units > are related to each other in tree like fashion. Caches are > also sprinkled throughout the tree and describe the properties > of the caches in relation to other caches and processing units. > > Add the code to parse the cache hierarchy and report the total > number of levels of cache for a given core using > acpi_find_last_cache_level() as well as fill out the individual > cores cache information with cache_setup_acpi() once the > cpu_cacheinfo structure has been populated by the arch specific > code. > > Further, report peers in the topology using setup_acpi_cpu_topology() > to report a unique ID for each processing unit at a given level > in the tree. These unique id's can then be used to match related > processing units which exist as threads, COD (clusters > on die), within a given package, etc. I think this patch should be split ((1) topology (2) cache), it is doing too much which makes it hard to review. [...] > +/* determine if the given node is a leaf node */ > +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *node) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + u32 node_entry; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); > + > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + (cpu_node->parent == node_entry)) > + return 0; > + entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length); > + } A leaf node is a node with a valid acpi_id corresponding to an MADT entry, right ? 
By the way, is this function really needed ? > + return 1; > +} > + > +/* > + * Find the subtable entry describing the provided processor > + */ > +static struct acpi_pptt_processor *acpi_find_processor_node( > + struct acpi_table_header *table_hdr, > + u32 acpi_cpu_id) > +{ > + struct acpi_subtable_header *entry; > + unsigned long table_end; > + struct acpi_pptt_processor *cpu_node; > + > + table_end = (unsigned long)table_hdr + table_hdr->length; > + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > + sizeof(struct acpi_table_pptt)); > + > + /* find the processor structure associated with this cpuid */ > + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > + cpu_node = (struct acpi_pptt_processor *)entry; > + > + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > + acpi_pptt_leaf_node(table_hdr, cpu_node)) { Is the leaf node check necessary ? Or you just need to check the ACPI Processor ID valid flag (as discussed offline) ? > + pr_debug("checking phy_cpu_id %d against acpi id %d\n", > + acpi_cpu_id, cpu_node->acpi_processor_id); Side note: I'd question (some of) these pr_debug() messages. > + if (acpi_cpu_id == cpu_node->acpi_processor_id) { > + /* found the correct entry */ > + pr_debug("match found!\n"); Like this one for instance. > + return (struct acpi_pptt_processor *)entry; > + } > + } > + > + if (entry->length == 0) { > + pr_err("Invalid zero length subtable\n"); > + break; > + } This should be moved at the beginning of the loop. > + entry = (struct acpi_subtable_header *) > + ((u8 *)entry + entry->length); > + } > + > + return NULL; > +} > + > +/* > + * Given a acpi_pptt_processor node, walk up until we identify the > + * package that the node is associated with or we run out of levels > + * to request. 
> + */ > +static struct acpi_pptt_processor *acpi_find_processor_package_id( > + struct acpi_table_header *table_hdr, > + struct acpi_pptt_processor *cpu, > + int level) > +{ > + struct acpi_pptt_processor *prev_node; > + > + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { I really do not understand what ACPI_PPTT_PHYSICAL_PACKAGE means and more importantly, how it is actually used in this code. This function is used to get a topology id (that is just a number for a given topology level) for a given level starting from a given leaf node. Why do we care at all about ACPI_PPTT_PHYSICAL_PACKAGE ? > + pr_debug("level %d\n", level); > + prev_node = fetch_pptt_node(table_hdr, cpu->parent); > + if (prev_node == NULL) > + break; > + cpu = prev_node; > + level--; > + } > + return cpu; > +} > + > +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id) > +{ > + int number_of_levels = 0; > + struct acpi_pptt_processor *cpu; > + > + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (cpu) > + number_of_levels = acpi_process_node(table_hdr, cpu); > + > + return number_of_levels; > +} > + > +#define ACPI_6_2_CACHE_TYPE_DATA (0x0) > +#define ACPI_6_2_CACHE_TYPE_INSTR (1<<2) > +#define ACPI_6_2_CACHE_TYPE_UNIFIED (1<<3) > +#define ACPI_6_2_CACHE_POLICY_WB (0x0) > +#define ACPI_6_2_CACHE_POLICY_WT (1<<4) > +#define ACPI_6_2_CACHE_READ_ALLOCATE (0x0) > +#define ACPI_6_2_CACHE_WRITE_ALLOCATE (0x01) > +#define ACPI_6_2_CACHE_RW_ALLOCATE (0x02) > + > +static u8 acpi_cache_type(enum cache_type type) > +{ > + switch (type) { > + case CACHE_TYPE_DATA: > + pr_debug("Looking for data cache\n"); > + return ACPI_6_2_CACHE_TYPE_DATA; > + case CACHE_TYPE_INST: > + pr_debug("Looking for instruction cache\n"); > + return ACPI_6_2_CACHE_TYPE_INSTR; > + default: > + pr_debug("Unknown cache type, assume unified\n"); > + case CACHE_TYPE_UNIFIED: > + pr_debug("Looking for unified cache\n"); > + return ACPI_6_2_CACHE_TYPE_UNIFIED; > + } > +} > + 
> +/* find the ACPI node describing the cache type/level for the given CPU */ > +static struct acpi_pptt_cache *acpi_find_cache_node( > + struct acpi_table_header *table_hdr, u32 acpi_cpu_id, > + enum cache_type type, unsigned int level, > + struct acpi_pptt_processor **node) > +{ > + int total_levels = 0; > + struct acpi_pptt_cache *found = NULL; > + struct acpi_pptt_processor *cpu_node; > + u8 acpi_type = acpi_cache_type(type); > + > + pr_debug("Looking for CPU %d's level %d cache type %d\n", > + acpi_cpu_id, level, acpi_type); > + > + cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id); > + if (!cpu_node) > + return NULL; > + > + do { > + found = acpi_find_cache_level(table_hdr, cpu_node, &total_levels, level, acpi_type); > + *node = cpu_node; > + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); > + } while ((cpu_node) && (!found)); > + > + return found; > +} > + > +int acpi_find_last_cache_level(unsigned int cpu) > +{ > + u32 acpi_cpu_id; > + struct acpi_table_header *table; > + int number_of_levels = 0; > + acpi_status status; > + > + pr_debug("Cache Setup find last level cpu=%d\n", cpu); > + > + acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; This would break !ARM64. > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); > + } else { > + number_of_levels = acpi_parse_pptt(table, acpi_cpu_id); > + acpi_put_table(table); > + } > + pr_debug("Cache Setup find last level level=%d\n", number_of_levels); > + > + return number_of_levels; > +} > + > +/* > + * The ACPI spec implies that the fields in the cache structures are used to > + * extend and correct the information probed from the hardware. In the case > + * of arm64 the CCSIDR probing has been removed because it might be incorrect. 
> + */ > +static void update_cache_properties(struct cacheinfo *this_leaf, > + struct acpi_pptt_cache *found_cache, > + struct acpi_pptt_processor *cpu_node) > +{ > + if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) > + this_leaf->size = found_cache->size; > + if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) > + this_leaf->coherency_line_size = found_cache->line_size; > + if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) > + this_leaf->number_of_sets = found_cache->number_of_sets; > + if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) > + this_leaf->ways_of_associativity = found_cache->associativity; > + if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) > + switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) { > + case ACPI_6_2_CACHE_POLICY_WT: > + this_leaf->attributes = CACHE_WRITE_THROUGH; > + break; > + case ACPI_6_2_CACHE_POLICY_WB: > + this_leaf->attributes = CACHE_WRITE_BACK; > + break; > + default: > + pr_err("Unknown ACPI cache policy %d\n", > + found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY); > + } > + if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID) > + switch (found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE) { > + case ACPI_6_2_CACHE_READ_ALLOCATE: > + this_leaf->attributes |= CACHE_READ_ALLOCATE; > + break; > + case ACPI_6_2_CACHE_WRITE_ALLOCATE: > + this_leaf->attributes |= CACHE_WRITE_ALLOCATE; > + break; > + case ACPI_6_2_CACHE_RW_ALLOCATE: > + this_leaf->attributes |= > + CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE; > + break; > + default: > + pr_err("Unknown ACPI cache allocation policy %d\n", > + found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE); > + } > +} > + > +static void cache_setup_acpi_cpu(struct acpi_table_header *table, > + unsigned int cpu) > +{ > + struct acpi_pptt_cache *found_cache; > + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); > + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; Ditto. 
> + struct cacheinfo *this_leaf; > + unsigned int index = 0; > + struct acpi_pptt_processor *cpu_node = NULL; > + > + while (index < get_cpu_cacheinfo(cpu)->num_leaves) { > + this_leaf = this_cpu_ci->info_list + index; > + found_cache = acpi_find_cache_node(table, acpi_cpu_id, > + this_leaf->type, > + this_leaf->level, > + &cpu_node); > + pr_debug("found = %p %p\n", found_cache, cpu_node); > + if (found_cache) > + update_cache_properties(this_leaf, > + found_cache, > + cpu_node); > + > + index++; > + } > +} > + > +static int topology_setup_acpi_cpu(struct acpi_table_header *table, > + unsigned int cpu, int level) > +{ > + struct acpi_pptt_processor *cpu_node; > + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; Ditto. > + cpu_node = acpi_find_processor_node(table, acpi_cpu_id); > + if (cpu_node) { > + cpu_node = acpi_find_processor_package_id(table, cpu_node, level); If level is 0 there is nothing to do here. > + /* Only the first level has a guaranteed id */ > + if (level == 0) > + return cpu_node->acpi_processor_id; > + return (int)((u8 *)cpu_node - (u8 *)table); Please explain to me the rationale behind this. To me acpi_processor_id is as good as the cpu_node offset in the table to describe the topology id at a given level, why special case level 0. On top of that, with this ID scheme, we would end up with thread/core/cluster id potentially being non-sequential values (depending on the PPTT table layout) which should not be a problem but we'd better check how people are using them. > + } > + pr_err_once("PPTT table found, but unable to locate core for %d\n", > + cpu); > + return -ENOENT; > +} > + > +/* > + * simply assign a ACPI cache entry to each known CPU cache entry > + * determining which entries are shared is done later. Add a kerneldoc style comment for an external interface. 
> + */ > +int cache_setup_acpi(unsigned int cpu) > +{ > + struct acpi_table_header *table; > + acpi_status status; > + > + pr_debug("Cache Setup ACPI cpu %d\n", cpu); > + > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); > + return -ENOENT; > + } > + > + cache_setup_acpi_cpu(table, cpu); > + acpi_put_table(table); > + > + return status; > +} > + > +/* > + * Determine a topology unique ID for each thread/core/cluster/socket/etc. > + * This ID can then be used to group peers. Ditto. > + */ > +int setup_acpi_cpu_topology(unsigned int cpu, int level) > +{ > + struct acpi_table_header *table; > + acpi_status status; > + int retval; > + > + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); > + if (ACPI_FAILURE(status)) { > + pr_err_once("No PPTT table found, cpu topology may be inaccurate\n"); > + return -ENOENT; > + } > + retval = topology_setup_acpi_cpu(table, cpu, level); > + pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n", > + cpu, level, retval); > + acpi_put_table(table); > + > + return retval; This value is just a token - with no HW meaning whatsoever and that's where I question the ACPI_PPTT_PHYSICAL_PACKAGE flag usage in retrieving it, you are not looking for a packageid (which has no meaning whatsoever anyway and I wonder why it was added to the specs at all) you are looking for an id at a given level. I will comment on the cache code separately - which deserves to be in a separate patch to simplify the review, I avoided repeating already reported review comments. Lorenzo
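The write-policy and allocation-type decoding in update_cache_properties(), quoted in the review above, can be sketched as a standalone function. This is a simplified model: the TOY_* macros mirror the patch's ACPI_6_2_* bit encodings, but the output flags are made-up stand-ins rather than the kernel's real CACHE_* attribute values.

```c
#include <assert.h>
#include <stdint.h>

/* Bit-field encodings mirroring the patch's ACPI_6_2_* macros. */
#define TOY_CACHE_POLICY_WB      0x0
#define TOY_CACHE_POLICY_WT      (1 << 4)
#define TOY_MASK_WRITE_POLICY    (1 << 4)

#define TOY_CACHE_READ_ALLOCATE  0x0
#define TOY_CACHE_WRITE_ALLOCATE 0x01
#define TOY_CACHE_RW_ALLOCATE    0x02
#define TOY_MASK_ALLOCATION      0x03

/* Toy output flags -- stand-ins for the kernel's CACHE_* values. */
#define TOY_WRITE_THROUGH  0x1
#define TOY_WRITE_BACK     0x2
#define TOY_READ_ALLOC     0x4
#define TOY_WRITE_ALLOC    0x8

/*
 * The PPTT cache node packs write policy and allocation type into a
 * single attributes byte; the driver translates that into cacheinfo
 * attribute bits, one mask/switch per field.
 */
static unsigned int toy_decode_attributes(uint8_t attributes)
{
	unsigned int out = 0;

	if ((attributes & TOY_MASK_WRITE_POLICY) == TOY_CACHE_POLICY_WT)
		out |= TOY_WRITE_THROUGH;
	else
		out |= TOY_WRITE_BACK;

	switch (attributes & TOY_MASK_ALLOCATION) {
	case TOY_CACHE_READ_ALLOCATE:
		out |= TOY_READ_ALLOC;
		break;
	case TOY_CACHE_WRITE_ALLOCATE:
		out |= TOY_WRITE_ALLOC;
		break;
	case TOY_CACHE_RW_ALLOCATE:
		out |= TOY_READ_ALLOC | TOY_WRITE_ALLOC;
		break;
	}
	return out;
}
```

Note the patch only applies each decode when the corresponding *_VALID flag is set in the cache node; that guard is omitted here for brevity.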
On 19/10/2017 06:18, Tomasz Nowicki wrote: >> >> Summary: >> >> I'm not at all happy with this specification's attempt to leave out >> pieces of information which make parsing things more deterministic. In >> this case I'm happy to demote the message level, but not remove it >> entirely but I do think the obvious case you list shouldn't be the >> default one. >> >> Lastly: >> >> I'm assuming the final result is that the table is actually being >> parsed correctly despite the ugly message? > > Indeed, the ThunderX2 PPTT table is being parsed so that topology shown > in lstopo and lscpu is correct. Hi Tomasz, Can you share the lscpu output? Does it have cluster info? I did not think that lscpu has a concept of clustering. I would say that the per-cpu cluster index sysfs entry needs to be added to drivers/base/arch_topology.c (and other appropriate code under GENERIC_ARCH_TOPOLOGY) to support this. Thanks, John > > Thanks, > Tomasz
Hi, On 10/19/2017 12:18 AM, Tomasz Nowicki wrote: > On 18.10.2017 19:30, Jeremy Linton wrote: >> On 10/18/2017 05:24 AM, Tomasz Nowicki wrote: >>> On 18.10.2017 07:39, Tomasz Nowicki wrote: >>>> On 17.10.2017 17:22, Jeremy Linton wrote: >>>>> On 10/17/2017 08:25 AM, Tomasz Nowicki wrote: >>>>>> On 12.10.2017 21:48, Jeremy Linton wrote: (trimming) >>>>>>> + if (*found != NULL) >>>>>>> + pr_err("Found duplicate cache level/type unable >>>>>>> to determine uniqueness\n"); >>> >>> Actually I still see this error messages in my dmesg. It is because >>> the following ThunderX2 per-core L1 and L2 cache hierarchy: >>> >>> Core >>> ------------------ >>> | | >>> | L1i ----- | >>> | | | >>> | ----L2 | >>> | | | >>> | L1d ----- | >>> | | >>> ------------------ >>> >>> In this case we have two paths which lead to L2 cache and hit above >>> case. Is it really error case? >> >> No, but its not deterministic unless we mark the node, which doesn't >> solve the problem of a table constructed like >> >> L1i->L2 (unified) >> L1d->L2 (unified) >> >> or various other structures which aren't disallowed by the spec and >> have non-deterministic real world meanings, anymore than constructing >> the table like: >> >> L1i >> Lid->L2(unified) >> >> which I tend to prefer because with a structuring like that it can be >> deterministic (and in a way actually represents the non-coherent >> behavior of (most?) ARM64 core's i-caches, as could be argued the >> first example if the allocation policies are varied between the L2 >> nodes). >> >> The really ugly bits here happen if you add another layer: >> >> L1i->L2i-L3 >> L1d------^ >> >> which is why I made that an error message, not including the fact that >> since the levels aren't tagged the numbering and meaning isn't clear. >> >> (the L1i in the above example might be better called an L0i to avoid >> throwing off the reset of the hierarchy numbering, also so it could be >> ignored). 
>> >> Summary: >> >> I'm not at all happy with this specification's attempt to leave out >> pieces of information which make parsing things more deterministic. In >> this case I'm happy to demote the message level, but not remove it >> entirely but I do think the obvious case you list shouldn't be the >> default one. >> >> Lastly: >> >> I'm assuming the final result is that the table is actually being >> parsed correctly despite the ugly message? > > Indeed, the ThunderX2 PPTT table is being parsed so that topology shown > in lstopo and lscpu is correct. Great. Also, I think this is a better change: if ((*found != NULL) && (*found != cache)) pr_err("Found duplicate cache level/type unable to determine uniqueness\n"); Which if its a duplicate node/type at the given level the message is just suppressed. It will of course still trigger in cases like: L1d->L2 l1i->L2 or other odd cases.
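The suggested guard above can be sketched in isolation. The helper below is a hypothetical stand-in for the matching step inside the cache walk (not the patch's actual acpi_find_cache_level()): it only flags a duplicate when a *different* node matches the same level/type, so the ThunderX2 case, where both the L1i and L1d paths reach the same L2 node, stays quiet.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct toy_cache {
	int level;
	int type;
};

/*
 * Record a level/type match during the cache walk.  Seeing the same
 * node again via a second path (L1i->L2 and L1d->L2) is fine; seeing a
 * second, distinct node for the same level/type is ambiguous and gets
 * flagged.  Returns 1 if a duplicate was reported.
 */
static int toy_record_match(struct toy_cache **found, struct toy_cache *cache)
{
	int dup = (*found != NULL) && (*found != cache);

	if (dup)
		fprintf(stderr,
			"Found duplicate cache level/type unable to determine uniqueness\n");
	*found = cache;
	return dup;
}
```

As noted above, genuinely ambiguous layouts (L1d->L2, L1i->L2 as two separate unified L2 nodes) still trigger the message.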
On 10/19/2017 05:22 AM, Lorenzo Pieralisi wrote: > On Thu, Oct 12, 2017 at 02:48:50PM -0500, Jeremy Linton wrote: >> ACPI 6.2 adds a new table, which describes how processing units >> are related to each other in tree like fashion. Caches are >> also sprinkled throughout the tree and describe the properties >> of the caches in relation to other caches and processing units. >> >> Add the code to parse the cache hierarchy and report the total >> number of levels of cache for a given core using >> acpi_find_last_cache_level() as well as fill out the individual >> cores cache information with cache_setup_acpi() once the >> cpu_cacheinfo structure has been populated by the arch specific >> code. >> >> Further, report peers in the topology using setup_acpi_cpu_topology() >> to report a unique ID for each processing unit at a given level >> in the tree. These unique id's can then be used to match related >> processing units which exist as threads, COD (clusters >> on die), within a given package, etc. > > I think this patch should be split ((1) topology (2) cache), it is doing > too much which makes it hard to review. If you look at the RFC, it only did cache parsing, the topology changes were added for v1. The cache bits are the ugly parts because they are walking up/down both the node tree, as well as the cache tree's attached to the nodes during the walk. Once that was in the place the addition of the cpu topology was trivial. But, trying to understand the cpu topology without first understanding the weird stuff done for the cache topology might not be the right way to approach this code. > > [...] 
> >> +/* determine if the given node is a leaf node */ >> +static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *node) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + u32 node_entry; >> + struct acpi_pptt_processor *cpu_node; >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + node_entry = (u32)((u8 *)node - (u8 *)table_hdr); >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); >> + >> + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + (cpu_node->parent == node_entry)) >> + return 0; >> + entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length); >> + } > > A leaf node is a node with a valid acpi_id corresponding to an MADT > entry, right ? By the way, is this function really needed ? Yes, because the only way to determine if it is a leaf node is to see if there are any references to it elsewhere in the table because the nodes point towards the root of the tree (rather than the other way). This piece was the primary change for v1->v2. 
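The leaf-detection argument above (PPTT nodes point toward the root, so a node is a leaf exactly when no other node names it as parent) can be shown with a toy model. The structures below are simplified stand-ins, not the real ACPICA subtable layout: offsets replace the raw pointer arithmetic of acpi_pptt_leaf_node().

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of the PPTT parent-pointer layout.  Each processor node
 * records the table offset of its parent, so proving a node is a leaf
 * requires scanning the whole table for anyone that references it.
 */
struct toy_pptt_node {
	unsigned int offset;	/* offset of this node within the table */
	unsigned int parent;	/* offset of the parent node, 0 for the root */
};

static int toy_is_leaf(const struct toy_pptt_node *table, size_t count,
		       const struct toy_pptt_node *node)
{
	size_t i;

	for (i = 0; i < count; i++)
		if (table[i].parent == node->offset)
			return 0;	/* someone points at us: not a leaf */
	return 1;
}
```

This also makes the review's efficiency point concrete: running a scan like this for every candidate node is what turns the CPU lookup into an O(n^2) parse.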
> >> + return 1; >> +} >> + >> +/* >> + * Find the subtable entry describing the provided processor >> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_node( >> + struct acpi_table_header *table_hdr, >> + u32 acpi_cpu_id) >> +{ >> + struct acpi_subtable_header *entry; >> + unsigned long table_end; >> + struct acpi_pptt_processor *cpu_node; >> + >> + table_end = (unsigned long)table_hdr + table_hdr->length; >> + entry = (struct acpi_subtable_header *)((u8 *)table_hdr + >> + sizeof(struct acpi_table_pptt)); >> + >> + /* find the processor structure associated with this cpuid */ >> + while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { >> + cpu_node = (struct acpi_pptt_processor *)entry; >> + >> + if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && >> + acpi_pptt_leaf_node(table_hdr, cpu_node)) { > > Is the leaf node check necessary ? Or you just need to check the > ACPI Processor ID valid flag (as discussed offline) ? The valid flag doesn't mean anything for the leaf nodes, so its the only correct way of determining if the node _might_ have a valid madt/acpi ID. This actually should have the acpi_cpu_id checked as part of the if statement and the leaf node check below because doing it this way makes this parse n^2 instead of 2n. Of course in my mind, checking the id before we know it might be valid is backwards of the "logical" way to do it. > >> + pr_debug("checking phy_cpu_id %d against acpi id %d\n", >> + acpi_cpu_id, cpu_node->acpi_processor_id); > > Side note: I'd question (some of) these pr_debug() messages > >> + if (acpi_cpu_id == cpu_node->acpi_processor_id) { >> + /* found the correct entry */ >> + pr_debug("match found!\n"); > > Like this one for instance. This one is a bit redundant, but I come from the school that I want to be able to debug a remote machine. Large blocks of silent code are a nightmare, particularly if you have a sysadmin level user driving the keyboard/etc. 
> >> + return (struct acpi_pptt_processor *)entry; >> + } >> + } >> + >> + if (entry->length == 0) { >> + pr_err("Invalid zero length subtable\n"); >> + break; >> + } > > This should be moved at the beginning of the loop. Yah, the intention was to verify the next entry, but if its 0 then good point, the current one is probably invalid. > >> + entry = (struct acpi_subtable_header *) >> + ((u8 *)entry + entry->length); >> + } >> + >> + return NULL; >> +} >> + >> +/* >> + * Given a acpi_pptt_processor node, walk up until we identify the >> + * package that the node is associated with or we run out of levels >> + * to request. >> + */ >> +static struct acpi_pptt_processor *acpi_find_processor_package_id( >> + struct acpi_table_header *table_hdr, >> + struct acpi_pptt_processor *cpu, >> + int level) >> +{ >> + struct acpi_pptt_processor *prev_node; >> + >> + while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) { > > I really do not understand what ACPI_PPTT_PHYSICAL_PACKAGE means and > more importantly, how it is actually used in this code. ? Physical package maps to the package_id, which is generally defined to mean the "socket" and is used to terminate the cpu topology side of the parse. > > This function is used to get a topology id (that is just a number for > a given topology level) for a given level starting from a given leaf > node. This flag is the one decent part of the spec, because its the only level which actually is guaranteed to mean anything. Because the requirement that the sharability of cache nodes is described with general processor nodes it means that the number of nodes within a given leg of the tree is mostly meaningless because people sprinkle caches around the system, including potentially above the "socket" level. > Why do we care at all about ACPI_PPTT_PHYSICAL_PACKAGE ? Because, it gives us a hard mapping to core siblings. 
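The package walk being debated above can be sketched with direct parent pointers instead of table offsets. This mirrors the loop in acpi_find_processor_package_id(): climb toward the root until a node carries the physical-package flag, the parent chain ends, or the requested level count is exhausted. The structures and the flag value are simplified assumptions, not the ACPICA definitions.

```c
#include <assert.h>
#include <stddef.h>

#define TOY_PHYSICAL_PACKAGE 0x1	/* stand-in for ACPI_PPTT_PHYSICAL_PACKAGE */

struct toy_node {
	unsigned int flags;
	const struct toy_node *parent;	/* NULL at the root */
};

/*
 * Walk up the tree from a leaf.  The physical-package flag terminates
 * the walk early, which is how the code gets a hard core-siblings
 * boundary; a small level budget stops at an intermediate node instead.
 */
static const struct toy_node *toy_find_package(const struct toy_node *cpu,
					       int level)
{
	while (cpu && level && !(cpu->flags & TOY_PHYSICAL_PACKAGE)) {
		if (!cpu->parent)
			break;
		cpu = cpu->parent;
		level--;
	}
	return cpu;
}
```

Passing an arbitrarily large level (as the follow-on patch reportedly does) makes the package flag the only terminator, which is exactly the usage Lorenzo is questioning.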
> >> + pr_debug("level %d\n", level); >> + prev_node = fetch_pptt_node(table_hdr, cpu->parent); >> + if (prev_node == NULL) >> + break; >> + cpu = prev_node; >> + level--; >> + } >> + return cpu; >> +} >> + >> +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id) >> +{ >> + int number_of_levels = 0; >> + struct acpi_pptt_processor *cpu; >> + >> + cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id); >> + if (cpu) >> + number_of_levels = acpi_process_node(table_hdr, cpu); >> + >> + return number_of_levels; >> +} >> + >> +#define ACPI_6_2_CACHE_TYPE_DATA (0x0) >> +#define ACPI_6_2_CACHE_TYPE_INSTR (1<<2) >> +#define ACPI_6_2_CACHE_TYPE_UNIFIED (1<<3) >> +#define ACPI_6_2_CACHE_POLICY_WB (0x0) >> +#define ACPI_6_2_CACHE_POLICY_WT (1<<4) >> +#define ACPI_6_2_CACHE_READ_ALLOCATE (0x0) >> +#define ACPI_6_2_CACHE_WRITE_ALLOCATE (0x01) >> +#define ACPI_6_2_CACHE_RW_ALLOCATE (0x02) >> + >> +static u8 acpi_cache_type(enum cache_type type) >> +{ >> + switch (type) { >> + case CACHE_TYPE_DATA: >> + pr_debug("Looking for data cache\n"); >> + return ACPI_6_2_CACHE_TYPE_DATA; >> + case CACHE_TYPE_INST: >> + pr_debug("Looking for instruction cache\n"); >> + return ACPI_6_2_CACHE_TYPE_INSTR; >> + default: >> + pr_debug("Unknown cache type, assume unified\n"); >> + case CACHE_TYPE_UNIFIED: >> + pr_debug("Looking for unified cache\n"); >> + return ACPI_6_2_CACHE_TYPE_UNIFIED; >> + } >> +} >> + >> +/* find the ACPI node describing the cache type/level for the given CPU */ >> +static struct acpi_pptt_cache *acpi_find_cache_node( >> + struct acpi_table_header *table_hdr, u32 acpi_cpu_id, >> + enum cache_type type, unsigned int level, >> + struct acpi_pptt_processor **node) >> +{ >> + int total_levels = 0; >> + struct acpi_pptt_cache *found = NULL; >> + struct acpi_pptt_processor *cpu_node; >> + u8 acpi_type = acpi_cache_type(type); >> + >> + pr_debug("Looking for CPU %d's level %d cache type %d\n", >> + acpi_cpu_id, level, acpi_type); >> + >> + cpu_node = 
acpi_find_processor_node(table_hdr, acpi_cpu_id); >> + if (!cpu_node) >> + return NULL; >> + >> + do { >> + found = acpi_find_cache_level(table_hdr, cpu_node, &total_levels, level, acpi_type); >> + *node = cpu_node; >> + cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent); >> + } while ((cpu_node) && (!found)); >> + >> + return found; >> +} >> + >> +int acpi_find_last_cache_level(unsigned int cpu) >> +{ >> + u32 acpi_cpu_id; >> + struct acpi_table_header *table; >> + int number_of_levels = 0; >> + acpi_status status; >> + >> + pr_debug("Cache Setup find last level cpu=%d\n", cpu); >> + >> + acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > > This would break !ARM64. > >> + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); >> + if (ACPI_FAILURE(status)) { >> + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); Yup, as in a way this does too... Without writing the binding code for another arch where that line is isn't clear at the moment. Part of the reason I put this in the arm64 directory. >> + } else { >> + number_of_levels = acpi_parse_pptt(table, acpi_cpu_id); >> + acpi_put_table(table); >> + } >> + pr_debug("Cache Setup find last level level=%d\n", number_of_levels); >> + >> + return number_of_levels; >> +} >> + >> +/* >> + * The ACPI spec implies that the fields in the cache structures are used to >> + * extend and correct the information probed from the hardware. In the case >> + * of arm64 the CCSIDR probing has been removed because it might be incorrect. 
>> + */ >> +static void update_cache_properties(struct cacheinfo *this_leaf, >> + struct acpi_pptt_cache *found_cache, >> + struct acpi_pptt_processor *cpu_node) >> +{ >> + if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) >> + this_leaf->size = found_cache->size; >> + if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) >> + this_leaf->coherency_line_size = found_cache->line_size; >> + if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) >> + this_leaf->number_of_sets = found_cache->number_of_sets; >> + if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) >> + this_leaf->ways_of_associativity = found_cache->associativity; >> + if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) >> + switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) { >> + case ACPI_6_2_CACHE_POLICY_WT: >> + this_leaf->attributes = CACHE_WRITE_THROUGH; >> + break; >> + case ACPI_6_2_CACHE_POLICY_WB: >> + this_leaf->attributes = CACHE_WRITE_BACK; >> + break; >> + default: >> + pr_err("Unknown ACPI cache policy %d\n", >> + found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY); >> + } >> + if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID) >> + switch (found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE) { >> + case ACPI_6_2_CACHE_READ_ALLOCATE: >> + this_leaf->attributes |= CACHE_READ_ALLOCATE; >> + break; >> + case ACPI_6_2_CACHE_WRITE_ALLOCATE: >> + this_leaf->attributes |= CACHE_WRITE_ALLOCATE; >> + break; >> + case ACPI_6_2_CACHE_RW_ALLOCATE: >> + this_leaf->attributes |= >> + CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE; >> + break; >> + default: >> + pr_err("Unknown ACPI cache allocation policy %d\n", >> + found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE); >> + } >> +} >> + >> +static void cache_setup_acpi_cpu(struct acpi_table_header *table, >> + unsigned int cpu) >> +{ >> + struct acpi_pptt_cache *found_cache; >> + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); >> + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > > Ditto. 
> >> + struct cacheinfo *this_leaf; >> + unsigned int index = 0; >> + struct acpi_pptt_processor *cpu_node = NULL; >> + >> + while (index < get_cpu_cacheinfo(cpu)->num_leaves) { >> + this_leaf = this_cpu_ci->info_list + index; >> + found_cache = acpi_find_cache_node(table, acpi_cpu_id, >> + this_leaf->type, >> + this_leaf->level, >> + &cpu_node); >> + pr_debug("found = %p %p\n", found_cache, cpu_node); >> + if (found_cache) >> + update_cache_properties(this_leaf, >> + found_cache, >> + cpu_node); >> + >> + index++; >> + } >> +} >> + >> +static int topology_setup_acpi_cpu(struct acpi_table_header *table, >> + unsigned int cpu, int level) >> +{ >> + struct acpi_pptt_processor *cpu_node; >> + u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid; > > Ditto. > >> + cpu_node = acpi_find_processor_node(table, acpi_cpu_id); >> + if (cpu_node) { >> + cpu_node = acpi_find_processor_package_id(table, cpu_node, level); > > If level is 0 there is nothing to do here. > >> + /* Only the first level has a guaranteed id */ >> + if (level == 0) >> + return cpu_node->acpi_processor_id; >> + return (int)((u8 *)cpu_node - (u8 *)table); > > Please explain to me the rationale behind this. To me acpi_processor_id > is as good as the cpu_node offset in the table to describe the topology > id at a given level, why special case level 0. Level 0 is the only level guaranteed to have something set in the acpi_processor_id field. Its possible that values exist in nodes above this one, but they must _all_ be flagged and have matching container ids, and nothing in the spec requires that. Meaning that we need a guaranteed way to generate ids. This was added between v2->v3 after the discussion about making the ids a little nicer for the user. > > On top of that, with this ID scheme, we would end up with > thread/core/cluster id potentially being non-sequential values > (depending on the PPTT table layout) which should not be a problem but > we'd better check how people are using them. 
The thread (or core, depending on which is the 0 level) will have firmware provided Ids, everything else gets somewhat random looking but consistent ids. I commented earlier in this series that "normalizing" them is totally doable, although at the moment really only the physical_id is user visible and that should probably be normalized outside of this module in the arm64 topology parser if we want to actually do it. I'm not sure its worth the effort at least not as part of the general PPTT changes. > >> + } >> + pr_err_once("PPTT table found, but unable to locate core for %d\n", >> + cpu); >> + return -ENOENT; >> +} >> + >> +/* >> + * simply assign a ACPI cache entry to each known CPU cache entry >> + * determining which entries are shared is done later. > > Add a kerneldoc style comment for an external interface. That is a good point. > >> + */ >> +int cache_setup_acpi(unsigned int cpu) >> +{ >> + struct acpi_table_header *table; >> + acpi_status status; >> + >> + pr_debug("Cache Setup ACPI cpu %d\n", cpu); >> + >> + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); >> + if (ACPI_FAILURE(status)) { >> + pr_err_once("No PPTT table found, cache topology may be inaccurate\n"); >> + return -ENOENT; >> + } >> + >> + cache_setup_acpi_cpu(table, cpu); >> + acpi_put_table(table); >> + >> + return status; >> +} >> + >> +/* >> + * Determine a topology unique ID for each thread/core/cluster/socket/etc. >> + * This ID can then be used to group peers. > > Ditto. 
> >> + */ >> +int setup_acpi_cpu_topology(unsigned int cpu, int level) >> +{ >> + struct acpi_table_header *table; >> + acpi_status status; >> + int retval; >> + >> + status = acpi_get_table(ACPI_SIG_PPTT, 0, &table); >> + if (ACPI_FAILURE(status)) { >> + pr_err_once("No PPTT table found, cpu topology may be inaccurate\n"); >> + return -ENOENT; >> + } >> + retval = topology_setup_acpi_cpu(table, cpu, level); >> + pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n", >> + cpu, level, retval); >> + acpi_put_table(table); >> + >> + return retval; > > This value is just a token - with no HW meaning whatsoever and that's > where I question the ACPI_PPTT_PHYSICAL_PACKAGE flag usage in retrieving > it, you are not looking for a packageid (which has no meaning whatsoever > anyway and I wonder why it was added to the specs at all) you are > looking for an id at a given level. If you look at the next patch in the series, to get the top level I pass an arbitrary large value as the "level" which should terminate on the PHYSICAL_PACKAGE rather than any intermediate nodes. > > I will comment on the cache code separately - which deserves to > be in a separate patch to simplify the review, I avoided repeating > already reported review comments. > > Lorenzo >
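The ID scheme defended above (level 0 returns the firmware-provided acpi_processor_id; every other level returns the node's byte offset within the table as an opaque but unique token) can be reduced to a few lines. The types here are hypothetical simplifications of the ACPICA structures.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the PPTT table and processor node. */
struct toy_table {
	uint8_t data[256];
};

struct toy_proc_node {
	uint32_t acpi_processor_id;
};

/*
 * Level 0 is the only level guaranteed to carry a valid firmware ID, so
 * it returns acpi_processor_id.  Higher levels fall back to the node's
 * offset in the table: unique and consistent, but "just a token" with
 * no hardware meaning, and potentially non-sequential.
 */
static int toy_topology_id(const struct toy_table *table,
			   const struct toy_proc_node *node, int level)
{
	if (level == 0)
		return (int)node->acpi_processor_id;
	return (int)((const uint8_t *)node - (const uint8_t *)table);
}
```

This makes Lorenzo's objection easy to see: two tables describing the same machine with different subtable ordering would hand userspace different cluster IDs.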
On Thu, Oct 19, 2017 at 10:43:46AM -0500, Jeremy Linton wrote: > On 10/19/2017 05:22 AM, Lorenzo Pieralisi wrote: > >On Thu, Oct 12, 2017 at 02:48:50PM -0500, Jeremy Linton wrote: > >>ACPI 6.2 adds a new table, which describes how processing units > >>are related to each other in tree like fashion. Caches are > >>also sprinkled throughout the tree and describe the properties > >>of the caches in relation to other caches and processing units. > >> > >>Add the code to parse the cache hierarchy and report the total > >>number of levels of cache for a given core using > >>acpi_find_last_cache_level() as well as fill out the individual > >>cores cache information with cache_setup_acpi() once the > >>cpu_cacheinfo structure has been populated by the arch specific > >>code. > >> > >>Further, report peers in the topology using setup_acpi_cpu_topology() > >>to report a unique ID for each processing unit at a given level > >>in the tree. These unique id's can then be used to match related > >>processing units which exist as threads, COD (clusters > >>on die), within a given package, etc. > > > >I think this patch should be split ((1) topology (2) cache), it is doing > >too much which makes it hard to review. > > If you look at the RFC, it only did cache parsing, the topology > changes were added for v1. The cache bits are the ugly parts because > they are walking up/down both the node tree, as well as the cache > tree's attached to the nodes during the walk. Once that was in the > place the addition of the cpu topology was trivial. But, trying to > understand the cpu topology without first understanding the weird > stuff done for the cache topology might not be the right way to > approach this code. Topology and cache bindings parsing seem decoupled to me: cache_setup_acpi(cpu) setup_acpi_cpu_topology(cpu, level) I mentioned that because it can simplify review (and merging) of this series. > > > >[...] 
> > > >>+/* determine if the given node is a leaf node */ > >>+static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr, > >>+ struct acpi_pptt_processor *node) > >>+{ > >>+ struct acpi_subtable_header *entry; > >>+ unsigned long table_end; > >>+ u32 node_entry; > >>+ struct acpi_pptt_processor *cpu_node; > >>+ > >>+ table_end = (unsigned long)table_hdr + table_hdr->length; > >>+ node_entry = (u32)((u8 *)node - (u8 *)table_hdr); > >>+ entry = (struct acpi_subtable_header *)((u8 *)table_hdr + > >>+ sizeof(struct acpi_table_pptt)); > >>+ > >>+ while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) { > >>+ cpu_node = (struct acpi_pptt_processor *)entry; > >>+ if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) && > >>+ (cpu_node->parent == node_entry)) > >>+ return 0; > >>+ entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length); > >>+ } > > > >A leaf node is a node with a valid acpi_id corresponding to an MADT > >entry, right ? By the way, is this function really needed ? > > Yes, because the only way to determine if it is a leaf node is to > see if there are any references to it elsewhere in the table because > the nodes point towards the root of the tree (rather than the other > way). The question is whether we need to know a node is a leaf, see below. > This piece was the primary change for v1->v2. 
> >
> >>+        return 1;
> >>+}
> >>+
> >>+/*
> >>+ * Find the subtable entry describing the provided processor
> >>+ */
> >>+static struct acpi_pptt_processor *acpi_find_processor_node(
> >>+        struct acpi_table_header *table_hdr,
> >>+        u32 acpi_cpu_id)
> >>+{
> >>+        struct acpi_subtable_header *entry;
> >>+        unsigned long table_end;
> >>+        struct acpi_pptt_processor *cpu_node;
> >>+
> >>+        table_end = (unsigned long)table_hdr + table_hdr->length;
> >>+        entry = (struct acpi_subtable_header *)((u8 *)table_hdr +
> >>+                sizeof(struct acpi_table_pptt));
> >>+
> >>+        /* find the processor structure associated with this cpuid */
> >>+        while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) {
> >>+                cpu_node = (struct acpi_pptt_processor *)entry;
> >>+
> >>+                if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) &&
> >>+                    acpi_pptt_leaf_node(table_hdr, cpu_node)) {
> >
> >Is the leaf node check necessary ? Or you just need to check the
> >ACPI Processor ID valid flag (as discussed offline) ?
>
> The valid flag doesn't mean anything for the leaf nodes, so its the
> only correct way of determining if the node _might_ have a valid
> madt/acpi ID. This actually should have the acpi_cpu_id checked as
> part of the if statement and the leaf node check below because doing
> it this way makes this parse n^2 instead of 2n. Of course in my
> mind, checking the id before we know it might be valid is backwards
> of the "logical" way to do it.

Ok, it is not clearly worded in the specs (we can update it though) but
I think the valid flag must be set for leaf nodes, which would make the
leaf node check useless because you just have to match a PPTT node with
a valid ACPI Processor ID.
Lorenzo

> >>+                        pr_debug("checking phy_cpu_id %d against acpi id %d\n",
> >>+                                 acpi_cpu_id, cpu_node->acpi_processor_id);
> >
> >Side note: I'd question (some of) these pr_debug() messages
> >
> >>+                        if (acpi_cpu_id == cpu_node->acpi_processor_id) {
> >>+                                /* found the correct entry */
> >>+                                pr_debug("match found!\n");
> >
> >Like this one for instance.
>
> This one is a bit redundant, but I come from the school that I want
> to be able to debug a remote machine. Large blocks of silent code
> are a nightmare, particularly if you have a sysadmin level user
> driving the keyboard/etc.
>
> >
> >>+                                return (struct acpi_pptt_processor *)entry;
> >>+                        }
> >>+                }
> >>+
> >>+                if (entry->length == 0) {
> >>+                        pr_err("Invalid zero length subtable\n");
> >>+                        break;
> >>+                }
> >
> >This should be moved at the beginning of the loop.
>
> Yah, the intention was to verify the next entry, but if its 0 then
> good point, the current one is probably invalid.
>
> >
> >>+                entry = (struct acpi_subtable_header *)
> >>+                        ((u8 *)entry + entry->length);
> >>+        }
> >>+
> >>+        return NULL;
> >>+}
> >>+
> >>+/*
> >>+ * Given a acpi_pptt_processor node, walk up until we identify the
> >>+ * package that the node is associated with or we run out of levels
> >>+ * to request.
> >>+ */
> >>+static struct acpi_pptt_processor *acpi_find_processor_package_id(
> >>+        struct acpi_table_header *table_hdr,
> >>+        struct acpi_pptt_processor *cpu,
> >>+        int level)
> >>+{
> >>+        struct acpi_pptt_processor *prev_node;
> >>+
> >>+        while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) {
> >
> >I really do not understand what ACPI_PPTT_PHYSICAL_PACKAGE means and
> >more importantly, how it is actually used in this code.
>
> ?
>
> Physical package maps to the package_id, which is generally defined
> to mean the "socket" and is used to terminate the cpu topology side
> of the parse.
>
> >
> >This function is used to get a topology id (that is just a number for
> >a given topology level) for a given level starting from a given leaf
> >node.
>
> This flag is the one decent part of the spec, because its the only
> level which actually is guaranteed to mean anything. Because the
> requirement that the sharability of cache nodes is described with
> general processor nodes it means that the number of nodes within a
> given leg of the tree is mostly meaningless because people sprinkle
> caches around the system, including potentially above the "socket"
> level.
>
> >Why do we care at all about ACPI_PPTT_PHYSICAL_PACKAGE ?
>
> Because, it gives us a hard mapping to core siblings.
>
> >
> >>+                pr_debug("level %d\n", level);
> >>+                prev_node = fetch_pptt_node(table_hdr, cpu->parent);
> >>+                if (prev_node == NULL)
> >>+                        break;
> >>+                cpu = prev_node;
> >>+                level--;
> >>+        }
> >>+        return cpu;
> >>+}
> >>+
> >>+static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id)
> >>+{
> >>+        int number_of_levels = 0;
> >>+        struct acpi_pptt_processor *cpu;
> >>+
> >>+        cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id);
> >>+        if (cpu)
> >>+                number_of_levels = acpi_process_node(table_hdr, cpu);
> >>+
> >>+        return number_of_levels;
> >>+}
> >>+
> >>+#define ACPI_6_2_CACHE_TYPE_DATA        (0x0)
> >>+#define ACPI_6_2_CACHE_TYPE_INSTR       (1<<2)
> >>+#define ACPI_6_2_CACHE_TYPE_UNIFIED     (1<<3)
> >>+#define ACPI_6_2_CACHE_POLICY_WB        (0x0)
> >>+#define ACPI_6_2_CACHE_POLICY_WT        (1<<4)
> >>+#define ACPI_6_2_CACHE_READ_ALLOCATE    (0x0)
> >>+#define ACPI_6_2_CACHE_WRITE_ALLOCATE   (0x01)
> >>+#define ACPI_6_2_CACHE_RW_ALLOCATE      (0x02)
> >>+
> >>+static u8 acpi_cache_type(enum cache_type type)
> >>+{
> >>+        switch (type) {
> >>+        case CACHE_TYPE_DATA:
> >>+                pr_debug("Looking for data cache\n");
> >>+                return ACPI_6_2_CACHE_TYPE_DATA;
> >>+        case CACHE_TYPE_INST:
> >>+                pr_debug("Looking for instruction cache\n");
> >>+                return ACPI_6_2_CACHE_TYPE_INSTR;
> >>+        default:
> >>+                pr_debug("Unknown cache type, assume unified\n");
> >>+        case CACHE_TYPE_UNIFIED:
> >>+                pr_debug("Looking for unified cache\n");
> >>+                return ACPI_6_2_CACHE_TYPE_UNIFIED;
> >>+        }
> >>+}
> >>+
> >>+/* find the ACPI node describing the cache type/level for the given CPU */
> >>+static struct acpi_pptt_cache *acpi_find_cache_node(
> >>+        struct acpi_table_header *table_hdr, u32 acpi_cpu_id,
> >>+        enum cache_type type, unsigned int level,
> >>+        struct acpi_pptt_processor **node)
> >>+{
> >>+        int total_levels = 0;
> >>+        struct acpi_pptt_cache *found = NULL;
> >>+        struct acpi_pptt_processor *cpu_node;
> >>+        u8 acpi_type = acpi_cache_type(type);
> >>+
> >>+        pr_debug("Looking for CPU %d's level %d cache type %d\n",
> >>+                 acpi_cpu_id, level, acpi_type);
> >>+
> >>+        cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id);
> >>+        if (!cpu_node)
> >>+                return NULL;
> >>+
> >>+        do {
> >>+                found = acpi_find_cache_level(table_hdr, cpu_node, &total_levels, level, acpi_type);
> >>+                *node = cpu_node;
> >>+                cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
> >>+        } while ((cpu_node) && (!found));
> >>+
> >>+        return found;
> >>+}
> >>+
> >>+int acpi_find_last_cache_level(unsigned int cpu)
> >>+{
> >>+        u32 acpi_cpu_id;
> >>+        struct acpi_table_header *table;
> >>+        int number_of_levels = 0;
> >>+        acpi_status status;
> >>+
> >>+        pr_debug("Cache Setup find last level cpu=%d\n", cpu);
> >>+
> >>+        acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
> >
> >This would break !ARM64.
> >
> >>+        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> >>+        if (ACPI_FAILURE(status)) {
> >>+                pr_err_once("No PPTT table found, cache topology may be inaccurate\n");
>
> Yup, as in a way this does too... Without writing the binding code
> for another arch where that line is isn't clear at the moment. Part
> of the reason I put this in the arm64 directory.
>
> >
> >>+        } else {
> >>+                number_of_levels = acpi_parse_pptt(table, acpi_cpu_id);
> >>+                acpi_put_table(table);
> >>+        }
> >>+        pr_debug("Cache Setup find last level level=%d\n", number_of_levels);
> >>+
> >>+        return number_of_levels;
> >>+}
> >>+
> >>+/*
> >>+ * The ACPI spec implies that the fields in the cache structures are used to
> >>+ * extend and correct the information probed from the hardware. In the case
> >>+ * of arm64 the CCSIDR probing has been removed because it might be incorrect.
> >>+ */
> >>+static void update_cache_properties(struct cacheinfo *this_leaf,
> >>+                                    struct acpi_pptt_cache *found_cache,
> >>+                                    struct acpi_pptt_processor *cpu_node)
> >>+{
> >>+        if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
> >>+                this_leaf->size = found_cache->size;
> >>+        if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
> >>+                this_leaf->coherency_line_size = found_cache->line_size;
> >>+        if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
> >>+                this_leaf->number_of_sets = found_cache->number_of_sets;
> >>+        if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
> >>+                this_leaf->ways_of_associativity = found_cache->associativity;
> >>+        if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID)
> >>+                switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
> >>+                case ACPI_6_2_CACHE_POLICY_WT:
> >>+                        this_leaf->attributes = CACHE_WRITE_THROUGH;
> >>+                        break;
> >>+                case ACPI_6_2_CACHE_POLICY_WB:
> >>+                        this_leaf->attributes = CACHE_WRITE_BACK;
> >>+                        break;
> >>+                default:
> >>+                        pr_err("Unknown ACPI cache policy %d\n",
> >>+                               found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY);
> >>+                }
> >>+        if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID)
> >>+                switch (found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE) {
> >>+                case ACPI_6_2_CACHE_READ_ALLOCATE:
> >>+                        this_leaf->attributes |= CACHE_READ_ALLOCATE;
> >>+                        break;
> >>+                case ACPI_6_2_CACHE_WRITE_ALLOCATE:
> >>+                        this_leaf->attributes |= CACHE_WRITE_ALLOCATE;
> >>+                        break;
> >>+                case ACPI_6_2_CACHE_RW_ALLOCATE:
> >>+                        this_leaf->attributes |=
> >>+                                CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE;
> >>+                        break;
> >>+                default:
> >>+                        pr_err("Unknown ACPI cache allocation policy %d\n",
> >>+                               found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE);
> >>+                }
> >>+}
> >>+
> >>+static void cache_setup_acpi_cpu(struct acpi_table_header *table,
> >>+                                 unsigned int cpu)
> >>+{
> >>+        struct acpi_pptt_cache *found_cache;
> >>+        struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
> >>+        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
> >
> >Ditto.
> >
> >>+        struct cacheinfo *this_leaf;
> >>+        unsigned int index = 0;
> >>+        struct acpi_pptt_processor *cpu_node = NULL;
> >>+
> >>+        while (index < get_cpu_cacheinfo(cpu)->num_leaves) {
> >>+                this_leaf = this_cpu_ci->info_list + index;
> >>+                found_cache = acpi_find_cache_node(table, acpi_cpu_id,
> >>+                                                   this_leaf->type,
> >>+                                                   this_leaf->level,
> >>+                                                   &cpu_node);
> >>+                pr_debug("found = %p %p\n", found_cache, cpu_node);
> >>+                if (found_cache)
> >>+                        update_cache_properties(this_leaf,
> >>+                                                found_cache,
> >>+                                                cpu_node);
> >>+
> >>+                index++;
> >>+        }
> >>+}
> >>+
> >>+static int topology_setup_acpi_cpu(struct acpi_table_header *table,
> >>+                                   unsigned int cpu, int level)
> >>+{
> >>+        struct acpi_pptt_processor *cpu_node;
> >>+        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
> >
> >Ditto.
> >
> >>+        cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
> >>+        if (cpu_node) {
> >>+                cpu_node = acpi_find_processor_package_id(table, cpu_node, level);
> >
> >If level is 0 there is nothing to do here.
> >
> >>+                /* Only the first level has a guaranteed id */
> >>+                if (level == 0)
> >>+                        return cpu_node->acpi_processor_id;
> >>+                return (int)((u8 *)cpu_node - (u8 *)table);
> >
> >Please explain to me the rationale behind this. To me acpi_processor_id
> >is as good as the cpu_node offset in the table to describe the topology
> >id at a given level, why special case level 0.
>
> Level 0 is the only level guaranteed to have something set in the
> acpi_processor_id field. Its possible that values exist in nodes
> above this one, but they must _all_ be flagged and have matching
> container ids, and nothing in the spec requires that. Meaning that
> we need a guaranteed way to generate ids. This was added between
> v2->v3 after the discussion about making the ids a little nicer for
> the user.
>
> >
> >On top of that, with this ID scheme, we would end up with
> >thread/core/cluster id potentially being non-sequential values
> >(depending on the PPTT table layout) which should not be a problem but
> >we'd better check how people are using them.
>
> The thread (or core, depending on which is the 0 level) will have
> firmware provided Ids, everything else gets somewhat random looking
> but consistent ids. I commented earlier in this series that
> "normalizing" them is totally doable, although at the moment really
> only the physical_id is user visible and that should probably be
> normalized outside of this module in the arm64 topology parser if we
> want to actually do it. I'm not sure its worth the effort at least
> not as part of the general PPTT changes.
>
> >
> >>+        }
> >>+        pr_err_once("PPTT table found, but unable to locate core for %d\n",
> >>+                    cpu);
> >>+        return -ENOENT;
> >>+}
> >>+
> >>+/*
> >>+ * simply assign a ACPI cache entry to each known CPU cache entry
> >>+ * determining which entries are shared is done later.
> >
> >Add a kerneldoc style comment for an external interface.
>
> That is a good point.
>
> >
> >>+ */
> >>+int cache_setup_acpi(unsigned int cpu)
> >>+{
> >>+        struct acpi_table_header *table;
> >>+        acpi_status status;
> >>+
> >>+        pr_debug("Cache Setup ACPI cpu %d\n", cpu);
> >>+
> >>+        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> >>+        if (ACPI_FAILURE(status)) {
> >>+                pr_err_once("No PPTT table found, cache topology may be inaccurate\n");
> >>+                return -ENOENT;
> >>+        }
> >>+
> >>+        cache_setup_acpi_cpu(table, cpu);
> >>+        acpi_put_table(table);
> >>+
> >>+        return status;
> >>+}
> >>+
> >>+/*
> >>+ * Determine a topology unique ID for each thread/core/cluster/socket/etc.
> >>+ * This ID can then be used to group peers.
> >
> >Ditto.
> >
> >>+ */
> >>+int setup_acpi_cpu_topology(unsigned int cpu, int level)
> >>+{
> >>+        struct acpi_table_header *table;
> >>+        acpi_status status;
> >>+        int retval;
> >>+
> >>+        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> >>+        if (ACPI_FAILURE(status)) {
> >>+                pr_err_once("No PPTT table found, cpu topology may be inaccurate\n");
> >>+                return -ENOENT;
> >>+        }
> >>+        retval = topology_setup_acpi_cpu(table, cpu, level);
> >>+        pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n",
> >>+                 cpu, level, retval);
> >>+        acpi_put_table(table);
> >>+
> >>+        return retval;
> >
> >This value is just a token - with no HW meaning whatsoever and that's
> >where I question the ACPI_PPTT_PHYSICAL_PACKAGE flag usage in retrieving
> >it, you are not looking for a packageid (which has no meaning whatsoever
> >anyway and I wonder why it was added to the specs at all) you are
> >looking for an id at a given level.
>
> If you look at the next patch in the series, to get the top level I
> pass an arbitrary large value as the "level" which should terminate
> on the PHYSICAL_PACKAGE rather than any intermediate nodes.
>
> >
> >I will comment on the cache code separately - which deserves to
> >be in a separate patch to simplify the review, I avoided repeating
> >already reported review comments.
> >
> >Lorenzo
>
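[Editor's note: the ID scheme debated above reduces to a small sketch — toy structures, not the kernel code. Level 0 returns the firmware-assigned acpi_processor_id; higher levels fall back to the node's byte offset inside the table, which is unique and stable across parses but, as Lorenzo notes, not sequential:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for a PPTT processor node. */
struct toy_pptt_node {
        uint32_t acpi_processor_id;     /* only guaranteed valid at level 0 */
        uint8_t pad[20];                /* remaining fields; sizes vary */
};

/*
 * Mirrors the tail of topology_setup_acpi_cpu(): a firmware ID for the
 * leaf level, a table-offset token for every level above it.
 */
static int toy_topology_id(const uint8_t *table_base,
                           const struct toy_pptt_node *node, int level)
{
        if (level == 0)
                return (int)node->acpi_processor_id;
        return (int)((const uint8_t *)node - table_base);
}
```

The offset token is "somewhat random looking but consistent", exactly as described: two distinct nodes can never share an offset, but the values jump by the node sizes rather than counting 0, 1, 2.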
Hey Jeremy,

Quick comment below.

On 10/12/2017 1:48 PM, Jeremy Linton wrote:
> +static int topology_setup_acpi_cpu(struct acpi_table_header *table,
> +                                   unsigned int cpu, int level)
> +{
> +        struct acpi_pptt_processor *cpu_node;
> +        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;

This lookup for the acpi id is architecture dependent. Can you use a
function that would work for any user of PPTT and MADT? It may require
writing and exporting the inverse lookup of the function
acpi_get_cpuid() which is exported from processor_core.c

> +
> +        cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
> +        if (cpu_node) {
> +                cpu_node = acpi_find_processor_package_id(table, cpu_node, level);
> +                /* Only the first level has a guaranteed id */
> +                if (level == 0)
> +                        return cpu_node->acpi_processor_id;
> +                return (int)((u8 *)cpu_node - (u8 *)table);
> +        }
> +        pr_err_once("PPTT table found, but unable to locate core for %d\n",
> +                    cpu);
> +        return -ENOENT;
> +}

Hi,

On 10/20/2017 02:53 PM, Christ, Austin wrote:
> Hey Jeremy,
>
> Quick comment below.
>
> On 10/12/2017 1:48 PM, Jeremy Linton wrote:
>> +static int topology_setup_acpi_cpu(struct acpi_table_header *table,
>> +                                   unsigned int cpu, int level)
>> +{
>> +        struct acpi_pptt_processor *cpu_node;
>> +        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
>
> This lookup for the acpi id is architecture dependent. Can you use a
> function that would work for any user of PPTT and MADT? It may require
> writing and exporting the inverse lookup of the function
> acpi_get_cpuid() which is exported from processor_core.c

Sure, I was actually thinking about just passing it into the function,
so it becomes the responsibility of the caller to do the platform
specific reverse lookup.

>
>> +
>> +        cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
>> +        if (cpu_node) {
>> +                cpu_node = acpi_find_processor_package_id(table, cpu_node,
>> +                                                          level);
>> +                /* Only the first level has a guaranteed id */
>> +                if (level == 0)
>> +                        return cpu_node->acpi_processor_id;
>> +                return (int)((u8 *)cpu_node - (u8 *)table);
>> +        }
>> +        pr_err_once("PPTT table found, but unable to locate core for %d\n",
>> +                    cpu);
>> +        return -ENOENT;
>> +}
>
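[Editor's note: the refactor Jeremy sketches — moving the arch-specific CPU-to-ACPI-ID lookup out of the PPTT helpers and into the caller — could look roughly like this. All names here are hypothetical illustrations, not kernel APIs:]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Caller-supplied, architecture-specific translation from a logical CPU
 * number to its ACPI processor ID (on arm64 this would be the MADT GICC
 * UID).  The toy version below just pretends UIDs start at 100.
 */
typedef uint32_t (*cpu_to_acpi_id_fn)(unsigned int cpu);

static uint32_t toy_arm64_cpu_to_acpi_id(unsigned int cpu)
{
        return cpu + 100;
}

/*
 * Arch-neutral PPTT-style helper: it only ever sees an ACPI ID and has
 * no MADT knowledge of its own (stand-in for the real table walk).
 */
static int toy_topology_setup(uint32_t acpi_cpu_id)
{
        return (int)acpi_cpu_id;
}

/* The arch glue performs the reverse lookup, then calls the helper. */
static int toy_setup_for_cpu(unsigned int cpu, cpu_to_acpi_id_fn lookup)
{
        return toy_topology_setup(lookup(cpu));
}
```

The point is purely structural: once the helper's signature takes an ACPI ID, nothing in the PPTT parser depends on arm64-only interfaces.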
Hi John,

On 19.10.2017 12:25, John Garry wrote:
> On 19/10/2017 06:18, Tomasz Nowicki wrote:
>>>
>>> Summary:
>>>
>>> I'm not at all happy with this specification's attempt to leave out
>>> pieces of information which make parsing things more deterministic. In
>>> this case I'm happy to demote the message level, but not remove it
>>> entirely, but I do think the obvious case you list shouldn't be the
>>> default one.
>>>
>>> Lastly:
>>>
>>> I'm assuming the final result is that the table is actually being
>>> parsed correctly despite the ugly message?
>>
>> Indeed, the ThunderX2 PPTT table is being parsed so that topology shown
>> in lstopo and lscpu is correct.
>
> Hi Tomasz,
>
> Can you share the lscpu output? Does it have cluster info? I did not
> think that lscpu has a concept of clustering.
>
> I would say that the per-cpu cluster index sysfs entry needs to be
> added to drivers/base/arch_topology.c (and other appropriate code under
> GENERIC_ARCH_TOPOLOGY) to support this.

Here is what I get:

tn@val2-11 [~]$ lscpu -ap
# The following is the parsable format, which can be fed to other
# programs. Each different item in every column has an unique ID
# starting from zero.
# CPU,Core,Socket,Node,,L1d,L1i,L2,L3
[...]
1,0,0,0,,0,0,0,0
[...]

so yes, no cluster info.

Thanks,
Tomasz
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
new file mode 100644
index 000000000000..c86715fed4a7
--- /dev/null
+++ b/drivers/acpi/pptt.c
@@ -0,0 +1,485 @@
+/*
+ * Copyright (C) 2017, ARM
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * This file implements parsing of Processor Properties Topology Table (PPTT)
+ * which is optionally used to describe the processor and cache topology.
+ * Due to the relative pointers used throughout the table, this doesn't
+ * leverage the existing subtable parsing in the kernel.
+ */
+#define pr_fmt(fmt) "ACPI PPTT: " fmt
+
+#include <linux/acpi.h>
+#include <linux/cacheinfo.h>
+#include <acpi/processor.h>
+
+/*
+ * Given the PPTT table, find and verify that the subtable entry
+ * is located within the table
+ */
+static struct acpi_subtable_header *fetch_pptt_subtable(
+        struct acpi_table_header *table_hdr, u32 pptt_ref)
+{
+        struct acpi_subtable_header *entry;
+
+        /* there isn't a subtable at reference 0 */
+        if (!pptt_ref)
+                return NULL;
+
+        if (pptt_ref + sizeof(struct acpi_subtable_header) > table_hdr->length)
+                return NULL;
+
+        entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref);
+
+        if (pptt_ref + entry->length > table_hdr->length)
+                return NULL;
+
+        return entry;
+}
+
+static struct acpi_pptt_processor *fetch_pptt_node(
+        struct acpi_table_header *table_hdr, u32 pptt_ref)
+{
+        return (struct acpi_pptt_processor *)fetch_pptt_subtable(table_hdr, pptt_ref);
+}
+
+static struct acpi_pptt_cache *fetch_pptt_cache(
+        struct acpi_table_header *table_hdr, u32 pptt_ref)
+{
+        return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, pptt_ref);
+}
+
+static struct acpi_subtable_header *acpi_get_pptt_resource(
+        struct acpi_table_header *table_hdr,
+        struct acpi_pptt_processor *node, int resource)
+{
+        u32 ref;
+
+        if (resource >= node->number_of_priv_resources)
+                return NULL;
+
+        ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) +
+                       sizeof(u32) * resource);
+
+        return fetch_pptt_subtable(table_hdr, ref);
+}
+
+/*
+ * given a pptt resource, verify that it is a cache node, then walk
+ * down each level of caches, counting how many levels are found
+ * as well as checking the cache type (icache, dcache, unified). If a
+ * level & type match, then we set found, and continue the search.
+ * Once the entire cache branch has been walked return its max
+ * depth.
+ */
+static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr,
+                                int local_level,
+                                struct acpi_subtable_header *res,
+                                struct acpi_pptt_cache **found,
+                                int level, int type)
+{
+        struct acpi_pptt_cache *cache;
+
+        if (res->type != ACPI_PPTT_TYPE_CACHE)
+                return 0;
+
+        cache = (struct acpi_pptt_cache *) res;
+        while (cache) {
+                local_level++;
+
+                if ((local_level == level) &&
+                    (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) &&
+                    ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) {
+                        if (*found != NULL)
+                                pr_err("Found duplicate cache level/type unable to determine uniqueness\n");
+
+                        pr_debug("Found cache @ level %d\n", level);
+                        *found = cache;
+                        /*
+                         * continue looking at this node's resource list
+                         * to verify that we don't find a duplicate
+                         * cache node.
+                         */
+                }
+                cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache);
+        }
+        return local_level;
+}
+
+/*
+ * Given a CPU node look for cache levels that exist at this level, and then
+ * for each cache node, count how many levels exist below (logically above) it.
+ * If a level and type are specified, and we find that level/type, abort
+ * processing and return the acpi_pptt_cache structure.
+ */
+static struct acpi_pptt_cache *acpi_find_cache_level(
+        struct acpi_table_header *table_hdr,
+        struct acpi_pptt_processor *cpu_node,
+        int *starting_level, int level, int type)
+{
+        struct acpi_subtable_header *res;
+        int number_of_levels = *starting_level;
+        int resource = 0;
+        struct acpi_pptt_cache *ret = NULL;
+        int local_level;
+
+        /* walk down from the processor node */
+        while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) {
+                resource++;
+
+                local_level = acpi_pptt_walk_cache(table_hdr, *starting_level,
+                                                   res, &ret, level, type);
+                /*
+                 * we are looking for the max depth. Since its potentially
+                 * possible for a given node to have resources with differing
+                 * depths verify that the depth we have found is the largest.
+                 */
+                if (number_of_levels < local_level)
+                        number_of_levels = local_level;
+        }
+        if (number_of_levels > *starting_level)
+                *starting_level = number_of_levels;
+
+        return ret;
+}
+
+/*
+ * given a processor node containing a processing unit, walk into it and count
+ * how many levels exist solely for it, and then walk up each level until we hit
+ * the root node (ignore the package level because it may be possible to have
+ * caches that exist across packages). Count the number of cache levels that
+ * exist at each level on the way up.
+ */
+static int acpi_process_node(struct acpi_table_header *table_hdr,
+                             struct acpi_pptt_processor *cpu_node)
+{
+        int total_levels = 0;
+
+        do {
+                acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0);
+                cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
+        } while (cpu_node);
+
+        return total_levels;
+}
+
+/* determine if the given node is a leaf node */
+static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr,
+                               struct acpi_pptt_processor *node)
+{
+        struct acpi_subtable_header *entry;
+        unsigned long table_end;
+        u32 node_entry;
+        struct acpi_pptt_processor *cpu_node;
+
+        table_end = (unsigned long)table_hdr + table_hdr->length;
+        node_entry = (u32)((u8 *)node - (u8 *)table_hdr);
+        entry = (struct acpi_subtable_header *)((u8 *)table_hdr +
+                sizeof(struct acpi_table_pptt));
+
+        while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) {
+                cpu_node = (struct acpi_pptt_processor *)entry;
+                if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) &&
+                    (cpu_node->parent == node_entry))
+                        return 0;
+                entry = (struct acpi_subtable_header *)((u8 *)entry + entry->length);
+        }
+        return 1;
+}
+
+/*
+ * Find the subtable entry describing the provided processor
+ */
+static struct acpi_pptt_processor *acpi_find_processor_node(
+        struct acpi_table_header *table_hdr,
+        u32 acpi_cpu_id)
+{
+        struct acpi_subtable_header *entry;
+        unsigned long table_end;
+        struct acpi_pptt_processor *cpu_node;
+
+        table_end = (unsigned long)table_hdr + table_hdr->length;
+        entry = (struct acpi_subtable_header *)((u8 *)table_hdr +
+                sizeof(struct acpi_table_pptt));
+
+        /* find the processor structure associated with this cpuid */
+        while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) {
+                cpu_node = (struct acpi_pptt_processor *)entry;
+
+                if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) &&
+                    acpi_pptt_leaf_node(table_hdr, cpu_node)) {
+                        pr_debug("checking phy_cpu_id %d against acpi id %d\n",
+                                 acpi_cpu_id, cpu_node->acpi_processor_id);
+                        if (acpi_cpu_id == cpu_node->acpi_processor_id) {
+                                /* found the correct entry */
+                                pr_debug("match found!\n");
+                                return (struct acpi_pptt_processor *)entry;
+                        }
+                }
+
+                if (entry->length == 0) {
+                        pr_err("Invalid zero length subtable\n");
+                        break;
+                }
+                entry = (struct acpi_subtable_header *)
+                        ((u8 *)entry + entry->length);
+        }
+
+        return NULL;
+}
+
+/*
+ * Given a acpi_pptt_processor node, walk up until we identify the
+ * package that the node is associated with or we run out of levels
+ * to request.
+ */
+static struct acpi_pptt_processor *acpi_find_processor_package_id(
+        struct acpi_table_header *table_hdr,
+        struct acpi_pptt_processor *cpu,
+        int level)
+{
+        struct acpi_pptt_processor *prev_node;
+
+        while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) {
+                pr_debug("level %d\n", level);
+                prev_node = fetch_pptt_node(table_hdr, cpu->parent);
+                if (prev_node == NULL)
+                        break;
+                cpu = prev_node;
+                level--;
+        }
+        return cpu;
+}
+
+static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id)
+{
+        int number_of_levels = 0;
+        struct acpi_pptt_processor *cpu;
+
+        cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id);
+        if (cpu)
+                number_of_levels = acpi_process_node(table_hdr, cpu);
+
+        return number_of_levels;
+}
+
+#define ACPI_6_2_CACHE_TYPE_DATA        (0x0)
+#define ACPI_6_2_CACHE_TYPE_INSTR       (1<<2)
+#define ACPI_6_2_CACHE_TYPE_UNIFIED     (1<<3)
+#define ACPI_6_2_CACHE_POLICY_WB        (0x0)
+#define ACPI_6_2_CACHE_POLICY_WT        (1<<4)
+#define ACPI_6_2_CACHE_READ_ALLOCATE    (0x0)
+#define ACPI_6_2_CACHE_WRITE_ALLOCATE   (0x01)
+#define ACPI_6_2_CACHE_RW_ALLOCATE      (0x02)
+
+static u8 acpi_cache_type(enum cache_type type)
+{
+        switch (type) {
+        case CACHE_TYPE_DATA:
+                pr_debug("Looking for data cache\n");
+                return ACPI_6_2_CACHE_TYPE_DATA;
+        case CACHE_TYPE_INST:
+                pr_debug("Looking for instruction cache\n");
+                return ACPI_6_2_CACHE_TYPE_INSTR;
+        default:
+                pr_debug("Unknown cache type, assume unified\n");
+        case CACHE_TYPE_UNIFIED:
+                pr_debug("Looking for unified cache\n");
+                return ACPI_6_2_CACHE_TYPE_UNIFIED;
+        }
+}
+
+/* find the ACPI node describing the cache type/level for the given CPU */
+static struct acpi_pptt_cache *acpi_find_cache_node(
+        struct acpi_table_header *table_hdr, u32 acpi_cpu_id,
+        enum cache_type type, unsigned int level,
+        struct acpi_pptt_processor **node)
+{
+        int total_levels = 0;
+        struct acpi_pptt_cache *found = NULL;
+        struct acpi_pptt_processor *cpu_node;
+        u8 acpi_type = acpi_cache_type(type);
+
+        pr_debug("Looking for CPU %d's level %d cache type %d\n",
+                 acpi_cpu_id, level, acpi_type);
+
+        cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id);
+        if (!cpu_node)
+                return NULL;
+
+        do {
+                found = acpi_find_cache_level(table_hdr, cpu_node, &total_levels, level, acpi_type);
+                *node = cpu_node;
+                cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
+        } while ((cpu_node) && (!found));
+
+        return found;
+}
+
+int acpi_find_last_cache_level(unsigned int cpu)
+{
+        u32 acpi_cpu_id;
+        struct acpi_table_header *table;
+        int number_of_levels = 0;
+        acpi_status status;
+
+        pr_debug("Cache Setup find last level cpu=%d\n", cpu);
+
+        acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
+        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+        if (ACPI_FAILURE(status)) {
+                pr_err_once("No PPTT table found, cache topology may be inaccurate\n");
+        } else {
+                number_of_levels = acpi_parse_pptt(table, acpi_cpu_id);
+                acpi_put_table(table);
+        }
+        pr_debug("Cache Setup find last level level=%d\n", number_of_levels);
+
+        return number_of_levels;
+}
+
+/*
+ * The ACPI spec implies that the fields in the cache structures are used to
+ * extend and correct the information probed from the hardware. In the case
+ * of arm64 the CCSIDR probing has been removed because it might be incorrect.
+ */
+static void update_cache_properties(struct cacheinfo *this_leaf,
+                                    struct acpi_pptt_cache *found_cache,
+                                    struct acpi_pptt_processor *cpu_node)
+{
+        if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
+                this_leaf->size = found_cache->size;
+        if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
+                this_leaf->coherency_line_size = found_cache->line_size;
+        if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
+                this_leaf->number_of_sets = found_cache->number_of_sets;
+        if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
+                this_leaf->ways_of_associativity = found_cache->associativity;
+        if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID)
+                switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
+                case ACPI_6_2_CACHE_POLICY_WT:
+                        this_leaf->attributes = CACHE_WRITE_THROUGH;
+                        break;
+                case ACPI_6_2_CACHE_POLICY_WB:
+                        this_leaf->attributes = CACHE_WRITE_BACK;
+                        break;
+                default:
+                        pr_err("Unknown ACPI cache policy %d\n",
+                               found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY);
+                }
+        if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID)
+                switch (found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE) {
+                case ACPI_6_2_CACHE_READ_ALLOCATE:
+                        this_leaf->attributes |= CACHE_READ_ALLOCATE;
+                        break;
+                case ACPI_6_2_CACHE_WRITE_ALLOCATE:
+                        this_leaf->attributes |= CACHE_WRITE_ALLOCATE;
+                        break;
+                case ACPI_6_2_CACHE_RW_ALLOCATE:
+                        this_leaf->attributes |=
+                                CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE;
+                        break;
+                default:
+                        pr_err("Unknown ACPI cache allocation policy %d\n",
+                               found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE);
+                }
+}
+
+static void cache_setup_acpi_cpu(struct acpi_table_header *table,
+                                 unsigned int cpu)
+{
+        struct acpi_pptt_cache *found_cache;
+        struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
+        struct cacheinfo *this_leaf;
+        unsigned int index = 0;
+        struct acpi_pptt_processor *cpu_node = NULL;
+
+        while (index < get_cpu_cacheinfo(cpu)->num_leaves) {
+                this_leaf = this_cpu_ci->info_list + index;
+                found_cache = acpi_find_cache_node(table, acpi_cpu_id,
+                                                   this_leaf->type,
+                                                   this_leaf->level,
+                                                   &cpu_node);
+                pr_debug("found = %p %p\n", found_cache, cpu_node);
+                if (found_cache)
+                        update_cache_properties(this_leaf,
+                                                found_cache,
+                                                cpu_node);
+
+                index++;
+        }
+}
+
+static int topology_setup_acpi_cpu(struct acpi_table_header *table,
+                                   unsigned int cpu, int level)
+{
+        struct acpi_pptt_processor *cpu_node;
+        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
+
+        cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+        if (cpu_node) {
+                cpu_node = acpi_find_processor_package_id(table, cpu_node, level);
+                /* Only the first level has a guaranteed id */
+                if (level == 0)
+                        return cpu_node->acpi_processor_id;
+                return (int)((u8 *)cpu_node - (u8 *)table);
+        }
+        pr_err_once("PPTT table found, but unable to locate core for %d\n",
+                    cpu);
+        return -ENOENT;
+}
+
+/*
+ * simply assign a ACPI cache entry to each known CPU cache entry
+ * determining which entries are shared is done later.
+ */
+int cache_setup_acpi(unsigned int cpu)
+{
+        struct acpi_table_header *table;
+        acpi_status status;
+
+        pr_debug("Cache Setup ACPI cpu %d\n", cpu);
+
+        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+        if (ACPI_FAILURE(status)) {
+                pr_err_once("No PPTT table found, cache topology may be inaccurate\n");
+                return -ENOENT;
+        }
+
+        cache_setup_acpi_cpu(table, cpu);
+        acpi_put_table(table);
+
+        return status;
+}
+
+/*
+ * Determine a topology unique ID for each thread/core/cluster/socket/etc.
+ * This ID can then be used to group peers.
+ */
+int setup_acpi_cpu_topology(unsigned int cpu, int level)
+{
+        struct acpi_table_header *table;
+        acpi_status status;
+        int retval;
+
+        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+        if (ACPI_FAILURE(status)) {
+                pr_err_once("No PPTT table found, cpu topology may be inaccurate\n");
+                return -ENOENT;
+        }
+        retval = topology_setup_acpi_cpu(table, cpu, level);
+        pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n",
+                 cpu, level, retval);
+        acpi_put_table(table);
+
+        return retval;
+}
--