From patchwork Tue Sep 28 12:47:24 2021
From: Kajol Jain <kjain@linux.ibm.com>
To: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
	linux-kernel@vger.kernel.org, peterz@infradead.org, dan.j.williams@intel.com,
	ira.weiny@intel.com, vishal.l.verma@intel.com
Cc: maddy@linux.ibm.com, santosh@fossix.org, aneesh.kumar@linux.ibm.com,
	vaibhav@linux.ibm.com, atrajeev@linux.vnet.ibm.com, tglx@linutronix.de,
	rnsastry@linux.ibm.com, kjain@linux.ibm.com
Subject: [PATCH v5 1/4] drivers/nvdimm: Add nvdimm pmu structure
Date: Tue, 28 Sep 2021 18:17:24 +0530
Message-Id: <20210928124724.146614-1-kjain@linux.ibm.com>
A new structure, nvdimm_pmu, is added to support performance-stats
reporting for nvdimm devices. It holds the per-device pmu data: the pmu
structure used for performance stats, the nvdimm device pointer, and the
cpumask attributes.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
---
 include/linux/nd.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/nd.h b/include/linux/nd.h
index ee9ad76afbba..f5ed4db2d859 100644
--- a/include/linux/nd.h
+++ b/include/linux/nd.h
@@ -8,6 +8,7 @@
 #include <linux/ndctl.h>
 #include <linux/device.h>
 #include <linux/badblocks.h>
+#include <linux/perf_event.h>

 enum nvdimm_event {
 	NVDIMM_REVALIDATE_POISON,
@@ -23,6 +24,25 @@ enum nvdimm_claim_class {
 	NVDIMM_CCLASS_UNKNOWN,
 };

+/**
+ * struct nvdimm_pmu - data structure for nvdimm perf driver
+ * @pmu: pmu data structure for nvdimm performance stats.
+ * @dev: nvdimm device pointer.
+ * @cpu: designated cpu for counter access.
+ * @node: node for cpu hotplug notifier link.
+ * @cpuhp_state: state for cpu hotplug notification.
+ * @arch_cpumask: cpumask to get designated cpu for counter access.
+ */
+struct nvdimm_pmu {
+	struct pmu pmu;
+	struct device *dev;
+	int cpu;
+	struct hlist_node node;
+	enum cpuhp_state cpuhp_state;
+	/* cpumask provided by arch/platform specific code */
+	struct cpumask arch_cpumask;
+};
+
 struct nd_device_driver {
 	struct device_driver drv;
 	unsigned long type;

From patchwork Tue Sep 28 12:47:44 2021
From: Kajol Jain <kjain@linux.ibm.com>
To: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
	linux-kernel@vger.kernel.org, peterz@infradead.org, dan.j.williams@intel.com,
	ira.weiny@intel.com, vishal.l.verma@intel.com
Cc: maddy@linux.ibm.com, santosh@fossix.org, aneesh.kumar@linux.ibm.com,
	vaibhav@linux.ibm.com, atrajeev@linux.vnet.ibm.com, tglx@linutronix.de,
	rnsastry@linux.ibm.com, kjain@linux.ibm.com,
	kernel test robot <lkp@intel.com>
Subject: [PATCH v5 2/4] drivers/nvdimm: Add perf interface to expose nvdimm performance stats
Date: Tue, 28 Sep 2021 18:17:44 +0530
Message-Id: <20210928124744.146673-1-kjain@linux.ibm.com>

A common interface is added to provide performance-stats reporting for
nvdimm devices. The interface defines the supported event list and the
config fields for the event attributes, together with their corresponding
bit values, which are exported via sysfs. It also adds pmu
register/unregister functions, cpu hotplug support, and macros for adding
events via sysfs, and it attaches the format, events and cpumask
attribute groups to the pmu structure. Users can access the events
exposed by the nvdimm pmu with the standard perf tool.
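[ Editor's note: the sketch below is an illustration added for this write-up,
not part of the patch. It shows, under stated assumptions, how a platform
driver might fill struct nvdimm_pmu and call register_nvdimm_pmu() as defined
by this patch. The names foo_register_pmu, the foo_* callbacks, the pmu name
"foo_nvdimm0" and foo_pdev are hypothetical placeholders; a real driver would
implement the callbacks against its own counter-access mechanism, as patch 3
does for papr_scm. ]

#include <linux/errno.h>
#include <linux/nd.h>
#include <linux/perf_event.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/topology.h>

/* Hypothetical counter-access callbacks; a real driver reads device stats here. */
static int foo_event_init(struct perf_event *event)
{
	return event->attr.type == event->pmu->type ? 0 : -ENOENT;
}

static int foo_add(struct perf_event *event, int flags) { return 0; }
static void foo_del(struct perf_event *event, int flags) { }
static void foo_read(struct perf_event *event) { }

static int foo_register_pmu(struct platform_device *foo_pdev)
{
	struct nvdimm_pmu *nd_pmu;
	int rc;

	nd_pmu = kzalloc(sizeof(*nd_pmu), GFP_KERNEL);
	if (!nd_pmu)
		return -ENOMEM;

	/* register_nvdimm_pmu() requires name, event_init, add, del and read. */
	nd_pmu->pmu.task_ctx_nr = perf_invalid_context;
	nd_pmu->pmu.name = "foo_nvdimm0";
	nd_pmu->pmu.event_init = foo_event_init;
	nd_pmu->pmu.add = foo_add;
	nd_pmu->pmu.del = foo_del;
	nd_pmu->pmu.read = foo_read;

	/* Optional: let the cpu hotplug code pick a cpu from the device's node. */
	nd_pmu->arch_cpumask = *cpumask_of_node(dev_to_node(&foo_pdev->dev));

	rc = register_nvdimm_pmu(nd_pmu, foo_pdev);
	if (rc)
		kfree(nd_pmu);
	return rc;
}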
Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
[Make hotplug function static as reported by kernel test robot]
Reported-by: kernel test robot <lkp@intel.com>
---
 drivers/nvdimm/Makefile  |   1 +
 drivers/nvdimm/nd_perf.c | 328 +++++++++++++++++++++++++++++++++++++++
 include/linux/nd.h       |  21 +++
 3 files changed, 350 insertions(+)
 create mode 100644 drivers/nvdimm/nd_perf.c

diff --git a/drivers/nvdimm/Makefile b/drivers/nvdimm/Makefile
index 29203f3d3069..25dba6095612 100644
--- a/drivers/nvdimm/Makefile
+++ b/drivers/nvdimm/Makefile
@@ -18,6 +18,7 @@ nd_e820-y := e820.o
 libnvdimm-y := core.o
 libnvdimm-y += bus.o
 libnvdimm-y += dimm_devs.o
+libnvdimm-y += nd_perf.o
 libnvdimm-y += dimm.o
 libnvdimm-y += region_devs.o
 libnvdimm-y += region.o
diff --git a/drivers/nvdimm/nd_perf.c b/drivers/nvdimm/nd_perf.c
new file mode 100644
index 000000000000..314415894acf
--- /dev/null
+++ b/drivers/nvdimm/nd_perf.c
@@ -0,0 +1,328 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * nd_perf.c: NVDIMM Device Performance Monitoring Unit support
+ *
+ * Perf interface to expose nvdimm performance stats.
+ *
+ * Copyright (C) 2021 IBM Corporation
+ */
+
+#define pr_fmt(fmt) "nvdimm_pmu: " fmt
+
+#include <linux/nd.h>
+
+#define EVENT(_name, _code)	enum{_name = _code}
+
+/*
+ * NVDIMM Events codes.
+ */
+
+/* Controller Reset Count */
+EVENT(CTL_RES_CNT,	0x1);
+/* Controller Reset Elapsed Time */
+EVENT(CTL_RES_TM,	0x2);
+/* Power-on Seconds */
+EVENT(POWERON_SECS,	0x3);
+/* Life Remaining */
+EVENT(MEM_LIFE,		0x4);
+/* Critical Resource Utilization */
+EVENT(CRI_RES_UTIL,	0x5);
+/* Host Load Count */
+EVENT(HOST_L_CNT,	0x6);
+/* Host Store Count */
+EVENT(HOST_S_CNT,	0x7);
+/* Host Store Duration */
+EVENT(HOST_S_DUR,	0x8);
+/* Host Load Duration */
+EVENT(HOST_L_DUR,	0x9);
+/* Media Read Count */
+EVENT(MED_R_CNT,	0xa);
+/* Media Write Count */
+EVENT(MED_W_CNT,	0xb);
+/* Media Read Duration */
+EVENT(MED_R_DUR,	0xc);
+/* Media Write Duration */
+EVENT(MED_W_DUR,	0xd);
+/* Cache Read Hit Count */
+EVENT(CACHE_RH_CNT,	0xe);
+/* Cache Write Hit Count */
+EVENT(CACHE_WH_CNT,	0xf);
+/* Fast Write Count */
+EVENT(FAST_W_CNT,	0x10);
+
+NVDIMM_EVENT_ATTR(ctl_res_cnt,		CTL_RES_CNT);
+NVDIMM_EVENT_ATTR(ctl_res_tm,		CTL_RES_TM);
+NVDIMM_EVENT_ATTR(poweron_secs,		POWERON_SECS);
+NVDIMM_EVENT_ATTR(mem_life,		MEM_LIFE);
+NVDIMM_EVENT_ATTR(cri_res_util,		CRI_RES_UTIL);
+NVDIMM_EVENT_ATTR(host_l_cnt,		HOST_L_CNT);
+NVDIMM_EVENT_ATTR(host_s_cnt,		HOST_S_CNT);
+NVDIMM_EVENT_ATTR(host_s_dur,		HOST_S_DUR);
+NVDIMM_EVENT_ATTR(host_l_dur,		HOST_L_DUR);
+NVDIMM_EVENT_ATTR(med_r_cnt,		MED_R_CNT);
+NVDIMM_EVENT_ATTR(med_w_cnt,		MED_W_CNT);
+NVDIMM_EVENT_ATTR(med_r_dur,		MED_R_DUR);
+NVDIMM_EVENT_ATTR(med_w_dur,		MED_W_DUR);
+NVDIMM_EVENT_ATTR(cache_rh_cnt,		CACHE_RH_CNT);
+NVDIMM_EVENT_ATTR(cache_wh_cnt,		CACHE_WH_CNT);
+NVDIMM_EVENT_ATTR(fast_w_cnt,		FAST_W_CNT);
+
+static struct attribute *nvdimm_events_attr[] = {
+	NVDIMM_EVENT_PTR(CTL_RES_CNT),
+	NVDIMM_EVENT_PTR(CTL_RES_TM),
+	NVDIMM_EVENT_PTR(POWERON_SECS),
+	NVDIMM_EVENT_PTR(MEM_LIFE),
+	NVDIMM_EVENT_PTR(CRI_RES_UTIL),
+	NVDIMM_EVENT_PTR(HOST_L_CNT),
+	NVDIMM_EVENT_PTR(HOST_S_CNT),
+	NVDIMM_EVENT_PTR(HOST_S_DUR),
+	NVDIMM_EVENT_PTR(HOST_L_DUR),
+	NVDIMM_EVENT_PTR(MED_R_CNT),
+	NVDIMM_EVENT_PTR(MED_W_CNT),
+	NVDIMM_EVENT_PTR(MED_R_DUR),
+	NVDIMM_EVENT_PTR(MED_W_DUR),
+	NVDIMM_EVENT_PTR(CACHE_RH_CNT),
+	NVDIMM_EVENT_PTR(CACHE_WH_CNT),
+	NVDIMM_EVENT_PTR(FAST_W_CNT),
+	NULL
+};
+
+static struct attribute_group nvdimm_pmu_events_group = {
+	.name = "events",
+	.attrs = nvdimm_events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-4");
+
+static struct attribute *nvdimm_pmu_format_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group nvdimm_pmu_format_group = {
+	.name = "format",
+	.attrs = nvdimm_pmu_format_attr,
+};
+
+ssize_t nvdimm_events_sysfs_show(struct device *dev,
+				 struct device_attribute *attr, char *page)
+{
+	struct perf_pmu_events_attr *pmu_attr;
+
+	pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+
+	return sprintf(page, "event=0x%02llx\n", pmu_attr->id);
+}
+
+static ssize_t nvdimm_pmu_cpumask_show(struct device *dev,
+				       struct device_attribute *attr, char *buf)
+{
+	struct pmu *pmu = dev_get_drvdata(dev);
+	struct nvdimm_pmu *nd_pmu;
+
+	nd_pmu = container_of(pmu, struct nvdimm_pmu, pmu);
+
+	return cpumap_print_to_pagebuf(true, buf, cpumask_of(nd_pmu->cpu));
+}
+
+static int nvdimm_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
+{
+	struct nvdimm_pmu *nd_pmu;
+	u32 target;
+	int nodeid;
+	const struct cpumask *cpumask;
+
+	nd_pmu = hlist_entry_safe(node, struct nvdimm_pmu, node);
+
+	/* Clear it, in case the given cpu is set in nd_pmu->arch_cpumask */
+	cpumask_test_and_clear_cpu(cpu, &nd_pmu->arch_cpumask);
+
+	/*
+	 * If the given cpu is not the same as the current designated cpu
+	 * for counter access, just return.
+	 */
+	if (cpu != nd_pmu->cpu)
+		return 0;
+
+	/* Check for any active cpu in nd_pmu->arch_cpumask */
+	target = cpumask_any(&nd_pmu->arch_cpumask);
+
+	/*
+	 * In case we don't have any active cpu in nd_pmu->arch_cpumask,
+	 * check the given cpu's numa node list.
+	 */
+	if (target >= nr_cpu_ids) {
+		nodeid = cpu_to_node(cpu);
+		cpumask = cpumask_of_node(nodeid);
+		target = cpumask_any_but(cpumask, cpu);
+	}
+	nd_pmu->cpu = target;
+
+	/* Migrate nvdimm pmu events to the new target cpu if valid */
+	if (target >= 0 && target < nr_cpu_ids)
+		perf_pmu_migrate_context(&nd_pmu->pmu, cpu, target);
+
+	return 0;
+}
+
+static int nvdimm_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
+{
+	struct nvdimm_pmu *nd_pmu;
+
+	nd_pmu = hlist_entry_safe(node, struct nvdimm_pmu, node);
+
+	if (nd_pmu->cpu >= nr_cpu_ids)
+		nd_pmu->cpu = cpu;
+
+	return 0;
+}
+
+static int create_cpumask_attr_group(struct nvdimm_pmu *nd_pmu)
+{
+	struct perf_pmu_events_attr *pmu_events_attr;
+	struct attribute **attrs_group;
+	struct attribute_group *nvdimm_pmu_cpumask_group;
+
+	pmu_events_attr = kzalloc(sizeof(*pmu_events_attr), GFP_KERNEL);
+	if (!pmu_events_attr)
+		return -ENOMEM;
+
+	attrs_group = kzalloc(2 * sizeof(struct attribute *), GFP_KERNEL);
+	if (!attrs_group) {
+		kfree(pmu_events_attr);
+		return -ENOMEM;
+	}
+
+	/* Allocate memory for cpumask attribute group */
+	nvdimm_pmu_cpumask_group = kzalloc(sizeof(*nvdimm_pmu_cpumask_group), GFP_KERNEL);
+	if (!nvdimm_pmu_cpumask_group) {
+		kfree(pmu_events_attr);
+		kfree(attrs_group);
+		return -ENOMEM;
+	}
+
+	sysfs_attr_init(&pmu_events_attr->attr.attr);
+	pmu_events_attr->attr.attr.name = "cpumask";
+	pmu_events_attr->attr.attr.mode = 0444;
+	pmu_events_attr->attr.show = nvdimm_pmu_cpumask_show;
+	attrs_group[0] = &pmu_events_attr->attr.attr;
+	attrs_group[1] = NULL;
+
+	nvdimm_pmu_cpumask_group->attrs = attrs_group;
+	nd_pmu->pmu.attr_groups[NVDIMM_PMU_CPUMASK_ATTR] = nvdimm_pmu_cpumask_group;
+	return 0;
+}
+
+static int nvdimm_pmu_cpu_hotplug_init(struct nvdimm_pmu *nd_pmu)
+{
+	int nodeid, rc;
+	const struct cpumask *cpumask;
+
+	/*
+	 * In case of the cpu hotplug feature, arch specific code
+	 * can provide the required cpumask which can be used
+	 * to get the designated cpu for counter access.
+	 * Check for any active cpu in nd_pmu->arch_cpumask.
+	 */
+	if (!cpumask_empty(&nd_pmu->arch_cpumask)) {
+		nd_pmu->cpu = cpumask_any(&nd_pmu->arch_cpumask);
+	} else {
+		/* pick active cpu from the cpumask of device numa node. */
+		nodeid = dev_to_node(nd_pmu->dev);
+		cpumask = cpumask_of_node(nodeid);
+		nd_pmu->cpu = cpumask_any(cpumask);
+	}
+
+	rc = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "perf/nvdimm:online",
+				     nvdimm_pmu_cpu_online, nvdimm_pmu_cpu_offline);
+
+	if (rc < 0)
+		return rc;
+
+	nd_pmu->cpuhp_state = rc;
+
+	/* Register the pmu instance for cpu hotplug */
+	rc = cpuhp_state_add_instance_nocalls(nd_pmu->cpuhp_state, &nd_pmu->node);
+	if (rc) {
+		cpuhp_remove_multi_state(nd_pmu->cpuhp_state);
+		return rc;
+	}
+
+	/* Create cpumask attribute group */
+	rc = create_cpumask_attr_group(nd_pmu);
+	if (rc) {
+		cpuhp_state_remove_instance_nocalls(nd_pmu->cpuhp_state, &nd_pmu->node);
+		cpuhp_remove_multi_state(nd_pmu->cpuhp_state);
+		return rc;
+	}
+
+	return 0;
+}
+
+static void nvdimm_pmu_free_hotplug_memory(struct nvdimm_pmu *nd_pmu)
+{
+	cpuhp_state_remove_instance_nocalls(nd_pmu->cpuhp_state, &nd_pmu->node);
+	cpuhp_remove_multi_state(nd_pmu->cpuhp_state);
+
+	if (nd_pmu->pmu.attr_groups[NVDIMM_PMU_CPUMASK_ATTR])
+		kfree(nd_pmu->pmu.attr_groups[NVDIMM_PMU_CPUMASK_ATTR]->attrs);
+	kfree(nd_pmu->pmu.attr_groups[NVDIMM_PMU_CPUMASK_ATTR]);
+}
+
+int register_nvdimm_pmu(struct nvdimm_pmu *nd_pmu, struct platform_device *pdev)
+{
+	int rc;
+
+	if (!nd_pmu || !pdev)
+		return -EINVAL;
+
+	/* event functions like add/del/read/event_init and pmu name should not be NULL */
+	if (WARN_ON_ONCE(!(nd_pmu->pmu.event_init && nd_pmu->pmu.add &&
+			   nd_pmu->pmu.del && nd_pmu->pmu.read && nd_pmu->pmu.name)))
+		return -EINVAL;
+
+	nd_pmu->pmu.attr_groups = kzalloc((NVDIMM_PMU_NULL_ATTR + 1) *
+					  sizeof(struct attribute_group *), GFP_KERNEL);
+	if (!nd_pmu->pmu.attr_groups)
+		return -ENOMEM;
+
+	/*
+	 * Add platform_device->dev pointer to nvdimm_pmu to access
+	 * device data in events functions.
+	 */
+	nd_pmu->dev = &pdev->dev;
+
+	/* Fill attribute groups for the nvdimm pmu device */
+	nd_pmu->pmu.attr_groups[NVDIMM_PMU_FORMAT_ATTR] = &nvdimm_pmu_format_group;
+	nd_pmu->pmu.attr_groups[NVDIMM_PMU_EVENT_ATTR] = &nvdimm_pmu_events_group;
+	nd_pmu->pmu.attr_groups[NVDIMM_PMU_NULL_ATTR] = NULL;
+
+	/* Fill attribute group for cpumask */
+	rc = nvdimm_pmu_cpu_hotplug_init(nd_pmu);
+	if (rc) {
+		pr_info("cpu hotplug feature failed for device: %s\n", nd_pmu->pmu.name);
+		kfree(nd_pmu->pmu.attr_groups);
+		return rc;
+	}
+
+	rc = perf_pmu_register(&nd_pmu->pmu, nd_pmu->pmu.name, -1);
+	if (rc) {
+		kfree(nd_pmu->pmu.attr_groups);
+		nvdimm_pmu_free_hotplug_memory(nd_pmu);
+		return rc;
+	}
+
+	pr_info("%s NVDIMM performance monitor support registered\n",
+		nd_pmu->pmu.name);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(register_nvdimm_pmu);
+
+void unregister_nvdimm_pmu(struct nvdimm_pmu *nd_pmu)
+{
+	perf_pmu_unregister(&nd_pmu->pmu);
+	nvdimm_pmu_free_hotplug_memory(nd_pmu);
+	kfree(nd_pmu);
+}
+EXPORT_SYMBOL_GPL(unregister_nvdimm_pmu);
diff --git a/include/linux/nd.h b/include/linux/nd.h
index f5ed4db2d859..fa4370607bdb 100644
--- a/include/linux/nd.h
+++ b/include/linux/nd.h
@@ -9,6 +9,7 @@
 #include <linux/device.h>
 #include <linux/badblocks.h>
 #include <linux/perf_event.h>
+#include <linux/platform_device.h>

 enum nvdimm_event {
 	NVDIMM_REVALIDATE_POISON,
@@ -24,6 +25,19 @@ enum nvdimm_claim_class {
 	NVDIMM_CCLASS_UNKNOWN,
 };

+#define NVDIMM_EVENT_VAR(_id)	event_attr_##_id
+#define NVDIMM_EVENT_PTR(_id)	(&event_attr_##_id.attr.attr)
+
+#define NVDIMM_EVENT_ATTR(_name, _id)				\
+	PMU_EVENT_ATTR(_name, NVDIMM_EVENT_VAR(_id), _id,	\
+		       nvdimm_events_sysfs_show)
+
+/* Event attribute array index */
+#define NVDIMM_PMU_FORMAT_ATTR		0
+#define NVDIMM_PMU_EVENT_ATTR		1
+#define NVDIMM_PMU_CPUMASK_ATTR		2
+#define NVDIMM_PMU_NULL_ATTR		3
+
 /**
  * struct nvdimm_pmu - data structure for nvdimm perf driver
  * @pmu: pmu data structure for nvdimm performance stats.
@@ -43,6 +57,13 @@ struct nvdimm_pmu {
 	struct cpumask arch_cpumask;
 };

+extern ssize_t nvdimm_events_sysfs_show(struct device *dev,
+					struct device_attribute *attr,
+					char *page);
+
+int register_nvdimm_pmu(struct nvdimm_pmu *nvdimm, struct platform_device *pdev);
+void unregister_nvdimm_pmu(struct nvdimm_pmu *nd_pmu);
+
 struct nd_device_driver {
 	struct device_driver drv;
 	unsigned long type;

From patchwork Tue Sep 28 12:48:12 2021
From: Kajol Jain <kjain@linux.ibm.com>
To: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
	linux-kernel@vger.kernel.org, peterz@infradead.org, dan.j.williams@intel.com,
	ira.weiny@intel.com, vishal.l.verma@intel.com
Cc: maddy@linux.ibm.com, santosh@fossix.org, aneesh.kumar@linux.ibm.com,
	vaibhav@linux.ibm.com, atrajeev@linux.vnet.ibm.com, tglx@linutronix.de,
	rnsastry@linux.ibm.com, kjain@linux.ibm.com
Subject: [PATCH v5 3/4] powerpc/papr_scm: Add perf interface support
Date: Tue, 28 Sep 2021 18:18:12 +0530
Message-Id: <20210928124812.146734-1-kjain@linux.ibm.com>

Performance monitoring support for papr-scm nvdimm devices is added via
the perf interface. It includes the pmu functions add/del/read/event_init
for the nvdimm_pmu structure.

A new field 'priv' is added to the pdev_archdata structure to save the
nvdimm_pmu device pointer, which is used when unregistering the pmu
device.

The papr_scm_pmu_register function populates the nvdimm_pmu structure
with the name, capabilities and cpumask along with the event handling
functions. Finally, the populated nvdimm_pmu structure is passed on to
register the pmu device. The event handling functions internally use
hcalls to get the events and counter data.

Result on a power9 machine with 2 nvdimm devices:

Ex: List all events with the perf list command:

 # perf list nmem

  nmem0/cache_rh_cnt/                                [Kernel PMU event]
  nmem0/cache_wh_cnt/                                [Kernel PMU event]
  nmem0/cri_res_util/                                [Kernel PMU event]
  nmem0/ctl_res_cnt/                                 [Kernel PMU event]
  nmem0/ctl_res_tm/                                  [Kernel PMU event]
  nmem0/fast_w_cnt/                                  [Kernel PMU event]
  nmem0/host_l_cnt/                                  [Kernel PMU event]
  nmem0/host_l_dur/                                  [Kernel PMU event]
  nmem0/host_s_cnt/                                  [Kernel PMU event]
  nmem0/host_s_dur/                                  [Kernel PMU event]
  nmem0/med_r_cnt/                                   [Kernel PMU event]
  nmem0/med_r_dur/                                   [Kernel PMU event]
  nmem0/med_w_cnt/                                   [Kernel PMU event]
  nmem0/med_w_dur/                                   [Kernel PMU event]
  nmem0/mem_life/                                    [Kernel PMU event]
  nmem0/poweron_secs/                                [Kernel PMU event]
  ...
  nmem1/mem_life/                                    [Kernel PMU event]
  nmem1/poweron_secs/                                [Kernel PMU event]

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
---
 arch/powerpc/include/asm/device.h         |   5 +
 arch/powerpc/platforms/pseries/papr_scm.c | 225 ++++++++++++++++++++++
 2 files changed, 230 insertions(+)

diff --git a/arch/powerpc/include/asm/device.h b/arch/powerpc/include/asm/device.h
index 219559d65864..47ed639f3b8f 100644
--- a/arch/powerpc/include/asm/device.h
+++ b/arch/powerpc/include/asm/device.h
@@ -48,6 +48,11 @@ struct dev_archdata {

 struct pdev_archdata {
 	u64 dma_mask;
+	/*
+	 * Pointer to nvdimm_pmu structure, to handle the unregistering
+	 * of pmu device
+	 */
+	void *priv;
 };

 #endif /* _ASM_POWERPC_DEVICE_H */
diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
index f48e87ac89c9..bdf2620db461 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -19,6 +19,7 @@
 #include <asm/papr_pdsm.h>
 #include <asm/mce.h>
 #include <asm/unaligned.h>
+#include <linux/perf_event.h>

 #define BIND_ANY_ADDR (~0ul)

@@ -68,6 +69,8 @@
 #define PAPR_SCM_PERF_STATS_EYECATCHER __stringify(SCMSTATS)
 #define PAPR_SCM_PERF_STATS_VERSION 0x1

+#define to_nvdimm_pmu(_pmu)	container_of(_pmu, struct nvdimm_pmu, pmu)
+
 /* Struct holding a single performance metric */
 struct papr_scm_perf_stat {
 	u8 stat_id[8];
@@ -120,6 +123,9 @@ struct papr_scm_priv {

 	/* length of the stat buffer as expected by phyp */
 	size_t stat_buffer_len;
+
+	/* array to have event_code and stat_id mappings */
+	char **nvdimm_events_map;
 };

 static int papr_scm_pmem_flush(struct nd_region *nd_region,
@@ -340,6 +346,218 @@ static ssize_t drc_pmem_query_stats(struct papr_scm_priv *p,
 	return 0;
 }

+static int papr_scm_pmu_get_value(struct perf_event *event, struct device *dev, u64 *count)
+{
+	struct papr_scm_perf_stat *stat;
+	struct papr_scm_perf_stats *stats;
+	struct papr_scm_priv *p = (struct papr_scm_priv *)dev->driver_data;
+	int rc, size;
+
+	/* Allocate request buffer enough to hold single performance stat */
+	size = sizeof(struct papr_scm_perf_stats) +
+		sizeof(struct papr_scm_perf_stat);
+
+	if (!p || !p->nvdimm_events_map)
+		return -EINVAL;
+
+	stats = kzalloc(size, GFP_KERNEL);
+	if (!stats)
+		return -ENOMEM;
+
+	stat = &stats->scm_statistic[0];
+	memcpy(&stat->stat_id,
+	       p->nvdimm_events_map[event->attr.config],
+	       sizeof(stat->stat_id));
+	stat->stat_val = 0;
+
+	rc = drc_pmem_query_stats(p, stats, 1);
+	if (rc < 0) {
+		kfree(stats);
+		return rc;
+	}
+
+	*count = be64_to_cpu(stat->stat_val);
+	kfree(stats);
+	return 0;
+}
+
+static int papr_scm_pmu_event_init(struct perf_event *event)
+{
+	struct nvdimm_pmu *nd_pmu = to_nvdimm_pmu(event->pmu);
+	struct papr_scm_priv *p;
+
+	if (!nd_pmu)
+		return -EINVAL;
+
+	/* test the event attr type for PMU enumeration */
+	if (event->attr.type != event->pmu->type)
+		return -ENOENT;
+
+	/* it does not support event sampling mode */
+	if (is_sampling_event(event))
+		return -EOPNOTSUPP;
+
+	/* no branch sampling */
+	if (has_branch_stack(event))
+		return -EOPNOTSUPP;
+
+	p = (struct papr_scm_priv *)nd_pmu->dev->driver_data;
+	if (!p)
+		return -EINVAL;
+
+	/* Invalid eventcode */
+	if (event->attr.config == 0 || event->attr.config > 16)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int papr_scm_pmu_add(struct perf_event *event, int flags)
+{
+	u64 count;
+	int rc;
+	struct nvdimm_pmu *nd_pmu = to_nvdimm_pmu(event->pmu);
+
+	if (!nd_pmu)
+		return -EINVAL;
+
+	if (flags & PERF_EF_START) {
+		rc = papr_scm_pmu_get_value(event, nd_pmu->dev, &count);
+		if (rc)
+			return rc;
+
+		local64_set(&event->hw.prev_count, count);
+	}
+
+	return 0;
+}
+
+static void papr_scm_pmu_read(struct perf_event *event)
+{
+	u64 prev, now;
+	int rc;
+	struct nvdimm_pmu *nd_pmu = to_nvdimm_pmu(event->pmu);
+
+	if (!nd_pmu)
+		return;
+
+	rc = papr_scm_pmu_get_value(event, nd_pmu->dev, &now);
+	if (rc)
+		return;
+
+	prev = local64_xchg(&event->hw.prev_count, now);
+	local64_add(now - prev, &event->count);
+}
+
+static void papr_scm_pmu_del(struct perf_event *event, int flags)
+{
+	papr_scm_pmu_read(event);
+}
+
+static int papr_scm_pmu_check_events(struct papr_scm_priv *p, struct nvdimm_pmu *nd_pmu)
+{
+	struct papr_scm_perf_stat *stat;
+	struct papr_scm_perf_stats *stats;
+	char *statid;
+	int index, rc, count;
+	u32 available_events;
+
+	if (!p->stat_buffer_len)
+		return -ENOENT;
+
+	available_events = (p->stat_buffer_len - sizeof(struct papr_scm_perf_stats))
+			/ sizeof(struct papr_scm_perf_stat);
+
+	/* Allocate the buffer for phyp where stats are written */
+	stats = kzalloc(p->stat_buffer_len, GFP_KERNEL);
+	if (!stats) {
+		rc = -ENOMEM;
+		return rc;
+	}
+
+	/* Allocate memory to nvdimm_event_map */
+	p->nvdimm_events_map = kcalloc(available_events, sizeof(char *), GFP_KERNEL);
+	if (!p->nvdimm_events_map) {
+		rc = -ENOMEM;
+		goto out_stats;
+	}
+
+	/* Called to get list of events supported */
+	rc = drc_pmem_query_stats(p, stats, 0);
+	if (rc)
+		goto out_nvdimm_events_map;
+
+	for (index = 0, stat = stats->scm_statistic, count = 0;
+		     index < available_events; index++, ++stat) {
+		statid = kzalloc(strlen(stat->stat_id) + 1, GFP_KERNEL);
+		if (!statid) {
+			rc = -ENOMEM;
+			goto out_nvdimm_events_map;
+		}
+
+		strcpy(statid, stat->stat_id);
+		p->nvdimm_events_map[count] = statid;
+		count++;
+	}
+	p->nvdimm_events_map[count] = NULL;
+	kfree(stats);
+	return 0;
+
+out_nvdimm_events_map:
+	kfree(p->nvdimm_events_map);
+out_stats:
+	kfree(stats);
+	return rc;
+}
+
+static void papr_scm_pmu_register(struct papr_scm_priv *p)
+{
+	struct nvdimm_pmu *nd_pmu;
+	int rc, nodeid;
+
+	nd_pmu = kzalloc(sizeof(*nd_pmu), GFP_KERNEL);
+	if (!nd_pmu) {
+		rc = -ENOMEM;
+		goto pmu_err_print;
+	}
+
+	rc = papr_scm_pmu_check_events(p, nd_pmu);
+	if (rc)
+		goto pmu_check_events_err;
+
+	nd_pmu->pmu.task_ctx_nr = perf_invalid_context;
+	nd_pmu->pmu.name = nvdimm_name(p->nvdimm);
+	nd_pmu->pmu.event_init = papr_scm_pmu_event_init;
+	nd_pmu->pmu.read = papr_scm_pmu_read;
+	nd_pmu->pmu.add = papr_scm_pmu_add;
+	nd_pmu->pmu.del = papr_scm_pmu_del;
+
+	nd_pmu->pmu.capabilities = PERF_PMU_CAP_NO_INTERRUPT |
+				PERF_PMU_CAP_NO_EXCLUDE;
+
+	/* Updating the cpumask variable */
+	nodeid = dev_to_node(&p->pdev->dev);
+	nd_pmu->arch_cpumask = *cpumask_of_node(nodeid);
+
+	rc = register_nvdimm_pmu(nd_pmu, p->pdev);
+	if (rc)
+		goto pmu_register_err;
+
+	/*
+	 * Set archdata.priv value to nvdimm_pmu structure, to handle the
+	 * unregistering of pmu device.
+	 */
+	p->pdev->archdata.priv = nd_pmu;
+	return;
+
+pmu_register_err:
+	kfree(p->nvdimm_events_map);
+pmu_check_events_err:
+	kfree(nd_pmu);
+pmu_err_print:
+	dev_info(&p->pdev->dev, "nvdimm pmu didn't register rc=%d\n", rc);
+}
+
 /*
  * Issue hcall to retrieve dimm health info and populate papr_scm_priv with the
  * health information.
@@ -1236,6 +1454,7 @@ static int papr_scm_probe(struct platform_device *pdev)
 		goto err2;

 	platform_set_drvdata(pdev, p);
+	papr_scm_pmu_register(p);

 	return 0;

@@ -1254,6 +1473,12 @@ static int papr_scm_remove(struct platform_device *pdev)
 	nvdimm_bus_unregister(p->bus);
 	drc_pmem_unbind(p);
+
+	if (pdev->archdata.priv)
+		unregister_nvdimm_pmu(pdev->archdata.priv);
+
+	pdev->archdata.priv = NULL;
+	kfree(p->nvdimm_events_map);
 	kfree(p->bus_desc.provider_name);
 	kfree(p);

From patchwork Tue Sep 28 12:48:34 2021
From: Kajol Jain <kjain@linux.ibm.com>
To: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
	linux-kernel@vger.kernel.org, peterz@infradead.org, dan.j.williams@intel.com,
	ira.weiny@intel.com, vishal.l.verma@intel.com
Cc: maddy@linux.ibm.com, santosh@fossix.org, aneesh.kumar@linux.ibm.com,
	vaibhav@linux.ibm.com, atrajeev@linux.vnet.ibm.com, tglx@linutronix.de,
	rnsastry@linux.ibm.com,
	kjain@linux.ibm.com
Subject: [PATCH v5 4/4] docs: ABI: sysfs-bus-nvdimm: Document sysfs event format entries for nvdimm pmu
Date: Tue, 28 Sep 2021 18:18:34 +0530
Message-Id: <20210928124834.146803-1-kjain@linux.ibm.com>

Details are added to the ABI documentation for the event, cpumask and
format attributes of the nvdimm pmu.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
---
 Documentation/ABI/testing/sysfs-bus-nvdimm | 35 ++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-bus-nvdimm b/Documentation/ABI/testing/sysfs-bus-nvdimm
index bff84a16812a..64004d5e4840 100644
--- a/Documentation/ABI/testing/sysfs-bus-nvdimm
+++ b/Documentation/ABI/testing/sysfs-bus-nvdimm
@@ -6,3 +6,38 @@ Description:
 		The libnvdimm sub-system implements a common sysfs interface for
 		platform nvdimm resources. See Documentation/driver-api/nvdimm/.
+
+What:		/sys/bus/event_source/devices/nmemX/format
+Date:		September 2021
+KernelVersion:	5.16
+Contact:	Kajol Jain <kjain@linux.ibm.com>
+Description:	(RO) Attribute group to describe the magic bits
+		that go into perf_event_attr.config for a particular pmu.
+		(See ABI/testing/sysfs-bus-event_source-devices-format).
+
+		Each attribute under this group defines a bit range of the
+		perf_event_attr.config. Supported attribute is listed
+		below::
+
+			event = "config:0-4" - event ID
+
+		For example::
+
+			ctl_res_cnt = "event=0x1"
+
+What:		/sys/bus/event_source/devices/nmemX/events
+Date:		September 2021
+KernelVersion:	5.16
+Contact:	Kajol Jain <kjain@linux.ibm.com>
+Description:	(RO) Attribute group to describe performance monitoring events
+		for the nvdimm memory device. Each attribute in this group
+		describes a single performance monitoring event supported by
+		this nvdimm pmu. The name of the file is the name of the event.
+		(See ABI/testing/sysfs-bus-event_source-devices-events). A
+		listing of the events supported by a given nvdimm provider type
+		can be found in Documentation/driver-api/nvdimm/$provider.
+
+What:		/sys/bus/event_source/devices/nmemX/cpumask
+Date:		September 2021
+KernelVersion:	5.16
+Contact:	Kajol Jain <kjain@linux.ibm.com>
+Description:	(RO) This sysfs file exposes the cpumask which is designated
+		to retrieve nvdimm pmu event counter data.
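[ Editor's note: the userspace sketch below is an illustration added for this
write-up, not part of the series. It shows one way the sysfs layout documented
above could be consumed directly through the perf_event_open() syscall: the
dynamic pmu type id is read from the nmemX "type" file and the event code is
placed in perf_event_attr.config per the "format" group. The instance name
"nmem0", the event code 0x1 (ctl_res_cnt) and the hard-coded cpu 0 are
assumptions; a real tool, like perf itself, would pick the cpu from the
"cpumask" file. Equivalently, one could simply run
"perf stat -e nmem0/ctl_res_cnt/ -a sleep 1". ]

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	FILE *f;
	int type, fd;

	/* The perf core exports the dynamic pmu type id via sysfs. */
	f = fopen("/sys/bus/event_source/devices/nmem0/type", "r");
	if (!f || fscanf(f, "%d", &type) != 1) {
		perror("nmem0 type");
		return 1;
	}
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;	/* nmem0 pmu */
	attr.config = 0x1;	/* event=0x1 -> ctl_res_cnt, per the format/events ABI */

	/* Counting event, not tied to a task: pid = -1, cpu must be given. */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("ctl_res_cnt: %llu\n", (unsigned long long)count);

	close(fd);
	return 0;
}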