From patchwork Mon Feb 21 07:31:38 2022
X-Patchwork-Submitter: Ravi Bangoria
X-Patchwork-Id: 12753189
From: Ravi Bangoria
Subject: [PATCH 1/3] x86/pmu: Add INTEL_ prefix in some Intel specific macros
Date: Mon, 21 Feb 2022 13:01:38 +0530
Message-ID: <20220221073140.10618-2-ravi.bangoria@amd.com>
In-Reply-To: <20220221073140.10618-1-ravi.bangoria@amd.com>
References: <20220221073140.10618-1-ravi.bangoria@amd.com>
X-Mailing-List: kvm@vger.kernel.org
Replace:
  s/HSW_IN_TX/INTEL_HSW_IN_TX/
  s/HSW_IN_TX_CHECKPOINTED/INTEL_HSW_IN_TX_CHECKPOINTED/
  s/ICL_EVENTSEL_ADAPTIVE/INTEL_ICL_EVENTSEL_ADAPTIVE/
  s/ICL_FIXED_0_ADAPTIVE/INTEL_ICL_FIXED_0_ADAPTIVE/

No functionality changes.

Signed-off-by: Ravi Bangoria
---
 arch/x86/events/intel/core.c      | 12 ++++++------
 arch/x86/events/intel/ds.c        |  2 +-
 arch/x86/events/perf_event.h      |  2 +-
 arch/x86/include/asm/perf_event.h | 12 ++++++------
 arch/x86/kvm/pmu.c                | 14 +++++++-------
 arch/x86/kvm/vmx/pmu_intel.c      |  2 +-
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index a3c7ca876aeb..9a72fd8ddab9 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2359,7 +2359,7 @@ static inline void intel_pmu_ack_status(u64 ack)
 
 static inline bool event_is_checkpointed(struct perf_event *event)
 {
-        return unlikely(event->hw.config & HSW_IN_TX_CHECKPOINTED) != 0;
+        return unlikely(event->hw.config & INTEL_HSW_IN_TX_CHECKPOINTED) != 0;
 }
 
 static inline void intel_set_masks(struct perf_event *event, int idx)
@@ -2717,8 +2717,8 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
 
         mask = 0xfULL << (idx * 4);
         if (x86_pmu.intel_cap.pebs_baseline && event->attr.precise_ip) {
-                bits |= ICL_FIXED_0_ADAPTIVE << (idx * 4);
-                mask |= ICL_FIXED_0_ADAPTIVE << (idx * 4);
+                bits |= INTEL_ICL_FIXED_0_ADAPTIVE << (idx * 4);
+                mask |= INTEL_ICL_FIXED_0_ADAPTIVE << (idx * 4);
         }
 
         rdmsrl(hwc->config_base, ctrl_val);
@@ -4000,14 +4000,14 @@ static int hsw_hw_config(struct perf_event *event)
                 return ret;
         if (!boot_cpu_has(X86_FEATURE_RTM) && !boot_cpu_has(X86_FEATURE_HLE))
                 return 0;
-        event->hw.config |= event->attr.config & (HSW_IN_TX|HSW_IN_TX_CHECKPOINTED);
+        event->hw.config |= event->attr.config & (INTEL_HSW_IN_TX|INTEL_HSW_IN_TX_CHECKPOINTED);
 
         /*
          * IN_TX/IN_TX-CP filters are not supported by the Haswell PMU with
          * PEBS or in ANY thread mode. Since the results are non-sensical forbid
          * this combination.
          */
-        if ((event->hw.config & (HSW_IN_TX|HSW_IN_TX_CHECKPOINTED)) &&
+        if ((event->hw.config & (INTEL_HSW_IN_TX|INTEL_HSW_IN_TX_CHECKPOINTED)) &&
              ((event->hw.config & ARCH_PERFMON_EVENTSEL_ANY) ||
               event->attr.precise_ip > 0))
                 return -EOPNOTSUPP;
@@ -4050,7 +4050,7 @@ hsw_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
         c = intel_get_event_constraints(cpuc, idx, event);
 
         /* Handle special quirk on in_tx_checkpointed only in counter 2 */
-        if (event->hw.config & HSW_IN_TX_CHECKPOINTED) {
+        if (event->hw.config & INTEL_HSW_IN_TX_CHECKPOINTED) {
                 if (c->idxmsk64 & (1U << 2))
                         return &counter2_constraint;
                 return &emptyconstraint;
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 2e215369df4a..9f1c419f401d 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1225,7 +1225,7 @@ void intel_pmu_pebs_enable(struct perf_event *event)
                 cpuc->pebs_enabled |= 1ULL << 63;
 
         if (x86_pmu.intel_cap.pebs_baseline) {
-                hwc->config |= ICL_EVENTSEL_ADAPTIVE;
+                hwc->config |= INTEL_ICL_EVENTSEL_ADAPTIVE;
                 if (cpuc->pebs_data_cfg != cpuc->active_pebs_data_cfg) {
                         wrmsrl(MSR_PEBS_DATA_CFG, cpuc->pebs_data_cfg);
                         cpuc->active_pebs_data_cfg = cpuc->pebs_data_cfg;
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 150261d929b9..e789b390d90c 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -410,7 +410,7 @@ struct cpu_hw_events {
  * The other filters are supported by fixed counters.
  * The any-thread option is supported starting with v3.
  */
-#define FIXED_EVENT_FLAGS (X86_RAW_EVENT_MASK|HSW_IN_TX|HSW_IN_TX_CHECKPOINTED)
+#define FIXED_EVENT_FLAGS (X86_RAW_EVENT_MASK|INTEL_HSW_IN_TX|INTEL_HSW_IN_TX_CHECKPOINTED)
 #define FIXED_EVENT_CONSTRAINT(c, n)    \
         EVENT_CONSTRAINT(c, (1ULL << (32+n)), FIXED_EVENT_FLAGS)
 
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8fc1b5003713..002e67661330 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -30,10 +30,10 @@
 #define ARCH_PERFMON_EVENTSEL_INV                       (1ULL << 23)
 #define ARCH_PERFMON_EVENTSEL_CMASK                     0xFF000000ULL
 
-#define HSW_IN_TX                                       (1ULL << 32)
-#define HSW_IN_TX_CHECKPOINTED                          (1ULL << 33)
-#define ICL_EVENTSEL_ADAPTIVE                           (1ULL << 34)
-#define ICL_FIXED_0_ADAPTIVE                            (1ULL << 32)
+#define INTEL_HSW_IN_TX                                 (1ULL << 32)
+#define INTEL_HSW_IN_TX_CHECKPOINTED                    (1ULL << 33)
+#define INTEL_ICL_EVENTSEL_ADAPTIVE                     (1ULL << 34)
+#define INTEL_ICL_FIXED_0_ADAPTIVE                      (1ULL << 32)
 
 #define AMD64_EVENTSEL_INT_CORE_ENABLE                  (1ULL << 36)
 #define AMD64_EVENTSEL_GUESTONLY                        (1ULL << 40)
@@ -79,8 +79,8 @@
          ARCH_PERFMON_EVENTSEL_CMASK |  \
          ARCH_PERFMON_EVENTSEL_ANY |    \
          ARCH_PERFMON_EVENTSEL_PIN_CONTROL | \
-         HSW_IN_TX | \
-         HSW_IN_TX_CHECKPOINTED)
+         INTEL_HSW_IN_TX | \
+         INTEL_HSW_IN_TX_CHECKPOINTED)
 #define AMD64_RAW_EVENT_MASK            \
         (X86_RAW_EVENT_MASK  |          \
          AMD64_EVENTSEL_EVENT)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b1a02993782b..4a70380f2287 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -117,15 +117,15 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
         attr.sample_period = get_sample_period(pmc, pmc->counter);
 
         if (in_tx)
-                attr.config |= HSW_IN_TX;
+                attr.config |= INTEL_HSW_IN_TX;
         if (in_tx_cp) {
                 /*
-                 * HSW_IN_TX_CHECKPOINTED is not supported with nonzero
+                 * INTEL_HSW_IN_TX_CHECKPOINTED is not supported with nonzero
                  * period. Just clear the sample period so at least
                  * allocating the counter doesn't fail.
                  */
                 attr.sample_period = 0;
-                attr.config |= HSW_IN_TX_CHECKPOINTED;
+                attr.config |= INTEL_HSW_IN_TX_CHECKPOINTED;
         }
 
         event = perf_event_create_kernel_counter(&attr, -1, current,
@@ -213,8 +213,8 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
         if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
                           ARCH_PERFMON_EVENTSEL_INV |
                           ARCH_PERFMON_EVENTSEL_CMASK |
-                          HSW_IN_TX |
-                          HSW_IN_TX_CHECKPOINTED))) {
+                          INTEL_HSW_IN_TX |
+                          INTEL_HSW_IN_TX_CHECKPOINTED))) {
                 config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
                 if (config != PERF_COUNT_HW_MAX)
                         type = PERF_TYPE_HARDWARE;
@@ -233,8 +233,8 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
                               !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
                               !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
                               eventsel & ARCH_PERFMON_EVENTSEL_INT,
-                              (eventsel & HSW_IN_TX),
-                              (eventsel & HSW_IN_TX_CHECKPOINTED));
+                              (eventsel & INTEL_HSW_IN_TX),
+                              (eventsel & INTEL_HSW_IN_TX_CHECKPOINTED));
 }
 EXPORT_SYMBOL_GPL(reprogram_gp_counter);
 
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 466d18fc0c5d..7c64792a9506 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -534,7 +534,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
         if (entry &&
             (boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
             (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM)))
-                pmu->reserved_bits ^= HSW_IN_TX|HSW_IN_TX_CHECKPOINTED;
+                pmu->reserved_bits ^= INTEL_HSW_IN_TX|INTEL_HSW_IN_TX_CHECKPOINTED;
 
         bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
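
A quick stand-alone sanity check of the renamed bits (a user-space sketch, not
kernel code; the per-register notes are inferred from how the hunks above use
each bit, so treat them as assumptions rather than text from the patch):

#include <stdio.h>

#define INTEL_HSW_IN_TX                 (1ULL << 32)  /* eventsel MSRs */
#define INTEL_HSW_IN_TX_CHECKPOINTED    (1ULL << 33)  /* eventsel MSRs */
#define INTEL_ICL_EVENTSEL_ADAPTIVE     (1ULL << 34)  /* eventsel MSRs */
#define INTEL_ICL_FIXED_0_ADAPTIVE      (1ULL << 32)  /* fixed-counter ctrl, shifted by idx * 4 */

int main(void)
{
        /* INTEL_HSW_IN_TX and INTEL_ICL_FIXED_0_ADAPTIVE share bit 32;
         * that is harmless because they target different registers. */
        printf("IN_TX=%d IN_TX_CP=%d EVENTSEL_ADAPTIVE=%d FIXED_0_ADAPTIVE=%d\n",
               __builtin_ctzll(INTEL_HSW_IN_TX),
               __builtin_ctzll(INTEL_HSW_IN_TX_CHECKPOINTED),
               __builtin_ctzll(INTEL_ICL_EVENTSEL_ADAPTIVE),
               __builtin_ctzll(INTEL_ICL_FIXED_0_ADAPTIVE));
        return 0;
}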
From patchwork Mon Feb 21 07:31:39 2022
X-Patchwork-Submitter: Ravi Bangoria
X-Patchwork-Id: 12753191
From: Ravi Bangoria
Subject: [PATCH 2/3] x86/pmu: Replace X86_ALL_EVENT_FLAGS with INTEL_ALL_EVENT_FLAGS
Date: Mon, 21 Feb 2022 13:01:39 +0530
Message-ID: <20220221073140.10618-3-ravi.bangoria@amd.com>
In-Reply-To: <20220221073140.10618-1-ravi.bangoria@amd.com>
References: <20220221073140.10618-1-ravi.bangoria@amd.com>
X-Mailing-List: kvm@vger.kernel.org

X86_ALL_EVENT_FLAGS contains Intel-specific flags and is used only by
Intel-specific macros, i.e. it is not an x86-generic macro. Rename it
to INTEL_ALL_EVENT_FLAGS.

No functionality changes.

Signed-off-by: Ravi Bangoria
---
 arch/x86/events/intel/core.c      |  2 +-
 arch/x86/events/perf_event.h      | 32 +++++++++++++++----------------
 arch/x86/include/asm/perf_event.h |  2 +-
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 9a72fd8ddab9..54aba01a23a6 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3835,7 +3835,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
                  * The TopDown metrics events and slots event don't
                  * support any filters.
                  */
-                if (event->attr.config & X86_ALL_EVENT_FLAGS)
+                if (event->attr.config & INTEL_ALL_EVENT_FLAGS)
                         return -EINVAL;
 
                 if (is_available_metric_event(event)) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index e789b390d90c..6bad5d4e6f17 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -439,86 +439,86 @@ struct cpu_hw_events {
 
 /* Like UEVENT_CONSTRAINT, but match flags too */
 #define INTEL_FLAGS_UEVENT_CONSTRAINT(c, n)     \
-        EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS)
+        EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS)
 
 #define INTEL_EXCLUEVT_CONSTRAINT(c, n) \
         __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK, \
                            HWEIGHT(n), 0, PERF_X86_EVENT_EXCL)
 
 #define INTEL_PLD_CONSTRAINT(c, n)      \
-        __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+        __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                            HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_LDLAT)
 
 #define INTEL_PSD_CONSTRAINT(c, n)      \
-        __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+        __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                            HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_STLAT)
 
 #define INTEL_PST_CONSTRAINT(c, n)      \
-        __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+        __EVENT_CONSTRAINT(c, n, INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                            HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST)
 
 /* Event constraint, but match on all event flags too. */
 #define INTEL_FLAGS_EVENT_CONSTRAINT(c, n)      \
-        EVENT_CONSTRAINT(c, n, ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS)
+        EVENT_CONSTRAINT(c, n, ARCH_PERFMON_EVENTSEL_EVENT|INTEL_ALL_EVENT_FLAGS)
 
 #define INTEL_FLAGS_EVENT_CONSTRAINT_RANGE(c, e, n)                     \
-        EVENT_CONSTRAINT_RANGE(c, e, n, ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS)
+        EVENT_CONSTRAINT_RANGE(c, e, n, ARCH_PERFMON_EVENTSEL_EVENT|INTEL_ALL_EVENT_FLAGS)
 
 /* Check only flags, but allow all event/umask */
 #define INTEL_ALL_EVENT_CONSTRAINT(code, n)     \
-        EVENT_CONSTRAINT(code, n, X86_ALL_EVENT_FLAGS)
+        EVENT_CONSTRAINT(code, n, INTEL_ALL_EVENT_FLAGS)
 
 /* Check flags and event code, and set the HSW store flag */
 #define INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_ST(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS, \
+                          ARCH_PERFMON_EVENTSEL_EVENT|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST_HSW)
 
 /* Check flags and event code, and set the HSW load flag */
 #define INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS, \
+                          ARCH_PERFMON_EVENTSEL_EVENT|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_LD_HSW)
 
 #define INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD_RANGE(code, end, n) \
         __EVENT_CONSTRAINT_RANGE(code, end, n,          \
-                          ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS, \
+                          ARCH_PERFMON_EVENTSEL_EVENT|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_LD_HSW)
 
 #define INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_XLD(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          ARCH_PERFMON_EVENTSEL_EVENT|X86_ALL_EVENT_FLAGS, \
+                          ARCH_PERFMON_EVENTSEL_EVENT|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, \
                           PERF_X86_EVENT_PEBS_LD_HSW|PERF_X86_EVENT_EXCL)
 
 /* Check flags and event code/umask, and set the HSW store flag */
 #define INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+                          INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_ST_HSW)
 
 #define INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XST(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+                          INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, \
                           PERF_X86_EVENT_PEBS_ST_HSW|PERF_X86_EVENT_EXCL)
 
 /* Check flags and event code/umask, and set the HSW load flag */
 #define INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+                          INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_LD_HSW)
 
 #define INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+                          INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, \
                           PERF_X86_EVENT_PEBS_LD_HSW|PERF_X86_EVENT_EXCL)
 
 /* Check flags and event code/umask, and set the HSW N/A flag */
 #define INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(code, n) \
         __EVENT_CONSTRAINT(code, n,                     \
-                          INTEL_ARCH_EVENT_MASK|X86_ALL_EVENT_FLAGS, \
+                          INTEL_ARCH_EVENT_MASK|INTEL_ALL_EVENT_FLAGS, \
                           HWEIGHT(n), 0, PERF_X86_EVENT_PEBS_NA_HSW)
 
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 002e67661330..216173a82ccc 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -73,7 +73,7 @@
          ARCH_PERFMON_EVENTSEL_EDGE |   \
          ARCH_PERFMON_EVENTSEL_INV |    \
          ARCH_PERFMON_EVENTSEL_CMASK)
-#define X86_ALL_EVENT_FLAGS                     \
+#define INTEL_ALL_EVENT_FLAGS                   \
         (ARCH_PERFMON_EVENTSEL_EDGE |           \
          ARCH_PERFMON_EVENTSEL_INV |            \
          ARCH_PERFMON_EVENTSEL_CMASK |          \
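
For context, the renamed mask is exactly what intel_pmu_hw_config() tests in
the first hunk above to reject filtered configs for TopDown metrics/slots
events. Below is a stand-alone user-space sketch of that check. The INV,
CMASK and IN_TX values appear verbatim in this series, and the macro's full
composition is spliced together from the patch 1 and patch 2 hunks touching
it; the EDGE, ANY and PIN_CONTROL bit positions are filled in from the usual
arch-perfmon layout and should be treated as assumptions:

#include <stdio.h>
#include <stdint.h>

#define ARCH_PERFMON_EVENTSEL_EDGE         (1ULL << 18)  /* assumed position */
#define ARCH_PERFMON_EVENTSEL_PIN_CONTROL  (1ULL << 19)  /* assumed position */
#define ARCH_PERFMON_EVENTSEL_ANY          (1ULL << 21)  /* assumed position */
#define ARCH_PERFMON_EVENTSEL_INV          (1ULL << 23)
#define ARCH_PERFMON_EVENTSEL_CMASK        0xFF000000ULL
#define INTEL_HSW_IN_TX                    (1ULL << 32)
#define INTEL_HSW_IN_TX_CHECKPOINTED       (1ULL << 33)

#define INTEL_ALL_EVENT_FLAGS                   \
        (ARCH_PERFMON_EVENTSEL_EDGE |           \
         ARCH_PERFMON_EVENTSEL_INV |            \
         ARCH_PERFMON_EVENTSEL_CMASK |          \
         ARCH_PERFMON_EVENTSEL_ANY |            \
         ARCH_PERFMON_EVENTSEL_PIN_CONTROL |    \
         INTEL_HSW_IN_TX |                      \
         INTEL_HSW_IN_TX_CHECKPOINTED)

int main(void)
{
        uint64_t plain = 0xc0;                                /* event select only, no filters */
        uint64_t edge  = plain | ARCH_PERFMON_EVENTSEL_EDGE;  /* edge filter set */

        /* Mirrors: if (event->attr.config & INTEL_ALL_EVENT_FLAGS) return -EINVAL; */
        printf("plain: %s\n", (plain & INTEL_ALL_EVENT_FLAGS) ? "-EINVAL" : "accepted");
        printf("edge:  %s\n", (edge  & INTEL_ALL_EVENT_FLAGS) ? "-EINVAL" : "accepted");
        return 0;
}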
From patchwork Mon Feb 21 07:31:40 2022
X-Patchwork-Submitter: Ravi Bangoria
X-Patchwork-Id: 12753192
From: Ravi Bangoria
Subject: [PATCH 3/3] KVM: x86/pmu: Segregate Intel and AMD specific logic
Date: Mon, 21 Feb 2022 13:01:40 +0530
Message-ID: <20220221073140.10618-4-ravi.bangoria@amd.com>
In-Reply-To: <20220221073140.10618-1-ravi.bangoria@amd.com>
References: <20220221073140.10618-1-ravi.bangoria@amd.com>
X-Mailing-List: kvm@vger.kernel.org

HSW_IN_TX* bits are used in generic code, but they are not supported
on AMD. Worse, those bits overlap with AMD's EventSelect[11:8], so
using them unconditionally in generic code results in unintended PMU
behavior on AMD. For example, if EventSelect[11:8] is 0x2,
pmc_reprogram_counter() wrongly assumes that HSW_IN_TX_CHECKPOINTED
is set and thus forces the sampling period to be 0.

Fixes: ca724305a2b0 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Signed-off-by: Ravi Bangoria
---
 arch/x86/kvm/pmu.c           | 66 +++++++++++++++++++++++-------------
 arch/x86/kvm/pmu.h           |  4 +--
 arch/x86/kvm/svm/pmu.c       |  6 +++-
 arch/x86/kvm/vmx/pmu_intel.c |  4 +--
 4 files changed, 51 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 4a70380f2287..b91dbede87b3 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -97,7 +97,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 
 static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
                                   u64 config, bool exclude_user,
                                   bool exclude_kernel, bool intr,
-                                  bool in_tx, bool in_tx_cp)
+                                  bool in_tx, bool in_tx_cp, bool is_intel)
 {
         struct perf_event *event;
         struct perf_event_attr attr = {
@@ -116,16 +116,18 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 
         attr.sample_period = get_sample_period(pmc, pmc->counter);
 
-        if (in_tx)
-                attr.config |= INTEL_HSW_IN_TX;
-        if (in_tx_cp) {
-                /*
-                 * INTEL_HSW_IN_TX_CHECKPOINTED is not supported with nonzero
-                 * period. Just clear the sample period so at least
-                 * allocating the counter doesn't fail.
-                 */
-                attr.sample_period = 0;
-                attr.config |= INTEL_HSW_IN_TX_CHECKPOINTED;
+        if (is_intel) {
+                if (in_tx)
+                        attr.config |= INTEL_HSW_IN_TX;
+                if (in_tx_cp) {
+                        /*
+                         * INTEL_HSW_IN_TX_CHECKPOINTED is not supported with nonzero
+                         * period. Just clear the sample period so at least
+                         * allocating the counter doesn't fail.
+                         */
+                        attr.sample_period = 0;
+                        attr.config |= INTEL_HSW_IN_TX_CHECKPOINTED;
+                }
         }
 
         event = perf_event_create_kernel_counter(&attr, -1, current,
@@ -179,13 +181,14 @@ static int cmp_u64(const void *a, const void *b)
         return *(__u64 *)a - *(__u64 *)b;
 }
 
-void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel, bool is_intel)
 {
         u64 config;
         u32 type = PERF_TYPE_RAW;
         struct kvm *kvm = pmc->vcpu->kvm;
         struct kvm_pmu_event_filter *filter;
         bool allow_event = true;
+        u64 eventsel_mask;
 
         if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
                 printk_once("kvm pmu: pin control bit is ignored\n");
@@ -210,18 +213,31 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
         if (!allow_event)
                 return;
 
-        if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
-                          ARCH_PERFMON_EVENTSEL_INV |
-                          ARCH_PERFMON_EVENTSEL_CMASK |
-                          INTEL_HSW_IN_TX |
-                          INTEL_HSW_IN_TX_CHECKPOINTED))) {
+        eventsel_mask = ARCH_PERFMON_EVENTSEL_EDGE |
+                        ARCH_PERFMON_EVENTSEL_INV |
+                        ARCH_PERFMON_EVENTSEL_CMASK;
+        if (is_intel) {
+                eventsel_mask |= INTEL_HSW_IN_TX | INTEL_HSW_IN_TX_CHECKPOINTED;
+        } else {
+                /*
+                 * None of the AMD generalized events has EventSelect[11:8]
+                 * set so far.
+                 */
+                eventsel_mask |= (0xFULL << 32);
+        }
+
+        if (!(eventsel & eventsel_mask)) {
                 config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
                 if (config != PERF_COUNT_HW_MAX)
                         type = PERF_TYPE_HARDWARE;
         }
 
-        if (type == PERF_TYPE_RAW)
-                config = eventsel & AMD64_RAW_EVENT_MASK;
+        if (type == PERF_TYPE_RAW) {
+                if (is_intel)
+                        config = eventsel & X86_RAW_EVENT_MASK;
+                else
+                        config = eventsel & AMD64_RAW_EVENT_MASK;
+        }
 
         if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
                 return;
@@ -234,11 +250,12 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
                               !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
                               !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
                               eventsel & ARCH_PERFMON_EVENTSEL_INT,
                               (eventsel & INTEL_HSW_IN_TX),
-                              (eventsel & INTEL_HSW_IN_TX_CHECKPOINTED));
+                              (eventsel & INTEL_HSW_IN_TX_CHECKPOINTED),
+                              is_intel);
 }
 EXPORT_SYMBOL_GPL(reprogram_gp_counter);
 
-void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
+void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx, bool is_intel)
 {
         unsigned en_field = ctrl & 0x3;
         bool pmi = ctrl & 0x8;
@@ -270,24 +287,25 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
                               kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc),
                               !(en_field & 0x2), /* exclude user */
                               !(en_field & 0x1), /* exclude kernel */
-                              pmi, false, false);
+                              pmi, false, false, is_intel);
 }
 EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
 
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
 {
         struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, pmc_idx);
+        bool is_intel = !strncmp(kvm_x86_ops.name, "kvm_intel", 9);
 
         if (!pmc)
                 return;
 
         if (pmc_is_gp(pmc))
-                reprogram_gp_counter(pmc, pmc->eventsel);
+                reprogram_gp_counter(pmc, pmc->eventsel, is_intel);
         else {
                 int idx = pmc_idx - INTEL_PMC_IDX_FIXED;
                 u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx);
 
-                reprogram_fixed_counter(pmc, ctrl, idx);
+                reprogram_fixed_counter(pmc, ctrl, idx, is_intel);
         }
 }
 EXPORT_SYMBOL_GPL(reprogram_counter);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7a7b8d5b775e..610a4cbf85a4 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -140,8 +140,8 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
         return sample_period;
 }
 
-void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
-void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
+void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel, bool is_intel);
+void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx, bool is_intel);
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
 
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 5aa45f13b16d..9ad63e940883 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -140,6 +140,10 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 
 static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
 {
+        /*
+         * None of the AMD generalized events has EventSelect[11:8] set.
+         * Hence 8 bit event_select works for now.
+         */
         u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
         u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
         int i;
@@ -265,7 +269,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                 if (data == pmc->eventsel)
                         return 0;
                 if (!(data & pmu->reserved_bits)) {
-                        reprogram_gp_counter(pmc, data);
+                        reprogram_gp_counter(pmc, data, false);
                         return 0;
                 }
         }
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 7c64792a9506..ba1fbd37f608 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -50,7 +50,7 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
                         continue;
 
                 __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use);
-                reprogram_fixed_counter(pmc, new_ctrl, i);
+                reprogram_fixed_counter(pmc, new_ctrl, i, true);
         }
 
         pmu->fixed_ctr_ctrl = data;
@@ -444,7 +444,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                 if (data == pmc->eventsel)
                         return 0;
                 if (!(data & pmu->reserved_bits)) {
-                        reprogram_gp_counter(pmc, data);
+                        reprogram_gp_counter(pmc, data, true);
                         return 0;
                 }
         } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false))
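
To make the overlap described in the commit message concrete: AMD's
PerfEvtSel keeps EventSelect[11:8] in bits [35:32] (the 0xFULL << 32 the
patch adds to eventsel_mask), which is exactly where Intel defines
INTEL_HSW_IN_TX (bit 32) and INTEL_HSW_IN_TX_CHECKPOINTED (bit 33). A
stand-alone user-space sketch of the pre-patch misinterpretation follows;
the event number 0x276 is made up purely for illustration:

#include <stdio.h>
#include <stdint.h>

#define INTEL_HSW_IN_TX                 (1ULL << 32)
#define INTEL_HSW_IN_TX_CHECKPOINTED    (1ULL << 33)

int main(void)
{
        /* Hypothetical AMD event 0x276: EventSelect[11:8] = 0x2 lands in
         * bits [35:32], i.e. bit 33 gets set. */
        uint64_t amd_eventsel = (0x2ULL << 32) | 0x76;

        /* Pre-patch generic code tested the Intel bits unconditionally: */
        if (amd_eventsel & INTEL_HSW_IN_TX_CHECKPOINTED)
                printf("bit 33 set: wrongly treated as IN_TX_CHECKPOINTED, sample period forced to 0\n");
        if (amd_eventsel & INTEL_HSW_IN_TX)
                printf("bit 32 set: wrongly treated as IN_TX\n");
        return 0;
}

With the series applied, reprogram_gp_counter() folds the IN_TX bits into
eventsel_mask only when is_intel is true, so the same config on AMD keeps
its sampling period.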