From patchwork Mon Jun 26 09:16:16 2017
X-Patchwork-Submitter: Haozhong Zhang
X-Patchwork-Id: 9808925
From: Haozhong Zhang
To: xen-devel@lists.xen.org
Date: Mon, 26 Jun 2017 17:16:16 +0800
Message-Id: <20170626091625.19655-3-haozhong.zhang@intel.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170626091625.19655-1-haozhong.zhang@intel.com>
References: <20170626091625.19655-1-haozhong.zhang@intel.com>
Cc: Haozhong Zhang, Jan Beulich, Andrew Cooper
Subject: [Xen-devel] [PATCH v4 02/11] xen/mce: allow mce_barrier_{enter, exit} to return without waiting
List-Id: Xen developer discussion

Add a 'nowait' argument to mce_barrier_{enter,exit}() to allow them to
return immediately without waiting for mce_barrier_{enter,exit}() on
other CPUs. This is useful when handling LMCE, where
mce_barrier_{enter,exit} are called on only one CPU.

Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
---
 xen/arch/x86/cpu/mcheck/barrier.c | 12 ++++++------
 xen/arch/x86/cpu/mcheck/barrier.h | 12 ++++++++++--
 xen/arch/x86/cpu/mcheck/mce.c    | 20 ++++++++++----------
 3 files changed, 26 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/barrier.c b/xen/arch/x86/cpu/mcheck/barrier.c
index 5dce1fb9b9..0b3b09103d 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.c
+++ b/xen/arch/x86/cpu/mcheck/barrier.c
@@ -16,11 +16,11 @@ void mce_barrier_dec(struct mce_softirq_barrier *bar)
     atomic_dec(&bar->val);
 }
 
-void mce_barrier_enter(struct mce_softirq_barrier *bar)
+void mce_barrier_enter(struct mce_softirq_barrier *bar, bool nowait)
 {
     int gen;
 
-    if (!mce_broadcast)
+    if ( !mce_broadcast || nowait )
         return;
     atomic_inc(&bar->ingen);
     gen = atomic_read(&bar->outgen);
@@ -34,11 +34,11 @@ void mce_barrier_enter(struct mce_softirq_barrier *bar)
     }
 }
 
-void mce_barrier_exit(struct mce_softirq_barrier *bar)
+void mce_barrier_exit(struct mce_softirq_barrier *bar, bool nowait)
 {
     int gen;
 
-    if ( !mce_broadcast )
+    if ( !mce_broadcast || nowait )
         return;
     atomic_inc(&bar->outgen);
     gen = atomic_read(&bar->ingen);
@@ -54,6 +54,6 @@ void mce_barrier_exit(struct mce_softirq_barrier *bar)
 
 void mce_barrier(struct mce_softirq_barrier *bar)
 {
-    mce_barrier_enter(bar);
-    mce_barrier_exit(bar);
+    mce_barrier_enter(bar, false);
+    mce_barrier_exit(bar, false);
 }
diff --git a/xen/arch/x86/cpu/mcheck/barrier.h b/xen/arch/x86/cpu/mcheck/barrier.h
index d3ccf8b15f..f6b4370945 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.h
+++ b/xen/arch/x86/cpu/mcheck/barrier.h
@@ -32,6 +32,14 @@ void mce_barrier_init(struct mce_softirq_barrier *);
 void mce_barrier_dec(struct mce_softirq_barrier *);
 
 /*
+ * If nowait is true, mce_barrier_enter/exit() will return immediately
+ * without touching the barrier. It's used when handling a LMCE which
+ * is received on only one CPU and thus does not invoke
+ * mce_barrier_enter/exit() calls on all CPUs.
+ *
+ * If nowait is false, mce_barrier_enter/exit() will handle the given
+ * barrier as below.
+ *
  * Increment the generation number and the value. The generation number
  * is incremented when entering a barrier. This way, it can be checked
  * on exit if a CPU is trying to re-enter the barrier. This can happen
@@ -43,8 +51,8 @@ void mce_barrier_dec(struct mce_softirq_barrier *);
  * These barrier functions should always be paired, so that the
  * counter value will reach 0 again after all CPUs have exited.
  */
-void mce_barrier_enter(struct mce_softirq_barrier *);
-void mce_barrier_exit(struct mce_softirq_barrier *);
+void mce_barrier_enter(struct mce_softirq_barrier *, bool nowait);
+void mce_barrier_exit(struct mce_softirq_barrier *, bool nowait);
 
 void mce_barrier(struct mce_softirq_barrier *);
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 54fd000aa0..1e0b03c38b 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -497,15 +497,15 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
     }
     mce_spin_unlock(&mce_logout_lock);
 
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, false);
     if ( mctc != NULL && mce_urgent_action(regs, mctc))
         cpumask_set_cpu(smp_processor_id(), &mce_fatal_cpus);
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, false);
 
     /*
      * Wait until everybody has processed the trap.
      */
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, false);
     if (atomic_read(&severity_cpu) == smp_processor_id())
     {
         /* According to SDM, if no error bank found on any cpus,
@@ -524,16 +524,16 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
         atomic_set(&found_error, 0);
         atomic_set(&severity_cpu, -1);
     }
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, false);
 
     /* Clear flags after above fatal check */
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, false);
     gstatus = mca_rdmsr(MSR_IA32_MCG_STATUS);
     if ((gstatus & MCG_STATUS_MCIP) != 0)
     {
         mce_printk(MCE_CRITICAL, "MCE: Clear MCIP@ last step");
         mca_wrmsr(MSR_IA32_MCG_STATUS, 0);
     }
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, false);
 
     raise_softirq(MACHINE_CHECK_SOFTIRQ);
 }
@@ -1703,7 +1703,7 @@ static void mce_softirq(void)
 
     mce_printk(MCE_VERBOSE, "CPU%d enter softirq\n", cpu);
 
-    mce_barrier_enter(&mce_inside_bar);
+    mce_barrier_enter(&mce_inside_bar, false);
 
     /*
      * Everybody is here. Now let's see who gets to do the
@@ -1716,10 +1716,10 @@ static void mce_softirq(void)
     atomic_set(&severity_cpu, cpu);
 
-    mce_barrier_enter(&mce_severity_bar);
+    mce_barrier_enter(&mce_severity_bar, false);
     if (!mctelem_has_deferred(cpu))
         atomic_set(&severity_cpu, cpu);
-    mce_barrier_exit(&mce_severity_bar);
+    mce_barrier_exit(&mce_severity_bar, false);
 
     /* We choose severity_cpu for further processing */
     if (atomic_read(&severity_cpu) == cpu) {
@@ -1740,7 +1740,7 @@ static void mce_softirq(void)
         }
     }
 
-    mce_barrier_exit(&mce_inside_bar);
+    mce_barrier_exit(&mce_inside_bar, false);
 }
 
 /* Machine Check owner judge algorithm: