From patchwork Mon Jul  3 03:46:17 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Haozhong Zhang
X-Patchwork-Id: 9821903
From: Haozhong Zhang <haozhong.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Jul 2017 11:46:17 +0800
Message-Id: <20170703034626.9429-3-haozhong.zhang@intel.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170703034626.9429-1-haozhong.zhang@intel.com>
References: <20170703034626.9429-1-haozhong.zhang@intel.com>
Cc: Haozhong Zhang, Jan Beulich, Andrew Cooper
Subject: [Xen-devel] [PATCH v5 02/11] xen/mce: allow mce_barrier_{enter,exit} to return without waiting
Add a 'wait' argument to mce_barrier_{enter,exit}() to specify whether
the barrier functions should return immediately without waiting for
mce_barrier_{enter,exit}() calls on other CPUs. This is useful when
handling LMCE, where mce_barrier_{enter,exit}() are called on only one
CPU.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
Changes in v5:
 * Invert parameter "nowait" to "wait".
 * Let callers pass in "mce_broadcast" explicitly.

Cc: Jan Beulich
Cc: Andrew Cooper
---
 xen/arch/x86/cpu/mcheck/barrier.c | 12 ++++++------
 xen/arch/x86/cpu/mcheck/barrier.h | 14 ++++++++++++--
 xen/arch/x86/cpu/mcheck/mce.c     | 20 ++++++++++----------
 3 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/barrier.c b/xen/arch/x86/cpu/mcheck/barrier.c
index 5dce1fb9b9..7de8e45e8c 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.c
+++ b/xen/arch/x86/cpu/mcheck/barrier.c
@@ -16,11 +16,11 @@ void mce_barrier_dec(struct mce_softirq_barrier *bar)
     atomic_dec(&bar->val);
 }
 
-void mce_barrier_enter(struct mce_softirq_barrier *bar)
+void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
 {
     int gen;
 
-    if (!mce_broadcast)
+    if ( !wait )
         return;
     atomic_inc(&bar->ingen);
     gen = atomic_read(&bar->outgen);
@@ -34,11 +34,11 @@ void mce_barrier_enter(struct mce_softirq_barrier *bar)
     }
 }
 
-void mce_barrier_exit(struct mce_softirq_barrier *bar)
+void mce_barrier_exit(struct mce_softirq_barrier *bar, bool wait)
 {
     int gen;
 
-    if ( !mce_broadcast )
+    if ( !wait )
         return;
     atomic_inc(&bar->outgen);
     gen = atomic_read(&bar->ingen);
@@ -54,6 +54,6 @@ void mce_barrier_exit(struct mce_softirq_barrier *bar)
 
 void mce_barrier(struct mce_softirq_barrier *bar)
 {
-    mce_barrier_enter(bar);
-    mce_barrier_exit(bar);
+    mce_barrier_enter(bar, mce_broadcast);
+    mce_barrier_exit(bar, mce_broadcast);
 }
diff --git a/xen/arch/x86/cpu/mcheck/barrier.h b/xen/arch/x86/cpu/mcheck/barrier.h
index d3ccf8b15f..c4d52b6192 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.h
+++ b/xen/arch/x86/cpu/mcheck/barrier.h
@@ -32,6 +32,16 @@ void mce_barrier_init(struct mce_softirq_barrier *);
 void mce_barrier_dec(struct mce_softirq_barrier *);
 
 /*
+ * If @wait is false, mce_barrier_enter/exit() will return immediately
+ * without touching the barrier.  It's used when handling a
+ * non-broadcasting MCE (e.g. MCE on some old Intel CPU, MCE on AMD
+ * CPU and LMCE on Intel Skylake-server CPU) which is received on only
+ * one CPU and thus does not invoke mce_barrier_enter/exit() calls on
+ * all CPUs.
+ *
+ * If @wait is true, mce_barrier_enter/exit() will handle the given
+ * barrier as below.
+ *
  * Increment the generation number and the value. The generation number
  * is incremented when entering a barrier. This way, it can be checked
  * on exit if a CPU is trying to re-enter the barrier. This can happen
@@ -43,8 +53,8 @@ void mce_barrier_dec(struct mce_softirq_barrier *);
  * These barrier functions should always be paired, so that the
  * counter value will reach 0 again after all CPUs have exited.
  */
-void mce_barrier_enter(struct mce_softirq_barrier *);
-void mce_barrier_exit(struct mce_softirq_barrier *);
+void mce_barrier_enter(struct mce_softirq_barrier *, bool wait);
+void mce_barrier_exit(struct mce_softirq_barrier *, bool wait);
 
 void mce_barrier(struct mce_softirq_barrier *);
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 54fd000aa0..d247d6e198 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -497,15 +497,15 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
     }
     mce_spin_unlock(&mce_logout_lock);
 
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, mce_broadcast);
     if ( mctc != NULL && mce_urgent_action(regs, mctc))
         cpumask_set_cpu(smp_processor_id(), &mce_fatal_cpus);
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, mce_broadcast);
 
     /*
      * Wait until everybody has processed the trap.
      */
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, mce_broadcast);
     if (atomic_read(&severity_cpu) == smp_processor_id())
     {
         /* According to SDM, if no error bank found on any cpus,
@@ -524,16 +524,16 @@ void mcheck_cmn_handler(const struct cpu_user_regs *regs)
         atomic_set(&found_error, 0);
         atomic_set(&severity_cpu, -1);
     }
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, mce_broadcast);
 
     /* Clear flags after above fatal check */
-    mce_barrier_enter(&mce_trap_bar);
+    mce_barrier_enter(&mce_trap_bar, mce_broadcast);
     gstatus = mca_rdmsr(MSR_IA32_MCG_STATUS);
     if ((gstatus & MCG_STATUS_MCIP) != 0)
     {
         mce_printk(MCE_CRITICAL, "MCE: Clear MCIP@ last step");
         mca_wrmsr(MSR_IA32_MCG_STATUS, 0);
     }
-    mce_barrier_exit(&mce_trap_bar);
+    mce_barrier_exit(&mce_trap_bar, mce_broadcast);
     raise_softirq(MACHINE_CHECK_SOFTIRQ);
 }
 
@@ -1703,7 +1703,7 @@ static void mce_softirq(void)
 
     mce_printk(MCE_VERBOSE, "CPU%d enter softirq\n", cpu);
 
-    mce_barrier_enter(&mce_inside_bar);
+    mce_barrier_enter(&mce_inside_bar, mce_broadcast);
 
     /*
      * Everybody is here. Now let's see who gets to do the
@@ -1716,10 +1716,10 @@ static void mce_softirq(void)
     atomic_set(&severity_cpu, cpu);
 
-    mce_barrier_enter(&mce_severity_bar);
+    mce_barrier_enter(&mce_severity_bar, mce_broadcast);
     if (!mctelem_has_deferred(cpu))
         atomic_set(&severity_cpu, cpu);
-    mce_barrier_exit(&mce_severity_bar);
+    mce_barrier_exit(&mce_severity_bar, mce_broadcast);
 
     /* We choose severity_cpu for further processing */
     if (atomic_read(&severity_cpu) == cpu) {
@@ -1740,7 +1740,7 @@ static void mce_softirq(void)
         }
     }
 
-    mce_barrier_exit(&mce_inside_bar);
+    mce_barrier_exit(&mce_inside_bar, mce_broadcast);
 }
 
 /* Machine Check owner judge algorithm: