From patchwork Tue May 7 03:32:31 2013
X-Patchwork-Submitter: Scott Wood
X-Patchwork-Id: 2530571
From: Scott Wood
To: Alexander Graf
CC: Scott Wood, Mihai Caraman, Benjamin Herrenschmidt, Tiejun Chen
Subject: [PATCH] kvm/ppc: interrupt disabling fixes
Date: Mon, 6 May 2013 22:32:31 -0500
Message-ID: <1367897551-8382-1-git-send-email-scottwood@freescale.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

booke64 was not maintaining consistent lazy EE state when exiting the
guest, leading to warnings and worse.  booke32 was less affected due to
the absence of lazy EE, but it was still feeding bad information into
trace_hardirqs_off/on -- we don't want guest execution to be seen as an
"IRQs off" interval.  book3s_pr also has this problem.

book3s_pr and booke both used kvmppc_lazy_ee_enable() without
hard-disabling EE first, which could lead to races when irq_happened is
cleared, or if an interrupt happens after kvmppc_lazy_ee_enable(), and
possibly other issues.

Now, on book3s_pr and booke, always hard-disable interrupts before
kvmppc_prepare_to_enter(), but leave them soft-enabled.  On book3s, this
should result in the right lazy EE state when the asm code hard-enables
on an exit.  On booke, we call hard_irq_disable() rather than
hard-enable immediately.

Signed-off-by: Scott Wood
Cc: Mihai Caraman
Cc: Benjamin Herrenschmidt
Cc: Tiejun Chen
---
Only tested on booke (32 and 64 bit).  Testers of book3s_pr would be
appreciated (particularly with lockdep enabled).
---
 arch/powerpc/include/asm/kvm_ppc.h |    7 +++++++
 arch/powerpc/kvm/book3s_pr.c       |    6 ++++--
 arch/powerpc/kvm/booke.c           |   12 ++++++++++--
 3 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index a5287fe..e55d7e5 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -399,6 +399,13 @@ static inline void kvmppc_mmu_flush_icache(pfn_t pfn)
 static inline void kvmppc_lazy_ee_enable(void)
 {
 #ifdef CONFIG_PPC64
+	/*
+	 * To avoid races, the caller must have gone directly from having
+	 * interrupts fully-enabled to hard-disabled.
+	 */
+	WARN_ON(local_paca->irq_happened != PACA_IRQ_HARD_DIS);
+	trace_hardirqs_on();
+
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
 	local_paca->soft_enabled = 1;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d09baf1..a1e70113 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -884,7 +884,8 @@ program_interrupt:
 	 * and if we really did time things so badly, then we just exit
 	 * again due to a host external interrupt.
 	 */
-	local_irq_disable();
+	hard_irq_disable();
+	trace_hardirqs_off();
 	s = kvmppc_prepare_to_enter(vcpu);
 	if (s <= 0) {
 		local_irq_enable();
@@ -1121,7 +1122,8 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	 * really did time things so badly, then we just exit again due to
 	 * a host external interrupt.
 	 */
-	local_irq_disable();
+	hard_irq_disable();
+	trace_hardirqs_off();
 	ret = kvmppc_prepare_to_enter(vcpu);
 	if (ret <= 0) {
 		local_irq_enable();
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index ecbe908..5dc1f53 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -666,7 +666,8 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 		return -EINVAL;
 	}
 
-	local_irq_disable();
+	hard_irq_disable();
+	trace_hardirqs_off();
 	s = kvmppc_prepare_to_enter(vcpu);
 	if (s <= 0) {
 		local_irq_enable();
@@ -834,6 +835,12 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	int s;
 	int idx;
 
+#ifdef CONFIG_PPC64
+	WARN_ON(local_paca->irq_happened != 0);
+#endif
+	hard_irq_disable();
+	trace_hardirqs_off();
+
 	/* update before a new last_exit_type is rewritten */
 	kvmppc_update_timing_stats(vcpu);
 
@@ -1150,7 +1157,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * aren't already exiting to userspace for some other reason.
 	 */
 	if (!(r & RESUME_HOST)) {
-		local_irq_disable();
+		hard_irq_disable();
+		trace_hardirqs_off();
 		s = kvmppc_prepare_to_enter(vcpu);
 		if (s <= 0) {
 			local_irq_enable();