From patchwork Fri Apr 25 15:19:52 2014
From: James Hogan <james.hogan@imgtec.com>
To: Paolo Bonzini
Cc: James Hogan, Gleb Natapov, kvm@vger.kernel.org, Ralf Baechle,
	linux-mips@linux-mips.org, Sanjay Lal
Subject: [PATCH 09/21] MIPS: KVM: Fix timer race modifying guest CP0_Cause
Date: Fri, 25 Apr 2014 16:19:52 +0100
Message-ID: <1398439204-26171-10-git-send-email-james.hogan@imgtec.com>
In-Reply-To: <1398439204-26171-1-git-send-email-james.hogan@imgtec.com>
References: <1398439204-26171-1-git-send-email-james.hogan@imgtec.com>
X-Mailer: git-send-email 1.8.1.2

The hrtimer callback for guest timer timeouts sets the guest's
CP0_Cause.TI bit to indicate to the guest that a timer interrupt is
pending. However, there is no mutual exclusion to prevent this from
happening while the guest's CP0_Cause register is being
read-modify-written elsewhere. When the race hits, the concurrent
write-back undoes the setting of CP0_Cause.TI, so the guest misses the
timer interrupt and doesn't reprogram the CP0_Compare register for the
next timeout.

Currently another timer interrupt will be triggered again in another
10ms anyway due to the way timers are emulated, but once the MIPS timer
emulation is fixed this would leave Linux guest time standing still,
with the guest scheduler not invoked until guest CP0_Count has looped
back around, which at 100MHz takes just under 43 seconds.
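To illustrate the lost update (an interleaving sketch, not part of the
patch; "cause" stands for the guest CP0_Cause word and C_TI for the
Cause.TI bit mask):

    vcpu emulation path                  hrtimer callback (hard irq)
    -------------------                  ---------------------------
    old = cause;           /* read */
                                         cause |= C_TI;  /* TI set */
    cause = old | val;     /* write-back clobbers TI: update lost */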
Currently this is the only asynchronous modification of guest
registers, so it is fixed by adjusting the implementations of the
kvm_set_c0_guest_cause(), kvm_clear_c0_guest_cause(), and
kvm_change_c0_guest_cause() macros, which are used for modifying the
guest CP0_Cause register, to use ll/sc to ensure atomic modification.
This should work in both UP and SMP cases without requiring interrupts
to be disabled.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini
Cc: Gleb Natapov
Cc: kvm@vger.kernel.org
Cc: Ralf Baechle
Cc: linux-mips@linux-mips.org
Cc: Sanjay Lal
---
 arch/mips/include/asm/kvm_host.h | 71 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 6 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 3eedcb3015e5..90e1c0005ff4 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -481,15 +481,74 @@ struct kvm_vcpu_arch {
 #define kvm_read_c0_guest_errorepc(cop0) (cop0->reg[MIPS_CP0_ERROR_PC][0])
 #define kvm_write_c0_guest_errorepc(cop0, val) (cop0->reg[MIPS_CP0_ERROR_PC][0] = (val))
 
+/*
+ * Some of the guest registers may be modified asynchronously (e.g. from a
+ * hrtimer callback in hard irq context) and therefore need stronger atomicity
+ * guarantees than other registers.
+ */
+
+static inline void _kvm_atomic_set_c0_guest_reg(unsigned long *reg,
+						unsigned long val)
+{
+	unsigned long temp;
+	do {
+		__asm__ __volatile__(
+		"	.set	mips3				\n"
+		"	" __LL "%0, %1				\n"
+		"	or	%0, %2				\n"
+		"	" __SC	"%0, %1				\n"
+		"	.set	mips0				\n"
+		: "=&r" (temp), "+m" (*reg)
+		: "r" (val));
+	} while (unlikely(!temp));
+}
+
+static inline void _kvm_atomic_clear_c0_guest_reg(unsigned long *reg,
+						  unsigned long val)
+{
+	unsigned long temp;
+	do {
+		__asm__ __volatile__(
+		"	.set	mips3				\n"
+		"	" __LL "%0, %1				\n"
+		"	and	%0, %2				\n"
+		"	" __SC	"%0, %1				\n"
+		"	.set	mips0				\n"
+		: "=&r" (temp), "+m" (*reg)
+		: "r" (~val));
+	} while (unlikely(!temp));
+}
+
+static inline void _kvm_atomic_change_c0_guest_reg(unsigned long *reg,
+						   unsigned long change,
+						   unsigned long val)
+{
+	unsigned long temp;
+	do {
+		__asm__ __volatile__(
+		"	.set	mips3				\n"
+		"	" __LL "%0, %1				\n"
+		"	and	%0, %2				\n"
+		"	or	%0, %3				\n"
+		"	" __SC	"%0, %1				\n"
+		"	.set	mips0				\n"
+		: "=&r" (temp), "+m" (*reg)
+		: "r" (~change), "r" (val & change));
+	} while (unlikely(!temp));
+}
+
 #define kvm_set_c0_guest_status(cop0, val) (cop0->reg[MIPS_CP0_STATUS][0] |= (val))
 #define kvm_clear_c0_guest_status(cop0, val) (cop0->reg[MIPS_CP0_STATUS][0] &= ~(val))
-#define kvm_set_c0_guest_cause(cop0, val) (cop0->reg[MIPS_CP0_CAUSE][0] |= (val))
-#define kvm_clear_c0_guest_cause(cop0, val) (cop0->reg[MIPS_CP0_CAUSE][0] &= ~(val))
+
+/* Cause can be modified asynchronously from hardirq hrtimer callback */
+#define kvm_set_c0_guest_cause(cop0, val) \
+	_kvm_atomic_set_c0_guest_reg(&cop0->reg[MIPS_CP0_CAUSE][0], val)
+#define kvm_clear_c0_guest_cause(cop0, val) \
+	_kvm_atomic_clear_c0_guest_reg(&cop0->reg[MIPS_CP0_CAUSE][0], val)
 #define kvm_change_c0_guest_cause(cop0, change, val) \
-{ \
-	kvm_clear_c0_guest_cause(cop0, change); \
-	kvm_set_c0_guest_cause(cop0, ((val) & (change))); \
-}
+	_kvm_atomic_change_c0_guest_reg(&cop0->reg[MIPS_CP0_CAUSE][0],	\
+					change, val)
+
 #define kvm_set_c0_guest_ebase(cop0, val) (cop0->reg[MIPS_CP0_PRID][1] |= (val))
 #define kvm_clear_c0_guest_ebase(cop0, val) (cop0->reg[MIPS_CP0_PRID][1] &= ~(val))
 #define kvm_change_c0_guest_ebase(cop0, change, val) \
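
For context, a minimal sketch of the asynchronous writer these macros
now protect against, modelled on the MIPS KVM timer interrupt queueing
path of this era (the function name kvm_mips_queue_timer_int_cb and
the C_TI mask are assumptions taken from arch/mips/kvm, shown purely
for illustration, not part of this patch):

    /* hrtimer callback path: runs in hard irq context */
    static void kvm_mips_queue_timer_int_cb(struct kvm_vcpu *vcpu)
    {
    	struct mips_coproc *cop0 = vcpu->arch.cop0;

    	/* With this patch, the OR into guest CP0_Cause is an ll/sc
    	 * read-modify-write, so a concurrent kvm_change_c0_guest_cause()
    	 * on the emulation path can no longer undo the TI bit. */
    	kvm_set_c0_guest_cause(cop0, C_TI);
    }

If either side's sc (store-conditional) loses the race, it fails and
the loop retries with the freshly loaded value, so neither update is
lost, and this holds on UP as well as SMP without masking interrupts.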