Subject: Re: [PATCH v4 08/14] KVM: ARM: World-switch implementation
From: Christoffer Dall
To: Marc Zyngier
Cc: Will Deacon, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, Antonios Motakis, Marcelo Tosatti,
    Rusty Russell
Date: Mon, 3 Dec 2012 10:05:45 -0500

On Mon, Dec 3, 2012 at 5:33 AM, Marc Zyngier wrote:
> On 30/11/12 18:49, Christoffer Dall wrote:
>> On Fri, Nov 30, 2012 at 12:14 PM, Will Deacon wrote:
>>> On Fri, Nov 30, 2012 at 04:47:40PM +0000, Christoffer Dall wrote:
>>>> On Fri, Nov 30, 2012 at 10:15 AM, Will Deacon wrote:
>>>>> At this point, VM1 is running and VM0:VCPU1 is running. VM0:VCPU0 is not
>>>>> running because physical CPU0 is handling an interrupt. The problem is
>>>>> that when VCPU0 *is* resumed, it will update the VMID of VM0 and could
>>>>> be scheduled in parallel with VCPU1 but with a different VMID.
>>>>>
>>>>> How do you avoid this in the current code?
>>>>>
>>>> I don't. Nice catch. Please apply your interesting brain to the
>>>> following fix :)
>>>
>>> I'm far too sober to look at your patch right now, but I'll think about it
>>> over the weekend [I can't break it at a quick glance] :)
>>>
>>> In the meantime, can you think about whether the TLB operations need to
>>> run on every CPU please?
>>>
>> They don't; we can invalidate the TLB and the icache using the inner
>> shareability domain. Here's a patch:
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index ad1390f..df1b753 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -146,6 +146,7 @@ struct kvm_one_reg;
>>  int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>>  int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>>  u64 kvm_call_hyp(void *hypfn, ...);
>> +void force_vm_exit(const cpumask_t *mask);
>>
>>  #define KVM_ARCH_WANT_MMU_NOTIFIER
>>  struct kvm;
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index c4f631e..674592e 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -405,9 +405,14 @@ int kvm_arch_vcpu_in_guest_mode(struct kvm_vcpu *v)
>>  	return v->mode == IN_GUEST_MODE;
>>  }
>>
>> -static void reset_vm_context(void *info)
>> +/* Just ensure a guest exit from a particular CPU */
>> +static void exit_vm_noop(void *info)
>>  {
>> -	kvm_call_hyp(__kvm_flush_vm_context);
>> +}
>> +
>> +void force_vm_exit(const cpumask_t *mask)
>> +{
>> +	smp_call_function_many(mask, exit_vm_noop, NULL, true);
>>  }
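For anyone following along, here is roughly how this is meant to close
the race Will described: the VMID rollover path forces every running
VCPU out of the guest before handing out IDs from the new generation. A
rough sketch follows -- the helper and variable names (need_new_vmid_gen,
kvm_vmid_gen, kvm_next_vmid, kvm_vmid_lock) are assumed for illustration
and this is not the exact patch:

/* Illustrative globals -- names assumed, not the exact patch. */
static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
static u32 kvm_next_vmid = 1;
static DEFINE_SPINLOCK(kvm_vmid_lock);

/* The VM's VMID is stale once its generation falls behind the global one. */
static bool need_new_vmid_gen(struct kvm *kvm)
{
	return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
}

static void update_vttbr(struct kvm *kvm)
{
	/* Fast path: this VM's generation still matches the global one. */
	if (!need_new_vmid_gen(kvm))
		return;

	spin_lock(&kvm_vmid_lock);

	/*
	 * Re-check under the lock: another VCPU of this VM may already
	 * have picked up a VMID from the new generation.
	 */
	if (!need_new_vmid_gen(kvm)) {
		spin_unlock(&kvm_vmid_lock);
		return;
	}

	/* Ran out of 8-bit hardware VMIDs: start a new generation. */
	if (unlikely(kvm_next_vmid == 0)) {
		atomic64_inc(&kvm_vmid_gen);
		kvm_next_vmid = 1;

		/*
		 * Kick every running guest out.  Nobody re-enters until
		 * kvm_vmid_lock is dropped, so no VCPU keeps running on a
		 * VMID from the old generation; a single inner-shareable
		 * TLB/icache invalidation then cleans up after all of them.
		 */
		force_vm_exit(cpu_all_mask);
		kvm_call_hyp(__kvm_flush_vm_context);
	}

	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
	kvm->arch.vmid = kvm_next_vmid;
	kvm_next_vmid = (kvm_next_vmid + 1) & 0xff;	/* wrap at 256 VMIDs */

	spin_unlock(&kvm_vmid_lock);
}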
> Care to update the do_nothing() call in emulate.c to use this as well?
>

Yes, that was actually already done:

commit d1ff14417724e9062ec9d585bf3ef0537e12b54d
Author: Christoffer Dall
Date:   Fri Nov 30 13:37:01 2012 -0500

    KVM: ARM: Reuse force_vm_exit from copy_current_insn

    The VMID management code also needs to force VM exits, so reuse that
    functionality from both places.

    Signed-off-by: Christoffer Dall

---

diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
index 60e7ec0..ad743b7 100644
--- a/arch/arm/kvm/emulate.c
+++ b/arch/arm/kvm/emulate.c
@@ -221,11 +221,6 @@ static bool copy_from_guest_va(struct kvm_vcpu *vcpu,
 	return true;
 }

-/* Just ensure we're not running the guest. */
-static void do_nothing(void *info)
-{
-}
-
 /*
  * We have to be very careful copying memory from a running (ie. SMP) guest.
  * Another CPU may remap the page (eg. swap out a userspace text page) as we
@@ -257,8 +252,9 @@ static bool copy_current_insn(struct kvm_vcpu *vcpu, unsigned long *instr)
 	/* Kick out any which are still running. */
 	kvm_for_each_vcpu(i, v, vcpu->kvm) {
 		/* Guest could exit now, making cpu wrong. That's OK. */
-		if (kvm_vcpu_exiting_guest_mode(v) == IN_GUEST_MODE)
-			smp_call_function_single(v->cpu, do_nothing, NULL, 1);
+		if (kvm_vcpu_exiting_guest_mode(v) == IN_GUEST_MODE) {
+			force_vm_exit(get_cpu_mask(v->cpu));
+		}
 	}
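Worth spelling out why an empty IPI is enough in both places:
smp_call_function_many() with wait == true does not return until every
CPU in the mask has taken the IPI and run the (empty) callback, and
taking that interrupt forces a world switch out of the guest. The entry
side then only has to re-check the generation with interrupts disabled
before dropping back into the guest. Roughly, inside the vcpu run loop
(structure assumed for illustration, not the exact code):

	update_vttbr(vcpu->kvm);

	local_irq_disable();
	if (need_new_vmid_gen(vcpu->kvm)) {
		/* Generation went stale under us: retry with a fresh VMID. */
		local_irq_enable();
		continue;
	}
	/* ... world-switch into the guest ... */

Either a racing VCPU sees the new generation at this check, or it is
still inside the guest and gets kicked out by force_vm_exit(); it cannot
re-enter with a stale VMID while kvm_vmid_lock is held.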