From patchwork Mon Dec 3 15:05:45 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1834191
In-Reply-To: <50BC7FE7.6050006@arm.com>
References: <20121110154203.2836.46686.stgit@chazy-air>
 <20121110154306.2836.93473.stgit@chazy-air>
 <20121119145758.GA3205@mudshark.cambridge.arm.com>
 <20121130151500.GC26289@mudshark.cambridge.arm.com>
 <20121130171420.GC619@mudshark.cambridge.arm.com>
 <50BC7FE7.6050006@arm.com>
Date: Mon, 3 Dec 2012 10:05:45 -0500
Subject: Re: [PATCH v4 08/14] KVM: ARM: World-switch implementation
From: Christoffer Dall
To: Marc Zyngier
Cc: Rusty Russell, "kvm@vger.kernel.org", Marcelo Tosatti, Will Deacon,
 Antonios Motakis, "kvmarm@lists.cs.columbia.edu",
 "linux-arm-kernel@lists.infradead.org"

On Mon, Dec 3, 2012 at 5:33 AM, Marc Zyngier wrote:
> On 30/11/12 18:49, Christoffer Dall wrote:
>> On Fri, Nov 30, 2012 at 12:14 PM, Will Deacon wrote:
>>> On Fri, Nov 30, 2012 at 04:47:40PM +0000, Christoffer Dall wrote:
>>>> On Fri, Nov 30, 2012 at 10:15 AM, Will Deacon wrote:
>>>>> At this point, VM1 is running and VM0:VCPU1 is running. VM0:VCPU0 is not
>>>>> running because physical CPU0 is handling an interrupt.
>>>>> The problem is that
>>>>> when VCPU0 *is* resumed, it will update the VMID of VM0 and could be
>>>>> scheduled in parallel with VCPU1 but with a different VMID.
>>>>>
>>>>> How do you avoid this in the current code?
>>>>>
>>>> I don't. Nice catch. Please apply your interesting brain to the
>>>> following fix :)
>>>
>>> I'm far too sober to look at your patch right now, but I'll think about it
>>> over the weekend [I can't break it at a quick glance] :)
>>>
>>> In the meantime, can you think about whether the TLB operations need to run
>>> on every CPU please?
>>>
>> They don't; we can invalidate the TLB and the icache using the inner
>> shareability domain. Here's a patch:
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index ad1390f..df1b753 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -146,6 +146,7 @@ struct kvm_one_reg;
>>  int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>>  int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>>  u64 kvm_call_hyp(void *hypfn, ...);
>> +void force_vm_exit(const cpumask_t *mask);
>>
>>  #define KVM_ARCH_WANT_MMU_NOTIFIER
>>  struct kvm;
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index c4f631e..674592e 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -405,9 +405,14 @@ int kvm_arch_vcpu_in_guest_mode(struct kvm_vcpu *v)
>>  	return v->mode == IN_GUEST_MODE;
>>  }
>>
>> -static void reset_vm_context(void *info)
>> +/* Just ensure a guest exit from a particular CPU */
>> +static void exit_vm_noop(void *info)
>>  {
>> -	kvm_call_hyp(__kvm_flush_vm_context);
>> +}
>> +
>> +void force_vm_exit(const cpumask_t *mask)
>> +{
>> +	smp_call_function_many(mask, exit_vm_noop, NULL, true);
>>  }
>
> Care to update the do_nothing() call in emulate.c to use this as well?
>
yes, was actually already done:

commit d1ff14417724e9062ec9d585bf3ef0537e12b54d
Author: Christoffer Dall
Date:   Fri Nov 30 13:37:01 2012 -0500

    KVM: ARM: Reuse force_vm_exit from copy_current_insn

    The VMID management code also needs to force VM exits, so reuse that
    functionality from both places.

    Signed-off-by: Christoffer Dall

diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
index 60e7ec0..ad743b7 100644
--- a/arch/arm/kvm/emulate.c
+++ b/arch/arm/kvm/emulate.c
@@ -221,11 +221,6 @@ static bool copy_from_guest_va(struct kvm_vcpu *vcpu,
 	return true;
 }

-/* Just ensure we're not running the guest. */
-static void do_nothing(void *info)
-{
-}
-
 /*
  * We have to be very careful copying memory from a running (ie. SMP) guest.
  * Another CPU may remap the page (eg. swap out a userspace text page) as we
@@ -257,8 +252,9 @@ static bool copy_current_insn(struct kvm_vcpu *vcpu, unsigned long *instr)
 	/* Kick out any which are still running. */
 	kvm_for_each_vcpu(i, v, vcpu->kvm) {
 		/* Guest could exit now, making cpu wrong. That's OK. */
-		if (kvm_vcpu_exiting_guest_mode(v) == IN_GUEST_MODE)
-			smp_call_function_single(v->cpu, do_nothing, NULL, 1);
+		if (kvm_vcpu_exiting_guest_mode(v) == IN_GUEST_MODE) {
+			force_vm_exit(get_cpu_mask(v->cpu));
+		}
 	}