From patchwork Mon Jul 31 13:22:55 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9871839
Date: Mon, 31 Jul 2017 15:22:55 +0200
From: Christoffer Dall
To: "Longpeng(Mike)"
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, agraf@suse.com,
    borntraeger@de.ibm.com, cohuck@redhat.com, christoffer.dall@linaro.org,
    marc.zyngier@arm.com, james.hogan@imgtec.com, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, weidong.huang@huawei.com,
    arei.gonglei@huawei.com, wangxinxin.wang@huawei.com,
    longpeng.mike@gmail.com
Subject: Re: [RFC] KVM: optimize the kvm_vcpu_on_spin
Message-ID: <20170731132255.GZ5176@cbox>
References: <1501309377-195256-1-git-send-email-longpeng2@huawei.com>
In-Reply-To: <1501309377-195256-1-git-send-email-longpeng2@huawei.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
List-ID: kvm@vger.kernel.org

On Sat, Jul 29, 2017 at 02:22:57PM
+0800, Longpeng(Mike) wrote:
> We had discussed the idea here:
> https://www.spinics.net/lists/kvm/msg140593.html

This is not a very nice way to start a commit description.  Please
provide the background needed to understand your change directly in the
commit message itself, rather than pointing at an external discussion.

> 
> I think it's also suitable for other architectures.
> 

I think this sentence can go at the end of the commit message, together
with your explanation of why you only implement this for x86.

By the way, the ARM solution should be pretty simple (see the diff at
the end of this mail).

I am also curious about the workload you used to measure this, and how
I can evaluate the benefit on ARM?

Thanks,
-Christoffer

> If the vcpu(me) exits due to requesting a usermode spinlock, then
> the spinlock holder may be preempted in usermode or kernmode.
> But if the vcpu(me) is in kernmode, then the holder must be
> preempted in kernmode, so we should choose a vcpu in kernmode
> as the most eligible candidate.
> 
> PS: I only implement the x86 arch currently as I'm not familiar
> with other architectures.
> 
> Signed-off-by: Longpeng(Mike)
> ---
>  arch/mips/kvm/mips.c       | 5 +++++
>  arch/powerpc/kvm/powerpc.c | 5 +++++
>  arch/s390/kvm/kvm-s390.c   | 5 +++++
>  arch/x86/kvm/x86.c         | 5 +++++
>  include/linux/kvm_host.h   | 4 ++++
>  virt/kvm/arm/arm.c         | 5 +++++
>  virt/kvm/kvm_main.c        | 9 ++++++++-
>  7 files changed, 37 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index d4b2ad1..2e2701d 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -98,6 +98,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
>  	return !!(vcpu->arch.pending_exceptions);
>  }
>  
> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}
> +
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  {
>  	return 1;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 1a75c0b..2489f64 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -58,6 +58,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
>  	return !!(v->arch.pending_exceptions) || kvm_request_pending(v);
>  }
>  
> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}
> +
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  {
>  	return 1;
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 3f2884e..9d7c42e 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2443,6 +2443,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
>  	return kvm_s390_vcpu_has_irq(vcpu, 0);
>  }
>  
> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}
> +
>  void kvm_s390_vcpu_block(struct kvm_vcpu *vcpu)
>  {
>  	atomic_or(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 82a63c5..b5a2e53 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8435,6 +8435,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
>  	return kvm_vcpu_running(vcpu) ||
>  		kvm_vcpu_has_events(vcpu);
>  }
>  
> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu)
> +{
> +	return kvm_x86_ops->get_cpl(vcpu) == 0;
> +}
> +
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>  {
>  	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 648b34c..f8f0d74 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -272,6 +272,9 @@ struct kvm_vcpu {
>  	} spin_loop;
>  #endif
>  	bool preempted;
> +	/* If vcpu is in kernel-mode when preempted */
> +	bool in_kernmode;
> +
>  	struct kvm_vcpu_arch arch;
>  	struct dentry *debugfs_dentry;
>  };
> @@ -797,6 +800,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
>  void kvm_arch_hardware_unsetup(void);
>  void kvm_arch_check_processor_compat(void *rtn);
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu);
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
>  
>  #ifndef __KVM_HAVE_ARCH_VM_ALLOC
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index a39a1e1..ca6a394 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -416,6 +416,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
>  		&& !v->arch.power_off && !v->arch.pause);
>  }
>  
> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}
> +
>  /* Just ensure a guest exit from a particular CPU */
>  static void exit_vm_noop(void *info)
>  {
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 82987d4..8d83caa 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -290,6 +290,7 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
>  	kvm_vcpu_set_in_spin_loop(vcpu, false);
>  	kvm_vcpu_set_dy_eligible(vcpu, false);
>  	vcpu->preempted = false;
> +	vcpu->in_kernmode = false;
>  
>  	r = kvm_arch_vcpu_init(vcpu);
>  	if (r < 0)
> @@ -2330,6 +2331,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>  	int pass;
>  	int i;
>  
> +	me->in_kernmode = kvm_arch_vcpu_spin_kernmode(me);
>  	kvm_vcpu_set_in_spin_loop(me, true);
>  	/*
>  	 * We boost the priority of a VCPU that is runnable but not
> @@ -2351,6 +2353,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>  			continue;
>  		if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
>  			continue;
> +		if (me->in_kernmode && !vcpu->in_kernmode)
> +			continue;
>  		if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
>  			continue;
>  
> @@ -4009,8 +4013,11 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>  {
>  	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
>  
> -	if (current->state == TASK_RUNNING)
> +	if (current->state == TASK_RUNNING) {
>  		vcpu->preempted = true;
> +		vcpu->in_kernmode = kvm_arch_vcpu_spin_kernmode(vcpu);
> +	}
> +
>  	kvm_arch_vcpu_put(vcpu);
>  }
>  
> -- 
> 1.8.3.1
> 

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index a39a1e1..b9f68e4 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -416,6 +416,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
 		&& !v->arch.power_off && !v->arch.pause);
 }
 
+bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
+{
+	return vcpu_mode_priv(vcpu);
+}
+
 /* Just ensure a guest exit from a particular CPU */
 static void exit_vm_noop(void *info)
 {