From patchwork Mon Aug 21 20:35:27 2017
X-Patchwork-Submitter: Radim Krčmář
X-Patchwork-Id: 9913763
From: Radim Krčmář <rkrcmar@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    linux-mips@linux-mips.org, kvm-ppc@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH RFC v3 6/9] KVM: rework kvm_vcpu_on_spin loop
Date: Mon, 21 Aug 2017 22:35:27 +0200
Message-Id: <20170821203530.9266-7-rkrcmar@redhat.com>
In-Reply-To: <20170821203530.9266-1-rkrcmar@redhat.com>
References: <20170821203530.9266-1-rkrcmar@redhat.com>
Cc: Christoffer Dall, James Hogan, David Hildenbrand, Marc Zyngier,
    Cornelia Huck, Paul Mackerras, Christian Borntraeger, Paolo Bonzini,
    Alexander Graf

The original code managed to obfuscate a straightforward idea: start
iterating from the selected index and reset the index to 0 when reaching
the end of online vcpus, then iterate until reaching the index that we
started at.

The resulting code is a bit better, IMO.  (Still horrible, though.)

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
---
 include/linux/kvm_host.h | 13 +++++++++++++
 virt/kvm/kvm_main.c      | 47 ++++++++++++++++++-----------------------------
 2 files changed, 31 insertions(+), 29 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index abd5cb1feb9e..cfb3c0efdd51 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -498,6 +498,19 @@ static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
 	     (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \
 	     idx++)
 
+#define kvm_for_each_vcpu_from(idx, vcpup, from, kvm) \
+	for (idx = from, vcpup = kvm_get_vcpu(kvm, idx); \
+	     vcpup; \
+	     ({ \
+		idx++; \
+		if (idx >= atomic_read(&kvm->online_vcpus)) \
+			idx = 0; \
+		if (idx == from) \
+			vcpup = NULL; \
+		else \
+			vcpup = kvm_get_vcpu(kvm, idx); \
+	     }))
+
 static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
 {
 	struct kvm_vcpu *vcpu = NULL;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d89261d0d8c6..33a15e176927 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2333,8 +2333,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 	struct kvm_vcpu *vcpu;
 	int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
 	int yielded = 0;
-	int try = 3;
-	int pass;
+	int try = 2;
 	int i;
 
 	kvm_vcpu_set_in_spin_loop(me, true);
@@ -2345,34 +2344,24 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 	 * VCPU is holding the lock that we need and will release it.
 	 * We approximate round-robin by starting at the last boosted VCPU.
 	 */
-	for (pass = 0; pass < 2 && !yielded && try; pass++) {
-		kvm_for_each_vcpu(i, vcpu, kvm) {
-			if (!pass && i <= last_boosted_vcpu) {
-				i = last_boosted_vcpu;
-				continue;
-			} else if (pass && i > last_boosted_vcpu)
-				break;
-			if (!ACCESS_ONCE(vcpu->preempted))
-				continue;
-			if (vcpu == me)
-				continue;
-			if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
-				continue;
-			if (yield_to_kernel_mode && !kvm_arch_vcpu_in_kernel(vcpu))
-				continue;
-			if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
-				continue;
+	kvm_for_each_vcpu_from(i, vcpu, last_boosted_vcpu, kvm) {
+		if (!ACCESS_ONCE(vcpu->preempted))
+			continue;
+		if (vcpu == me)
+			continue;
+		if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
+			continue;
+		if (yield_to_kernel_mode && !kvm_arch_vcpu_in_kernel(vcpu))
+			continue;
+		if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
+			continue;
 
-			yielded = kvm_vcpu_yield_to(vcpu);
-			if (yielded > 0) {
-				kvm->last_boosted_vcpu = i;
-				break;
-			} else if (yielded < 0) {
-				try--;
-				if (!try)
-					break;
-			}
-		}
+		yielded = kvm_vcpu_yield_to(vcpu);
+		if (yielded > 0) {
+			kvm->last_boosted_vcpu = i;
+			break;
+		} else if (yielded < 0 && !try--)
+			break;
 	}
 	kvm_vcpu_set_in_spin_loop(me, false);
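
For reviewers, here is a minimal userspace sketch of the iteration order the
new macro produces (illustration only, not part of the patch; the names
'online' and 'next_idx' and the vcpu count are invented for this example):

/*
 * Mirrors the control flow of kvm_for_each_vcpu_from(): start at 'from',
 * wrap to index 0 at the end of the online range, and stop before visiting
 * 'from' a second time.  Plain C, no KVM internals involved.
 */
#include <stdio.h>

static int online = 6;			/* pretend six vCPUs are online */

static int next_idx(int idx, int from)
{
	idx++;
	if (idx >= online)		/* wrap at the end of the online range */
		idx = 0;
	if (idx == from)		/* back at the start: terminate the walk */
		return -1;
	return idx;
}

int main(void)
{
	int from = 4;			/* plays the role of last_boosted_vcpu */
	int idx;

	/* Prints: 4 5 0 1 2 3 -- one full lap starting at 'from'. */
	for (idx = from; idx >= 0; idx = next_idx(idx, from))
		printf("%d ", idx);
	printf("\n");

	return 0;
}

With from = 4 and six online vcpus it visits 4 5 0 1 2 3 and stops before
revisiting 4, which matches the wrap-around walk that the macro above does
starting from last_boosted_vcpu.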