From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Fri, 29 Mar 2019 16:09:22 +0100
Message-Id: <20190329150934.17694-38-jgross@suse.com>
In-Reply-To: <20190329150934.17694-1-jgross@suse.com>
References: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 37/49] xen/sched: Change vcpu_migrate_*() to operate on schedule item

Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only to
ensure a vcpu is running on a suitable processor, they can be switched to
operate on schedule items instead of vcpus. While doing that, rename them
accordingly and make the _start() variant static. vcpu_move_locked() is
switched to operate on a schedule item, too.
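For illustration, the resulting calling protocol (as documented in the
comment above sched_item_migrate_start() in this patch) is roughly the
sketch below; the wrapper function is purely hypothetical, only the
sched_item_* helpers and the item_schedule_lock/unlock functions are the
ones touched here:

    /* Hypothetical in-file caller, shown only to illustrate the protocol. */
    static void example_request_migration(struct sched_item *item)
    {
        spinlock_t *lock;

        /* Flag all vcpus of the item as migrating while holding the lock. */
        lock = item_schedule_lock_irq(item);
        sched_item_migrate_start(item);
        item_schedule_unlock_irq(lock, item);

        /*
         * Do the move now if possible; if the item is still running,
         * context_saved() will call sched_item_migrate_finish() later.
         */
        sched_item_migrate_finish(item);
    }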
Signed-off-by: Juergen Gross
---
 xen/common/schedule.c | 107 +++++++++++++++++++++++++++++---------------------
 1 file changed, 62 insertions(+), 45 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 7c7735bf33..22e43d88cc 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -687,38 +687,43 @@ void vcpu_unblock(struct vcpu *v)
 }
 
 /*
- * Do the actual movement of a vcpu from old to new CPU. Locks for *both*
+ * Do the actual movement of an item from old to new CPU. Locks for *both*
  * CPUs needs to have been taken already when calling this!
  */
-static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
+static void sched_item_move_locked(struct sched_item *item,
+                                   unsigned int new_cpu)
 {
-    unsigned int old_cpu = v->processor;
+    unsigned int old_cpu = item->res->processor;
+    struct vcpu *v;
 
     /*
      * Transfer urgency status to new CPU before switching CPUs, as
      * once the switch occurs, v->is_urgent is no longer protected by
      * the per-CPU scheduler lock we are holding.
      */
-    if ( unlikely(v->is_urgent) && (old_cpu != new_cpu) )
+    for_each_sched_item_vcpu ( item, v )
     {
-        atomic_inc(&per_cpu(sched_res, new_cpu)->urgent_count);
-        atomic_dec(&per_cpu(sched_res, old_cpu)->urgent_count);
+        if ( unlikely(v->is_urgent) && (old_cpu != new_cpu) )
+        {
+            atomic_inc(&per_cpu(sched_res, new_cpu)->urgent_count);
+            atomic_dec(&per_cpu(sched_res, old_cpu)->urgent_count);
+        }
     }
 
     /*
      * Actual CPU switch to new CPU. This is safe because the lock
-     * pointer cant' change while the current lock is held.
+     * pointer can't change while the current lock is held.
      */
-    if ( vcpu_scheduler(v)->migrate )
-        SCHED_OP(vcpu_scheduler(v), migrate, v->sched_item, new_cpu);
+    if ( vcpu_scheduler(item->vcpu)->migrate )
+        SCHED_OP(vcpu_scheduler(item->vcpu), migrate, item, new_cpu);
     else
-        sched_set_res(v->sched_item, per_cpu(sched_res, new_cpu));
+        sched_set_res(item, per_cpu(sched_res, new_cpu));
 }
 
 /*
  * Initiating migration
  *
- * In order to migrate, we need the vcpu in question to have stopped
+ * In order to migrate, we need the item in question to have stopped
  * running and had SCHED_OP(sleep) called (to take it off any
  * runqueues, for instance); and if it is currently running, it needs
  * to be scheduled out. Finally, we need to hold the scheduling locks
@@ -734,36 +739,45 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
  * should be called like this:
  *
  *     lock = item_schedule_lock_irq(item);
- *     vcpu_migrate_start(v);
+ *     sched_item_migrate_start(item);
  *     item_schedule_unlock_irq(lock, item)
- *     vcpu_migrate_finish(v);
+ *     sched_item_migrate_finish(item);
  *
- * vcpu_migrate_finish() will do the work now if it can, or simply
- * return if it can't (because v is still running); in that case
- * vcpu_migrate_finish() will be called by context_saved().
+ * sched_item_migrate_finish() will do the work now if it can, or simply
+ * return if it can't (because item is still running); in that case
+ * sched_item_migrate_finish() will be called by context_saved().
  */
-void vcpu_migrate_start(struct vcpu *v)
+static void sched_item_migrate_start(struct sched_item *item)
 {
-    set_bit(_VPF_migrating, &v->pause_flags);
-    vcpu_sleep_nosync_locked(v);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu ( item, v )
+    {
+        set_bit(_VPF_migrating, &v->pause_flags);
+        vcpu_sleep_nosync_locked(v);
+    }
 }
 
-static void vcpu_migrate_finish(struct vcpu *v)
+static void sched_item_migrate_finish(struct sched_item *item)
 {
     unsigned long flags;
     unsigned int old_cpu, new_cpu;
     spinlock_t *old_lock, *new_lock;
     bool_t pick_called = 0;
+    struct vcpu *v;
 
     /*
-     * If the vcpu is currently running, this will be handled by
+     * If the item is currently running, this will be handled by
      * context_saved(); and in any case, if the bit is cleared, then
      * someone else has already done the work so we don't need to.
      */
-    if ( vcpu_running(v) || !test_bit(_VPF_migrating, &v->pause_flags) )
-        return;
+    for_each_sched_item_vcpu ( item, v )
+    {
+        if ( vcpu_running(v) || !test_bit(_VPF_migrating, &v->pause_flags) )
+            return;
+    }
 
-    old_cpu = new_cpu = v->processor;
+    old_cpu = new_cpu = item->res->processor;
     for ( ; ; )
     {
         /*
@@ -776,7 +790,7 @@ static void vcpu_migrate_finish(struct vcpu *v)
 
         sched_spin_lock_double(old_lock, new_lock, &flags);
 
-        old_cpu = v->processor;
+        old_cpu = item->res->processor;
         if ( old_lock == per_cpu(sched_res, old_cpu)->schedule_lock )
         {
             /*
@@ -785,15 +799,15 @@ static void vcpu_migrate_finish(struct vcpu *v)
              */
             if ( pick_called &&
                  (new_lock == per_cpu(sched_res, new_cpu)->schedule_lock) &&
-                 cpumask_test_cpu(new_cpu, v->sched_item->cpu_hard_affinity) &&
-                 cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
+                 cpumask_test_cpu(new_cpu, item->cpu_hard_affinity) &&
+                 cpumask_test_cpu(new_cpu, item->domain->cpupool->cpu_valid) )
                 break;
 
             /* Select a new CPU. */
-            new_cpu = SCHED_OP(vcpu_scheduler(v), pick_resource,
-                               v->sched_item)->processor;
+            new_cpu = SCHED_OP(vcpu_scheduler(item->vcpu), pick_resource,
+                               item)->processor;
             if ( (new_lock == per_cpu(sched_res, new_cpu)->schedule_lock) &&
-                 cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
+                 cpumask_test_cpu(new_cpu, item->domain->cpupool->cpu_valid) )
                 break;
             pick_called = 1;
         }
@@ -814,22 +828,26 @@ static void vcpu_migrate_finish(struct vcpu *v)
      * because they both happen in (different) spinlock regions, and those
      * regions are strictly serialised.
      */
-    if ( vcpu_running(v) ||
-         !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
+    for_each_sched_item_vcpu ( item, v )
     {
-        sched_spin_unlock_double(old_lock, new_lock, flags);
-        return;
+        if ( vcpu_running(v) ||
+             !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
+        {
+            sched_spin_unlock_double(old_lock, new_lock, flags);
+            return;
+        }
     }
 
-    vcpu_move_locked(v, new_cpu);
+    sched_item_move_locked(item, new_cpu);
 
     sched_spin_unlock_double(old_lock, new_lock, flags);
 
     if ( old_cpu != new_cpu )
-        sched_move_irqs(v->sched_item);
+        sched_move_irqs(item);
 
     /* Wake on new CPU. */
-    vcpu_wake(v);
+    for_each_sched_item_vcpu ( item, v )
+        vcpu_wake(v);
 }
 
 /*
@@ -970,10 +988,9 @@ int cpu_disable_scheduler(unsigned int cpu)
              *  * the scheduler will always find a suitable solution, or
              *    things would have failed before getting in here.
              */
-            vcpu_migrate_start(item->vcpu);
+            sched_item_migrate_start(item);
             item_schedule_unlock_irqrestore(lock, flags, item);
-
-            vcpu_migrate_finish(item->vcpu);
+            sched_item_migrate_finish(item);
 
             /*
              * The only caveat, in this case, is that if a vcpu active in
@@ -1064,14 +1081,14 @@ static int vcpu_set_affinity(
             ASSERT(which == item->cpu_soft_affinity);
             sched_set_affinity(v, NULL, affinity);
         }
-        vcpu_migrate_start(v);
+        sched_item_migrate_start(item);
     }
 
     item_schedule_unlock_irq(lock, item);
 
     domain_update_node_affinity(v->domain);
 
-    vcpu_migrate_finish(v);
+    sched_item_migrate_finish(item);
 
     return ret;
 }
@@ -1318,13 +1335,13 @@ int vcpu_pin_override(struct vcpu *v, int cpu)
     }
 
     if ( ret == 0 )
-        vcpu_migrate_start(v);
+        sched_item_migrate_start(item);
 
     item_schedule_unlock_irq(lock, item);
 
     domain_update_node_affinity(v->domain);
 
-    vcpu_migrate_finish(v);
+    sched_item_migrate_finish(item);
 
     return ret;
 }
@@ -1709,7 +1726,7 @@ void context_saved(struct vcpu *prev)
 
     SCHED_OP(vcpu_scheduler(prev), context_saved, prev->sched_item);
 
-    vcpu_migrate_finish(prev);
+    sched_item_migrate_finish(prev->sched_item);
 }
 
 /* The scheduler timer: force a run through the scheduler */