From patchwork Fri Mar 17 18:19:44 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9631165
From: Dario Faggioli <raistlin.df@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 17 Mar 2017 19:19:44 +0100
Message-ID: <148977478448.22479.13625390869019347980.stgit@Palanthas.fritz.box>
In-Reply-To: <148977465656.22479.5382577625088079334.stgit@Palanthas.fritz.box>
References: <148977465656.22479.5382577625088079334.stgit@Palanthas.fritz.box>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Cc: Juergen Gross, George Dunlap, Jan Beulich
Subject: [Xen-devel] [PATCH v2 2/2] xen: sched: improve robustness (and rename) DOM2OP()

Clarify and enforce (with ASSERTs) when the function is called on the
idle domain, and explain in comments what it means and when it is ok
to do so.

While there, change the name of the function to a more self-explanatory
one, and do the same to VCPU2OP.

Signed-off-by: Dario Faggioli
---
Cc: George Dunlap
Cc: Juergen Gross
Cc: Jan Beulich
---
Changes from v1:
- new patch;
- renamed VCPU2OP, as suggested during v1's review of patch 1.
---
 xen/common/schedule.c | 56 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 19 deletions(-)
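Note for reviewers, for reference only (this is not Xen code and not part of
the patch): below is a minimal, self-contained C sketch of the pattern the
patch introduces. Instead of a bare NULL-check macro that silently maps "no
cpupool" to the default scheduler for any domain, an inline helper asserts
that the idle domain is the only legitimate caller without a cpupool, and
only then falls back. The struct layouts, the is_idle_domain() stub and the
use of plain assert() are simplified stand-ins, not Xen's real definitions.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for Xen's scheduler, cpupool and domain types. */
struct scheduler { const char *name; };
struct cpupool   { struct scheduler *sched; };
struct domain    { struct cpupool *cpupool; int is_idle; };

static struct scheduler boot_scheduler = { "default (boot-time) scheduler" };

static int is_idle_domain(const struct domain *d) { return d->is_idle; }

/*
 * Old style: the macro silently maps "no cpupool" to the default
 * scheduler, for any domain whatsoever.
 */
#define DOM2OP(d) (((d)->cpupool == NULL) ? &boot_scheduler : (d)->cpupool->sched)

/*
 * New style: only the idle domain may legitimately lack a cpupool; any
 * other NULL cpupool trips the assertion instead of being papered over.
 */
static inline struct scheduler *dom_get_scheduler(const struct domain *d)
{
    if ( d->cpupool != NULL )
        return d->cpupool->sched;

    assert(is_idle_domain(d));
    return &boot_scheduler;
}

int main(void)
{
    struct scheduler credit = { "credit" };
    struct cpupool pool0    = { &credit };
    struct domain dom1      = { &pool0, 0 };
    struct domain idle      = { NULL, 1 };

    printf("old macro, dom1 uses: %s\n", DOM2OP(&dom1)->name);
    printf("new helper, dom1 uses: %s\n", dom_get_scheduler(&dom1)->name);
    printf("new helper, idle uses: %s\n", dom_get_scheduler(&idle)->name);

    return 0;
}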
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index d344b7c..fdb8ff4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -77,8 +77,25 @@ static struct scheduler __read_mostly ops;
          (( (opsptr)->fn != NULL ) ? (opsptr)->fn(opsptr, ##__VA_ARGS__ )  \
           : (typeof((opsptr)->fn(opsptr, ##__VA_ARGS__)))0 )
 
-#define DOM2OP(_d) (((_d)->cpupool == NULL) ? &ops : ((_d)->cpupool->sched))
-static inline struct scheduler *VCPU2OP(const struct vcpu *v)
+static inline struct scheduler *dom_get_scheduler(const struct domain *d)
+{
+    if ( likely(d->cpupool != NULL) )
+        return d->cpupool->sched;
+
+    /*
+     * If d->cpupool is NULL, this is the idle domain. This is special
+     * because the idle domain does not really belong to any cpupool, and,
+     * hence, does not really have a scheduler.
+     *
+     * This is (should be!) only called like this for allocating the idle
+     * vCPUs for the first time, during boot, in which case what we want
+     * is the default scheduler that has been chosen at boot.
+     */
+    ASSERT(is_idle_domain(d));
+    return &ops;
+}
+
+static inline struct scheduler *vcpu_get_scheduler(const struct vcpu *v)
 {
     struct domain *d = v->domain;
 
@@ -260,7 +277,8 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     init_timer(&v->poll_timer, poll_timer_fn,
                v, v->processor);
 
-    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
+    v->sched_priv = SCHED_OP(dom_get_scheduler(d), alloc_vdata, v,
+                             d->sched_priv);
     if ( v->sched_priv == NULL )
         return 1;
 
@@ -272,7 +290,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     }
     else
     {
-        SCHED_OP(DOM2OP(d), insert_vcpu, v);
+        SCHED_OP(dom_get_scheduler(d), insert_vcpu, v);
     }
 
     return 0;
@@ -326,7 +344,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     domain_pause(d);
 
-    old_ops = DOM2OP(d);
+    old_ops = dom_get_scheduler(d);
     old_domdata = d->sched_priv;
 
     for_each_vcpu ( d, v )
@@ -389,8 +407,8 @@ void sched_destroy_vcpu(struct vcpu *v)
     kill_timer(&v->poll_timer);
     if ( test_and_clear_bool(v->is_urgent) )
         atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
-    SCHED_OP(VCPU2OP(v), remove_vcpu, v);
-    SCHED_OP(VCPU2OP(v), free_vdata, v->sched_priv);
+    SCHED_OP(vcpu_get_scheduler(v), remove_vcpu, v);
+    SCHED_OP(vcpu_get_scheduler(v), free_vdata, v->sched_priv);
 }
 
 int sched_init_domain(struct domain *d, int poolid)
@@ -404,7 +422,7 @@ int sched_init_domain(struct domain *d, int poolid)
     SCHED_STAT_CRANK(dom_init);
     TRACE_1D(TRC_SCHED_DOM_ADD, d->domain_id);
 
-    return SCHED_OP(DOM2OP(d), init_domain, d);
+    return SCHED_OP(dom_get_scheduler(d), init_domain, d);
 }
 
 void sched_destroy_domain(struct domain *d)
@@ -413,7 +431,7 @@ void sched_destroy_domain(struct domain *d)
     SCHED_STAT_CRANK(dom_destroy);
     TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);
 
-    SCHED_OP(DOM2OP(d), destroy_domain, d);
+    SCHED_OP(dom_get_scheduler(d), destroy_domain, d);
 
     cpupool_rm_domain(d);
 }
@@ -432,7 +450,7 @@ void vcpu_sleep_nosync(struct vcpu *v)
         if ( v->runstate.state == RUNSTATE_runnable )
             vcpu_runstate_change(v, RUNSTATE_offline, NOW());
 
-        SCHED_OP(VCPU2OP(v), sleep, v);
+        SCHED_OP(vcpu_get_scheduler(v), sleep, v);
     }
 
     vcpu_schedule_unlock_irqrestore(lock, flags, v);
@@ -461,7 +479,7 @@ void vcpu_wake(struct vcpu *v)
     {
         if ( v->runstate.state >= RUNSTATE_blocked )
             vcpu_runstate_change(v, RUNSTATE_runnable, NOW());
-        SCHED_OP(VCPU2OP(v), wake, v);
+        SCHED_OP(vcpu_get_scheduler(v), wake, v);
     }
     else if ( !(v->pause_flags & VPF_blocked) )
     {
@@ -516,8 +534,8 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
      * Actual CPU switch to new CPU. This is safe because the lock
      * pointer cant' change while the current lock is held.
      */
-    if ( VCPU2OP(v)->migrate )
-        SCHED_OP(VCPU2OP(v), migrate, v, new_cpu);
+    if ( vcpu_get_scheduler(v)->migrate )
+        SCHED_OP(vcpu_get_scheduler(v), migrate, v, new_cpu);
     else
         v->processor = new_cpu;
 }
@@ -583,7 +601,7 @@ static void vcpu_migrate(struct vcpu *v)
             break;
 
         /* Select a new CPU. */
-        new_cpu = SCHED_OP(VCPU2OP(v), pick_cpu, v);
+        new_cpu = SCHED_OP(vcpu_get_scheduler(v), pick_cpu, v);
         if ( (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
              cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
             break;
@@ -685,7 +703,7 @@ void restore_vcpu_affinity(struct domain *d)
         spin_unlock_irq(lock);;
 
         lock = vcpu_schedule_lock_irq(v);
-        v->processor = SCHED_OP(VCPU2OP(v), pick_cpu, v);
+        v->processor = SCHED_OP(vcpu_get_scheduler(v), pick_cpu, v);
         spin_unlock_irq(lock);
     }
 
@@ -975,7 +993,7 @@ long vcpu_yield(void)
     struct vcpu * v=current;
     spinlock_t *lock = vcpu_schedule_lock_irq(v);
 
-    SCHED_OP(VCPU2OP(v), yield, v);
+    SCHED_OP(vcpu_get_scheduler(v), yield, v);
     vcpu_schedule_unlock_irq(lock, v);
 
     SCHED_STAT_CRANK(vcpu_yield);
@@ -1288,7 +1306,7 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
     if ( ret )
         return ret;
 
-    if ( op->sched_id != DOM2OP(d)->sched_id )
+    if ( op->sched_id != dom_get_scheduler(d)->sched_id )
         return -EINVAL;
 
     switch ( op->cmd )
@@ -1304,7 +1322,7 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
 
     /* NB: the pluggable scheduler code needs to take care
      * of locking by itself. */
-    if ( (ret = SCHED_OP(DOM2OP(d), adjust, d, op)) == 0 )
+    if ( (ret = SCHED_OP(dom_get_scheduler(d), adjust, d, op)) == 0 )
         TRACE_1D(TRC_SCHED_ADJDOM, d->domain_id);
 
     return ret;
@@ -1482,7 +1500,7 @@ void context_saved(struct vcpu *prev)
     /* Check for migration request /after/ clearing running flag. */
     smp_mb();
 
-    SCHED_OP(VCPU2OP(prev), context_saved, prev);
+    SCHED_OP(vcpu_get_scheduler(prev), context_saved, prev);
 
     if ( unlikely(prev->pause_flags & VPF_migrating) )
         vcpu_migrate(prev);
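
A closing, illustrative aside (again not part of the patch, and using made-up
names): vcpu_move_locked() can test ->migrate directly before SCHED_OP()
because scheduler hooks are optional, and SCHED_OP() only invokes a hook the
scheduler actually implements. Below is a standalone toy model of that
optional-hook dispatch; the simplified macro skips the typeof() trick the
real SCHED_OP() uses to also cope with void hooks.

#include <stddef.h>
#include <stdio.h>

/* Toy scheduler interface: hooks may be left NULL when not implemented. */
struct toy_sched_ops {
    const char *name;
    int  (*pick_cpu)(int vcpu_id);          /* hypothetical hook */
    void (*migrate)(int vcpu_id, int cpu);  /* hypothetical hook */
};

/* Call a hook only if the scheduler provides one, otherwise yield 0. */
#define TOY_SCHED_OP(ops, fn, ...) \
    (((ops)->fn != NULL) ? (ops)->fn(__VA_ARGS__) : 0)

static int rr_pick_cpu(int vcpu_id) { return vcpu_id % 4; }

int main(void)
{
    /* A scheduler that implements pick_cpu but not migrate. */
    struct toy_sched_ops rr = { "round-robin", rr_pick_cpu, NULL };
    int cpu = TOY_SCHED_OP(&rr, pick_cpu, 7);

    printf("picked cpu %d\n", cpu);

    /* No migrate hook: fall back to a plain assignment, just like
     * vcpu_move_locked() does when ->migrate is NULL. */
    if ( rr.migrate )
        rr.migrate(7, cpu);
    else
        printf("no migrate hook, assigning cpu %d directly\n", cpu);

    return 0;
}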