From patchwork Fri Mar 29 15:08:56 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 10877269
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Date: Fri, 29 Mar 2019 16:08:56 +0100
Message-Id: <20190329150934.17694-12-jgross@suse.com>
In-Reply-To: <20190329150934.17694-1-jgross@suse.com>
References: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 11/49] xen/sched: build a linked list of struct sched_item

In order to make it easy to iterate over the sched_item elements of a
domain, build a singly linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list, as it
is modified only via vcpu_create() or vcpu_destroy().

For completeness, add another iterator, for_each_sched_item_vcpu(),
which iterates over all vcpus of a sched_item (right now only one).
This will be needed later for larger scheduling granularity (e.g.
cores).
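For illustration only (the helper below is not part of this patch; the
function name and printk text are made up for the example), the two new
iterators are meant to nest like this:

    /* Illustrative sketch: walk every sched_item of a domain and the
     * vcpu(s) backing it, using the iterators introduced below. */
    static void dump_domain_sched_items(const struct domain *d)
    {
        struct sched_item *item;
        struct vcpu *v;

        for_each_sched_item(d, item)
            for_each_sched_item_vcpu(item, v)
                printk("d%dv%d uses sched_item %p\n",
                       d->domain_id, v->vcpu_id, item);
    }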
Signed-off-by: Juergen Gross
---
 xen/common/schedule.c      | 56 ++++++++++++++++++++++++++++++++++++++++------
 xen/include/xen/sched-if.h |  8 +++++++
 xen/include/xen/sched.h    |  1 +
 3 files changed, 58 insertions(+), 7 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 819a78b646..e9d91d29cc 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -253,6 +253,52 @@ static void sched_spin_unlock_double(spinlock_t *lock1, spinlock_t *lock2,
     spin_unlock_irqrestore(lock1, flags);
 }
 
+static void sched_free_item(struct sched_item *item)
+{
+    struct sched_item *prev_item;
+    struct domain *d = item->vcpu->domain;
+
+    if ( d->sched_item_list == item )
+        d->sched_item_list = item->next_in_list;
+    else
+    {
+        for_each_sched_item(d, prev_item)
+        {
+            if ( prev_item->next_in_list == item )
+            {
+                prev_item->next_in_list = item->next_in_list;
+                break;
+            }
+        }
+    }
+
+    item->vcpu->sched_item = NULL;
+    xfree(item);
+}
+
+static struct sched_item *sched_alloc_item(struct vcpu *v)
+{
+    struct sched_item *item, **prev_item;
+    struct domain *d = v->domain;
+
+    if ( (item = xzalloc(struct sched_item)) == NULL )
+        return NULL;
+
+    v->sched_item = item;
+    item->vcpu = v;
+
+    for ( prev_item = &d->sched_item_list; *prev_item;
+          prev_item = &(*prev_item)->next_in_list )
+        if ( (*prev_item)->next_in_list &&
+             (*prev_item)->next_in_list->vcpu->vcpu_id > v->vcpu_id )
+            break;
+
+    item->next_in_list = *prev_item;
+    *prev_item = item;
+
+    return item;
+}
+
 int sched_init_vcpu(struct vcpu *v, unsigned int processor)
 {
     struct domain *d = v->domain;
@@ -260,10 +306,8 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
 
     v->processor = processor;
 
-    if ( (item = xzalloc(struct sched_item)) == NULL )
+    if ( (item = sched_alloc_item(v)) == NULL )
         return 1;
-    v->sched_item = item;
-    item->vcpu = v;
 
     /* Initialise the per-vcpu timers. */
     init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
@@ -276,8 +320,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     item->priv = SCHED_OP(dom_scheduler(d), alloc_vdata, item, d->sched_priv);
     if ( item->priv == NULL )
     {
-        v->sched_item = NULL;
-        xfree(item);
+        sched_free_item(item);
         return 1;
     }
 
@@ -420,8 +463,7 @@ void sched_destroy_vcpu(struct vcpu *v)
         atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
     SCHED_OP(vcpu_scheduler(v), remove_item, item);
     SCHED_OP(vcpu_scheduler(v), free_vdata, item->priv);
-    xfree(item);
-    v->sched_item = NULL;
+    sched_free_item(item);
 }
 
 int sched_init_domain(struct domain *d, int poolid)
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 1fe87a73b4..4caade5b8b 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -51,8 +51,16 @@ DECLARE_PER_CPU(struct cpupool *, cpupool);
 struct sched_item {
     struct vcpu           *vcpu;
     void                  *priv;      /* scheduler private data */
+    struct sched_item     *next_in_list;
 };
 
+#define for_each_sched_item(d, e)                                         \
+    for ( (e) = (d)->sched_item_list; (e) != NULL; (e) = (e)->next_in_list )
+
+#define for_each_sched_item_vcpu(i, v)                                    \
+    for ( (v) = (i)->vcpu; (v) != NULL && (v)->sched_item == (i);         \
+          (v) = (v)->next_in_list )
+
 /*
  * Scratch space, for avoiding having too many cpumask_t on the stack.
  * Within each scheduler, when using the scratch mask of one pCPU:
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 6acdc0f5be..2e9ced29a8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -334,6 +334,7 @@ struct domain
 
     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */
+    struct sched_item *sched_item_list;
     struct cpupool  *cpupool;
 
     struct domain   *next_in_list;