From patchwork Wed Dec 18 07:48:55 2019
Subject: [Xen-devel] [PATCH 5/9] xen/sched: use scratch cpumask instead of
 allocating it on the stack
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Meng Xu, Dario Faggioli
Date: Wed, 18 Dec 2019 08:48:55 +0100
Message-Id: <20191218074859.21665-6-jgross@suse.com>
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
List-Id: Xen developer discussion <xen-devel@lists.xenproject.org>

In sched_rt there are three instances of cpumasks allocated on the
stack. Replace them by using cpumask_scratch.
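
To make the change concrete, the pattern being replaced and its
substitute look like this in isolation (an illustrative sketch
distilled from the diff below, not extra patch content; online, unit
and locked_cpu are as in rt_res_pick_locked()). The key invariant is
that the scratch mask of a CPU may only be used while that CPU's
schedule lock is held:

    /* Before: a full cpumask_t lives on the stack; with NR_CPUS=4096
     * that is 512 bytes of stack per instance. */
    cpumask_t cpus;

    cpumask_and(&cpus, online, unit->cpu_hard_affinity);

    /* After: borrow the per-CPU scratch mask of a CPU whose schedule
     * lock is held, so no concurrent user can clobber it. */
    cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);

    cpumask_and(cpus, online, unit->cpu_hard_affinity);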
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/sched_rt.c | 56 +++++++++++++++++++++++++++++---------------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 379b56bc2a..264a753116 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
  * and available resources
  */
 static struct sched_resource *
-rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
-    cpumask_t cpus;
+    cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
     cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
-    cpumask_and(&cpus, online, unit->cpu_hard_affinity);
+    cpumask_and(cpus, online, unit->cpu_hard_affinity);
 
-    cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus)
+    cpu = cpumask_test_cpu(sched_unit_master(unit), cpus)
             ? sched_unit_master(unit)
-            : cpumask_cycle(sched_unit_master(unit), &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+            : cpumask_cycle(sched_unit_master(unit), cpus);
+    ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) );
 
     return get_sched_res(cpu);
 }
 
+/*
+ * Pick a valid resource for the unit.
+ * A valid resource of a unit is the intersection of the unit's affinity
+ * and the available resources.
+ */
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+{
+    struct sched_resource *res;
+
+    res = rt_res_pick_locked(unit, unit->res->master_cpu);
+
+    return res;
+}
+
 /*
  * Init/Free related code
  */
@@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     struct rt_unit *svc = rt_unit(unit);
     s_time_t now;
     spinlock_t *lock;
+    unsigned int cpu = smp_processor_id();
 
     BUG_ON( is_idle_unit(unit) );
 
     /* This is safe because unit isn't yet being scheduled */
-    sched_set_res(unit, rt_res_pick(ops, unit));
+    lock = pcpu_schedule_lock_irq(cpu);
+    sched_set_res(unit, rt_res_pick_locked(unit, cpu));
+    pcpu_schedule_unlock_irq(lock, cpu);
 
     lock = unit_schedule_lock_irq(unit);
 
@@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_unit *svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_unit *
-runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
-    cpumask_t cpu_common;
+    cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
     cpumask_t *online;
 
     list_for_each ( iter, runq )
@@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 
         /* mask cpu_hard_affinity & cpupool & mask */
         online = cpupool_domain_master_cpumask(iter_svc->unit->domain);
-        cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity);
-        cpumask_and(&cpu_common, mask, &cpu_common);
-        if ( cpumask_empty(&cpu_common) )
+        cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity);
+        cpumask_and(cpu_common, mask, cpu_common);
+        if ( cpumask_empty(cpu_common) )
             continue;
 
         ASSERT( iter_svc->cur_budget > 0 );
@@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
     }
     else
     {
-        snext = runq_pick(ops, cpumask_of(sched_cpu));
+        snext = runq_pick(ops, cpumask_of(sched_cpu), cur_cpu);
 
         if ( snext == NULL )
             snext = rt_unit(sched_idle_unit(sched_cpu));
@@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
     struct rt_unit *iter_svc;
     struct sched_unit *iter_unit;
    int cpu = 0, cpu_to_tickle = 0;
-    cpumask_t not_tickled;
+    cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
     cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
 
     online = cpupool_domain_master_cpumask(new->unit->domain);
-    cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity);
-    cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
+    cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity);
+    cpumask_andnot(not_tickled, not_tickled, &prv->tickled);
 
     /*
      * 1) If there are any idle CPUs, kick one.
      *    For cache benefit,we first search new->cpu.
      *    The same loop also find the one with lowest priority.
      */
-    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), &not_tickled);
+    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), not_tickled);
     while ( cpu!= nr_cpu_ids )
     {
         iter_unit = curr_on_cpu(cpu);
@@ -1216,8 +1234,8 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
             compare_unit_priority(iter_svc, latest_deadline_unit) < 0 )
             latest_deadline_unit = iter_svc;
 
-        cpumask_clear_cpu(cpu, &not_tickled);
-        cpu = cpumask_cycle(cpu, &not_tickled);
+        cpumask_clear_cpu(cpu, not_tickled);
+        cpu = cpumask_cycle(cpu, not_tickled);
     }
 
     /* 2) candicate has higher priority, kick out lowest priority unit */
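
One consequence worth spelling out (editor's summary of the hunks
above, not part of the commit message): because cpumask_scratch_cpu(cpu)
is only safe to use under cpu's schedule lock, rt_unit_insert() now has
to take and drop that lock around the resource pick, where the old
stack-based rt_res_pick() needed no locking at all:

    unsigned int cpu = smp_processor_id();

    lock = pcpu_schedule_lock_irq(cpu);
    sched_set_res(unit, rt_res_pick_locked(unit, cpu));
    pcpu_schedule_unlock_irq(lock, cpu);

rt_schedule() and runq_tickle() need no such addition: the RTDS
scheduler serializes all pCPUs via a single global lock that serves as
each pCPU's schedule lock, so the current CPU's scratch mask is already
protected when those functions run.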