From patchwork Fri Sep 27 07:00:45 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:45 +0200
Message-Id: <20190927070050.12405-42-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 41/46] xen/sched: protect scheduling resource via rcu

In order to be able to move cpus to cpupools with core scheduling active,
it is mandatory either to merge multiple cpus into one scheduling resource
or to split a scheduling resource with multiple cpus in it into several
scheduling resources. This in turn requires modifying the cpu <-> scheduling
resource relation. In order to be able to free unused resources, protect
struct sched_resource via RCU. This ensures there are no users left when
freeing such a resource.
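[Editorial illustration, not part of the patch.] The protection scheme described above is the standard RCU pattern: readers bracket every use of a scheduling resource with an RCU read-side critical section, while the writer first unpublishes the pointer and frees the memory only after a grace period has elapsed. The sketch below models only that reader-side discipline; it assumes liburcu's classic urcu.h API in place of Xen's rcupdate.h, collapses the per-cpu sched_res pointer into a single global, and the reader() thread, the struct layout and the build command are made up for the example.

/* Illustrative sketch only -- build with something like: cc rcu_reader.c -lurcu -lpthread */
#include <pthread.h>
#include <stdlib.h>
#include <urcu.h>                         /* liburcu: rcu_read_lock(), rcu_dereference(), ... */

struct sched_resource {                   /* stand-in for Xen's struct sched_resource */
    unsigned int master_cpu;
};

static struct sched_resource *sched_res;  /* stand-in for the per-cpu sched_res pointer */

static void *reader(void *arg)
{
    (void)arg;
    rcu_register_thread();                /* liburcu reader threads must register */

    for ( int i = 0; i < 100000; i++ )
    {
        struct sched_resource *sr;

        rcu_read_lock();                  /* cf. rcu_read_lock(&sched_res_rculock) */
        sr = rcu_dereference(sched_res);  /* cf. get_sched_res(cpu) */
        if ( sr )
            (void)sr->master_cpu;         /* safe: sr cannot be freed inside this section */
        rcu_read_unlock();
    }

    rcu_unregister_thread();
    return NULL;
}

int main(void)
{
    pthread_t t;
    struct sched_resource *sr = calloc(1, sizeof(*sr));

    rcu_register_thread();
    rcu_assign_pointer(sched_res, sr);    /* publish, cf. set_sched_res(cpu, sr) */
    pthread_create(&t, NULL, reader, NULL);

    rcu_assign_pointer(sched_res, NULL);  /* unpublish ... */
    synchronize_rcu();                    /* ... wait until no reader can still see sr ... */
    free(sr);                             /* ... only then release it */

    pthread_join(t, NULL);
    rcu_unregister_thread();
    return 0;
}

The hunks below apply exactly this bracketing around the get_sched_res() users in cpupool.c and schedule.c.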
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V1: new patch
---
 xen/common/cpupool.c       |   4 +
 xen/common/schedule.c      | 187 ++++++++++++++++++++++++++++++++++++++++-----
 xen/include/xen/sched-if.h |   7 +-
 3 files changed, 178 insertions(+), 20 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 02825e779d..7228ca84b4 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -511,8 +511,10 @@ static int cpupool_cpu_add(unsigned int cpu)
      * (or unplugging would have failed) and that is the default behavior
      * anyway.
      */
+    rcu_read_lock(&sched_res_rculock);
     get_sched_res(cpu)->cpupool = NULL;
     ret = cpupool_assign_cpu_locked(cpupool0, cpu);
+    rcu_read_unlock(&sched_res_rculock);

     spin_unlock(&cpupool_lock);
@@ -597,7 +599,9 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         }
     }

+    rcu_read_lock(&sched_res_rculock);
     sched_rm_cpu(cpu);
+    rcu_read_unlock(&sched_res_rculock);
 }

 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c75ec969c2..c14a66d5f0 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -75,6 +75,7 @@ static void poll_timer_fn(void *data);
 /* This is global for now so that private implementations can reach it */
 DEFINE_PER_CPU_READ_MOSTLY(struct sched_resource *, sched_res);
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, sched_res_idx);
+DEFINE_RCU_READ_LOCK(sched_res_rculock);

 /* Scratch space for cpumasks. */
 DEFINE_PER_CPU(cpumask_t, cpumask_scratch);
@@ -291,10 +292,12 @@ void sched_guest_idle(void (*idle) (void), unsigned int cpu)

 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 {
-    spinlock_t *lock = likely(v == current)
-                       ? NULL : unit_schedule_lock_irq(v->sched_unit);
+    spinlock_t *lock;
     s_time_t delta;

+    rcu_read_lock(&sched_res_rculock);
+
+    lock = likely(v == current) ? NULL : unit_schedule_lock_irq(v->sched_unit);
     memcpy(runstate, &v->runstate, sizeof(*runstate));
     delta = NOW() - runstate->state_entry_time;
     if ( delta > 0 )
@@ -302,6 +305,8 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)

     if ( unlikely(lock != NULL) )
         unit_schedule_unlock_irq(lock, v->sched_unit);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 uint64_t get_cpu_idle_time(unsigned int cpu)
@@ -513,6 +518,8 @@ int sched_init_vcpu(struct vcpu *v)
         return 0;
     }

+    rcu_read_lock(&sched_res_rculock);
+
     /* The first vcpu of an unit can be set via sched_set_res(). */
     sched_set_res(unit, get_sched_res(processor));
@@ -520,6 +527,7 @@ int sched_init_vcpu(struct vcpu *v)
     if ( unit->priv == NULL )
     {
         sched_free_unit(unit, v);
+        rcu_read_unlock(&sched_res_rculock);
         return 1;
     }
@@ -546,6 +554,8 @@ int sched_init_vcpu(struct vcpu *v)
         sched_insert_unit(dom_scheduler(d), unit);
     }

+    rcu_read_unlock(&sched_res_rculock);
+
     return 0;
 }
@@ -574,6 +584,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     struct scheduler *old_ops;
     void *old_domdata;
     unsigned int gran = cpupool_get_granularity(c);
+    int ret = 0;

     for_each_vcpu ( d, v )
     {
@@ -581,15 +592,21 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
             return -EBUSY;
     }

+    rcu_read_lock(&sched_res_rculock);
+
     domdata = sched_alloc_domdata(c->sched, d);
     if ( IS_ERR(domdata) )
-        return PTR_ERR(domdata);
+    {
+        ret = PTR_ERR(domdata);
+        goto out;
+    }

     unit_priv = xzalloc_array(void *, DIV_ROUND_UP(d->max_vcpus, gran));
     if ( unit_priv == NULL )
     {
         sched_free_domdata(c->sched, domdata);
-        return -ENOMEM;
+        ret = -ENOMEM;
+        goto out;
     }

     unit_idx = 0;
@@ -602,7 +619,8 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
                 sched_free_udata(c->sched, unit_priv[unit_idx]);
             xfree(unit_priv);
             sched_free_domdata(c->sched, domdata);
-            return -ENOMEM;
+            ret = -ENOMEM;
+            goto out;
         }
         unit_idx++;
     }
@@ -668,7 +686,10 @@ int sched_move_domain(struct domain *d, struct cpupool *c)

     xfree(unit_priv);

-    return 0;
+out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return ret;
 }

 void sched_destroy_vcpu(struct vcpu *v)
@@ -686,9 +707,13 @@ void sched_destroy_vcpu(struct vcpu *v)
      */
     if ( unit->vcpu_list == v )
     {
+        rcu_read_lock(&sched_res_rculock);
+
         sched_remove_unit(vcpu_scheduler(v), unit);
         sched_free_udata(vcpu_scheduler(v), unit->priv);
         sched_free_unit(unit, v);
+
+        rcu_read_unlock(&sched_res_rculock);
     }
 }
@@ -706,7 +731,12 @@ int sched_init_domain(struct domain *d, int poolid)
     SCHED_STAT_CRANK(dom_init);
     TRACE_1D(TRC_SCHED_DOM_ADD, d->domain_id);

+    rcu_read_lock(&sched_res_rculock);
+
     sdom = sched_alloc_domdata(dom_scheduler(d), d);
+
+    rcu_read_unlock(&sched_res_rculock);
+
     if ( IS_ERR(sdom) )
         return PTR_ERR(sdom);
@@ -724,9 +754,13 @@ void sched_destroy_domain(struct domain *d)
         SCHED_STAT_CRANK(dom_destroy);
         TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);

+        rcu_read_lock(&sched_res_rculock);
+
         sched_free_domdata(dom_scheduler(d), d->sched_priv);
         d->sched_priv = NULL;

+        rcu_read_unlock(&sched_res_rculock);
+
         cpupool_rm_domain(d);
     }
 }
@@ -760,11 +794,15 @@ void vcpu_sleep_nosync(struct vcpu *v)

     TRACE_2D(TRC_SCHED_SLEEP, v->domain->domain_id, v->vcpu_id);

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);

     vcpu_sleep_nosync_locked(v);

     unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void vcpu_sleep_sync(struct vcpu *v)
@@ -785,6 +823,8 @@ void vcpu_wake(struct vcpu *v)

     TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irqsave(unit, &flags);

     if ( likely(vcpu_runnable(v)) )
@@ -805,6 +845,8 @@ void vcpu_wake(struct vcpu *v)
     }

     unit_schedule_unlock_irqrestore(lock, flags, unit);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void vcpu_unblock(struct vcpu *v)
@@ -838,6 +880,8 @@ static void sched_unit_move_locked(struct sched_unit *unit,
     unsigned int old_cpu = unit->res->master_cpu;
     struct vcpu *v;

+    rcu_read_lock(&sched_res_rculock);
+
     /*
      * Transfer urgency status to new CPU before switching CPUs, as
      * once the switch occurs, v->is_urgent is no longer protected by
@@ -857,6 +901,8 @@ static void sched_unit_move_locked(struct sched_unit *unit,
      * pointer can't change while the current lock is held.
      */
     sched_migrate(unit_scheduler(unit), unit, new_cpu);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 /*
@@ -1024,6 +1070,8 @@ void restore_vcpu_affinity(struct domain *d)

     ASSERT(system_state == SYS_STATE_resume);

+    rcu_read_lock(&sched_res_rculock);
+
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
@@ -1080,6 +1128,8 @@ void restore_vcpu_affinity(struct domain *d)
             sched_move_irqs(unit);
     }

+    rcu_read_unlock(&sched_res_rculock);
+
     domain_update_node_affinity(d);
 }
@@ -1095,9 +1145,11 @@ int cpu_disable_scheduler(unsigned int cpu)
     cpumask_t online_affinity;
     int ret = 0;

+    rcu_read_lock(&sched_res_rculock);
+
     c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
-        return ret;
+        goto out;

     for_each_domain_in_cpupool ( d, c )
     {
@@ -1155,6 +1207,9 @@ int cpu_disable_scheduler(unsigned int cpu)
         }
     }

+out:
+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }
@@ -1186,7 +1241,9 @@ static int cpu_disable_scheduler_check(unsigned int cpu)
 static void sched_set_affinity(
     struct sched_unit *unit, const cpumask_t *hard, const cpumask_t *soft)
 {
+    rcu_read_lock(&sched_res_rculock);
     sched_adjust_affinity(dom_scheduler(unit->domain), unit, hard, soft);
+    rcu_read_unlock(&sched_res_rculock);

     if ( hard )
         cpumask_copy(unit->cpu_hard_affinity, hard);
@@ -1206,6 +1263,8 @@ static int vcpu_set_affinity(
     spinlock_t *lock;
     int ret = 0;

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irq(unit);

     if ( v->affinity_broken )
@@ -1234,6 +1293,8 @@ static int vcpu_set_affinity(

     sched_unit_migrate_finish(unit);

+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }
@@ -1360,11 +1421,16 @@ static long do_poll(struct sched_poll *sched_poll)
 long vcpu_yield(void)
 {
     struct vcpu * v=current;
-    spinlock_t *lock = unit_schedule_lock_irq(v->sched_unit);
+    spinlock_t *lock;
+
+    rcu_read_lock(&sched_res_rculock);
+    lock = unit_schedule_lock_irq(v->sched_unit);

     sched_yield(vcpu_scheduler(v), v->sched_unit);
     unit_schedule_unlock_irq(lock, v->sched_unit);

+    rcu_read_unlock(&sched_res_rculock);
+
     SCHED_STAT_CRANK(vcpu_yield);
     TRACE_2D(TRC_SCHED_YIELD, current->domain->domain_id, current->vcpu_id);
@@ -1461,6 +1527,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     int ret = -EINVAL;
     bool migrate;

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irq(unit);

     if ( cpu == NR_CPUS )
@@ -1500,6 +1568,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     if ( migrate )
         sched_unit_migrate_finish(unit);

+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }
@@ -1711,9 +1781,13 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)

     /* NB: the pluggable scheduler code needs to take care
      * of locking by itself. */
+    rcu_read_lock(&sched_res_rculock);
+
     if ( (ret = sched_adjust_dom(dom_scheduler(d), d, op)) == 0 )
         TRACE_1D(TRC_SCHED_ADJDOM, d->domain_id);

+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }
@@ -1734,9 +1808,13 @@ long sched_adjust_global(struct xen_sysctl_scheduler_op *op)
     if ( pool == NULL )
         return -ESRCH;

+    rcu_read_lock(&sched_res_rculock);
+
     rc = ((op->sched_id == pool->sched->sched_id) ?
           sched_adjust_cpupool(pool->sched, op) : -EINVAL);

+    rcu_read_unlock(&sched_res_rculock);
+
     cpupool_put(pool);

     return rc;
@@ -1956,7 +2034,11 @@ static void unit_context_saved(struct sched_resource *sr)
 void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
 {
     struct sched_unit *next = vnext->sched_unit;
-    struct sched_resource *sr = get_sched_res(smp_processor_id());
+    struct sched_resource *sr;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(smp_processor_id());

     if ( atomic_read(&next->rendezvous_out_cnt) )
     {
@@ -1983,6 +2065,8 @@ void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)

     if ( is_idle_vcpu(vprev) && vprev != vnext )
         vprev->sched_unit = sr->sched_unit_idle;
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 static void sched_context_switch(struct vcpu *vprev, struct vcpu *vnext,
@@ -2006,6 +2090,8 @@ static void sched_context_switch(struct vcpu *vprev, struct vcpu *vnext,
             vnext->sched_unit =
                 get_sched_res(smp_processor_id())->sched_unit_idle;

+        rcu_read_unlock(&sched_res_rculock);
+
         trace_continue_running(vnext);
         return continue_running(vprev);
     }
@@ -2019,6 +2105,8 @@ static void sched_context_switch(struct vcpu *vprev, struct vcpu *vnext,

     vcpu_periodic_timer_work(vnext);

+    rcu_read_unlock(&sched_res_rculock);
+
     context_switch(vprev, vnext);
 }
@@ -2171,6 +2259,8 @@ static void sched_slave(void)

     ASSERT_NOT_IN_ATOMIC();

+    rcu_read_lock(&sched_res_rculock);
+
     lock = pcpu_schedule_lock_irq(cpu);

     now = NOW();
@@ -2194,6 +2284,8 @@ static void sched_slave(void)
         {
             pcpu_schedule_unlock_irq(lock, cpu);

+            rcu_read_unlock(&sched_res_rculock);
+
             /* Check for failed forced context switch. */
             if ( do_softirq )
                 raise_softirq(SCHEDULE_SOFTIRQ);
@@ -2226,13 +2318,16 @@ static void schedule(void)
     struct sched_resource *sr;
     spinlock_t *lock;
     int cpu = smp_processor_id();
-    unsigned int gran = get_sched_res(cpu)->granularity;
+    unsigned int gran;

     ASSERT_NOT_IN_ATOMIC();

     SCHED_STAT_CRANK(sched_run);

+    rcu_read_lock(&sched_res_rculock);
+
     sr = get_sched_res(cpu);
+    gran = sr->granularity;

     lock = pcpu_schedule_lock_irq(cpu);
@@ -2244,6 +2339,8 @@ static void schedule(void)
          */
         pcpu_schedule_unlock_irq(lock, cpu);

+        rcu_read_unlock(&sched_res_rculock);
+
         raise_softirq(SCHEDULE_SOFTIRQ);
         return sched_slave();
     }
@@ -2355,14 +2452,27 @@ static int cpu_schedule_up(unsigned int cpu)
     return 0;
 }

+static void sched_res_free(struct rcu_head *head)
+{
+    struct sched_resource *sr = container_of(head, struct sched_resource, rcu);
+
+    xfree(sr);
+}
+
 static void cpu_schedule_down(unsigned int cpu)
 {
-    struct sched_resource *sr = get_sched_res(cpu);
+    struct sched_resource *sr;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);

     kill_timer(&sr->s_timer);

     set_sched_res(cpu, NULL);
-    xfree(sr);
+    call_rcu(&sr->rcu, sched_res_free);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void sched_rm_cpu(unsigned int cpu)
@@ -2382,6 +2492,8 @@ static int cpu_schedule_callback(
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;

+    rcu_read_lock(&sched_res_rculock);
+
     /*
      * From the scheduler perspective, bringing up a pCPU requires
      * allocating and initializing the per-pCPU scheduler specific data,
@@ -2428,6 +2540,8 @@ static int cpu_schedule_callback(
         break;
     }

+    rcu_read_unlock(&sched_res_rculock);
+
     return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
 }
@@ -2517,8 +2631,13 @@ void __init scheduler_init(void)
     idle_domain->max_vcpus = nr_cpu_ids;
     if ( vcpu_create(idle_domain, 0) == NULL )
         BUG();
+
+    rcu_read_lock(&sched_res_rculock);
+
     get_sched_res(0)->curr = idle_vcpu[0]->sched_unit;
     get_sched_res(0)->sched_unit_idle = idle_vcpu[0]->sched_unit;
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 /*
@@ -2531,9 +2650,14 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     struct vcpu *idle;
     void *ppriv, *vpriv;
     struct scheduler *new_ops = c->sched;
-    struct sched_resource *sr = get_sched_res(cpu);
+    struct sched_resource *sr;
     spinlock_t *old_lock, *new_lock;
     unsigned long flags;
+    int ret = 0;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);

     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, c->cpu_valid));
@@ -2553,13 +2677,18 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     idle = idle_vcpu[cpu];
     ppriv = sched_alloc_pdata(new_ops, cpu);
     if ( IS_ERR(ppriv) )
-        return PTR_ERR(ppriv);
+    {
+        ret = PTR_ERR(ppriv);
+        goto out;
+    }
+
     vpriv = sched_alloc_udata(new_ops, idle->sched_unit,
                               idle->domain->sched_priv);
     if ( vpriv == NULL )
     {
         sched_free_pdata(new_ops, ppriv, cpu);
-        return -ENOMEM;
+        ret = -ENOMEM;
+        goto out;
     }

     /*
@@ -2598,7 +2727,10 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     /* The cpu is added to a pool, trigger it to go pick up some work */
     cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);

-    return 0;
+out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return ret;
 }

 /*
@@ -2611,11 +2743,16 @@ int schedule_cpu_rm(unsigned int cpu)
 {
     struct vcpu *idle;
     void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr = get_sched_res(cpu);
-    struct scheduler *old_ops = sr->scheduler;
+    struct sched_resource *sr;
+    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;

+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    old_ops = sr->scheduler;
+
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -2648,6 +2785,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sr->granularity = 1;
     sr->cpupool = NULL;

+    rcu_read_unlock(&sched_res_rculock);
+
     return 0;
 }
@@ -2696,6 +2835,8 @@ void schedule_dump(struct cpupool *c)

     /* Locking, if necessary, must be handled withing each scheduler */

+    rcu_read_lock(&sched_res_rculock);
+
     if ( c != NULL )
     {
         sched = c->sched;
@@ -2715,6 +2856,8 @@ void schedule_dump(struct cpupool *c)
         for_each_cpu (i, cpus)
             sched_dump_cpu_state(sched, i);
     }
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void sched_tick_suspend(void)
@@ -2722,10 +2865,14 @@ void sched_tick_suspend(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();

+    rcu_read_lock(&sched_res_rculock);
+
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_suspend(sched, cpu);
     rcu_idle_enter(cpu);
     rcu_idle_timer_start();
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void sched_tick_resume(void)
@@ -2733,10 +2880,14 @@ void sched_tick_resume(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();

+    rcu_read_lock(&sched_res_rculock);
+
     rcu_idle_timer_stop();
     rcu_idle_exit(cpu);
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_resume(sched, cpu);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void wait(void)
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index f8f0f484cb..3988985ee6 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -10,6 +10,7 @@

 #include <xen/percpu.h>
 #include <xen/err.h>
+#include <xen/rcupdate.h>

 /* A global pointer to the initial cpupool (POOL0). */
 extern struct cpupool *cpupool0;
@@ -57,18 +58,20 @@ struct sched_resource {
     unsigned int master_cpu;
     unsigned int granularity;
     const cpumask_t *cpus;      /* cpus covered by this struct */
+    struct rcu_head rcu;
 };

 DECLARE_PER_CPU(struct sched_resource *, sched_res);
+extern rcu_read_lock_t sched_res_rculock;

 static inline struct sched_resource *get_sched_res(unsigned int cpu)
 {
-    return per_cpu(sched_res, cpu);
+    return rcu_dereference(per_cpu(sched_res, cpu));
 }

 static inline void set_sched_res(unsigned int cpu, struct sched_resource *res)
 {
-    per_cpu(sched_res, cpu) = res;
+    rcu_assign_pointer(per_cpu(sched_res, cpu), res);
 }

 static inline struct sched_unit *curr_on_cpu(unsigned int cpu)
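[Editorial illustration, not part of the patch.] The accessors above pair rcu_dereference() with rcu_assign_pointer(), which is what lets cpu_schedule_down() hand the actual xfree() to call_rcu() via the rcu_head embedded in struct sched_resource. Below is a minimal userspace model of that unpublish-then-defer-free path; it assumes liburcu rather than Xen's rcupdate.h, and sched_res, cpu_schedule_down_model() and the build command are made-up stand-ins mirroring the hunks above.

/* Illustrative sketch only -- build with something like: cc rcu_free.c -lurcu -lpthread */
#include <stddef.h>
#include <stdlib.h>
#include <urcu.h>                         /* liburcu: rcu_assign_pointer(), call_rcu(), rcu_barrier() */

#ifndef container_of
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))
#endif

struct sched_resource {                   /* stand-in: note the embedded rcu_head, as in the patch */
    unsigned int master_cpu;
    struct rcu_head rcu;
};

static struct sched_resource *sched_res;  /* stand-in for per_cpu(sched_res, cpu) */

/* Modeled on sched_res_free(): runs only after a grace period has elapsed. */
static void sched_res_free(struct rcu_head *head)
{
    struct sched_resource *sr = container_of(head, struct sched_resource, rcu);

    free(sr);                             /* xfree(sr) in the hypervisor */
}

/* Modeled on cpu_schedule_down(): unpublish the pointer, then defer the free. */
static void cpu_schedule_down_model(void)
{
    struct sched_resource *sr;

    rcu_read_lock();
    sr = rcu_dereference(sched_res);      /* cf. get_sched_res(cpu) */
    rcu_assign_pointer(sched_res, NULL);  /* cf. set_sched_res(cpu, NULL) */
    call_rcu(&sr->rcu, sched_res_free);   /* free runs once all current readers are done */
    rcu_read_unlock();
}

int main(void)
{
    struct sched_resource *sr = calloc(1, sizeof(*sr));

    rcu_register_thread();

    rcu_assign_pointer(sched_res, sr);
    cpu_schedule_down_model();

    rcu_barrier();                        /* wait for the queued sched_res_free() to run */
    rcu_unregister_thread();
    return 0;
}

The important property is the ordering: the pointer is cleared first, so new readers cannot find the resource, and the memory is released only after every reader that could still hold the old pointer has left its read-side critical section.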