From patchwork Mon Dec 4 15:23:19 2023
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 13478684
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Dario Faggioli, George Dunlap, René Winther Højgaard
Subject: [PATCH v2 1/3] xen/sched: fix adding offline cpu to cpupool
Date: Mon, 4 Dec 2023 16:23:19 +0100
Message-Id: <20231204152321.16520-2-jgross@suse.com>
In-Reply-To: <20231204152321.16520-1-jgross@suse.com>
References: <20231204152321.16520-1-jgross@suse.com>
Trying to add an offline cpu to a cpupool can crash the hypervisor, as the
cpu's percpu area, which quite possibly doesn't exist, is accessed before the
availability of the cpu is checked. This can happen in case the cpupool's
granularity is "core" or "socket".

Fix that by testing whether the cpu is online.

Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving cpus to/from cpupools")
Reported-by: René Winther Højgaard
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Acked-by: George Dunlap
---
V2:
- enhance commit message
---
 xen/common/sched/cpupool.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 2e094b0cfa..ad8f608462 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -892,6 +892,8 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
         if ( cpu >= nr_cpu_ids )
             goto addcpu_out;
         ret = -ENODEV;
+        if ( !cpu_online(cpu) )
+            goto addcpu_out;
         cpus = sched_get_opt_cpumask(c->gran, cpu);
         if ( !cpumask_subset(cpus, &cpupool_free_cpus) ||
              cpumask_intersects(cpus, &cpupool_locked_cpus) )
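A minimal sketch of why the ordering matters (not part of the patch; the helper
name is hypothetical, and only Xen's existing nr_cpu_ids / cpu_online() helpers
are assumed): with "core" or "socket" granularity the sched_get_opt_cpumask()
call that follows touches data of the target cpu, so an offline cpu, whose
percpu area may never have been set up, has to be rejected first.

    /* Minimal sketch, not Xen code: validate the cpu before any per-cpu access.
     * addcpu_precheck() is a hypothetical name used only for illustration.
     */
    static int addcpu_precheck(unsigned int cpu)
    {
        if ( cpu >= nr_cpu_ids )     /* no such cpu id at all */
            return -EINVAL;
        if ( !cpu_online(cpu) )      /* offline: its percpu area may not exist */
            return -ENODEV;
        /* Only now is sched_get_opt_cpumask(c->gran, cpu) safe to call. */
        return 0;
    }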
From patchwork Mon Dec 4 15:23:20 2023
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 13478687
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli, René Winther Højgaard,
    Jan Beulich
Subject: [PATCH v2 2/3] xen/sched: fix sched_move_domain()
Date: Mon, 4 Dec 2023 16:23:20 +0100
Message-Id: <20231204152321.16520-3-jgross@suse.com>
In-Reply-To: <20231204152321.16520-1-jgross@suse.com>
References: <20231204152321.16520-1-jgross@suse.com>

Do the cleanup in sched_move_domain() in a dedicated service function, which
is called either in the error case with the newly allocated data, or in the
success case with the old data to be freed.

This at once fixes some subtle bugs which sneaked in due to some pointers not
being overwritten in the error case.

Fixes: 70fadc41635b ("xen/cpupool: support moving domain between cpupools with different granularity")
Reported-by: René Winther Højgaard
Initial-fix-by: Jan Beulich
Initial-fix-by: George Dunlap
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Acked-by: George Dunlap
---
V2:
- make ops parameter of new function const (Jan Beulich)
---
 xen/common/sched/core.c | 47 +++++++++++++++++++++++------------------
 1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index eba0cea4bb..901782bbb4 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -647,6 +647,24 @@ static void sched_move_irqs(const struct sched_unit *unit)
         vcpu_move_irqs(v);
 }
 
+static void sched_move_domain_cleanup(const struct scheduler *ops,
+                                      struct sched_unit *units,
+                                      void *domdata)
+{
+    struct sched_unit *unit, *old_unit;
+
+    for ( unit = units; unit; )
+    {
+        if ( unit->priv )
+            sched_free_udata(ops, unit->priv);
+        old_unit = unit;
+        unit = unit->next_in_list;
+        xfree(old_unit);
+    }
+
+    sched_free_domdata(ops, domdata);
+}
+
 /*
  * Move a domain from one cpupool to another.
  *
@@ -686,7 +704,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     void *old_domdata;
     unsigned int gran = cpupool_get_granularity(c);
     unsigned int n_units = d->vcpu[0] ? DIV_ROUND_UP(d->max_vcpus, gran) : 0;
-    int ret = 0;
 
     for_each_vcpu ( d, v )
     {
@@ -699,8 +716,9 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     domdata = sched_alloc_domdata(c->sched, d);
     if ( IS_ERR(domdata) )
     {
-        ret = PTR_ERR(domdata);
-        goto out;
+        rcu_read_unlock(&sched_res_rculock);
+
+        return PTR_ERR(domdata);
     }
 
     for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
@@ -718,10 +736,10 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         if ( !unit || !unit->priv )
         {
-            old_units = new_units;
-            old_domdata = domdata;
-            ret = -ENOMEM;
-            goto out_free;
+            sched_move_domain_cleanup(c->sched, new_units, domdata);
+            rcu_read_unlock(&sched_res_rculock);
+
+            return -ENOMEM;
         }
 
         unit_ptr = &unit->next_in_list;
@@ -808,22 +826,11 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     domain_unpause(d);
 
- out_free:
-    for ( unit = old_units; unit; )
-    {
-        if ( unit->priv )
-            sched_free_udata(c->sched, unit->priv);
-        old_unit = unit;
-        unit = unit->next_in_list;
-        xfree(old_unit);
-    }
-
-    sched_free_domdata(old_ops, old_domdata);
+    sched_move_domain_cleanup(old_ops, old_units, old_domdata);
 
- out:
     rcu_read_unlock(&sched_res_rculock);
 
-    return ret;
+    return 0;
 }
 
 void sched_destroy_vcpu(struct vcpu *v)
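The gain from the dedicated helper is that each exit path frees exactly the
data it owns: the error paths hand over the newly allocated units and domdata,
the success path the old ones, so no pointers have to be saved or overwritten
on the way out. A condensed sketch of the resulting control flow (heavily
elided, for illustration only; the complete code is in the diff above):

    int sched_move_domain(struct domain *d, struct cpupool *c)
    {
        /* ... allocate domdata for the new cpupool's scheduler ... */
        if ( IS_ERR(domdata) )
        {
            rcu_read_unlock(&sched_res_rculock);
            return PTR_ERR(domdata);                 /* nothing to free yet */
        }

        /* ... allocate the new sched units ... */
        if ( !unit || !unit->priv )
        {
            /* Error: free the *new* data. */
            sched_move_domain_cleanup(c->sched, new_units, domdata);
            rcu_read_unlock(&sched_res_rculock);
            return -ENOMEM;
        }

        /* ... switch the domain over to the new cpupool ... */

        /* Success: free the *old* data. */
        sched_move_domain_cleanup(old_ops, old_units, old_domdata);
        rcu_read_unlock(&sched_res_rculock);
        return 0;
    }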
From patchwork Mon Dec 4 15:23:21 2023
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 13478686
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH v2 3/3] xen/sched: do some minor cleanup of sched_move_domain()
Date: Mon, 4 Dec 2023 16:23:21 +0100
Message-Id: <20231204152321.16520-4-jgross@suse.com>
In-Reply-To: <20231204152321.16520-1-jgross@suse.com>
References: <20231204152321.16520-1-jgross@suse.com>

Do some minor cleanups:
- Move setting of old_domdata and old_units next to each other
- Drop incrementing unit_idx in the final loop of sched_move_domain(), as it
  isn't used afterwards
- Rename new_p to new_cpu and unit_p to unit_cpu

Signed-off-by: Juergen Gross
Reviewed-by: George Dunlap
---
V2:
- new patch
---
 xen/common/sched/core.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 901782bbb4..f6ac1e5af8 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -698,7 +698,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     struct sched_unit *unit, *old_unit;
     struct sched_unit *new_units = NULL, *old_units;
     struct sched_unit **unit_ptr = &new_units;
-    unsigned int new_p, unit_idx;
+    unsigned int new_cpu, unit_idx;
     void *domdata;
     struct scheduler *old_ops = dom_scheduler(d);
     void *old_domdata;
@@ -748,13 +748,14 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     domain_pause(d);
 
     old_domdata = d->sched_priv;
+    old_units = d->sched_unit_list;
 
     /*
      * Remove all units from the old scheduler, and temporarily move them to
      * the same processor to make locking easier when moving the new units to
      * new processors.
      */
-    new_p = cpumask_first(d->cpupool->cpu_valid);
+    new_cpu = cpumask_first(d->cpupool->cpu_valid);
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
@@ -762,12 +763,10 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         sched_remove_unit(old_ops, unit);
 
         lock = unit_schedule_lock_irq(unit);
-        sched_set_res(unit, get_sched_res(new_p));
+        sched_set_res(unit, get_sched_res(new_cpu));
         spin_unlock_irq(lock);
     }
 
-    old_units = d->sched_unit_list;
-
     d->cpupool = c;
     d->sched_priv = domdata;
 
@@ -781,32 +780,32 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         unit->state_entry_time = old_unit->state_entry_time;
         unit->runstate_cnt[v->runstate.state]++;
         /* Temporarily use old resource assignment */
-        unit->res = get_sched_res(new_p);
+        unit->res = get_sched_res(new_cpu);
 
         v->sched_unit = unit;
     }
 
     d->sched_unit_list = new_units;
 
-    new_p = cpumask_first(c->cpu_valid);
+    new_cpu = cpumask_first(c->cpu_valid);
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
-        unsigned int unit_p = new_p;
+        unsigned int unit_cpu = new_cpu;
 
         for_each_sched_unit_vcpu ( unit, v )
         {
-            migrate_timer(&v->periodic_timer, new_p);
-            migrate_timer(&v->singleshot_timer, new_p);
-            migrate_timer(&v->poll_timer, new_p);
-            new_p = cpumask_cycle(new_p, c->cpu_valid);
+            migrate_timer(&v->periodic_timer, new_cpu);
+            migrate_timer(&v->singleshot_timer, new_cpu);
+            migrate_timer(&v->poll_timer, new_cpu);
+            new_cpu = cpumask_cycle(new_cpu, c->cpu_valid);
         }
 
         lock = unit_schedule_lock_irq(unit);
 
         sched_set_affinity(unit, &cpumask_all, &cpumask_all);
-        sched_set_res(unit, get_sched_res(unit_p));
+        sched_set_res(unit, get_sched_res(unit_cpu));
 
         /*
          * With v->processor modified we must not
          * - make any further changes assuming we hold the scheduler lock,
@@ -818,8 +817,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         sched_move_irqs(unit);
 
         sched_insert_unit(c->sched, unit);
-
-        unit_idx++;
     }
 
     domain_update_node_affinity(d);