From patchwork Wed Dec 18 07:48:51 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11299685
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:51 +0100
Message-Id: <20191218074859.21665-2-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich, Stewart Hildebrand

Move sched*.c and cpupool.c to a new directory common/sched.

Signed-off-by: Juergen Gross
---
 MAINTAINERS                                        |  8 +--
 xen/common/Kconfig                                 | 66 +---
 xen/common/Makefile                                |  8 +--
 xen/common/sched/Kconfig                           | 65 +++++++++++++++++++++
 xen/common/sched/Makefile                          |  7 +++
 .../{compat/schedule.c => sched/compat_schedule.c} |  2 +-
 xen/common/{ => sched}/cpupool.c                   |  0
 xen/common/{ => sched}/sched_arinc653.c            |  0
 xen/common/{ => sched}/sched_credit.c              |  0
 xen/common/{ => sched}/sched_credit2.c             |  0
 xen/common/{ => sched}/sched_null.c                |  0
 xen/common/{ => sched}/sched_rt.c                  |  0
 xen/common/{ => sched}/schedule.c                  |  2 +-
 13 files changed, 80 insertions(+), 78 deletions(-)
 create mode 100644 xen/common/sched/Kconfig
 create mode 100644 xen/common/sched/Makefile
 rename xen/common/{compat/schedule.c => sched/compat_schedule.c} (97%)
 rename xen/common/{ => sched}/cpupool.c (100%)
 rename xen/common/{ => sched}/sched_arinc653.c (100%)
 rename xen/common/{ => sched}/sched_credit.c (100%)
 rename xen/common/{ => sched}/sched_credit2.c (100%)
 rename xen/common/{ => sched}/sched_null.c (100%)
 rename xen/common/{ => sched}/sched_rt.c (100%)
 rename xen/common/{ => sched}/schedule.c (99%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 012c847ebd..37d4da2bc2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -174,7 +174,7
@@ M: Josh Whitehead M: Stewart Hildebrand S: Supported L: DornerWorks Xen-Devel -F: xen/common/sched_arinc653.c +F: xen/common/sched/sched_arinc653.c F: tools/libxc/xc_arinc653.c ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE @@ -212,7 +212,7 @@ CPU POOLS M: Juergen Gross M: Dario Faggioli S: Supported -F: xen/common/cpupool.c +F: xen/common/sched/cpupool.c DEVICE TREE M: Stefano Stabellini @@ -378,13 +378,13 @@ RTDS SCHEDULER M: Dario Faggioli M: Meng Xu S: Supported -F: xen/common/sched_rt.c +F: xen/common/sched/sched_rt.c SCHEDULING M: George Dunlap M: Dario Faggioli S: Supported -F: xen/common/sched* +F: xen/common/sched/ SEABIOS UPSTREAM M: Wei Liu diff --git a/xen/common/Kconfig b/xen/common/Kconfig index 2f516da101..79465fc1f9 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -278,71 +278,7 @@ config ARGO If unsure, say N. -menu "Schedulers" - visible if EXPERT = "y" - -config SCHED_CREDIT - bool "Credit scheduler support" - default y - ---help--- - The traditional credit scheduler is a general purpose scheduler. - -config SCHED_CREDIT2 - bool "Credit2 scheduler support" - default y - ---help--- - The credit2 scheduler is a general purpose scheduler that is - optimized for lower latency and higher VM density. - -config SCHED_RTDS - bool "RTDS scheduler support (EXPERIMENTAL)" - default y - ---help--- - The RTDS scheduler is a soft and firm real-time scheduler for - multicore, targeted for embedded, automotive, graphics and gaming - in the cloud, and general low-latency workloads. - -config SCHED_ARINC653 - bool "ARINC653 scheduler support (EXPERIMENTAL)" - default DEBUG - ---help--- - The ARINC653 scheduler is a hard real-time scheduler for single - cores, targeted for avionics, drones, and medical devices. 
- -config SCHED_NULL - bool "Null scheduler support (EXPERIMENTAL)" - default y - ---help--- - The null scheduler is a static, zero overhead scheduler, - for when there always are less vCPUs than pCPUs, typically - in embedded or HPC scenarios. - -choice - prompt "Default Scheduler?" - default SCHED_CREDIT2_DEFAULT - - config SCHED_CREDIT_DEFAULT - bool "Credit Scheduler" if SCHED_CREDIT - config SCHED_CREDIT2_DEFAULT - bool "Credit2 Scheduler" if SCHED_CREDIT2 - config SCHED_RTDS_DEFAULT - bool "RT Scheduler" if SCHED_RTDS - config SCHED_ARINC653_DEFAULT - bool "ARINC653 Scheduler" if SCHED_ARINC653 - config SCHED_NULL_DEFAULT - bool "Null Scheduler" if SCHED_NULL -endchoice - -config SCHED_DEFAULT - string - default "credit" if SCHED_CREDIT_DEFAULT - default "credit2" if SCHED_CREDIT2_DEFAULT - default "rtds" if SCHED_RTDS_DEFAULT - default "arinc653" if SCHED_ARINC653_DEFAULT - default "null" if SCHED_NULL_DEFAULT - default "credit2" - -endmenu +source "common/sched/Kconfig" config CRYPTO bool diff --git a/xen/common/Makefile b/xen/common/Makefile index 62b34e69e9..2abb8250b0 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -3,7 +3,6 @@ obj-y += bitmap.o obj-y += bsearch.o obj-$(CONFIG_CORE_PARKING) += core_parking.o obj-y += cpu.o -obj-y += cpupool.o obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o obj-y += domctl.o @@ -38,12 +37,6 @@ obj-y += radix-tree.o obj-y += rbtree.o obj-y += rcupdate.o obj-y += rwlock.o -obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o -obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o -obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o -obj-$(CONFIG_SCHED_RTDS) += sched_rt.o -obj-$(CONFIG_SCHED_NULL) += sched_null.o -obj-y += schedule.o obj-y += shutdown.o obj-y += softirq.o obj-y += sort.o @@ -74,6 +67,7 @@ obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall extra-y := symbols-dummy.o subdir-$(CONFIG_COVERAGE) += coverage +subdir-y += sched subdir-$(CONFIG_UBSAN) 
+= ubsan subdir-$(CONFIG_NEEDS_LIBELF) += libelf diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig new file mode 100644 index 0000000000..883ac87cab --- /dev/null +++ b/xen/common/sched/Kconfig @@ -0,0 +1,65 @@ +menu "Schedulers" + visible if EXPERT = "y" + +config SCHED_CREDIT + bool "Credit scheduler support" + default y + ---help--- + The traditional credit scheduler is a general purpose scheduler. + +config SCHED_CREDIT2 + bool "Credit2 scheduler support" + default y + ---help--- + The credit2 scheduler is a general purpose scheduler that is + optimized for lower latency and higher VM density. + +config SCHED_RTDS + bool "RTDS scheduler support (EXPERIMENTAL)" + default y + ---help--- + The RTDS scheduler is a soft and firm real-time scheduler for + multicore, targeted for embedded, automotive, graphics and gaming + in the cloud, and general low-latency workloads. + +config SCHED_ARINC653 + bool "ARINC653 scheduler support (EXPERIMENTAL)" + default DEBUG + ---help--- + The ARINC653 scheduler is a hard real-time scheduler for single + cores, targeted for avionics, drones, and medical devices. + +config SCHED_NULL + bool "Null scheduler support (EXPERIMENTAL)" + default y + ---help--- + The null scheduler is a static, zero overhead scheduler, + for when there always are less vCPUs than pCPUs, typically + in embedded or HPC scenarios. + +choice + prompt "Default Scheduler?" 
+ default SCHED_CREDIT2_DEFAULT + + config SCHED_CREDIT_DEFAULT + bool "Credit Scheduler" if SCHED_CREDIT + config SCHED_CREDIT2_DEFAULT + bool "Credit2 Scheduler" if SCHED_CREDIT2 + config SCHED_RTDS_DEFAULT + bool "RT Scheduler" if SCHED_RTDS + config SCHED_ARINC653_DEFAULT + bool "ARINC653 Scheduler" if SCHED_ARINC653 + config SCHED_NULL_DEFAULT + bool "Null Scheduler" if SCHED_NULL +endchoice + +config SCHED_DEFAULT + string + default "credit" if SCHED_CREDIT_DEFAULT + default "credit2" if SCHED_CREDIT2_DEFAULT + default "rtds" if SCHED_RTDS_DEFAULT + default "arinc653" if SCHED_ARINC653_DEFAULT + default "null" if SCHED_NULL_DEFAULT + default "credit2" + +endmenu diff --git a/xen/common/sched/Makefile b/xen/common/sched/Makefile new file mode 100644 index 0000000000..359af4f8bb --- /dev/null +++ b/xen/common/sched/Makefile @@ -0,0 +1,7 @@ +obj-y += cpupool.o +obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o +obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o +obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o +obj-$(CONFIG_SCHED_RTDS) += sched_rt.o +obj-$(CONFIG_SCHED_NULL) += sched_null.o +obj-y += schedule.o diff --git a/xen/common/compat/schedule.c b/xen/common/sched/compat_schedule.c similarity index 97% rename from xen/common/compat/schedule.c rename to xen/common/sched/compat_schedule.c index 8b6e6f107d..2e450685d6 100644 --- a/xen/common/compat/schedule.c +++ b/xen/common/sched/compat_schedule.c @@ -37,7 +37,7 @@ static int compat_poll(struct compat_sched_poll *compat) #define do_poll compat_poll #define sched_poll compat_sched_poll -#include "../schedule.c" +#include "schedule.c" int compat_set_timer_op(u32 lo, s32 hi) { diff --git a/xen/common/cpupool.c b/xen/common/sched/cpupool.c similarity index 100% rename from xen/common/cpupool.c rename to xen/common/sched/cpupool.c diff --git a/xen/common/sched_arinc653.c b/xen/common/sched/sched_arinc653.c similarity index 100% rename from xen/common/sched_arinc653.c rename to xen/common/sched/sched_arinc653.c diff 
--git a/xen/common/sched_credit.c b/xen/common/sched/sched_credit.c
similarity index 100%
rename from xen/common/sched_credit.c
rename to xen/common/sched/sched_credit.c
diff --git a/xen/common/sched_credit2.c b/xen/common/sched/sched_credit2.c
similarity index 100%
rename from xen/common/sched_credit2.c
rename to xen/common/sched/sched_credit2.c
diff --git a/xen/common/sched_null.c b/xen/common/sched/sched_null.c
similarity index 100%
rename from xen/common/sched_null.c
rename to xen/common/sched/sched_null.c
diff --git a/xen/common/sched_rt.c b/xen/common/sched/sched_rt.c
similarity index 100%
rename from xen/common/sched_rt.c
rename to xen/common/sched/sched_rt.c
diff --git a/xen/common/schedule.c b/xen/common/sched/schedule.c
similarity index 99%
rename from xen/common/schedule.c
rename to xen/common/sched/schedule.c
index e70cc70a65..a550dd8f93 100644
--- a/xen/common/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -3125,7 +3125,7 @@ void __init sched_setup_dom0_vcpus(struct domain *d)
 #endif

 #ifdef CONFIG_COMPAT
-#include "compat/schedule.c"
+#include "compat_schedule.c"
 #endif

 #endif /* !COMPAT */

From patchwork Wed Dec 18 07:48:52 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11299679
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:52 +0100
Message-Id: <20191218074859.21665-3-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 2/9] xen/sched: make sched-if.h really scheduler private
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich, Stewart Hildebrand, Roger Pau Monné

include/xen/sched-if.h should be private to scheduler code, so move it to common/sched/sched-if.h and move the remaining use cases to cpupool.c
and schedule.c. Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- xen/arch/x86/dom0_build.c | 5 +- xen/common/domain.c | 70 ---------- xen/common/domctl.c | 135 +------------------ xen/common/sched/cpupool.c | 13 +- xen/{include/xen => common/sched}/sched-if.h | 3 - xen/common/sched/sched_arinc653.c | 3 +- xen/common/sched/sched_credit.c | 2 +- xen/common/sched/sched_credit2.c | 3 +- xen/common/sched/sched_null.c | 3 +- xen/common/sched/sched_rt.c | 3 +- xen/common/sched/schedule.c | 191 ++++++++++++++++++++++++++- xen/include/xen/domain.h | 3 + xen/include/xen/sched.h | 7 + 13 files changed, 228 insertions(+), 213 deletions(-) rename xen/{include/xen => common/sched}/sched-if.h (99%) diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c index 28b964e018..56c2dee0fc 100644 --- a/xen/arch/x86/dom0_build.c +++ b/xen/arch/x86/dom0_build.c @@ -9,7 +9,6 @@ #include #include #include -#include #include #include @@ -227,9 +226,9 @@ unsigned int __init dom0_max_vcpus(void) dom0_nodes = node_online_map; for_each_node_mask ( node, dom0_nodes ) cpumask_or(&dom0_cpus, &dom0_cpus, &node_to_cpumask(node)); - cpumask_and(&dom0_cpus, &dom0_cpus, cpupool0->cpu_valid); + cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0)); if ( cpumask_empty(&dom0_cpus) ) - cpumask_copy(&dom0_cpus, cpupool0->cpu_valid); + cpumask_copy(&dom0_cpus, cpupool_valid_cpus(cpupool0)); max_vcpus = cpumask_weight(&dom0_cpus); if ( opt_dom0_max_vcpus_min > max_vcpus ) diff --git a/xen/common/domain.c b/xen/common/domain.c index 611116c7fc..f4f0a66262 100644 --- a/xen/common/domain.c +++ b/xen/common/domain.c @@ -10,7 +10,6 @@ #include #include #include -#include #include #include #include @@ -565,75 +564,6 @@ void __init setup_system_domains(void) #endif } -void domain_update_node_affinity(struct domain *d) -{ - cpumask_var_t dom_cpumask, dom_cpumask_soft; - cpumask_t *dom_affinity; - const cpumask_t *online; - struct sched_unit *unit; - unsigned int cpu; - - /* Do we have 
vcpus already? If not, no need to update node-affinity. */ - if ( !d->vcpu || !d->vcpu[0] ) - return; - - if ( !zalloc_cpumask_var(&dom_cpumask) ) - return; - if ( !zalloc_cpumask_var(&dom_cpumask_soft) ) - { - free_cpumask_var(dom_cpumask); - return; - } - - online = cpupool_domain_master_cpumask(d); - - spin_lock(&d->node_affinity_lock); - - /* - * If d->auto_node_affinity is true, let's compute the domain's - * node-affinity and update d->node_affinity accordingly. if false, - * just leave d->auto_node_affinity alone. - */ - if ( d->auto_node_affinity ) - { - /* - * We want the narrowest possible set of pcpus (to get the narowest - * possible set of nodes). What we need is the cpumask of where the - * domain can run (the union of the hard affinity of all its vcpus), - * and the full mask of where it would prefer to run (the union of - * the soft affinity of all its various vcpus). Let's build them. - */ - for_each_sched_unit ( d, unit ) - { - cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity); - cpumask_or(dom_cpumask_soft, dom_cpumask_soft, - unit->cpu_soft_affinity); - } - /* Filter out non-online cpus */ - cpumask_and(dom_cpumask, dom_cpumask, online); - ASSERT(!cpumask_empty(dom_cpumask)); - /* And compute the intersection between hard, online and soft */ - cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask); - - /* - * If not empty, the intersection of hard, soft and online is the - * narrowest set we want. If empty, we fall back to hard&online. - */ - dom_affinity = cpumask_empty(dom_cpumask_soft) ? - dom_cpumask : dom_cpumask_soft; - - nodes_clear(d->node_affinity); - for_each_cpu ( cpu, dom_affinity ) - node_set(cpu_to_node(cpu), d->node_affinity); - } - - spin_unlock(&d->node_affinity_lock); - - free_cpumask_var(dom_cpumask_soft); - free_cpumask_var(dom_cpumask); -} - - int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity) { /* Being disjoint with the system is just wrong. 
*/ diff --git a/xen/common/domctl.c b/xen/common/domctl.c index 03d0226039..3407db44fd 100644 --- a/xen/common/domctl.c +++ b/xen/common/domctl.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -65,9 +64,9 @@ static int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap, return err; } -static int xenctl_bitmap_to_bitmap(unsigned long *bitmap, - const struct xenctl_bitmap *xenctl_bitmap, - unsigned int nbits) +int xenctl_bitmap_to_bitmap(unsigned long *bitmap, + const struct xenctl_bitmap *xenctl_bitmap, + unsigned int nbits) { unsigned int guest_bytes, copy_bytes; int err = 0; @@ -200,7 +199,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info) info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info)); BUG_ON(SHARED_M2P(info->shared_info_frame)); - info->cpupool = d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE; + info->cpupool = cpupool_get_id(d); memcpy(info->handle, d->handle, sizeof(xen_domain_handle_t)); @@ -234,16 +233,6 @@ void domctl_lock_release(void) spin_unlock(¤t->domain->hypercall_deadlock_mutex); } -static inline -int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff) -{ - return vcpuaff->flags == 0 || - ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) && - guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) || - ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) && - guest_handle_is_null(vcpuaff->cpumap_soft.bitmap)); -} - void vnuma_destroy(struct vnuma_info *vnuma) { if ( vnuma ) @@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl) case XEN_DOMCTL_setvcpuaffinity: case XEN_DOMCTL_getvcpuaffinity: - { - struct vcpu *v; - const struct sched_unit *unit; - struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity; - - ret = -EINVAL; - if ( vcpuaff->vcpu >= d->max_vcpus ) - break; - - ret = -ESRCH; - if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL ) - break; - - unit = v->sched_unit; - ret = -EINVAL; - if ( 
vcpuaffinity_params_invalid(vcpuaff) ) - break; - - if ( op->cmd == XEN_DOMCTL_setvcpuaffinity ) - { - cpumask_var_t new_affinity, old_affinity; - cpumask_t *online = cpupool_domain_master_cpumask(v->domain); - - /* - * We want to be able to restore hard affinity if we are trying - * setting both and changing soft affinity (which happens later, - * when hard affinity has been succesfully chaged already) fails. - */ - if ( !alloc_cpumask_var(&old_affinity) ) - { - ret = -ENOMEM; - break; - } - cpumask_copy(old_affinity, unit->cpu_hard_affinity); - - if ( !alloc_cpumask_var(&new_affinity) ) - { - free_cpumask_var(old_affinity); - ret = -ENOMEM; - break; - } - - /* Undo a stuck SCHED_pin_override? */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE ) - vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE); - - ret = 0; - - /* - * We both set a new affinity and report back to the caller what - * the scheduler will be effectively using. - */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - { - ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), - &vcpuaff->cpumap_hard, - nr_cpu_ids); - if ( !ret ) - ret = vcpu_set_hard_affinity(v, new_affinity); - if ( ret ) - goto setvcpuaffinity_out; - - /* - * For hard affinity, what we return is the intersection of - * cpupool's online mask and the new hard affinity. - */ - cpumask_and(new_affinity, online, unit->cpu_hard_affinity); - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, - new_affinity); - } - if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) - { - ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), - &vcpuaff->cpumap_soft, - nr_cpu_ids); - if ( !ret) - ret = vcpu_set_soft_affinity(v, new_affinity); - if ( ret ) - { - /* - * Since we're returning error, the caller expects nothing - * happened, so we rollback the changes to hard affinity - * (if any). 
- */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - vcpu_set_hard_affinity(v, old_affinity); - goto setvcpuaffinity_out; - } - - /* - * For soft affinity, we return the intersection between the - * new soft affinity, the cpupool's online map and the (new) - * hard affinity. - */ - cpumask_and(new_affinity, new_affinity, online); - cpumask_and(new_affinity, new_affinity, - unit->cpu_hard_affinity); - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, - new_affinity); - } - - setvcpuaffinity_out: - free_cpumask_var(new_affinity); - free_cpumask_var(old_affinity); - } - else - { - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, - unit->cpu_hard_affinity); - if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, - unit->cpu_soft_affinity); - } + ret = vcpu_affinity_domctl(d, op->cmd, &op->u.vcpuaffinity); break; - } case XEN_DOMCTL_scheduler_op: ret = sched_adjust(d, &op->u.scheduler_op); diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 4d3adbdd8d..d5b64d0a6a 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -16,11 +16,12 @@ #include #include #include -#include #include #include #include +#include "sched-if.h" + #define for_each_cpupool(ptr) \ for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next)) @@ -876,6 +877,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op) return ret; } +int cpupool_get_id(const struct domain *d) +{ + return d->cpupool ? 
d->cpupool->cpupool_id : CPUPOOLID_NONE; +} + +cpumask_t *cpupool_valid_cpus(struct cpupool *pool) +{ + return pool->cpu_valid; +} + void dump_runq(unsigned char key) { unsigned long flags; diff --git a/xen/include/xen/sched-if.h b/xen/common/sched/sched-if.h similarity index 99% rename from xen/include/xen/sched-if.h rename to xen/common/sched/sched-if.h index b0ac54e63d..a702fd23b1 100644 --- a/xen/include/xen/sched-if.h +++ b/xen/common/sched/sched-if.h @@ -12,9 +12,6 @@ #include #include -/* A global pointer to the initial cpupool (POOL0). */ -extern struct cpupool *cpupool0; - /* cpus currently in no cpupool */ extern cpumask_t cpupool_free_cpus; diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c index 565575c326..fe15754900 100644 --- a/xen/common/sched/sched_arinc653.c +++ b/xen/common/sched/sched_arinc653.c @@ -26,7 +26,6 @@ #include #include -#include #include #include #include @@ -35,6 +34,8 @@ #include #include +#include "sched-if.h" + /************************************************************************** * Private Macros * **************************************************************************/ diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c index aa41a3301b..a098ca0f3a 100644 --- a/xen/common/sched/sched_credit.c +++ b/xen/common/sched/sched_credit.c @@ -15,7 +15,6 @@ #include #include #include -#include #include #include #include @@ -24,6 +23,7 @@ #include #include +#include "sched-if.h" /* * Locking: diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c index f7c477053c..5bfe1441a2 100644 --- a/xen/common/sched/sched_credit2.c +++ b/xen/common/sched/sched_credit2.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #include #include @@ -26,6 +25,8 @@ #include #include +#include "sched-if.h" + /* Meant only for helping developers during debugging. */ /* #define d2printk printk */ #define d2printk(x...) 
diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c index 3f3418c9b1..5a23a7e7dc 100644 --- a/xen/common/sched/sched_null.c +++ b/xen/common/sched/sched_null.c @@ -29,10 +29,11 @@ */ #include -#include #include #include +#include "sched-if.h" + /* * null tracing events. Check include/public/trace.h for more details. */ diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c index b2b29481f3..379b56bc2a 100644 --- a/xen/common/sched/sched_rt.c +++ b/xen/common/sched/sched_rt.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -31,6 +30,8 @@ #include #include +#include "sched-if.h" + /* * TODO: * diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c index a550dd8f93..c751faa741 100644 --- a/xen/common/sched/schedule.c +++ b/xen/common/sched/schedule.c @@ -23,7 +23,6 @@ #include #include #include -#include #include #include #include @@ -38,6 +37,8 @@ #include #include +#include "sched-if.h" + #ifdef CONFIG_XEN_GUEST #include #else @@ -1607,6 +1608,194 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason) return ret; } +static inline +int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff) +{ + return vcpuaff->flags == 0 || + ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) && + guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) || + ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) && + guest_handle_is_null(vcpuaff->cpumap_soft.bitmap)); +} + +int vcpu_affinity_domctl(struct domain *d, uint32_t cmd, + struct xen_domctl_vcpuaffinity *vcpuaff) +{ + struct vcpu *v; + const struct sched_unit *unit; + int ret = 0; + + if ( vcpuaff->vcpu >= d->max_vcpus ) + return -EINVAL; + + if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL ) + return -ESRCH; + + if ( vcpuaffinity_params_invalid(vcpuaff) ) + return -EINVAL; + + unit = v->sched_unit; + + if ( cmd == XEN_DOMCTL_setvcpuaffinity ) + { + cpumask_var_t new_affinity, old_affinity; + cpumask_t *online = 
cpupool_domain_master_cpumask(v->domain); + + /* + * We want to be able to restore hard affinity if we are trying + * setting both and changing soft affinity (which happens later, + * when hard affinity has been succesfully chaged already) fails. + */ + if ( !alloc_cpumask_var(&old_affinity) ) + return -ENOMEM; + + cpumask_copy(old_affinity, unit->cpu_hard_affinity); + + if ( !alloc_cpumask_var(&new_affinity) ) + { + free_cpumask_var(old_affinity); + return -ENOMEM; + } + + /* Undo a stuck SCHED_pin_override? */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE ) + vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE); + + ret = 0; + + /* + * We both set a new affinity and report back to the caller what + * the scheduler will be effectively using. + */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + { + ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), + &vcpuaff->cpumap_hard, nr_cpu_ids); + if ( !ret ) + ret = vcpu_set_hard_affinity(v, new_affinity); + if ( ret ) + goto setvcpuaffinity_out; + + /* + * For hard affinity, what we return is the intersection of + * cpupool's online mask and the new hard affinity. + */ + cpumask_and(new_affinity, online, unit->cpu_hard_affinity); + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, new_affinity); + } + if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) + { + ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), + &vcpuaff->cpumap_soft, nr_cpu_ids); + if ( !ret) + ret = vcpu_set_soft_affinity(v, new_affinity); + if ( ret ) + { + /* + * Since we're returning error, the caller expects nothing + * happened, so we rollback the changes to hard affinity + * (if any). + */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + vcpu_set_hard_affinity(v, old_affinity); + goto setvcpuaffinity_out; + } + + /* + * For soft affinity, we return the intersection between the + * new soft affinity, the cpupool's online map and the (new) + * hard affinity. 
+ */ + cpumask_and(new_affinity, new_affinity, online); + cpumask_and(new_affinity, new_affinity, unit->cpu_hard_affinity); + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, new_affinity); + } + + setvcpuaffinity_out: + free_cpumask_var(new_affinity); + free_cpumask_var(old_affinity); + } + else + { + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, + unit->cpu_hard_affinity); + if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, + unit->cpu_soft_affinity); + } + + return ret; +} + +void domain_update_node_affinity(struct domain *d) +{ + cpumask_var_t dom_cpumask, dom_cpumask_soft; + cpumask_t *dom_affinity; + const cpumask_t *online; + struct sched_unit *unit; + unsigned int cpu; + + /* Do we have vcpus already? If not, no need to update node-affinity. */ + if ( !d->vcpu || !d->vcpu[0] ) + return; + + if ( !zalloc_cpumask_var(&dom_cpumask) ) + return; + if ( !zalloc_cpumask_var(&dom_cpumask_soft) ) + { + free_cpumask_var(dom_cpumask); + return; + } + + online = cpupool_domain_master_cpumask(d); + + spin_lock(&d->node_affinity_lock); + + /* + * If d->auto_node_affinity is true, let's compute the domain's + * node-affinity and update d->node_affinity accordingly. if false, + * just leave d->auto_node_affinity alone. + */ + if ( d->auto_node_affinity ) + { + /* + * We want the narrowest possible set of pcpus (to get the narowest + * possible set of nodes). What we need is the cpumask of where the + * domain can run (the union of the hard affinity of all its vcpus), + * and the full mask of where it would prefer to run (the union of + * the soft affinity of all its various vcpus). Let's build them. 
+ */ + for_each_sched_unit ( d, unit ) + { + cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity); + cpumask_or(dom_cpumask_soft, dom_cpumask_soft, + unit->cpu_soft_affinity); + } + /* Filter out non-online cpus */ + cpumask_and(dom_cpumask, dom_cpumask, online); + ASSERT(!cpumask_empty(dom_cpumask)); + /* And compute the intersection between hard, online and soft */ + cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask); + + /* + * If not empty, the intersection of hard, soft and online is the + * narrowest set we want. If empty, we fall back to hard&online. + */ + dom_affinity = cpumask_empty(dom_cpumask_soft) ? + dom_cpumask : dom_cpumask_soft; + + nodes_clear(d->node_affinity); + for_each_cpu ( cpu, dom_affinity ) + node_set(cpu_to_node(cpu), d->node_affinity); + } + + spin_unlock(&d->node_affinity_lock); + + free_cpumask_var(dom_cpumask_soft); + free_cpumask_var(dom_cpumask); +} + typedef long ret_t; #endif /* !COMPAT */ diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h index 769302057b..c931eab4a9 100644 --- a/xen/include/xen/domain.h +++ b/xen/include/xen/domain.h @@ -27,6 +27,9 @@ struct xen_domctl_getdomaininfo; void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info); void arch_get_domain_info(const struct domain *d, struct xen_domctl_getdomaininfo *info); +int xenctl_bitmap_to_bitmap(unsigned long *bitmap, + const struct xenctl_bitmap *xenctl_bitmap, + unsigned int nbits); /* * Arch-specifics. diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 9f7bc69293..2507a833c2 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -50,6 +50,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t); /* A global pointer to the hardware domain (usually DOM0). */ extern struct domain *hardware_domain; +/* A global pointer to the initial cpupool (POOL0). 
*/ +extern struct cpupool *cpupool0; + #ifdef CONFIG_LATE_HWDOM extern domid_t hardware_domid; #else @@ -929,6 +932,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason); int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity); int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity); void restore_vcpu_affinity(struct domain *d); +int vcpu_affinity_domctl(struct domain *d, uint32_t cmd, + struct xen_domctl_vcpuaffinity *vcpuaff); void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate); uint64_t get_cpu_idle_time(unsigned int cpu); @@ -1054,6 +1059,8 @@ int cpupool_add_domain(struct domain *d, int poolid); void cpupool_rm_domain(struct domain *d); int cpupool_move_domain(struct domain *d, struct cpupool *c); int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op); +int cpupool_get_id(const struct domain *d); +cpumask_t *cpupool_valid_cpus(struct cpupool *pool); void schedule_dump(struct cpupool *c); extern void dump_runq(unsigned char key);

From patchwork Wed Dec 18 07:48:53 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11299681
Authentication-Results: mail.kernel.org; spf=none
smtp.mailfrom=xen-devel-bounces@lists.xenproject.org
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:53 +0100
Message-Id: <20191218074859.21665-4-jgross@suse.com>
Subject: [Xen-devel] [PATCH 3/9] xen/sched: cleanup sched.h

There are some items in include/xen/sched.h which can be moved to sched-if.h, as they are scheduler private.
Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- xen/common/sched/sched-if.h | 13 +++++++++++++ xen/common/sched/schedule.c | 2 +- xen/include/xen/sched.h | 17 ----------------- 3 files changed, 14 insertions(+), 18 deletions(-) diff --git a/xen/common/sched/sched-if.h b/xen/common/sched/sched-if.h index a702fd23b1..edce354dc7 100644 --- a/xen/common/sched/sched-if.h +++ b/xen/common/sched/sched-if.h @@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct sched_unit *unit) struct cpupool { int cpupool_id; +#define CPUPOOLID_NONE -1 unsigned int n_dom; cpumask_var_t cpu_valid; /* all cpus assigned to pool */ cpumask_var_t res_valid; /* all scheduling resources of pool */ @@ -618,5 +619,17 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step, void sched_rm_cpu(unsigned int cpu); const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu); +void schedule_dump(struct cpupool *c); +struct scheduler *scheduler_get_default(void); +struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr); +void scheduler_free(struct scheduler *sched); +int cpu_disable_scheduler(unsigned int cpu); +int schedule_cpu_add(unsigned int cpu, struct cpupool *c); +int schedule_cpu_rm(unsigned int cpu); +int sched_move_domain(struct domain *d, struct cpupool *c); +struct cpupool *cpupool_get_by_id(int poolid); +void cpupool_put(struct cpupool *pool); +int cpupool_add_domain(struct domain *d, int poolid); +void cpupool_rm_domain(struct domain *d); #endif /* __XEN_SCHED_IF_H__ */ diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c index c751faa741..db8ce146ca 100644 --- a/xen/common/sched/schedule.c +++ b/xen/common/sched/schedule.c @@ -1346,7 +1346,7 @@ int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity) return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_hard_affinity); } -int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity) +static int vcpu_set_soft_affinity(struct vcpu 
*v, const cpumask_t *affinity) { return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_soft_affinity); } diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 2507a833c2..55335d6ab3 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -685,7 +685,6 @@ int sched_init_vcpu(struct vcpu *v); void sched_destroy_vcpu(struct vcpu *v); int sched_init_domain(struct domain *d, int poolid); void sched_destroy_domain(struct domain *d); -int sched_move_domain(struct domain *d, struct cpupool *c); long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *); long sched_adjust_global(struct xen_sysctl_scheduler_op *); int sched_id(void); @@ -918,19 +917,10 @@ static inline bool sched_has_urgent_vcpu(void) return atomic_read(&this_cpu(sched_urgent_count)); } -struct scheduler; - -struct scheduler *scheduler_get_default(void); -struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr); -void scheduler_free(struct scheduler *sched); -int schedule_cpu_add(unsigned int cpu, struct cpupool *c); -int schedule_cpu_rm(unsigned int cpu); void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value); -int cpu_disable_scheduler(unsigned int cpu); void sched_setup_dom0_vcpus(struct domain *d); int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason); int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity); -int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity); void restore_vcpu_affinity(struct domain *d); int vcpu_affinity_domctl(struct domain *d, uint32_t cmd, struct xen_domctl_vcpuaffinity *vcpuaff); @@ -1051,17 +1041,10 @@ extern enum cpufreq_controller { FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen } cpufreq_controller; -#define CPUPOOLID_NONE -1 - -struct cpupool *cpupool_get_by_id(int poolid); -void cpupool_put(struct cpupool *pool); -int cpupool_add_domain(struct domain *d, int poolid); -void cpupool_rm_domain(struct domain *d); int cpupool_move_domain(struct domain *d, struct 
cpupool *c); int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op); int cpupool_get_id(const struct domain *d); cpumask_t *cpupool_valid_cpus(struct cpupool *pool); -void schedule_dump(struct cpupool *c); extern void dump_runq(unsigned char key); void arch_do_physinfo(struct xen_sysctl_physinfo *pi);

From patchwork Wed Dec 18 07:48:54 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11299687
Received: from
relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id A2FF1AD69; Wed, 18 Dec 2019 07:49:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:54 +0100
Message-Id: <20191218074859.21665-5-jgross@suse.com>
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 4/9] xen/sched: remove special cases for free cpus in schedulers

With the idle scheduler now taking care of all cpus not in any cpupool, the special cases in the other schedulers for cpus not associated with any cpupool can be removed. Signed-off-by: Juergen Gross --- xen/common/sched/sched_credit.c | 7 ++----- xen/common/sched/sched_credit2.c | 30 ------------------------------ 2 files changed, 2 insertions(+), 35 deletions(-) diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c index a098ca0f3a..8b1de9b033 100644 --- a/xen/common/sched/sched_credit.c +++ b/xen/common/sched/sched_credit.c @@ -1690,11 +1690,8 @@ csched_load_balance(struct csched_private *prv, int cpu, BUG_ON(get_sched_res(cpu) != snext->unit->res); - /* - * If this CPU is going offline, or is not (yet) part of any cpupool - * (as it happens, e.g., during cpu bringup), we shouldn't steal work. - */ - if ( unlikely(!cpumask_test_cpu(cpu, online) || c == NULL) ) + /* If this CPU is going offline, we shouldn't steal work.
*/ + if ( unlikely(!cpumask_test_cpu(cpu, online)) ) goto out; if ( snext->pri == CSCHED_PRI_IDLE ) diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c index 5bfe1441a2..f9e521a3a8 100644 --- a/xen/common/sched/sched_credit2.c +++ b/xen/common/sched/sched_credit2.c @@ -2744,40 +2744,10 @@ static void csched2_unit_migrate( const struct scheduler *ops, struct sched_unit *unit, unsigned int new_cpu) { - struct domain *d = unit->domain; struct csched2_unit * const svc = csched2_unit(unit); struct csched2_runqueue_data *trqd; s_time_t now = NOW(); - /* - * Being passed a target pCPU which is outside of our cpupool is only - * valid if we are shutting down (or doing ACPI suspend), and we are - * moving everyone to BSP, no matter whether or not BSP is inside our - * cpupool. - * - * And since there indeed is the chance that it is not part of it, all - * we must do is remove _and_ unassign the unit from any runqueue, as - * well as updating v->processor with the target, so that the suspend - * process can continue. - * - * It will then be during resume that a new, meaningful, value for - * v->processor will be chosen, and during actual domain unpause that - * the unit will be assigned to and added to the proper runqueue. - */ - if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_master_cpumask(d))) ) - { - ASSERT(system_state == SYS_STATE_suspend); - if ( unit_on_runq(svc) ) - { - runq_remove(svc); - update_load(ops, svc->rqd, NULL, -1, now); - } - _runq_deassign(svc); - sched_set_res(unit, get_sched_res(new_cpu)); - return; - } - - /* If here, new_cpu must be a valid Credit2 pCPU, and in our affinity. 
*/ ASSERT(cpumask_test_cpu(new_cpu, &csched2_priv(ops)->initialized)); ASSERT(cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity));

From patchwork Wed Dec 18 07:48:55 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11299677
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019
08:48:55 +0100
Message-Id: <20191218074859.21665-6-jgross@suse.com>
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack

In sched_rt there are three instances of cpumasks allocated on the stack. Replace them by using cpumask_scratch. Signed-off-by: Juergen Gross --- xen/common/sched/sched_rt.c | 56 ++++++++++++++++++++++++++++++--------------- 1 file changed, 37 insertions(+), 19 deletions(-) diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c index 379b56bc2a..264a753116 100644 --- a/xen/common/sched/sched_rt.c +++ b/xen/common/sched/sched_rt.c @@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc) * and available resources */ static struct sched_resource * -rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit) +rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu) { - cpumask_t cpus; + cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu); cpumask_t *online; int cpu; online = cpupool_domain_master_cpumask(unit->domain); - cpumask_and(&cpus, online, unit->cpu_hard_affinity); + cpumask_and(cpus, online, unit->cpu_hard_affinity); - cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus) + cpu = cpumask_test_cpu(sched_unit_master(unit), cpus) ?
sched_unit_master(unit) - : cpumask_cycle(sched_unit_master(unit), &cpus); - ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) ); + : cpumask_cycle(sched_unit_master(unit), cpus); + ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) ); return get_sched_res(cpu); } +/* + * Pick a valid resource for the unit vc + * The valid resources of a unit are the intersection of the unit's affinity + * and the available resources + */ +static struct sched_resource * +rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit) +{ + struct sched_resource *res; + + res = rt_res_pick_locked(unit, unit->res->master_cpu); + + return res; +} + /* * Init/Free related code */ @@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit) struct rt_unit *svc = rt_unit(unit); s_time_t now; spinlock_t *lock; + unsigned int cpu = smp_processor_id(); BUG_ON( is_idle_unit(unit) ); /* This is safe because unit isn't yet being scheduled */ - sched_set_res(unit, rt_res_pick(ops, unit)); + lock = pcpu_schedule_lock_irq(cpu); + sched_set_res(unit, rt_res_pick_locked(unit, cpu)); + pcpu_schedule_unlock_irq(lock, cpu); lock = unit_schedule_lock_irq(unit); @@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_unit *svc, s_time_t now) * lock is grabbed before calling this function */ static struct rt_unit * -runq_pick(const struct scheduler *ops, const cpumask_t *mask) +runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu) { struct list_head *runq = rt_runq(ops); struct list_head *iter; struct rt_unit *svc = NULL; struct rt_unit *iter_svc = NULL; - cpumask_t cpu_common; + cpumask_t *cpu_common = cpumask_scratch_cpu(cpu); cpumask_t *online; list_for_each ( iter, runq ) @@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask) /* mask cpu_hard_affinity & cpupool & mask */ online = cpupool_domain_master_cpumask(iter_svc->unit->domain); - cpumask_and(&cpu_common, online,
iter_svc->unit->cpu_hard_affinity); - cpumask_and(&cpu_common, mask, &cpu_common); - if ( cpumask_empty(&cpu_common) ) + cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity); + cpumask_and(cpu_common, mask, cpu_common); + if ( cpumask_empty(cpu_common) ) continue; ASSERT( iter_svc->cur_budget > 0 ); @@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit, } else { - snext = runq_pick(ops, cpumask_of(sched_cpu)); + snext = runq_pick(ops, cpumask_of(sched_cpu), cur_cpu); if ( snext == NULL ) snext = rt_unit(sched_idle_unit(sched_cpu)); @@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new) struct rt_unit *iter_svc; struct sched_unit *iter_unit; int cpu = 0, cpu_to_tickle = 0; - cpumask_t not_tickled; + cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id()); cpumask_t *online; if ( new == NULL || is_idle_unit(new->unit) ) return; online = cpupool_domain_master_cpumask(new->unit->domain); - cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity); - cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled); + cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity); + cpumask_andnot(not_tickled, not_tickled, &prv->tickled); /* * 1) If there are any idle CPUs, kick one. * For cache benefit, we first search new->cpu. * The same loop also finds the one with lowest priority.
*/ - cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), &not_tickled); + cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), not_tickled); while ( cpu != nr_cpu_ids ) { iter_unit = curr_on_cpu(cpu); @@ -1216,8 +1234,8 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new) compare_unit_priority(iter_svc, latest_deadline_unit) < 0 ) latest_deadline_unit = iter_svc; - cpumask_clear_cpu(cpu, &not_tickled); - cpu = cpumask_cycle(cpu, &not_tickled); + cpumask_clear_cpu(cpu, not_tickled); + cpu = cpumask_cycle(cpu, not_tickled); } /* 2) candidate has higher priority, kick out lowest priority unit */

From patchwork Wed Dec 18 07:48:56 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11299691
X-Inumbo-ID:
debd3188-216a-11ea-a914-bc764e2007e4
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:56 +0100
Message-Id: <20191218074859.21665-7-jgross@suse.com>
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook

Instead of having its own percpu-variable for per-cpu private data, the null scheduler should use the generic scheduler interface provided for that purpose.
Signed-off-by: Juergen Gross --- xen/common/sched/sched_null.c | 89 +++++++++++++++++++++++++++++-------------- 1 file changed, 60 insertions(+), 29 deletions(-) diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c index 5a23a7e7dc..11aab25743 100644 --- a/xen/common/sched/sched_null.c +++ b/xen/common/sched/sched_null.c @@ -89,7 +89,6 @@ struct null_private { struct null_pcpu { struct sched_unit *unit; }; -DEFINE_PER_CPU(struct null_pcpu, npc); /* * Schedule unit @@ -159,32 +158,48 @@ static void null_deinit(struct scheduler *ops) ops->sched_data = NULL; } -static void init_pdata(struct null_private *prv, unsigned int cpu) +static void init_pdata(struct null_private *prv, struct null_pcpu *npc, + unsigned int cpu) { /* Mark the pCPU as free, and with no unit assigned */ cpumask_set_cpu(cpu, &prv->cpus_free); - per_cpu(npc, cpu).unit = NULL; + npc->unit = NULL; } static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu) { struct null_private *prv = null_priv(ops); - /* alloc_pdata is not implemented, so we want this to be NULL. 
*/ - ASSERT(!pdata); + ASSERT(pdata); - init_pdata(prv, cpu); + init_pdata(prv, pdata, cpu); } static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu) { struct null_private *prv = null_priv(ops); + struct null_pcpu *npc = pcpu; - /* alloc_pdata not implemented, so this must have stayed NULL */ - ASSERT(!pcpu); + ASSERT(npc); cpumask_clear_cpu(cpu, &prv->cpus_free); - per_cpu(npc, cpu).unit = NULL; + npc->unit = NULL; +} + +static void *null_alloc_pdata(const struct scheduler *ops, int cpu) +{ + struct null_pcpu *npc; + + npc = xzalloc(struct null_pcpu); + if ( npc == NULL ) + return ERR_PTR(-ENOMEM); + + return npc; +} + +static void null_free_pdata(const struct scheduler *ops, void *pcpu, int cpu) +{ + xfree(pcpu); } static void *null_alloc_udata(const struct scheduler *ops, @@ -268,6 +283,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit) unsigned int bs; unsigned int cpu = sched_unit_master(unit), new_cpu; cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain); + struct null_pcpu *npc = get_sched_res(cpu)->sched_priv; ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock)); @@ -286,8 +302,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit) * don't, so we get to keep in the scratch cpumask what we have just * put in it.) 
*/ - if ( likely((per_cpu(npc, cpu).unit == NULL || - per_cpu(npc, cpu).unit == unit) + if ( likely((npc->unit == NULL || npc->unit == unit) && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) ) { new_cpu = cpu; @@ -336,9 +351,11 @@ pick_res(struct null_private *prv, const struct sched_unit *unit) static void unit_assign(struct null_private *prv, struct sched_unit *unit, unsigned int cpu) { + struct null_pcpu *npc = get_sched_res(cpu)->sched_priv; + ASSERT(is_unit_online(unit)); - per_cpu(npc, cpu).unit = unit; + npc->unit = unit; sched_set_res(unit, get_sched_res(cpu)); cpumask_clear_cpu(cpu, &prv->cpus_free); @@ -363,12 +380,13 @@ static bool unit_deassign(struct null_private *prv, struct sched_unit *unit) unsigned int bs; unsigned int cpu = sched_unit_master(unit); struct null_unit *wvc; + struct null_pcpu *npc = get_sched_res(cpu)->sched_priv; ASSERT(list_empty(&null_unit(unit)->waitq_elem)); - ASSERT(per_cpu(npc, cpu).unit == unit); + ASSERT(npc->unit == unit); ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free)); - per_cpu(npc, cpu).unit = NULL; + npc->unit = NULL; cpumask_set_cpu(cpu, &prv->cpus_free); dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain, @@ -436,7 +454,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops, */ ASSERT(!local_irq_is_enabled()); - init_pdata(prv, cpu); + init_pdata(prv, pdata, cpu); return &sr->_lock; } @@ -446,6 +464,7 @@ static void null_unit_insert(const struct scheduler *ops, { struct null_private *prv = null_priv(ops); struct null_unit *nvc = null_unit(unit); + struct null_pcpu *npc; unsigned int cpu; spinlock_t *lock; @@ -462,6 +481,7 @@ static void null_unit_insert(const struct scheduler *ops, retry: sched_set_res(unit, pick_res(prv, unit)); cpu = sched_unit_master(unit); + npc = get_sched_res(cpu)->sched_priv; spin_unlock(lock); @@ -471,7 +491,7 @@ static void null_unit_insert(const struct scheduler *ops, cpupool_domain_master_cpumask(unit->domain)); /* If the pCPU is free, we assign unit to it */ 
- if ( likely(per_cpu(npc, cpu).unit == NULL) ) + if ( likely(npc->unit == NULL) ) { /* * Insert is followed by vcpu_wake(), so there's no need to poke @@ -519,7 +539,10 @@ static void null_unit_remove(const struct scheduler *ops, /* If offline, the unit shouldn't be assigned, nor in the waitqueue */ if ( unlikely(!is_unit_online(unit)) ) { - ASSERT(per_cpu(npc, sched_unit_master(unit)).unit != unit); + struct null_pcpu *npc; + + npc = unit->res->sched_priv; + ASSERT(npc->unit != unit); ASSERT(list_empty(&nvc->waitq_elem)); goto out; } @@ -548,6 +571,7 @@ static void null_unit_wake(const struct scheduler *ops, struct null_private *prv = null_priv(ops); struct null_unit *nvc = null_unit(unit); unsigned int cpu = sched_unit_master(unit); + struct null_pcpu *npc = get_sched_res(cpu)->sched_priv; ASSERT(!is_idle_unit(unit)); @@ -569,7 +593,7 @@ static void null_unit_wake(const struct scheduler *ops, else SCHED_STAT_CRANK(unit_wake_not_runnable); - if ( likely(per_cpu(npc, cpu).unit == unit) ) + if ( likely(npc->unit == unit) ) { cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ); return; @@ -581,7 +605,7 @@ static void null_unit_wake(const struct scheduler *ops, * and its previous resource is free (and affinities match), we can just * assign the unit to it (we own the proper lock already) and be done. 
*/ - if ( per_cpu(npc, cpu).unit == NULL && + if ( npc->unit == NULL && unit_check_affinity(unit, cpu, BALANCE_HARD_AFFINITY) ) { if ( !has_soft_affinity(unit) || @@ -622,6 +646,7 @@ static void null_unit_sleep(const struct scheduler *ops, { struct null_private *prv = null_priv(ops); unsigned int cpu = sched_unit_master(unit); + struct null_pcpu *npc = get_sched_res(cpu)->sched_priv; bool tickled = false; ASSERT(!is_idle_unit(unit)); @@ -640,7 +665,7 @@ static void null_unit_sleep(const struct scheduler *ops, list_del_init(&nvc->waitq_elem); spin_unlock(&prv->waitq_lock); } - else if ( per_cpu(npc, cpu).unit == unit ) + else if ( npc->unit == unit ) tickled = unit_deassign(prv, unit); } @@ -663,6 +688,7 @@ static void null_unit_migrate(const struct scheduler *ops, { struct null_private *prv = null_priv(ops); struct null_unit *nvc = null_unit(unit); + struct null_pcpu *npc; ASSERT(!is_idle_unit(unit)); @@ -686,7 +712,8 @@ static void null_unit_migrate(const struct scheduler *ops, * If unit is assigned to a pCPU, then such pCPU becomes free, and we * should look in the waitqueue if anyone else can be assigned to it. */ - if ( likely(per_cpu(npc, sched_unit_master(unit)).unit == unit) ) + npc = unit->res->sched_priv; + if ( likely(npc->unit == unit) ) { unit_deassign(prv, unit); SCHED_STAT_CRANK(migrate_running); @@ -720,7 +747,8 @@ static void null_unit_migrate(const struct scheduler *ops, * * In latter, all we can do is to park unit in the waitqueue. 
*/ - if ( per_cpu(npc, new_cpu).unit == NULL && + npc = get_sched_res(new_cpu)->sched_priv; + if ( npc->unit == NULL && unit_check_affinity(unit, new_cpu, BALANCE_HARD_AFFINITY) ) { /* unit might have been in the waitqueue, so remove it */ @@ -788,6 +816,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev, unsigned int bs; const unsigned int cur_cpu = smp_processor_id(); const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu); + struct null_pcpu *npc = get_sched_res(sched_cpu)->sched_priv; struct null_private *prv = null_priv(ops); struct null_unit *wvc; @@ -802,14 +831,14 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev, } d; d.cpu = cur_cpu; d.tasklet = tasklet_work_scheduled; - if ( per_cpu(npc, sched_cpu).unit == NULL ) + if ( npc->unit == NULL ) { d.unit = d.dom = -1; } else { - d.unit = per_cpu(npc, sched_cpu).unit->unit_id; - d.dom = per_cpu(npc, sched_cpu).unit->domain->domain_id; + d.unit = npc->unit->unit_id; + d.dom = npc->unit->domain->domain_id; } __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d); } @@ -820,7 +849,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev, prev->next_task = sched_idle_unit(sched_cpu); } else - prev->next_task = per_cpu(npc, sched_cpu).unit; + prev->next_task = npc->unit; prev->next_time = -1; /* @@ -921,6 +950,7 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc) static void null_dump_pcpu(const struct scheduler *ops, int cpu) { struct null_private *prv = null_priv(ops); + struct null_pcpu *npc = get_sched_res(cpu)->sched_priv; struct null_unit *nvc; spinlock_t *lock; unsigned long flags; @@ -930,9 +960,8 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu) printk("CPU[%02d] sibling={%*pbl}, core={%*pbl}", cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)), CPUMASK_PR(per_cpu(cpu_core_mask, cpu))); - if ( per_cpu(npc, cpu).unit != NULL ) - printk(", unit=%pdv%d", per_cpu(npc, 
cpu).unit->domain, - per_cpu(npc, cpu).unit->unit_id); + if ( npc->unit != NULL ) + printk(", unit=%pdv%d", npc->unit->domain, npc->unit->unit_id); printk("\n"); /* current unit (nothing to say if that's the idle unit) */ @@ -1010,6 +1039,8 @@ static const struct scheduler sched_null_def = { .init = null_init, .deinit = null_deinit, + .alloc_pdata = null_alloc_pdata, + .free_pdata = null_free_pdata, .init_pdata = null_init_pdata, .switch_sched = null_switch_sched, .deinit_pdata = null_deinit_pdata,
From patchwork Wed Dec 18 07:48:57 2019
X-Patchwork-Id: 11299689
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:57 +0100
Message-Id: <20191218074859.21665-8-jgross@suse.com>
Subject: [Xen-devel] [PATCH 7/9] xen/sched: switch scheduling to bool where appropriate

Scheduling code has several places using int or bool_t instead of bool. Switch those. Signed-off-by: Juergen Gross --- xen/common/sched/cpupool.c | 10 +++++----- xen/common/sched/sched-if.h | 2 +- xen/common/sched/sched_arinc653.c | 8 ++++---- xen/common/sched/sched_credit.c | 12 ++++++------ xen/common/sched/sched_rt.c | 14 +++++++------- xen/common/sched/schedule.c | 14 +++++++------- xen/include/xen/sched.h | 6 +++--- 7 files changed, 33 insertions(+), 33 deletions(-) diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index d5b64d0a6a..14212bb4ae 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -154,7 +154,7 @@ static struct cpupool *alloc_cpupool_struct(void) * the searched id is returned * returns NULL if not found.
*/ -static struct cpupool *__cpupool_find_by_id(int id, int exact) +static struct cpupool *__cpupool_find_by_id(int id, bool exact) { struct cpupool **q; @@ -169,10 +169,10 @@ static struct cpupool *__cpupool_find_by_id(int id, int exact) static struct cpupool *cpupool_find_by_id(int poolid) { - return __cpupool_find_by_id(poolid, 1); + return __cpupool_find_by_id(poolid, true); } -static struct cpupool *__cpupool_get_by_id(int poolid, int exact) +static struct cpupool *__cpupool_get_by_id(int poolid, bool exact) { struct cpupool *c; spin_lock(&cpupool_lock); @@ -185,12 +185,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid, int exact) struct cpupool *cpupool_get_by_id(int poolid) { - return __cpupool_get_by_id(poolid, 1); + return __cpupool_get_by_id(poolid, true); } static struct cpupool *cpupool_get_next_by_id(int poolid) { - return __cpupool_get_by_id(poolid, 0); + return __cpupool_get_by_id(poolid, false); } void cpupool_put(struct cpupool *pool) diff --git a/xen/common/sched/sched-if.h b/xen/common/sched/sched-if.h index edce354dc7..9d0db75cbb 100644 --- a/xen/common/sched/sched-if.h +++ b/xen/common/sched/sched-if.h @@ -589,7 +589,7 @@ unsigned int cpupool_get_granularity(const struct cpupool *c); * * The hard affinity is not a subset of soft affinity * * There is an overlap between the soft and hard affinity masks */ -static inline int has_soft_affinity(const struct sched_unit *unit) +static inline bool has_soft_affinity(const struct sched_unit *unit) { return unit->soft_aff_effective && !cpumask_subset(cpupool_domain_master_cpumask(unit->domain), diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c index fe15754900..dc45378952 100644 --- a/xen/common/sched/sched_arinc653.c +++ b/xen/common/sched/sched_arinc653.c @@ -75,7 +75,7 @@ typedef struct arinc653_unit_s * arinc653_unit_t pointer. 
*/ struct sched_unit * unit; /* awake holds whether the UNIT has been woken with vcpu_wake() */ - bool_t awake; + bool awake; /* list holds the linked list information for the list this UNIT * is stored in */ struct list_head list; @@ -427,7 +427,7 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit, * will mark the UNIT awake. */ svc->unit = unit; - svc->awake = 0; + svc->awake = false; if ( !is_idle_unit(unit) ) list_add(&svc->list, &SCHED_PRIV(ops)->unit_list); update_schedule_units(ops); @@ -473,7 +473,7 @@ static void a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit) { if ( AUNIT(unit) != NULL ) - AUNIT(unit)->awake = 0; + AUNIT(unit)->awake = false; /* * If the UNIT being put to sleep is the same one that is currently @@ -493,7 +493,7 @@ static void a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { if ( AUNIT(unit) != NULL ) - AUNIT(unit)->awake = 1; + AUNIT(unit)->awake = true; cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ); } diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c index 8b1de9b033..05930261d9 100644 --- a/xen/common/sched/sched_credit.c +++ b/xen/common/sched/sched_credit.c @@ -245,7 +245,7 @@ __runq_elem(struct list_head *elem) } /* Is the first element of cpu's runq (if any) cpu's idle unit? */ -static inline bool_t is_runq_idle(unsigned int cpu) +static inline bool is_runq_idle(unsigned int cpu) { /* * We're peeking at cpu's runq, we must hold the proper lock. 
@@ -344,7 +344,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now) svc->start_time += (credits * MILLISECS(1)) / CSCHED_CREDITS_PER_MSEC; } -static bool_t __read_mostly opt_tickle_one_idle = 1; +static bool __read_mostly opt_tickle_one_idle = true; boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle); DEFINE_PER_CPU(unsigned int, last_tickle_cpu); @@ -719,7 +719,7 @@ __csched_unit_is_migrateable(const struct csched_private *prv, static int _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit, - bool_t commit) + bool commit) { int cpu = sched_unit_master(unit); /* We must always use cpu's scratch space */ @@ -871,7 +871,7 @@ csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit) * get boosted, which we don't deserve as we are "only" migrating. */ set_bit(CSCHED_FLAG_UNIT_MIGRATING, &svc->flags); - return get_sched_res(_csched_cpu_pick(ops, unit, 1)); + return get_sched_res(_csched_cpu_pick(ops, unit, true)); } static inline void @@ -975,7 +975,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu) * migrating it to run elsewhere (see multi-core and multi-thread * support in csched_res_pick()). 
*/ - new_cpu = _csched_cpu_pick(ops, currunit, 0); + new_cpu = _csched_cpu_pick(ops, currunit, false); unit_schedule_unlock_irqrestore(lock, flags, currunit); @@ -1108,7 +1108,7 @@ static void csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { struct csched_unit * const svc = CSCHED_UNIT(unit); - bool_t migrating; + bool migrating; BUG_ON( is_idle_unit(unit) ); diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c index 264a753116..8646d77343 100644 --- a/xen/common/sched/sched_rt.c +++ b/xen/common/sched/sched_rt.c @@ -490,10 +490,10 @@ rt_update_deadline(s_time_t now, struct rt_unit *svc) static inline bool deadline_queue_remove(struct list_head *queue, struct list_head *elem) { - int pos = 0; + bool pos = false; if ( queue->next != elem ) - pos = 1; + pos = true; list_del_init(elem); return !pos; @@ -505,14 +505,14 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *), struct list_head *queue) { struct list_head *iter; - int pos = 0; + bool pos = false; list_for_each ( iter, queue ) { struct rt_unit * iter_svc = (*qelem)(iter); if ( compare_unit_priority(svc, iter_svc) > 0 ) break; - pos++; + pos = true; } list_add_tail(elem, iter); return !pos; @@ -605,7 +605,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc) { struct list_head *replq = rt_replq(ops); struct rt_unit *rearm_svc = svc; - bool_t rearm = 0; + bool rearm = false; ASSERT( unit_on_replq(svc) ); @@ -622,7 +622,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc) { deadline_replq_insert(svc, &svc->replq_elem, replq); rearm_svc = replq_elem(replq->next); - rearm = 1; + rearm = true; } else rearm = deadline_replq_insert(svc, &svc->replq_elem, replq); @@ -1279,7 +1279,7 @@ rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { struct rt_unit * const svc = rt_unit(unit); s_time_t now; - bool_t missed; + bool missed; BUG_ON( is_idle_unit(unit) ); diff --git a/xen/common/sched/schedule.c 
b/xen/common/sched/schedule.c index db8ce146ca..3307e88b6c 100644 --- a/xen/common/sched/schedule.c +++ b/xen/common/sched/schedule.c @@ -53,7 +53,7 @@ string_param("sched", opt_sched); * scheduler will give preferrence to partially idle package compared to * the full idle package, when picking pCPU to schedule vCPU. */ -bool_t sched_smt_power_savings = 0; +bool sched_smt_power_savings; boolean_param("sched_smt_power_savings", sched_smt_power_savings); /* Default scheduling rate limit: 1ms @@ -574,7 +574,7 @@ int sched_init_vcpu(struct vcpu *v) { get_sched_res(v->processor)->curr = unit; get_sched_res(v->processor)->sched_unit_idle = unit; - v->is_running = 1; + v->is_running = true; unit->is_running = true; unit->state_entry_time = NOW(); } @@ -983,7 +983,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit) unsigned long flags; unsigned int old_cpu, new_cpu; spinlock_t *old_lock, *new_lock; - bool_t pick_called = 0; + bool pick_called = false; struct vcpu *v; /* @@ -1029,7 +1029,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit) if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) && cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) ) break; - pick_called = 1; + pick_called = true; } else { @@ -1037,7 +1037,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit) * We do not hold the scheduler lock appropriate for this vCPU. * Thus we cannot select a new CPU on this iteration. Try again. 
*/ - pick_called = 0; + pick_called = false; } sched_spin_unlock_double(old_lock, new_lock, flags); @@ -2148,7 +2148,7 @@ static void sched_switch_units(struct sched_resource *sr, vcpu_runstate_change(vnext, vnext->new_state, now); } - vnext->is_running = 1; + vnext->is_running = true; if ( is_idle_vcpu(vnext) ) vnext->sched_unit = next; @@ -2219,7 +2219,7 @@ static void vcpu_context_saved(struct vcpu *vprev, struct vcpu *vnext) smp_wmb(); if ( vprev != vnext ) - vprev->is_running = 0; + vprev->is_running = false; } static void unit_context_saved(struct sched_resource *sr) diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 55335d6ab3..b2f48a3512 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -557,18 +557,18 @@ static inline bool is_system_domain(const struct domain *d) * Use this when you don't have an existing reference to @d. It returns * FALSE if @d is being destroyed. */ -static always_inline int get_domain(struct domain *d) +static always_inline bool get_domain(struct domain *d) { int old, seen = atomic_read(&d->refcnt); do { old = seen; if ( unlikely(old & DOMAIN_DESTROYED) ) - return 0; + return false; seen = atomic_cmpxchg(&d->refcnt, old, old + 1); } while ( unlikely(seen != old) ); - return 1; + return true; } /*
From patchwork Wed Dec 18 07:48:58 2019
X-Patchwork-Id: 11299683
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:58 +0100
Message-Id: <20191218074859.21665-9-jgross@suse.com>
Subject: [Xen-devel] [PATCH 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()

sched_tick_suspend() and
sched_tick_resume() only call rcu related functions, so eliminate them and do the rcu_idle_timer*() calling in rcu_idle_[enter|exit](). Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli Acked-by: Julien Grall --- xen/arch/arm/domain.c | 6 +++--- xen/arch/x86/acpi/cpu_idle.c | 15 ++++++++------- xen/arch/x86/cpu/mwait-idle.c | 8 ++++---- xen/common/rcupdate.c | 7 +++++-- xen/common/sched/schedule.c | 12 ------------ xen/include/xen/rcupdate.h | 3 --- xen/include/xen/sched.h | 2 -- 7 files changed, 20 insertions(+), 33 deletions(-) diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index c0a13aa0ab..aa3df3b3ba 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -46,8 +46,8 @@ static void do_idle(void) { unsigned int cpu = smp_processor_id(); - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */ process_pending_softirqs(); local_irq_disable(); @@ -58,7 +58,7 @@ static void do_idle(void) } local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); } void idle_loop(void) diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c index 5edd1844f4..2676f0d7da 100644 --- a/xen/arch/x86/acpi/cpu_idle.c +++ b/xen/arch/x86/acpi/cpu_idle.c @@ -599,7 +599,8 @@ void update_idle_stats(struct acpi_processor_power *power, static void acpi_processor_idle(void) { - struct acpi_processor_power *power = processor_powers[smp_processor_id()]; + unsigned int cpu = smp_processor_id(); + struct acpi_processor_power *power = processor_powers[cpu]; struct acpi_processor_cx *cx = NULL; int next_state; uint64_t t1, t2 = 0; @@ -648,8 +649,8 @@ static void acpi_processor_idle(void) cpufreq_dbs_timer_suspend(); - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. 
*/ process_pending_softirqs(); /* @@ -658,10 +659,10 @@ static void acpi_processor_idle(void) */ local_irq_disable(); - if ( !cpu_is_haltable(smp_processor_id()) ) + if ( !cpu_is_haltable(cpu) ) { local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -786,7 +787,7 @@ static void acpi_processor_idle(void) /* Now in C0 */ power->last_state = &power->states[0]; local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -794,7 +795,7 @@ static void acpi_processor_idle(void) /* Now in C0 */ power->last_state = &power->states[0]; - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); if ( cpuidle_current_governor->reflect ) diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c index 52413e6da1..f49b04c45b 100644 --- a/xen/arch/x86/cpu/mwait-idle.c +++ b/xen/arch/x86/cpu/mwait-idle.c @@ -755,8 +755,8 @@ static void mwait_idle(void) cpufreq_dbs_timer_suspend(); - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */ process_pending_softirqs(); /* Interrupts must be disabled for C2 and higher transitions. 
*/ @@ -764,7 +764,7 @@ static void mwait_idle(void) if (!cpu_is_haltable(cpu)) { local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -806,7 +806,7 @@ static void mwait_idle(void) if (!(lapic_timer_reliable_states & (1 << cstate))) lapic_timer_on(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); if ( cpuidle_current_governor->reflect ) diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c index a56103c6f7..cb712c8690 100644 --- a/xen/common/rcupdate.c +++ b/xen/common/rcupdate.c @@ -459,7 +459,7 @@ int rcu_needs_cpu(int cpu) * periodically poke rcu_pedning(), so that it will invoke the callback * not too late after the end of the grace period. */ -void rcu_idle_timer_start() +static void rcu_idle_timer_start(void) { struct rcu_data *rdp = &this_cpu(rcu_data); @@ -475,7 +475,7 @@ void rcu_idle_timer_start() rdp->idle_timer_active = true; } -void rcu_idle_timer_stop() +static void rcu_idle_timer_stop(void) { struct rcu_data *rdp = &this_cpu(rcu_data); @@ -633,10 +633,13 @@ void rcu_idle_enter(unsigned int cpu) * Se the comment before cpumask_andnot() in rcu_start_batch(). 
*/ smp_mb(); + + rcu_idle_timer_start(); } void rcu_idle_exit(unsigned int cpu) { + rcu_idle_timer_stop(); ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask)); cpumask_clear_cpu(cpu, &rcu_ctrlblk.idle_cpumask); } diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c index 3307e88b6c..ddbface969 100644 --- a/xen/common/sched/schedule.c +++ b/xen/common/sched/schedule.c @@ -3265,18 +3265,6 @@ void schedule_dump(struct cpupool *c) rcu_read_unlock(&sched_res_rculock); } -void sched_tick_suspend(void) -{ - rcu_idle_enter(smp_processor_id()); - rcu_idle_timer_start(); -} - -void sched_tick_resume(void) -{ - rcu_idle_timer_stop(); - rcu_idle_exit(smp_processor_id()); -} - void wait(void) { schedule(); } diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h index 13850865ed..174d058113 100644 --- a/xen/include/xen/rcupdate.h +++ b/xen/include/xen/rcupdate.h @@ -148,7 +148,4 @@ int rcu_barrier(void); void rcu_idle_enter(unsigned int cpu); void rcu_idle_exit(unsigned int cpu); -void rcu_idle_timer_start(void); -void rcu_idle_timer_stop(void); - #endif /* __XEN_RCUPDATE_H */ diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index b2f48a3512..e4263de2d5 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -688,8 +688,6 @@ void sched_destroy_domain(struct domain *d); long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *); long sched_adjust_global(struct xen_sysctl_scheduler_op *); int sched_id(void); -void sched_tick_suspend(void); -void sched_tick_resume(void); void vcpu_wake(struct vcpu *v); long vcpu_yield(void); void vcpu_sleep_nosync(struct vcpu *v);
From patchwork Wed Dec 18 07:48:59 2019
X-Patchwork-Id: 11299693
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 18 Dec 2019 08:48:59 +0100
Message-Id: <20191218074859.21665-10-jgross@suse.com>
Subject: [Xen-devel] [PATCH 9/9] xen/sched: add const qualifier where appropriate
Make use of the const qualifier more often in scheduling code. Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- xen/common/sched/cpupool.c | 2 +- xen/common/sched/sched_arinc653.c | 4 +-- xen/common/sched/sched_credit.c | 44 +++++++++++++++++---------------- xen/common/sched/sched_credit2.c | 52 ++++++++++++++++++++------------------- xen/common/sched/sched_null.c | 17 +++++++------ xen/common/sched/sched_rt.c | 32 ++++++++++++------------ xen/common/sched/schedule.c | 25 ++++++++++--------- xen/include/xen/sched.h | 9 ++++--- 8 files changed, 96 insertions(+), 89 deletions(-) diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 14212bb4ae..a6c04c46cb 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -882,7 +882,7 @@ int cpupool_get_id(const struct domain *d) return d->cpupool ?
d->cpupool->cpupool_id : CPUPOOLID_NONE; } -cpumask_t *cpupool_valid_cpus(struct cpupool *pool) +const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool) { return pool->cpu_valid; } diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c index dc45378952..0de4ba6b2c 100644 --- a/xen/common/sched/sched_arinc653.c +++ b/xen/common/sched/sched_arinc653.c @@ -608,7 +608,7 @@ static struct sched_resource * a653sched_pick_resource(const struct scheduler *ops, const struct sched_unit *unit) { - cpumask_t *online; + const cpumask_t *online; unsigned int cpu; /* @@ -639,7 +639,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned int cpu, void *pdata, void *vdata) { struct sched_resource *sr = get_sched_res(cpu); - arinc653_unit_t *svc = vdata; + const arinc653_unit_t *svc = vdata; ASSERT(!pdata && svc && is_idle_unit(svc->unit)); diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c index 05930261d9..f2fc1cca5a 100644 --- a/xen/common/sched/sched_credit.c +++ b/xen/common/sched/sched_credit.c @@ -233,7 +233,7 @@ static void csched_tick(void *_cpu); static void csched_acct(void *dummy); static inline int -__unit_on_runq(struct csched_unit *svc) +__unit_on_runq(const struct csched_unit *svc) { return !list_empty(&svc->runq_elem); } @@ -349,11 +349,11 @@ boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle); DEFINE_PER_CPU(unsigned int, last_tickle_cpu); -static inline void __runq_tickle(struct csched_unit *new) +static inline void __runq_tickle(const struct csched_unit *new) { unsigned int cpu = sched_unit_master(new->unit); - struct sched_resource *sr = get_sched_res(cpu); - struct sched_unit *unit = new->unit; + const struct sched_resource *sr = get_sched_res(cpu); + const struct sched_unit *unit = new->unit; struct csched_unit * const cur = CSCHED_UNIT(curr_on_cpu(cpu)); struct csched_private *prv = CSCHED_PRIV(sr->scheduler); cpumask_t mask, idle_mask, *online; @@ -509,7 +509,7 @@ static inline void 
__runq_tickle(struct csched_unit *new) static void csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu) { - struct csched_private *prv = CSCHED_PRIV(ops); + const struct csched_private *prv = CSCHED_PRIV(ops); /* * pcpu either points to a valid struct csched_pcpu, or is NULL, if we're @@ -652,7 +652,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu, #ifndef NDEBUG static inline void -__csched_unit_check(struct sched_unit *unit) +__csched_unit_check(const struct sched_unit *unit) { struct csched_unit * const svc = CSCHED_UNIT(unit); struct csched_dom * const sdom = svc->sdom; @@ -700,8 +700,8 @@ __csched_vcpu_is_cache_hot(const struct csched_private *prv, static inline int __csched_unit_is_migrateable(const struct csched_private *prv, - struct sched_unit *unit, - int dest_cpu, cpumask_t *mask) + const struct sched_unit *unit, + int dest_cpu, const cpumask_t *mask) { const struct csched_unit *svc = CSCHED_UNIT(unit); /* @@ -725,7 +725,7 @@ _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit, /* We must always use cpu's scratch space */ cpumask_t *cpus = cpumask_scratch_cpu(cpu); cpumask_t idlers; - cpumask_t *online = cpupool_domain_master_cpumask(unit->domain); + const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain); struct csched_pcpu *spc = NULL; int balance_step; @@ -932,7 +932,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu) { struct sched_unit *currunit = current->sched_unit; struct csched_unit * const svc = CSCHED_UNIT(currunit); - struct sched_resource *sr = get_sched_res(cpu); + const struct sched_resource *sr = get_sched_res(cpu); const struct scheduler *ops = sr->scheduler; ASSERT( sched_unit_master(currunit) == cpu ); @@ -1084,7 +1084,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit) { struct csched_unit * const svc = CSCHED_UNIT(unit); unsigned int cpu = sched_unit_master(unit); - struct sched_resource *sr = get_sched_res(cpu); + const 
struct sched_resource *sr = get_sched_res(cpu); SCHED_STAT_CRANK(unit_sleep); @@ -1577,7 +1577,7 @@ static void csched_tick(void *_cpu) { unsigned int cpu = (unsigned long)_cpu; - struct sched_resource *sr = get_sched_res(cpu); + const struct sched_resource *sr = get_sched_res(cpu); struct csched_pcpu *spc = CSCHED_PCPU(cpu); struct csched_private *prv = CSCHED_PRIV(sr->scheduler); @@ -1604,7 +1604,7 @@ csched_tick(void *_cpu) static struct csched_unit * csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step) { - struct sched_resource *sr = get_sched_res(cpu); + const struct sched_resource *sr = get_sched_res(cpu); const struct csched_private * const prv = CSCHED_PRIV(sr->scheduler); const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu); struct csched_unit *speer; @@ -1681,10 +1681,10 @@ static struct csched_unit * csched_load_balance(struct csched_private *prv, int cpu, struct csched_unit *snext, bool *stolen) { - struct cpupool *c = get_sched_res(cpu)->cpupool; + const struct cpupool *c = get_sched_res(cpu)->cpupool; struct csched_unit *speer; cpumask_t workers; - cpumask_t *online = c->res_valid; + const cpumask_t *online = c->res_valid; int peer_cpu, first_cpu, peer_node, bstep; int node = cpu_to_node(cpu); @@ -2008,7 +2008,7 @@ out: } static void -csched_dump_unit(struct csched_unit *svc) +csched_dump_unit(const struct csched_unit *svc) { struct csched_dom * const sdom = svc->sdom; @@ -2041,10 +2041,11 @@ csched_dump_unit(struct csched_unit *svc) static void csched_dump_pcpu(const struct scheduler *ops, int cpu) { - struct list_head *runq, *iter; + const struct list_head *runq; + struct list_head *iter; struct csched_private *prv = CSCHED_PRIV(ops); - struct csched_pcpu *spc; - struct csched_unit *svc; + const struct csched_pcpu *spc; + const struct csched_unit *svc; spinlock_t *lock; unsigned long flags; int loop; @@ -2132,12 +2133,13 @@ csched_dump(const struct scheduler *ops) loop = 0; list_for_each( iter_sdom, &prv->active_sdom ) { 
-        struct csched_dom *sdom;
+        const struct csched_dom *sdom;
+
         sdom = list_entry(iter_sdom, struct csched_dom, active_sdom_elem);
 
         list_for_each( iter_svc, &sdom->active_unit )
         {
-            struct csched_unit *svc;
+            const struct csched_unit *svc;
             spinlock_t *lock;
 
             svc = list_entry(iter_svc, struct csched_unit, active_unit_elem);
diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c
index f9e521a3a8..1ed7bbde2f 100644
--- a/xen/common/sched/sched_credit2.c
+++ b/xen/common/sched/sched_credit2.c
@@ -692,7 +692,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
  */
 static int get_fallback_cpu(struct csched2_unit *svc)
 {
-    struct sched_unit *unit = svc->unit;
+    const struct sched_unit *unit = svc->unit;
     unsigned int bs;
 
     SCHED_STAT_CRANK(need_fallback_cpu);
@@ -774,7 +774,7 @@ static int get_fallback_cpu(struct csched2_unit *svc)
  *
  * FIXME: Do pre-calculated division?
  */
-static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
+static void t2c_update(const struct csched2_runqueue_data *rqd, s_time_t time,
                        struct csched2_unit *svc)
 {
     uint64_t val = time * rqd->max_weight + svc->residual;
@@ -783,7 +783,8 @@ static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
     svc->credit -= val;
 }
 
-static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct csched2_unit *svc)
+static s_time_t c2t(const struct csched2_runqueue_data *rqd, s_time_t credit,
+                    const struct csched2_unit *svc)
 {
     return credit * svc->weight / rqd->max_weight;
 }
@@ -792,7 +793,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
  * Runqueue related code.
  */
 
-static inline int unit_on_runq(struct csched2_unit *svc)
+static inline int unit_on_runq(const struct csched2_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -849,9 +850,9 @@ static inline bool same_core(unsigned int cpua, unsigned int cpub)
 }
 
 static unsigned int
-cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+cpu_to_runqueue(const struct csched2_private *prv, unsigned int cpu)
 {
-    struct csched2_runqueue_data *rqd;
+    const struct csched2_runqueue_data *rqd;
     unsigned int rqi;
 
     for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
@@ -917,7 +918,7 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
 
         list_for_each( iter, &rqd->svc )
         {
-            struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
+            const struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
 
             if ( svc->weight > max_weight )
                 max_weight = svc->weight;
@@ -970,7 +971,7 @@ _runq_assign(struct csched2_unit *svc, struct csched2_runqueue_data *rqd)
 }
 
 static void
-runq_assign(const struct scheduler *ops, struct sched_unit *unit)
+runq_assign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -997,7 +998,7 @@ _runq_deassign(struct csched2_unit *svc)
 }
 
 static void
-runq_deassign(const struct scheduler *ops, struct sched_unit *unit)
+runq_deassign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -1203,7 +1204,7 @@ static void
 update_svc_load(const struct scheduler *ops,
                 struct csched2_unit *svc, int change, s_time_t now)
 {
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t delta, unit_load;
     unsigned int P, W;
 
@@ -1362,11 +1363,11 @@ static inline bool is_preemptable(const struct csched2_unit *svc,
  * Within the same class, the highest difference of credit.
  */
 static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
-                             struct csched2_unit *new, unsigned int cpu)
+                             const struct csched2_unit *new, unsigned int cpu)
 {
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct csched2_unit * cur = csched2_unit(curr_on_cpu(cpu));
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t score;
 
     /*
@@ -1441,7 +1442,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_unit *new, s_time_t now)
     struct sched_unit *unit = new->unit;
     unsigned int bs, cpu = sched_unit_master(unit);
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
+    const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     cpumask_t mask;
 
     ASSERT(new->rqd == rqd);
@@ -2005,7 +2006,7 @@ static void replenish_domain_budget(void* data)
 
 #ifndef NDEBUG
 static inline void
-csched2_unit_check(struct sched_unit *unit)
+csched2_unit_check(const struct sched_unit *unit)
 {
     struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_dom * const sdom = svc->sdom;
@@ -2541,8 +2542,8 @@ static void migrate(const struct scheduler *ops,
  *  - svc is not already flagged to migrate,
  *  - if svc is allowed to run on at least one of the pcpus of rqd.
  */
-static bool unit_is_migrateable(struct csched2_unit *svc,
-                                struct csched2_runqueue_data *rqd)
+static bool unit_is_migrateable(const struct csched2_unit *svc,
+                                const struct csched2_runqueue_data *rqd)
 {
     struct sched_unit *unit = svc->unit;
     int cpu = sched_unit_master(unit);
@@ -3076,7 +3077,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct csched2_unit *svc = unit->priv;
+    const struct csched2_unit *svc = unit->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
@@ -3142,7 +3143,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     int rt_credit; /* Proposed runtime measured in credits */
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct list_head *runq = &rqd->runq;
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
 
     /*
      * If we're idle, just stay so. Others (or external events)
@@ -3239,7 +3240,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched2_unit *snext = NULL;
     struct csched2_private *prv = csched2_priv(sr->scheduler);
     bool yield = false, soft_aff_preempt = false;
@@ -3603,7 +3604,8 @@ static void csched2_schedule(
 }
 
 static void
-csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
+csched2_dump_unit(const struct csched2_private *prv,
+                  const struct csched2_unit *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
            svc->unit->domain->domain_id,
@@ -3626,8 +3628,8 @@ csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
 static inline void
 dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct csched2_private *prv = csched2_priv(ops);
-    struct csched2_unit *svc;
+    const struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_unit *svc;
 
     printk("CPU[%02d] runq=%d, sibling={%*pbl}, core={%*pbl}\n",
            cpu, c2r(cpu),
@@ -3695,8 +3697,8 @@ csched2_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->sdom )
     {
-        struct csched2_dom *sdom;
-        struct sched_unit *unit;
+        const struct csched2_dom *sdom;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
@@ -3737,7 +3739,7 @@ csched2_dump(const struct scheduler *ops)
     printk("RUNQ:\n");
     list_for_each( iter, runq )
     {
-        struct csched2_unit *svc = runq_elem(iter);
+        const struct csched2_unit *svc = runq_elem(iter);
 
         if ( svc )
         {
diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c
index 11aab25743..4906e02c62 100644
--- a/xen/common/sched/sched_null.c
+++ b/xen/common/sched/sched_null.c
@@ -278,12 +278,12 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
  * So this is not part of any hot path.
  */
 static struct sched_resource *
-pick_res(struct null_private *prv, const struct sched_unit *unit)
+pick_res(const struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
-    cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -375,7 +375,7 @@ static void unit_assign(struct null_private *prv, struct sched_unit *unit,
 }
 
 /* Returns true if a cpu was tickled */
-static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
+static bool unit_deassign(struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
@@ -441,7 +441,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 {
     struct sched_resource *sr = get_sched_res(cpu);
     struct null_private *prv = null_priv(new_ops);
-    struct null_unit *nvc = vdata;
+    const struct null_unit *nvc = vdata;
 
     ASSERT(nvc && is_idle_unit(nvc->unit));
 
@@ -940,7 +940,8 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         prev->next_task->migrated = false;
 }
 
-static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
+static inline void dump_unit(const struct null_private *prv,
+                             const struct null_unit *nvc)
 {
     printk("[%i.%i] pcpu=%d", nvc->unit->domain->domain_id,
            nvc->unit->unit_id, list_empty(&nvc->waitq_elem) ?
@@ -950,8 +951,8 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
-    struct null_unit *nvc;
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
 
diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 8646d77343..560614ed9d 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -352,7 +352,7 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
+    const struct rt_unit *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -371,8 +371,8 @@ rt_dump(const struct scheduler *ops)
 {
     struct list_head *runq, *depletedq, *replq, *iter;
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
-    struct rt_dom *sdom;
+    const struct rt_unit *svc;
+    const struct rt_dom *sdom;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -408,7 +408,7 @@ rt_dump(const struct scheduler *ops)
     printk("Domain info:\n");
     list_for_each ( iter, &prv->sdom )
     {
-        struct sched_unit *unit;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter, struct rt_dom, sdom_elem);
         printk("\tdomain: %d\n", sdom->dom->domain_id);
@@ -509,7 +509,7 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
 
     list_for_each ( iter, queue )
     {
-        struct rt_unit * iter_svc = (*qelem)(iter);
+        const struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
         pos = true;
@@ -547,7 +547,7 @@ replq_remove(const struct scheduler *ops, struct rt_unit *svc)
          */
         if ( !list_empty(replq) )
         {
-            struct rt_unit *svc_next = replq_elem(replq->next);
+            const struct rt_unit *svc_next = replq_elem(replq->next);
             set_timer(&prv->repl_timer, svc_next->cur_deadline);
         }
         else
@@ -604,7 +604,7 @@ static void
 replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
-    struct rt_unit *rearm_svc = svc;
+    const struct rt_unit *rearm_svc = svc;
     bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
@@ -640,7 +640,7 @@ static struct sched_resource *
 rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
     cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
@@ -1028,7 +1028,7 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
     cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
 
     list_for_each ( iter, runq )
     {
@@ -1197,15 +1197,15 @@ rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
  * lock is grabbed before calling this function
  */
 static void
-runq_tickle(const struct scheduler *ops, struct rt_unit *new)
+runq_tickle(const struct scheduler *ops, const struct rt_unit *new)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
-    struct rt_unit *iter_svc;
-    struct sched_unit *iter_unit;
+    const struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
+    const struct rt_unit *iter_svc;
+    const struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
     cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
-    cpumask_t *online;
+    const cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
@@ -1379,7 +1379,7 @@ rt_dom_cntl(
 {
     struct rt_private *prv = rt_priv(ops);
     struct rt_unit *svc;
-    struct sched_unit *unit;
+    const struct sched_unit *unit;
     unsigned long flags;
     int rc = 0;
     struct xen_domctl_schedparam_vcpu local_sched;
@@ -1484,7 +1484,7 @@ rt_dom_cntl(
  */
 static void repl_timer_handler(void *data){
     s_time_t now;
-    struct scheduler *ops = data;
+    const struct scheduler *ops = data;
     struct rt_private *prv = rt_priv(ops);
     struct list_head *replq = rt_replq(ops);
     struct list_head *runq = rt_runq(ops);
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index ddbface969..1d98e1fa8d 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -175,7 +175,7 @@ static inline struct scheduler *dom_scheduler(const struct domain *d)
 
 static inline struct scheduler *unit_scheduler(const struct sched_unit *unit)
 {
-    struct domain *d = unit->domain;
+    const struct domain *d = unit->domain;
 
     if ( likely(d->cpupool != NULL) )
         return d->cpupool->sched;
@@ -202,7 +202,7 @@ static inline struct scheduler *vcpu_scheduler(const struct vcpu *v)
 }
 #define VCPU2ONLINE(_v) cpupool_domain_master_cpumask((_v)->domain)
 
-static inline void trace_runstate_change(struct vcpu *v, int new_state)
+static inline void trace_runstate_change(const struct vcpu *v, int new_state)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
     uint32_t event;
@@ -220,7 +220,7 @@ static inline void trace_runstate_change(struct vcpu *v, int new_state)
     __trace_var(event, 1/*tsc*/, sizeof(d), &d);
 }
 
-static inline void trace_continue_running(struct vcpu *v)
+static inline void trace_continue_running(const struct vcpu *v)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
 
@@ -302,7 +302,8 @@ void sched_guest_idle(void (*idle) (void), unsigned int cpu)
     atomic_dec(&per_cpu(sched_urgent_count, cpu));
 }
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate)
 {
     spinlock_t *lock;
     s_time_t delta;
@@ -324,7 +325,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 uint64_t get_cpu_idle_time(unsigned int cpu)
 {
     struct vcpu_runstate_info state = { 0 };
-    struct vcpu *v = idle_vcpu[cpu];
+    const struct vcpu *v = idle_vcpu[cpu];
 
     if ( cpu_online(cpu) && v )
         vcpu_runstate_get(v, &state);
@@ -392,7 +393,7 @@ static void sched_free_unit_mem(struct sched_unit *unit)
 
 static void sched_free_unit(struct sched_unit *unit, struct vcpu *v)
 {
-    struct vcpu *vunit;
+    const struct vcpu *vunit;
     unsigned int cnt = 0;
 
     /* Don't count to be released vcpu, might be not in vcpu list yet. */
@@ -522,7 +523,7 @@ static unsigned int sched_select_initial_cpu(const struct vcpu *v)
 
 int sched_init_vcpu(struct vcpu *v)
 {
-    struct domain *d = v->domain;
+    const struct domain *d = v->domain;
     struct sched_unit *unit;
     unsigned int processor;
 
@@ -913,7 +914,7 @@ static void sched_unit_move_locked(struct sched_unit *unit,
                                    unsigned int new_cpu)
 {
     unsigned int old_cpu = unit->res->master_cpu;
-    struct vcpu *v;
+    const struct vcpu *v;
 
     rcu_read_lock(&sched_res_rculock);
 
@@ -1090,7 +1091,7 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(struct sched_unit *unit)
+static void sched_reset_affinity_broken(const struct sched_unit *unit)
 {
     struct vcpu *v;
 
@@ -1176,7 +1177,7 @@ void restore_vcpu_affinity(struct domain *d)
 int cpu_disable_scheduler(unsigned int cpu)
 {
     struct domain *d;
-    struct cpupool *c;
+    const struct cpupool *c;
     cpumask_t online_affinity;
     int ret = 0;
 
@@ -1251,8 +1252,8 @@ out:
 static int cpu_disable_scheduler_check(unsigned int cpu)
 {
     struct domain *d;
-    struct vcpu *v;
-    struct cpupool *c;
+    const struct vcpu *v;
+    const struct cpupool *c;
 
     c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index e4263de2d5..fcf8e5037b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -771,7 +771,7 @@ static inline void hypercall_cancel_continuation(struct vcpu *v)
 extern struct domain *domain_list;
 
 /* Caller must hold the domlist_read_lock or domlist_update_lock. */
-static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
+static inline struct domain *first_domain_in_cpupool(const struct cpupool *c)
 {
     struct domain *d;
     for (d = rcu_dereference(domain_list); d && d->cpupool != c;
@@ -779,7 +779,7 @@ static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
     return d;
 }
 static inline struct domain *next_domain_in_cpupool(
-    struct domain *d, struct cpupool *c)
+    struct domain *d, const struct cpupool *c)
 {
     for (d = rcu_dereference(d->next_in_list); d && d->cpupool != c;
         d = rcu_dereference(d->next_in_list));
@@ -923,7 +923,8 @@ void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
 void sched_guest_idle(void (*idle) (void), unsigned int cpu);
 void scheduler_enable(void);
@@ -1042,7 +1043,7 @@ extern enum cpufreq_controller {
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
-cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
+const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
 
 extern void dump_runq(unsigned char key);
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);