From patchwork Fri Mar 29 15:08:45 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Kevin Tian, Stefano Stabellini, Wei Liu,
    Jun Nakajima, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper,
    Ian Jackson, Robert VanVossen, Dario Faggioli, Julien Grall,
    Paul Durrant, Josh Whitehead, Meng Xu, Jan Beulich, Roger Pau Monné
Date: Fri, 29 Mar 2019 16:08:45 +0100
Message-Id: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 00/49] xen: add core scheduling support

This series is very RFC!!!!

Add support for core- and socket-scheduling in the Xen hypervisor. Via the
boot parameter sched_granularity=core (or sched_granularity=socket) it is
possible to change the scheduling granularity from thread (the default) to
whole cores or even sockets. All logical cpus (threads) of a core or socket
are then always scheduled together. This means that on a core only vcpus of
the same domain will be active at any given time, and those vcpus will
always be scheduled at the same time.
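To illustrate how the boot parameter could map onto an internal setting (the
last patch of the series, "xen/sched: add scheduling granularity enum", adds
such an enum), here is a minimal sketch. The identifiers SCHED_GRAN_*,
opt_sched_granularity and parse_sched_granularity are illustrative
assumptions of mine, not necessarily what the series uses; custom_param() is
the usual Xen way to register such a parameter, but the actual wiring in the
patches may differ:

    /* Illustrative sketch only -- names are assumptions, not the series' code. */
    #include <xen/errno.h>
    #include <xen/init.h>
    #include <xen/string.h>

    enum sched_gran {
        SCHED_GRAN_THREAD,    /* default: schedule single threads (vcpus)    */
        SCHED_GRAN_CORE,      /* schedule all threads of a core together     */
        SCHED_GRAN_SOCKET,    /* schedule all threads of a socket together   */
    };

    static enum sched_gran opt_sched_granularity = SCHED_GRAN_THREAD;

    /* Parse "sched_granularity=thread|core|socket" from the Xen command line. */
    static int __init parse_sched_granularity(const char *str)
    {
        if ( !strcmp(str, "thread") )
            opt_sched_granularity = SCHED_GRAN_THREAD;
        else if ( !strcmp(str, "core") )
            opt_sched_granularity = SCHED_GRAN_CORE;
        else if ( !strcmp(str, "socket") )
            opt_sched_granularity = SCHED_GRAN_SOCKET;
        else
            return -EINVAL;

        return 0;
    }
    custom_param("sched_granularity", parse_sched_granularity);

With such a setting, booting Xen with sched_granularity=core selects core
granularity, while omitting the parameter keeps today's per-thread scheduling.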
This is achieved by switching the scheduler to no longer treat vcpus as the
primary objects to schedule, but "schedule items" instead. Each schedule item
consists of as many vcpus as each core has threads on the current system. The
vcpu->item relation is fixed (a rough illustrative sketch of such an item
structure can be found after the diffstat at the end of this mail).

I have done some very basic performance testing: on a 4 cpu system (2 cores
with 2 threads each) I did a "make -j 4" for building the Xen hypervisor.
This test has been run in dom0, once with no other guest active and once
with another guest with 4 vcpus running the same test.

The results are (always elapsed time, system time, user time):

  sched_granularity=thread, no other guest: 116.10  177.65  207.84
  sched_granularity=core,   no other guest: 114.04  175.47  207.45
  sched_granularity=thread, other guest:    202.30  334.21  384.63
  sched_granularity=core,   other guest:    207.24  293.04  371.37

All tests have been performed with credit2; the other schedulers are
untested so far.

Cpupools are not yet working, as moving cpus between cpupools needs more
work.

HVM domains do not work yet: there is a double fault in Xen at the end of
SeaBIOS. I'm currently investigating this issue.

This is x86-only for the moment. ARM doesn't even build with the series
applied. For full ARM support I might need some help with the ARM-specific
context switch handling.

The first 7 patches have already been sent to xen-devel; I'm adding them
here for convenience as they are prerequisites.

I'm especially looking for feedback regarding the overall idea and design.

Juergen Gross (49):
  xen/sched: call cpu_disable_scheduler() via cpu notifier
  xen: add helper for calling notifier_call_chain() to common/cpu.c
  xen: add new cpu notifier action CPU_RESUME_FAILED
  xen: don't free percpu areas during suspend
  xen/cpupool: simplify suspend/resume handling
  xen/sched: don't disable scheduler on cpus during suspend
  xen/sched: fix credit2 smt idle handling
  xen/sched: use new sched_item instead of vcpu in scheduler interfaces
  xen/sched: alloc struct sched_item for each vcpu
  xen/sched: move per-vcpu scheduler private data pointer to sched_item
  xen/sched: build a linked list of struct sched_item
  xen/sched: introduce struct sched_resource
  xen/sched: let pick_cpu return a scheduler resource
  xen/sched: switch schedule_data.curr to point at sched_item
  xen/sched: move per cpu scheduler private data into struct sched_resource
  xen/sched: switch vcpu_schedule_lock to item_schedule_lock
  xen/sched: move some per-vcpu items to struct sched_item
  xen/sched: add scheduler helpers hiding vcpu
  xen/sched: add domain pointer to struct sched_item
  xen/sched: add id to struct sched_item
  xen/sched: rename scheduler related perf counters
  xen/sched: switch struct task_slice from vcpu to sched_item
  xen/sched: move is_running indicator to struct sched_item
  xen/sched: make null scheduler vcpu agnostic.
  xen/sched: make rt scheduler vcpu agnostic.
  xen/sched: make credit scheduler vcpu agnostic.
  xen/sched: make credit2 scheduler vcpu agnostic.
  xen/sched: make arinc653 scheduler vcpu agnostic.
  xen: add sched_item_pause_nosync() and sched_item_unpause()
  xen: let vcpu_create() select processor
  xen/sched: use sched_resource cpu instead smp_processor_id in schedulers
  xen/sched: switch schedule() from vcpus to sched_items
  xen/sched: switch sched_move_irqs() to take sched_item as parameter
  xen: switch from for_each_vcpu() to for_each_sched_item()
  xen/sched: add runstate counters to struct sched_item
  xen/sched: rework and rename vcpu_force_reschedule()
  xen/sched: Change vcpu_migrate_*() to operate on schedule item
  xen/sched: move struct task_slice into struct sched_item
  xen/sched: add code to sync scheduling of all vcpus of a sched item
  xen/sched: add support for multiple vcpus per sched item where missing
  x86: make loading of GDT at context switch more modular
  xen/sched: add support for guest vcpu idle
  xen/sched: modify cpupool_domain_cpumask() to be an item mask
  xen: round up max vcpus to scheduling granularity
  xen/sched: support allocating multiple vcpus into one sched item
  xen/sched: add a scheduler_percpu_init() function
  xen/sched: support core scheduling in continue_running()
  xen/sched: make vcpu_wake() core scheduling aware
  xen/sched: add scheduling granularity enum

 xen/arch/arm/domain.c                |   14 +
 xen/arch/arm/domain_build.c          |   13 +-
 xen/arch/arm/smpboot.c               |    6 +-
 xen/arch/x86/dom0_build.c            |   11 +-
 xen/arch/x86/domain.c                |  243 +++++--
 xen/arch/x86/hvm/dom0_build.c        |    9 +-
 xen/arch/x86/hvm/hvm.c               |    7 +-
 xen/arch/x86/hvm/viridian/viridian.c |    1 +
 xen/arch/x86/hvm/vlapic.c            |    1 +
 xen/arch/x86/hvm/vmx/vmcs.c          |    6 +-
 xen/arch/x86/hvm/vmx/vmx.c           |    5 +-
 xen/arch/x86/mm.c                    |   10 +-
 xen/arch/x86/percpu.c                |    3 +-
 xen/arch/x86/pv/descriptor-tables.c  |    6 +-
 xen/arch/x86/pv/dom0_build.c         |   10 +-
 xen/arch/x86/pv/domain.c             |   19 +
 xen/arch/x86/pv/emul-priv-op.c       |    2 +
 xen/arch/x86/pv/shim.c               |    4 +-
 xen/arch/x86/pv/traps.c              |    6 +-
 xen/arch/x86/setup.c                 |    2 +
 xen/arch/x86/smpboot.c               |    5 +-
 xen/arch/x86/traps.c                 |   10 +-
 xen/common/cpu.c                     |   61 +-
 xen/common/cpupool.c                 |  161 ++---
 xen/common/domain.c                  |   36 +-
 xen/common/domctl.c                  |   28 +-
 xen/common/keyhandler.c              |    7 +-
 xen/common/sched_arinc653.c          |  258 ++---
 xen/common/sched_credit.c            |  704 +++++++++---------
 xen/common/sched_credit2.c           | 1143 +++++++++++++++---------------
 xen/common/sched_null.c              |  423 +++++------
 xen/common/sched_rt.c                |  538 +++++++-------
 xen/common/schedule.c                | 1292 ++++++++++++++++++++++++----------
 xen/common/softirq.c                 |    6 +-
 xen/common/wait.c                    |    5 +-
 xen/include/asm-x86/cpuidle.h        |    2 +-
 xen/include/asm-x86/dom0_build.h     |    3 +-
 xen/include/asm-x86/domain.h         |    3 +
 xen/include/xen/cpu.h                |   29 +-
 xen/include/xen/domain.h             |    3 +-
 xen/include/xen/perfc_defn.h         |   32 +-
 xen/include/xen/sched-if.h           |  276 ++++--
 xen/include/xen/sched.h              |   40 +-
 xen/include/xen/softirq.h            |    1 +
 44 files changed, 3175 insertions(+), 2269 deletions(-)
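As referenced above, here is a rough sketch of what a schedule item grouping
the vcpus of one core's (or socket's) sibling threads might look like. All
field names and the helper below are illustrative assumptions of mine built
on Xen's existing struct vcpu and struct domain, not the definitions actually
introduced by the patches:

    /* Illustrative sketch only -- not the actual struct from this series. */
    struct sched_item {
        struct domain      *domain;        /* all vcpus of an item belong to it  */
        unsigned int        item_id;       /* identifier within the domain       */
        unsigned int        gran;          /* number of vcpus (threads) per item */
        struct vcpu       **vcpus;         /* the vcpus scheduled as one unit    */
        struct sched_item  *next_in_list;  /* linked list of a domain's items    */
        void               *priv;          /* scheduler private data             */
        bool                is_running;    /* whole item currently scheduled?    */
    };

    /* The vcpu -> item relation is fixed: vcpu N belongs to item N / gran. */
    static inline struct vcpu *item_vcpu(const struct sched_item *item,
                                         unsigned int thread)
    {
        return thread < item->gran ? item->vcpus[thread] : NULL;
    }

The point of such a grouping is that the scheduler only ever picks and
switches whole items, so all sibling threads of a core always run vcpus of
the same item, i.e. of the same domain.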