From patchwork Wed Apr 6 17:22:51 2016
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 8763981
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Date: Wed, 06 Apr 2016 19:22:51 +0200
Message-ID: <20160406172251.25877.99415.stgit@Solace.fritz.box>
In-Reply-To: <20160406170023.25877.15622.stgit@Solace.fritz.box>
References: <20160406170023.25877.15622.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: George Dunlap, Juergen Gross, Meng Xu
Subject: [Xen-devel] [PATCH v2 02/11] xen: sched: implement .init_pdata in Credit, Credit2 and RTDS
List-Id: Xen developer discussion

In fact, if a scheduler needs per-pCPU information, that needs to be
initialized appropriately.
So, we take the code that performs initialization out of (what is right
now) .alloc_pdata, and use it for .init_pdata, leaving only the actual
allocations in the former, if any (which is the case in RTDS and
Credit1).

On the other hand, in Credit2, since we don't really need any per-pCPU
data allocation, everything that was being done in .alloc_pdata is now
done in .init_pdata. And the fact that .alloc_pdata can now be left
undefined allows us to just get rid of it.

Still for Credit2, the fact that .init_pdata is called during
CPU_STARTING (rather than CPU_UP_PREPARE) removes the need for the
scheduler to set up a similar callback itself, simplifying the code.
And thanks to that simplification, it is now also ok to turn the logic
meant to double-check that a cpu was (or was not) initialized into
ASSERT()s (rather than an if() and a BUG_ON()).

Signed-off-by: Dario Faggioli
Reviewed-by: Meng Xu
Reviewed-by: George Dunlap
---
Cc: Juergen Gross
---
Changes from v1:
 * make the ASSERT() in credit more linear, as suggested during review;
 * minor adjustments to the changelog, as suggested during review.
---
 xen/common/sched_credit.c  |   20 +++++++++---
 xen/common/sched_credit2.c |   72 +++-----------------------------------------
 xen/common/sched_rt.c      |   11 ++++++-
 3 files changed, 28 insertions(+), 75 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 63a4a63..f503e73 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -527,8 +527,6 @@ static void *
 csched_alloc_pdata(const struct scheduler *ops, int cpu)
 {
     struct csched_pcpu *spc;
-    struct csched_private *prv = CSCHED_PRIV(ops);
-    unsigned long flags;
 
     /* Allocate per-PCPU info */
     spc = xzalloc(struct csched_pcpu);
@@ -541,6 +539,19 @@ csched_alloc_pdata(const struct scheduler *ops, int cpu)
         return ERR_PTR(-ENOMEM);
     }
 
+    return spc;
+}
+
+static void
+csched_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
+{
+    struct csched_private *prv = CSCHED_PRIV(ops);
+    struct csched_pcpu * const spc = pdata;
+    unsigned long flags;
+
+    /* cpu data needs to be allocated, but STILL uninitialized */
+    ASSERT(spc && spc->runq.next == NULL && spc->runq.prev == NULL);
+
     spin_lock_irqsave(&prv->lock, flags);
 
     /* Initialize/update system-wide config */
@@ -561,16 +572,12 @@ csched_alloc_pdata(const struct scheduler *ops, int cpu)
     INIT_LIST_HEAD(&spc->runq);
     spc->runq_sort_last = prv->runq_sort;
     spc->idle_bias = nr_cpu_ids - 1;
-    if ( per_cpu(schedule_data, cpu).sched_priv == NULL )
-        per_cpu(schedule_data, cpu).sched_priv = spc;
 
     /* Start off idling... */
     BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));
     cpumask_set_cpu(cpu, prv->idlers);
 
     spin_unlock_irqrestore(&prv->lock, flags);
-
-    return spc;
 }
 
 #ifndef NDEBUG
@@ -2054,6 +2061,7 @@ static const struct scheduler sched_credit_def = {
     .alloc_vdata    = csched_alloc_vdata,
     .free_vdata     = csched_free_vdata,
     .alloc_pdata    = csched_alloc_pdata,
+    .init_pdata     = csched_init_pdata,
     .free_pdata     = csched_free_pdata,
     .alloc_domdata  = csched_alloc_domdata,
     .free_domdata   = csched_free_domdata,
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index e97d8be..8a56953 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1971,7 +1971,8 @@ static void deactivate_runqueue(struct csched2_private *prv, int rqi)
     cpumask_clear_cpu(rqi, &prv->active_queues);
 }
 
-static void init_pcpu(const struct scheduler *ops, int cpu)
+static void
+csched2_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     unsigned rqi;
     unsigned long flags;
@@ -1981,12 +1982,7 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
 
     spin_lock_irqsave(&prv->lock, flags);
 
-    if ( cpumask_test_cpu(cpu, &prv->initialized) )
-    {
-        printk("%s: Strange, cpu %d already initialized!\n", __func__, cpu);
-        spin_unlock_irqrestore(&prv->lock, flags);
-        return;
-    }
+    ASSERT(!cpumask_test_cpu(cpu, &prv->initialized));
 
     /* Figure out which runqueue to put it in */
     rqi = 0;
@@ -2036,20 +2032,6 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
     return;
 }
 
-static void *
-csched2_alloc_pdata(const struct scheduler *ops, int cpu)
-{
-    /* Check to see if the cpu is online yet */
-    /* Note: cpu 0 doesn't get a STARTING callback */
-    if ( cpu == 0 || cpu_to_socket(cpu) != XEN_INVALID_SOCKET_ID )
-        init_pcpu(ops, cpu);
-    else
-        printk("%s: cpu %d not online yet, deferring initializatgion\n",
-               __func__, cpu);
-
-    return NULL;
-}
-
 static void
 csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
@@ -2061,7 +2043,7 @@ csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 
     spin_lock_irqsave(&prv->lock, flags);
 
-    BUG_ON(!cpumask_test_cpu(cpu, &prv->initialized));
+    ASSERT(cpumask_test_cpu(cpu, &prv->initialized));
 
     /* Find the old runqueue and remove this cpu from it */
     rqi = prv->runq_map[cpu];
@@ -2099,49 +2081,6 @@ csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 }
 
 static int
-csched2_cpu_starting(int cpu)
-{
-    struct scheduler *ops;
-
-    /* Hope this is safe from cpupools switching things around. :-) */
-    ops = per_cpu(scheduler, cpu);
-
-    if ( ops->alloc_pdata == csched2_alloc_pdata )
-        init_pcpu(ops, cpu);
-
-    return NOTIFY_DONE;
-}
-
-static int cpu_credit2_callback(
-    struct notifier_block *nfb, unsigned long action, void *hcpu)
-{
-    unsigned int cpu = (unsigned long)hcpu;
-    int rc = 0;
-
-    switch ( action )
-    {
-    case CPU_STARTING:
-        csched2_cpu_starting(cpu);
-        break;
-    default:
-        break;
-    }
-
-    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
-}
-
-static struct notifier_block cpu_credit2_nfb = {
-    .notifier_call = cpu_credit2_callback
-};
-
-static int
-csched2_global_init(void)
-{
-    register_cpu_notifier(&cpu_credit2_nfb);
-    return 0;
-}
-
-static int
 csched2_init(struct scheduler *ops)
 {
     int i;
@@ -2219,12 +2158,11 @@ static const struct scheduler sched_credit2_def = {
     .dump_cpu_state = csched2_dump_pcpu,
     .dump_settings  = csched2_dump,
 
-    .global_init    = csched2_global_init,
     .init           = csched2_init,
     .deinit         = csched2_deinit,
     .alloc_vdata    = csched2_alloc_vdata,
     .free_vdata     = csched2_free_vdata,
-    .alloc_pdata    = csched2_alloc_pdata,
+    .init_pdata     = csched2_init_pdata,
     .free_pdata     = csched2_free_pdata,
     .alloc_domdata  = csched2_alloc_domdata,
     .free_domdata   = csched2_free_domdata,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index aece318..b96bd93 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -666,8 +666,8 @@ rt_deinit(struct scheduler *ops)
  * Point per_cpu spinlock to the global system lock;
  * All cpu have same global system lock
  */
-static void *
-rt_alloc_pdata(const struct scheduler *ops, int cpu)
+static void
+rt_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
     spinlock_t *old_lock;
@@ -680,6 +680,12 @@ rt_alloc_pdata(const struct scheduler *ops, int cpu)
 
     /* _Not_ pcpu_schedule_unlock(): per_cpu().schedule_lock changed! */
     spin_unlock_irqrestore(old_lock, flags);
+}
+
+static void *
+rt_alloc_pdata(const struct scheduler *ops, int cpu)
+{
+    struct rt_private *prv = rt_priv(ops);
 
     if ( !alloc_cpumask_var(&_cpumask_scratch[cpu]) )
         return ERR_PTR(-ENOMEM);
@@ -1461,6 +1467,7 @@ static const struct scheduler sched_rtds_def = {
     .deinit         = rt_deinit,
     .alloc_pdata    = rt_alloc_pdata,
     .free_pdata     = rt_free_pdata,
+    .init_pdata     = rt_init_pdata,
     .alloc_domdata  = rt_alloc_domdata,
     .free_domdata   = rt_free_domdata,
     .init_domain    = rt_dom_init,