From patchwork Fri Mar 18 19:04:47 2016
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: George Dunlap
Date: Fri, 18 Mar 2016 20:04:47 +0100
Message-ID: <20160318190447.8117.72371.stgit@Solace.station>
In-Reply-To: <20160318185524.8117.74837.stgit@Solace.station>
References: <20160318185524.8117.74837.stgit@Solace.station>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH 07/16] xen: sched: prepare a .switch_sched hook for Credit2

In fact, right now, if we switch cpu X from, say, Credit to Credit2, we do:

 schedule_cpu_switch(X, csched --> csched2):
   //scheduler[x] is csched
   //schedule_lock[x] is csched_lock
   csched2_alloc_pdata(x)
   csched2_init_pdata(x)
     pcpu_schedule_lock(x) --> takes csched_lock
     schedule_lock[x] = csched2_lock
     spin_unlock(csched_lock)
 [1]
   pcpu_schedule_lock(x) ----> takes csched2_lock
   scheduler[X] = csched2
   pcpu_schedule_unlock(x) --> unlocks csched2_lock
   csched_free_pdata(x)

So, if anything scheduling related and involving CPU X happens at time [1], we will:
 - take csched2_lock,
 - operate on Credit1 functions and data structures,
which is no good!
Furthermore, if we switch cpu X from RTDS to Credit2, we do:

 schedule_cpu_switch(X, RTDS --> csched2):
   //scheduler[x] is rtds
   //schedule_lock[x] is rtds_lock
   csched2_alloc_pdata(x)
   csched2_init_pdata(x)
     pcpu_schedule_lock(x) --> takes rtds_lock
     schedule_lock[x] = csched2_lock
     spin_unlock(rtds_lock)
   pcpu_schedule_lock(x) ----> takes csched2_lock
   scheduler[x] = csched2
   pcpu_schedule_unlock(x) --> unlocks csched2_lock
   rtds_free_pdata(x)
     spin_lock(rtds_lock)
     ASSERT(schedule_lock[x] == rtds_lock) [2]
     schedule_lock[x] = DEFAULT_SCHEDULE_LOCK [3]
     spin_unlock(rtds_lock)

Which means:
 1) the ASSERT at [2] triggers!
 2) at [3], we screw up the lock remapping we've done for ourselves in
    csched2_init_pdata()!

The former problem arises because there is a window during which the lock is already the new one, but the scheduler is still the old one. The latter problem arises because we let the old scheduler mess with the lock (re)mapping during its freeing path, instead of doing it ourselves.

This patch therefore introduces the new switch_sched hook for Credit2, as already done for Credit1 (in "xen: sched: prepare .switch_sched for Credit1").

Signed-off-by: Dario Faggioli
---
Cc: George Dunlap
---
 xen/common/sched_credit2.c |   43 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 919ca13..25d8e85 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1968,7 +1968,8 @@ static void deactivate_runqueue(struct csched2_private *prv, int rqi)
     cpumask_clear_cpu(rqi, &prv->active_queues);
 }
 
-static void
+/* Returns the ID of the runqueue the cpu is assigned to. */
+static unsigned
 init_pdata(struct csched2_private *prv, unsigned int cpu)
 {
     unsigned rqi;
@@ -2021,7 +2022,7 @@ init_pdata(struct csched2_private *prv, unsigned int cpu)
 
     cpumask_set_cpu(cpu, &prv->initialized);
 
-    return;
+    return rqi;
 }
 
 static void
@@ -2035,6 +2036,43 @@ csched2_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
     spin_unlock_irqrestore(&prv->lock, flags);
 }
 
+/* Change the scheduler of cpu to us (Credit2). */
+static void
+csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
+                     void *pdata, void *vdata)
+{
+    struct csched2_private *prv = CSCHED2_PRIV(new_ops);
+    struct csched2_vcpu *svc = vdata;
+    spinlock_t *old_lock;
+    unsigned rqi;
+
+    ASSERT(!pdata && svc && is_idle_vcpu(svc->vcpu));
+
+    spin_lock_irq(&prv->lock);
+
+    /*
+     * We may be acquiring the lock of another scheduler here (the one
+     * the cpu still belongs to when calling this function). That is ok
+     * as, anyone trying to schedule on this cpu will block until we
+     * release that lock (at the bottom of this function). When unblocked
+     * --because of the loop implemented by the schedule_lock() functions--
+     * it will notice the lock has changed, and acquire ours before being
+     * able to proceed.
+     */
+    old_lock = pcpu_schedule_lock(cpu);
+
+    idle_vcpu[cpu]->sched_priv = vdata;
+
+    rqi = init_pdata(prv, cpu);
+
+    per_cpu(scheduler, cpu) = new_ops;
+    per_cpu(schedule_data, cpu).sched_priv = NULL; /* no pdata */
+
+    /* (Re?)route the lock to the per pCPU lock. */
+    per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;
+
+    /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
+    spin_unlock(old_lock);
+    spin_unlock_irq(&prv->lock);
+}
+
 static void
 csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
@@ -2167,6 +2205,7 @@ static const struct scheduler sched_credit2_def = {
     .free_vdata     = csched2_free_vdata,
     .init_pdata     = csched2_init_pdata,
     .free_pdata     = csched2_free_pdata,
+    .switch_sched   = csched2_switch_sched,
     .alloc_domdata  = csched2_alloc_domdata,
     .free_domdata   = csched2_free_domdata,
 };