From patchwork Fri Jan 29 10:21:48 2016
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xen.org, jbeulich@suse.com, george.dunlap@eu.citrix.com,
    dario.faggioli@citrix.com
Cc: Juergen Gross <jgross@suse.com>
Date: Fri, 29 Jan 2016 11:21:48 +0100
Message-Id: <1454062908-32013-1-git-send-email-jgross@suse.com>
Subject: [Xen-devel] [PATCH] xen: recalculate per-cpupool credits when updating timeslice

When modifying the timeslice of the credit scheduler in a cpupool, the
cpupool-global credit value (n_cpus * credits_per_tslice) isn't
recalculated. This leads to wrong scheduling decisions later.

Do the recalculation when updating the timeslice.
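
To illustrate the stale value, here is a minimal user-space sketch of the
arithmetic (not Xen code; the struct below and the CSCHED_CREDITS_PER_MSEC
value of 10 only model the relevant parts of sched_credit.c and are
assumptions for illustration):

#include <stdio.h>

#define CSCHED_CREDITS_PER_MSEC 10   /* assumed value, as in sched_credit.c */

struct csched_private_model {
    unsigned int ncpus;
    unsigned int tslice_ms;
    int credits_per_tslice;
    int credit;              /* pool-wide credit: ncpus * credits_per_tslice */
};

/* Old behaviour: per-timeslice credits are updated, pool-wide credit is not. */
static void set_tslice_old(struct csched_private_model *prv, unsigned int ms)
{
    prv->tslice_ms = ms;
    prv->credits_per_tslice = CSCHED_CREDITS_PER_MSEC * prv->tslice_ms;
    /* prv->credit is left untouched -> stale */
}

/* Patched behaviour: the pool-wide credit is recalculated as well. */
static void set_tslice_new(struct csched_private_model *prv, unsigned int ms)
{
    set_tslice_old(prv, ms);
    prv->credit = prv->credits_per_tslice * prv->ncpus;
}

int main(void)
{
    struct csched_private_model prv = { .ncpus = 4 };

    set_tslice_new(&prv, 30);
    printf("tslice 30ms: credit = %d\n", prv.credit);            /* 1200 */

    set_tslice_old(&prv, 60);        /* change the timeslice without recalc */
    printf("tslice 60ms, old code: credit = %d (should be %d)\n",
           prv.credit, prv.credits_per_tslice * (int)prv.ncpus);

    set_tslice_new(&prv, 60);
    printf("tslice 60ms, patched:  credit = %d\n", prv.credit);  /* 2400 */
    return 0;
}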
Signed-off-by: Juergen Gross <jgross@suse.com>
Tested-by: Alan.Robinson
---
 xen/common/sched_credit.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 03fb2c2..912511e 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1086,12 +1086,19 @@ csched_dom_cntl(
 static inline void
 __csched_set_tslice(struct csched_private *prv, unsigned timeslice)
 {
+    unsigned long flags;
+
+    spin_lock_irqsave(&prv->lock, flags);
+
     prv->tslice_ms = timeslice;
     prv->ticks_per_tslice = CSCHED_TICKS_PER_TSLICE;
     if ( prv->tslice_ms < prv->ticks_per_tslice )
         prv->ticks_per_tslice = 1;
     prv->tick_period_us = prv->tslice_ms * 1000 / prv->ticks_per_tslice;
     prv->credits_per_tslice = CSCHED_CREDITS_PER_MSEC * prv->tslice_ms;
+    prv->credit = prv->credits_per_tslice * prv->ncpus;
+
+    spin_unlock_irqrestore(&prv->lock, flags);
 }
 
 static int
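
A note on the locking: taking prv->lock around the update seems warranted
because the credit accounting path (csched_acct()) reads the pool-wide
credit and the per-timeslice credits under the same lock, so an unlocked
timeslice change could race with an accounting pass; this is my reading of
the surrounding code, not something stated by the patch itself. The path
can be exercised at run time by changing a cpupool's scheduler parameters,
e.g. with something like "xl sched-credit -s -p <cpupool> -t <tslice_ms>"
(option names as I recall them from the xl documentation, so please
double-check).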