From patchwork Wed May 22 00:17:16 2024
From: David Woodhouse
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin",
Peter Anvin" , Paul Durrant , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Daniel Bristot de Oliveira , Valentin Schneider , Shuah Khan , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, jalliste@amazon.co.uk, sveith@amazon.de, zide.chen@intel.com, Dongli Zhang , Chenyi Qiang Subject: [RFC PATCH v3 21/21] sched/cputime: Cope with steal time going backwards or negative Date: Wed, 22 May 2024 01:17:16 +0100 Message-ID: <20240522001817.619072-22-dwmw2@infradead.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240522001817.619072-1-dwmw2@infradead.org> References: <20240522001817.619072-1-dwmw2@infradead.org> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Sender: David Woodhouse X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html From: David Woodhouse In steal_account_process_time(), a delta is calculated between the value returned by paravirt_steal_clock(), and this_rq()->prev_steal_time which is assumed to be the *previous* value returned by paravirt_steal_clock(). However, instead of just assigning the newly-read value directly into ->prev_steal_time for use in the next iteration, ->prev_steal_time is *incremented* by the calculated delta. This used to be roughly the same, modulo conversion to jiffies and back, until commit 807e5b80687c0 ("sched/cputime: Add steal time support to full dynticks CPU time accounting") started clamping that delta to a maximum of the actual time elapsed. So now, if the value returned by paravirt_steal_clock() jumps by a large amount, instead of a *single* period of reporting 100% steal time, the system will report 100% steal time for as long as it takes to "catch up" with the reported value. Which is up to 584 years. But there is a benefit to advancing ->prev_steal_time only by the time which was *accounted* as having been stolen. It means that any extra time truncated by the clamping will be accounted in the next sample period rather than lost. Given the stochastic nature of the sampling, that is more accurate overall. So, continue to advance ->prev_steal_time by the accounted value as long as the delta isn't egregiously large (for which, use maxtime * 2). If the delta is more than that, just set ->prev_steal_time directly to the value returned by paravirt_steal_clock(). Fixes: 807e5b80687c0 ("sched/cputime: Add steal time support to full dynticks CPU time accounting") Signed-off-by: David Woodhouse Reviewed-by: Paul Durrant --- kernel/sched/cputime.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c index af7952f12e6c..3a8a8b38966d 100644 --- a/kernel/sched/cputime.c +++ b/kernel/sched/cputime.c @@ -254,13 +254,21 @@ static __always_inline u64 steal_account_process_time(u64 maxtime) { #ifdef CONFIG_PARAVIRT if (static_key_false(¶virt_steal_enabled)) { - u64 steal; - - steal = paravirt_steal_clock(smp_processor_id()); - steal -= this_rq()->prev_steal_time; - steal = min(steal, maxtime); + u64 steal, abs_steal; + + abs_steal = paravirt_steal_clock(smp_processor_id()); + steal = abs_steal - this_rq()->prev_steal_time; + if (unlikely(steal > maxtime)) { + /* + * If the delta isn't egregious, it can be counted + * in the next time period. Only advance by maxtime. 
 kernel/sched/cputime.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index af7952f12e6c..3a8a8b38966d 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -254,13 +254,21 @@ static __always_inline u64 steal_account_process_time(u64 maxtime)
 {
 #ifdef CONFIG_PARAVIRT
 	if (static_key_false(&paravirt_steal_enabled)) {
-		u64 steal;
-
-		steal = paravirt_steal_clock(smp_processor_id());
-		steal -= this_rq()->prev_steal_time;
-		steal = min(steal, maxtime);
+		u64 steal, abs_steal;
+
+		abs_steal = paravirt_steal_clock(smp_processor_id());
+		steal = abs_steal - this_rq()->prev_steal_time;
+		if (unlikely(steal > maxtime)) {
+			/*
+			 * If the delta isn't egregious, it can be counted
+			 * in the next time period. Only advance by maxtime.
+			 */
+			if (steal < maxtime * 2)
+				abs_steal = this_rq()->prev_steal_time + maxtime;
+			steal = maxtime;
+		}
 		account_steal_time(steal);
-		this_rq()->prev_steal_time += steal;
+		this_rq()->prev_steal_time = abs_steal;
 
 		return steal;
 	}