From patchwork Thu Jul 18 23:21:15 2013
X-Patchwork-Submitter: Stephen Boyd
X-Patchwork-Id: 2829965
From: Stephen Boyd
To: John Stultz
Cc: linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, Thomas Gleixner,
    Russell King, Catalin Marinas, Will Deacon,
    Christopher Covington
Subject: [PATCH v4 02/17] sched_clock: Use seqcount instead of rolling our own
Date: Thu, 18 Jul 2013 16:21:15 -0700
Message-Id: <1374189690-10810-3-git-send-email-sboyd@codeaurora.org>
X-Mailer: git-send-email 1.8.3.3.754.g9c3c367
In-Reply-To: <1374189690-10810-1-git-send-email-sboyd@codeaurora.org>
References: <1374189690-10810-1-git-send-email-sboyd@codeaurora.org>

We're going to increase the cyc value to 64 bits in the near future.
Doing that is going to break the custom seqcount implementation in the
sched_clock code because 64-bit numbers aren't guaranteed to be atomic.
Replace the cyc_copy with a seqcount to avoid this problem.
Cc: Russell King
Signed-off-by: Stephen Boyd
Acked-by: Will Deacon
---
 kernel/time/sched_clock.c | 27 ++++++++-------------------
 1 file changed, 8 insertions(+), 19 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index a326f27..396f7b9 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -14,11 +14,12 @@
 #include <linux/syscore_ops.h>
 #include <linux/timer.h>
 #include <linux/sched_clock.h>
+#include <linux/seqlock.h>
 
 struct clock_data {
         u64 epoch_ns;
         u32 epoch_cyc;
-        u32 epoch_cyc_copy;
+        seqcount_t seq;
         unsigned long rate;
         u32 mult;
         u32 shift;
@@ -54,23 +55,16 @@ static unsigned long long notrace sched_clock_32(void)
         u64 epoch_ns;
         u32 epoch_cyc;
         u32 cyc;
+        unsigned long seq;
 
         if (cd.suspended)
                 return cd.epoch_ns;
 
-        /*
-         * Load the epoch_cyc and epoch_ns atomically. We do this by
-         * ensuring that we always write epoch_cyc, epoch_ns and
-         * epoch_cyc_copy in strict order, and read them in strict order.
-         * If epoch_cyc and epoch_cyc_copy are not equal, then we're in
-         * the middle of an update, and we should repeat the load.
-         */
         do {
+                seq = read_seqcount_begin(&cd.seq);
                 epoch_cyc = cd.epoch_cyc;
-                smp_rmb();
                 epoch_ns = cd.epoch_ns;
-                smp_rmb();
-        } while (epoch_cyc != cd.epoch_cyc_copy);
+        } while (read_seqcount_retry(&cd.seq, seq));
 
         cyc = read_sched_clock();
         cyc = (cyc - epoch_cyc) & sched_clock_mask;
@@ -90,16 +84,12 @@ static void notrace update_sched_clock(void)
         ns = cd.epoch_ns +
                 cyc_to_ns((cyc - cd.epoch_cyc) & sched_clock_mask,
                           cd.mult, cd.shift);
-        /*
-         * Write epoch_cyc and epoch_ns in a way that the update is
-         * detectable in cyc_to_fixed_sched_clock().
-         */
+
         raw_local_irq_save(flags);
-        cd.epoch_cyc_copy = cyc;
-        smp_wmb();
+        write_seqcount_begin(&cd.seq);
         cd.epoch_ns = ns;
-        smp_wmb();
         cd.epoch_cyc = cyc;
+        write_seqcount_end(&cd.seq);
         raw_local_irq_restore(flags);
 }
 
@@ -195,7 +185,6 @@ static int sched_clock_suspend(void)
 static void sched_clock_resume(void)
 {
         cd.epoch_cyc = read_sched_clock();
-        cd.epoch_cyc_copy = cd.epoch_cyc;
         cd.suspended = false;
 }
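
For readers who want the new locking pattern in one piece rather than spread
across hunks, here is a rough sketch of the seqcount-protected reader and
writer. It only rearranges what the patch above already does with the
<linux/seqlock.h> API; the example_* names and the standalone epoch variables
are invented for illustration and are not the upstream identifiers.

#include <linux/irqflags.h>
#include <linux/seqlock.h>
#include <linux/types.h>

/* Zero-initialized, like the static struct clock_data in sched_clock.c. */
static seqcount_t example_seq;
static u64 example_epoch_ns;
static u64 example_epoch_cyc;

/*
 * Writer side: write_seqcount_begin() makes the sequence count odd so any
 * concurrent reader retries, the new epoch is published, and
 * write_seqcount_end() makes the count even again.  Interrupts are disabled
 * so the single writer cannot be re-entered from IRQ context mid-update.
 */
static void example_update_epoch(u64 cyc, u64 ns)
{
        unsigned long flags;

        raw_local_irq_save(flags);
        write_seqcount_begin(&example_seq);
        example_epoch_ns = ns;
        example_epoch_cyc = cyc;
        write_seqcount_end(&example_seq);
        raw_local_irq_restore(flags);
}

/*
 * Reader side: snapshot both epoch values and retry if
 * read_seqcount_retry() reports that the count changed (or was odd) while
 * we were reading, i.e. a writer was in the middle of an update.
 */
static void example_read_epoch(u64 *cyc, u64 *ns)
{
        unsigned long seq;

        do {
                seq = read_seqcount_begin(&example_seq);
                *cyc = example_epoch_cyc;
                *ns = example_epoch_ns;
        } while (read_seqcount_retry(&example_seq, seq));
}

The seqcount primitives supply the smp_wmb()/smp_rmb() ordering that the old
epoch_cyc_copy scheme open-coded, which is why the barriers disappear from
the diff; the irq disabling around the writer is what upholds the
single-writer assumption of write_seqcount_begin(), as in the patch.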