From patchwork Fri Apr 17 05:26:31 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Magnus Damm
X-Patchwork-Id: 18623
From: Magnus Damm
To: linux-sh@vger.kernel.org
Cc: johnstul@us.ibm.com, mingo@elte.hu, lethal@linux-sh.org,
	tglx@linutronix.de, akpm@linux-foundation.org, Magnus Damm
Date: Fri, 17 Apr 2009 14:26:31 +0900
Message-Id: <20090417052631.8114.81559.sendpatchset@rx1.opensource.se>
Subject: [PATCH] clocksource: sh_cmt clocksource support
X-Mailing-List: linux-sh@vger.kernel.org

From: Magnus Damm

Add clocksource support to the sh_cmt driver. With this in place we can
do tickless with a single CMT channel.
Signed-off-by: Magnus Damm
---
This patch depends on the following -mm patches:
clocksource-add-enable-and-disable-callbacks.patch
clocksource-pass-clocksource-to-read-callback.patch

 drivers/clocksource/sh_cmt.c |   66 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

--
To unsubscribe from this list: send the line "unsubscribe linux-sh" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

--- 0006/drivers/clocksource/sh_cmt.c
+++ work/drivers/clocksource/sh_cmt.c	2009-02-04 19:48:04.000000000 +0900
@@ -47,6 +47,7 @@ struct sh_cmt_priv {
 	unsigned long rate;
 	spinlock_t lock;
 	struct clock_event_device ced;
+	struct clocksource cs;
 	unsigned long total_cycles;
 };
 
@@ -376,6 +377,68 @@ static void sh_cmt_stop(struct sh_cmt_pr
 	spin_unlock_irqrestore(&p->lock, flags);
 }
 
+static struct sh_cmt_priv *cs_to_sh_cmt(struct clocksource *cs)
+{
+	return container_of(cs, struct sh_cmt_priv, cs);
+}
+
+static cycle_t sh_cmt_clocksource_read(struct clocksource *cs)
+{
+	struct sh_cmt_priv *p = cs_to_sh_cmt(cs);
+	unsigned long flags, raw;
+	unsigned long value;
+	int has_wrapped;
+
+	spin_lock_irqsave(&p->lock, flags);
+	value = p->total_cycles;
+	raw = sh_cmt_get_counter(p, &has_wrapped);
+
+	if (unlikely(has_wrapped))
+		raw = p->match_value;
+	spin_unlock_irqrestore(&p->lock, flags);
+
+	return value + raw;
+}
+
+static int sh_cmt_clocksource_enable(struct clocksource *cs)
+{
+	struct sh_cmt_priv *p = cs_to_sh_cmt(cs);
+	int ret;
+
+	p->total_cycles = 0;
+
+	ret = sh_cmt_start(p, FLAG_CLOCKSOURCE);
+	if (ret)
+		return ret;
+
+	/* TODO: calculate good shift from rate and counter bit width */
+	cs->shift = 0;
+	cs->mult = clocksource_hz2mult(p->rate, cs->shift);
+	return 0;
+}
+
+static void sh_cmt_clocksource_disable(struct clocksource *cs)
+{
+	sh_cmt_stop(cs_to_sh_cmt(cs), FLAG_CLOCKSOURCE);
+}
+
+static int sh_cmt_register_clocksource(struct sh_cmt_priv *p,
+				       char *name, unsigned long rating)
+{
+	struct clocksource *cs = &p->cs;
+
+	cs->name = name;
+	cs->rating = rating;
+	cs->read = sh_cmt_clocksource_read;
+	cs->enable = sh_cmt_clocksource_enable;
+	cs->disable = sh_cmt_clocksource_disable;
+	cs->mask = CLOCKSOURCE_MASK(sizeof(unsigned long) * 8);
+	cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
+	pr_info("sh_cmt: %s used as clock source\n", cs->name);
+	clocksource_register(cs);
+	return 0;
+}
+
 static struct sh_cmt_priv *ced_to_sh_cmt(struct clock_event_device *ced)
 {
 	return container_of(ced, struct sh_cmt_priv, ced);
@@ -484,6 +547,9 @@ int sh_cmt_register(struct sh_cmt_priv *
 	if (clockevent_rating)
 		sh_cmt_register_clockevent(p, name, clockevent_rating);
 
+	if (clocksource_rating)
+		sh_cmt_register_clocksource(p, name, clocksource_rating);
+
 	return 0;
 }
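
A note on the TODO in sh_cmt_clocksource_enable() above: the mult/shift pair
is what the clocksource core uses to turn counter cycles into nanoseconds,
roughly ns = (cycles * mult) >> shift. The userspace sketch below redoes that
arithmetic to show why a non-zero shift helps; the hz2mult() helper and the
32768 Hz example rate are assumptions made purely for illustration, the real
rate comes from p->rate once the CMT input clock is known.

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Same rounding as the kernel's clocksource_hz2mult():
 * mult = ((NSEC_PER_SEC << shift) + hz/2) / hz
 */
static uint32_t hz2mult(uint32_t hz, uint32_t shift)
{
	uint64_t tmp = (NSEC_PER_SEC << shift) + hz / 2;

	return (uint32_t)(tmp / hz);
}

int main(void)
{
	uint64_t cycles = 32768;	/* one second of ticks at 32768 Hz (example rate) */
	uint32_t shift;

	for (shift = 0; shift <= 16; shift += 8) {
		uint32_t mult = hz2mult(32768, shift);

		/* ns = (cycles * mult) >> shift, as the clocksource core does */
		printf("shift=%2u mult=%10u -> %llu ns\n", shift, mult,
		       (unsigned long long)((cycles * mult) >> shift));
	}
	return 0;
}

At shift 0 the rounded mult of 30518 is off by roughly 14 microseconds per
second at that rate, while shift 8 or 16 reproduces the second exactly, which
is presumably what a future "good shift" calculation would buy.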