From patchwork Wed Oct 30 20:21:08 2013
From: Daniel Mack
To: linux-omap@vger.kernel.org, joelf@ti.com, gururaja.hebbar@ti.com, balajitk@ti.com
Cc: mporter@ti.com, nsekhar@ti.com, s.neumann@raumfeld.com, Daniel Mack, Russ.Dill@ti.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4] ARM: omap: edma: add suspend/resume hooks
Date: Wed, 30 Oct 2013 21:21:08 +0100
Message-Id: <1383164468-4610-1-git-send-email-zonque@gmail.com>
This patch makes the edma driver resume correctly after suspend. Tested
on an AM33xx platform with cyclic audio streams and omap_hsmmc.

All register state needed after resume is reconstructed from runtime
information the driver already tracks, so nothing extra has to be saved
at suspend time. As some functions that were previously only called
from __init context are now also used at resume time, their __init
annotations had to be dropped.

Signed-off-by: Daniel Mack
Acked-by: Joel Fernandes
---
There was actually only a v3 ever; I made a mistake when formatting the
first version of this patch. To prevent confusion, I named this one v4.

v3 -> v4:
  * dropped the extra allocations and reconstruct register values from
    already known driver state

Hi Joel, Gururaja, Balaji,

thanks a lot for your feedback. I successfully tested this version with
davinci mcasp as well as omap_hsmmc. I'd appreciate another round of
reviews :)

Thanks,
Daniel

 arch/arm/common/edma.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 79 insertions(+), 3 deletions(-)

diff --git a/arch/arm/common/edma.c b/arch/arm/common/edma.c
index 8e1a024..f15cdb9 100644
--- a/arch/arm/common/edma.c
+++ b/arch/arm/common/edma.c
@@ -239,6 +239,8 @@ struct edma {
 	/* list of channels with no even trigger; terminated by "-1" */
 	const s8	*noevent;
 
+	struct edma_soc_info *info;
+
 	/* The edma_inuse bit for each PaRAM slot is clear unless the
 	 * channel is in use ... by ARM or DSP, for QDMA, or whatever.
 	 */
@@ -290,13 +292,13 @@ static void map_dmach_queue(unsigned ctlr, unsigned ch_no,
 			~(0x7 << bit), queue_no << bit);
 }
 
-static void __init map_queue_tc(unsigned ctlr, int queue_no, int tc_no)
+static void map_queue_tc(unsigned ctlr, int queue_no, int tc_no)
 {
 	int bit = queue_no * 4;
 	edma_modify(ctlr, EDMA_QUETCMAP, ~(0x7 << bit), ((tc_no & 0x7) << bit));
 }
 
-static void __init assign_priority_to_queue(unsigned ctlr, int queue_no,
+static void assign_priority_to_queue(unsigned ctlr, int queue_no,
 		int priority)
 {
 	int bit = queue_no * 4;
@@ -315,7 +317,7 @@ static void __init assign_priority_to_queue(unsigned ctlr, int queue_no,
  * included in that particular EDMA variant (Eg : dm646x)
  *
  */
-static void __init map_dmach_param(unsigned ctlr)
+static void map_dmach_param(unsigned ctlr)
 {
 	int i;
 	for (i = 0; i < EDMA_MAX_DMACH; i++)
@@ -1785,15 +1787,89 @@ static int edma_probe(struct platform_device *pdev)
 			edma_write_array2(j, EDMA_DRAE, i, 1, 0x0);
 			edma_write_array(j, EDMA_QRAE, i, 0x0);
 		}
+		edma_cc[j]->info = info[j];
 		arch_num_cc++;
 	}
 
 	return 0;
 }
 
+static int edma_pm_suspend(struct device *dev)
+{
+	int j;
+
+	pm_runtime_get_sync(dev);
+
+	for (j = 0; j < arch_num_cc; j++) {
+		struct edma *ecc = edma_cc[j];
+
+		disable_irq(ecc->irq_res_start);
+		disable_irq(ecc->irq_res_end);
+	}
+
+	pm_runtime_put_sync(dev);
+
+	return 0;
+}
+
+static int edma_pm_resume(struct device *dev)
+{
+	int i, j;
+
+	pm_runtime_get_sync(dev);
+
+	for (j = 0; j < arch_num_cc; j++) {
+		struct edma *cc = edma_cc[j];
+
+		s8 (*queue_priority_mapping)[2];
+		s8 (*queue_tc_mapping)[2];
+
+		queue_tc_mapping = cc->info->queue_tc_mapping;
+		queue_priority_mapping = cc->info->queue_priority_mapping;
+
+		/* Event queue to TC mapping */
+		for (i = 0; queue_tc_mapping[i][0] != -1; i++)
+			map_queue_tc(j, queue_tc_mapping[i][0],
+				     queue_tc_mapping[i][1]);
+
+		/* Event queue priority mapping */
+		for (i = 0; queue_priority_mapping[i][0] != -1; i++)
+			assign_priority_to_queue(j,
+						 queue_priority_mapping[i][0],
+						 queue_priority_mapping[i][1]);
+
+		/* Map the channel to param entry if channel mapping logic
+		 * exist
+		 */
+		if (edma_read(j, EDMA_CCCFG) & CHMAP_EXIST)
+			map_dmach_param(j);
+
+		for (i = 0; i < cc->num_channels; i++)
+			if (test_bit(i, cc->edma_inuse)) {
+				/* ensure access through shadow region 0 */
+				edma_or_array2(j, EDMA_DRAE, 0, i >> 5,
+					       BIT(i & 0x1f));
+
+				setup_dma_interrupt(i,
+					cc->intr_data[i].callback,
+					cc->intr_data[i].data);
+			}
+
+		enable_irq(cc->irq_res_start);
+		enable_irq(cc->irq_res_end);
+	}
+
+	pm_runtime_put_sync(dev);
+
+	return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(edma_pm_ops, edma_pm_suspend, edma_pm_resume);
+
 static struct platform_driver edma_driver = {
 	.driver = {
 		.name = "edma",
+		.pm = &edma_pm_ops,
 		.of_match_table = edma_of_ids,
 	},
 	.probe = edma_probe,
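[Editor's note, not part of the patch] The hunk above hooks the new
callbacks into the driver through SIMPLE_DEV_PM_OPS and the .pm field
of the driver structure, which is the standard system-sleep plumbing
for platform drivers. As a point of reference, here is a minimal,
self-contained sketch of that mechanism; all foo_* names are
hypothetical and only illustrate the pattern edma_pm_ops follows.

/*
 * Illustrative sketch only: a minimal platform driver wiring
 * system-sleep suspend/resume callbacks via SIMPLE_DEV_PM_OPS,
 * the same mechanism used by edma_pm_ops above.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm.h>

static int foo_pm_suspend(struct device *dev)
{
	/* quiesce the hardware; called on system suspend */
	return 0;
}

static int foo_pm_resume(struct device *dev)
{
	/* reprogram registers from driver state; called on system resume */
	return 0;
}

/*
 * Defines a struct dev_pm_ops whose system-sleep callbacks point at the
 * functions above (only when CONFIG_PM_SLEEP is enabled).
 */
static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_pm_suspend, foo_pm_resume);

static int foo_probe(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver foo_driver = {
	.driver = {
		.name	= "foo",
		.pm	= &foo_pm_ops,	/* PM core finds the callbacks here */
	},
	.probe	= foo_probe,
};
module_platform_driver(foo_driver);

MODULE_LICENSE("GPL");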