From patchwork Thu Nov 7 22:32:29 2013
X-Patchwork-Submitter: Daniel Mack <zonque@gmail.com>
X-Patchwork-Id: 3154661
From: Daniel Mack <zonque@gmail.com>
To: linux-omap@vger.kernel.org, joelf@ti.com, gururaja.hebbar@ti.com,
        balajitk@ti.com
Cc: nm@ti.com, mporter@ti.com, nsekhar@ti.com, s.neumann@raumfeld.com,
        Daniel Mack <zonque@gmail.com>, Russ.Dill@ti.com,
        vaibhav.bedia@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5] ARM: omap: edma: add suspend/resume hooks
Date: Thu, 7 Nov 2013 23:32:29 +0100
Message-Id: <1383863549-7438-1-git-send-email-zonque@gmail.com>
X-Mailer: git-send-email 1.8.4.2

This patch makes the edma driver resume correctly after suspend. Tested
on an AM33xx platform with cyclic audio streams and omap_hsmmc.

All state is reconstructed from runtime information the driver already
holds. As some functions that were previously only called from __init
context are now also used at resume time, their __init annotations had
to be dropped.

Signed-off-by: Daniel Mack <zonque@gmail.com>
---
Ok, here is v5.

v4 -> v5:
 * dropped pm_runtime_* function calls entirely
 * moved the function pointers to .suspend/resume_noirq

Again, thanks for the reviews. I'm still uncertain which callback
ordering is most appropriate, though.

Thanks,
Daniel

 arch/arm/common/edma.c | 78 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 75 insertions(+), 3 deletions(-)

diff --git a/arch/arm/common/edma.c b/arch/arm/common/edma.c
index 8e1a024..7deacde 100644
--- a/arch/arm/common/edma.c
+++ b/arch/arm/common/edma.c
@@ -239,6 +239,8 @@ struct edma {
 	/* list of channels with no even trigger; terminated by "-1" */
 	const s8	*noevent;
 
+	struct edma_soc_info *info;
+
 	/* The edma_inuse bit for each PaRAM slot is clear unless the
 	 * channel is in use ... by ARM or DSP, for QDMA, or whatever.
 	 */
@@ -290,13 +292,13 @@ static void map_dmach_queue(unsigned ctlr, unsigned ch_no,
 			~(0x7 << bit), queue_no << bit);
 }
 
-static void __init map_queue_tc(unsigned ctlr, int queue_no, int tc_no)
+static void map_queue_tc(unsigned ctlr, int queue_no, int tc_no)
 {
 	int bit = queue_no * 4;
 	edma_modify(ctlr, EDMA_QUETCMAP, ~(0x7 << bit), ((tc_no & 0x7) << bit));
 }
 
-static void __init assign_priority_to_queue(unsigned ctlr, int queue_no,
+static void assign_priority_to_queue(unsigned ctlr, int queue_no,
 		int priority)
 {
 	int bit = queue_no * 4;
@@ -315,7 +317,7 @@ static void __init assign_priority_to_queue(unsigned ctlr, int queue_no,
  * included in that particular EDMA variant (Eg : dm646x)
  *
  */
-static void __init map_dmach_param(unsigned ctlr)
+static void map_dmach_param(unsigned ctlr)
 {
 	int i;
 	for (i = 0; i < EDMA_MAX_DMACH; i++)
@@ -1785,15 +1787,85 @@ static int edma_probe(struct platform_device *pdev)
 			edma_write_array2(j, EDMA_DRAE, i, 1, 0x0);
 			edma_write_array(j, EDMA_QRAE, i, 0x0);
 		}
+		edma_cc[j]->info = info[j];
 		arch_num_cc++;
 	}
 
 	return 0;
 }
 
+static int edma_pm_suspend(struct device *dev)
+{
+	int j;
+
+	for (j = 0; j < arch_num_cc; j++) {
+		struct edma *ecc = edma_cc[j];
+
+		disable_irq(ecc->irq_res_start);
+		disable_irq(ecc->irq_res_end);
+	}
+
+	return 0;
+}
+
+static int edma_pm_resume(struct device *dev)
+{
+	int i, j;
+
+	for (j = 0; j < arch_num_cc; j++) {
+		struct edma *cc = edma_cc[j];
+
+		s8 (*queue_priority_mapping)[2];
+		s8 (*queue_tc_mapping)[2];
+
+		queue_tc_mapping = cc->info->queue_tc_mapping;
+		queue_priority_mapping = cc->info->queue_priority_mapping;
+
+		/* Event queue to TC mapping */
+		for (i = 0; queue_tc_mapping[i][0] != -1; i++)
+			map_queue_tc(j, queue_tc_mapping[i][0],
+				     queue_tc_mapping[i][1]);
+
+		/* Event queue priority mapping */
+		for (i = 0; queue_priority_mapping[i][0] != -1; i++)
+			assign_priority_to_queue(j,
+						 queue_priority_mapping[i][0],
+						 queue_priority_mapping[i][1]);
+
+		/* Map the channel to param entry if channel mapping logic
+		 * exist
+		 */
+		if (edma_read(j, EDMA_CCCFG) & CHMAP_EXIST)
+			map_dmach_param(j);
+
+		for (i = 0; i < cc->num_channels; i++) {
+			if (test_bit(i, cc->edma_inuse)) {
+				/* ensure access through shadow region 0 */
+				edma_or_array2(j, EDMA_DRAE, 0, i >> 5,
+					       BIT(i & 0x1f));
+
+				setup_dma_interrupt(i,
+						    cc->intr_data[i].callback,
+						    cc->intr_data[i].data);
+			}
+		}
+
+		enable_irq(cc->irq_res_start);
+		enable_irq(cc->irq_res_end);
+	}
+
+	return 0;
+}
+
+static const struct dev_pm_ops edma_pm_ops = {
+	.suspend = edma_pm_suspend,
+	.resume_noirq = edma_pm_resume,
+};
+
 static struct platform_driver edma_driver = {
 	.driver = {
 		.name = "edma",
+		.pm = &edma_pm_ops,
 		.of_match_table = edma_of_ids,
 	},
 	.probe = edma_probe,
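
As a side note on the .suspend/.resume_noirq split discussed in the
changelog: a minimal sketch of a platform driver that keeps both
system-sleep callbacks in the noirq phase is shown below. The foo_*
identifiers are hypothetical and not part of this patch; only the
dev_pm_ops fields (.suspend_noirq/.resume_noirq) are the standard ones
from <linux/pm.h>.

/*
 * Hypothetical sketch: both callbacks wired at the noirq level.  The PM
 * core invokes suspend_noirq after device interrupt handlers have been
 * disabled, and resume_noirq before they are re-enabled.
 */
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/pm.h>

static int foo_suspend_noirq(struct device *dev)
{
	/* runs with the device's interrupt handlers already disabled */
	return 0;
}

static int foo_resume_noirq(struct device *dev)
{
	/* runs before the device's interrupt handlers are re-enabled */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	.suspend_noirq	= foo_suspend_noirq,
	.resume_noirq	= foo_resume_noirq,
};

static struct platform_driver foo_driver = {
	.driver = {
		.name	= "foo",
		.pm	= &foo_pm_ops,
	},
};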