From patchwork Thu Aug  6 01:27:24 2015
X-Patchwork-Submitter: Stephen Boyd
X-Patchwork-Id: 6955201
Message-ID: <55C2B7FC.5090402@codeaurora.org>
Date: Wed, 05 Aug 2015 18:27:24 -0700
From: Stephen Boyd
To: Kumar Gala
Subject: Re: [PATCH v5 2/2] firmware: qcom: scm: Add support for ARM64 SoCs
References: <1430249038-30987-1-git-send-email-galak@codeaurora.org>
 <1430249038-30987-2-git-send-email-galak@codeaurora.org>
In-Reply-To: <1430249038-30987-2-git-send-email-galak@codeaurora.org>
Cc: linux-arm-msm@vger.kernel.org, arm@kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 Lina Iyer

On 04/28/2015 12:23 PM, Kumar Gala wrote:
> +
> +int __qcom_scm_call_armv8_64(u64 x0, u64 x1, u64 x2, u64 x3, u64 x4, u64 x5,
> +			     u64 *ret1, u64 *ret2, u64 *ret3)
> +{
> +	register u64 r0 asm("r0") = x0;
> +	register u64 r1 asm("r1") = x1;
> +	register u64 r2 asm("r2") = x2;
> +	register u64 r3 asm("r3") = x3;
> +	register u64 r4 asm("r4") = x4;
> +	register u64 r5 asm("r5") = x5;

This should set x6 to 0. register u64 r6 asm("r6") = 0; for example.

> +
> +	do {
> +		asm volatile(
> +			__asmeq("%0", "x0")
> +			__asmeq("%1", "x1")
> +			__asmeq("%2", "x2")
> +			__asmeq("%3", "x3")
> +			__asmeq("%4", "x0")
> +			__asmeq("%5", "x1")
> +			__asmeq("%6", "x2")
> +			__asmeq("%7", "x3")
> +			__asmeq("%8", "x4")
> +			__asmeq("%9", "x5")

And then the __asmeq() here for x6.

> +#ifdef REQUIRES_SEC
> +			".arch_extension sec\n"
> +#endif
> +			"smc	#0\n"
> +			: "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3)
> +			: "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4),
> +			  "r" (r5)

And add x6 as an input here.

> +			: "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13",

And remove x6 as a clobber.

> +			  "x14", "x15", "x16", "x17");
> +	} while (r0 == QCOM_SCM_INTERRUPTED);
> +
> +	if (ret1)
> +		*ret1 = r1;
> +	if (ret2)
> +		*ret2 = r2;
> +	if (ret3)
> +		*ret3 = r3;
> +
> +	return r0;
> +}
> +
> +int __qcom_scm_call_armv8_32(u32 w0, u32 w1, u32 w2, u32 w3, u32 w4, u32 w5,
> +			     u64 *ret1, u64 *ret2, u64 *ret3)
> +{
> +	register u32 r0 asm("r0") = w0;
> +	register u32 r1 asm("r1") = w1;
> +	register u32 r2 asm("r2") = w2;
> +	register u32 r3 asm("r3") = w3;
> +	register u32 r4 asm("r4") = w4;
> +	register u32 r5 asm("r5") = w5;

This needs to set r6 to 0 as well. register u32 r6 asm("r6") = 0; for example.

> +
> +	do {
> +		asm volatile(
> +			__asmeq("%0", "x0")
> +			__asmeq("%1", "x1")
> +			__asmeq("%2", "x2")
> +			__asmeq("%3", "x3")
> +			__asmeq("%4", "x0")
> +			__asmeq("%5", "x1")
> +			__asmeq("%6", "x2")
> +			__asmeq("%7", "x3")
> +			__asmeq("%8", "x4")
> +			__asmeq("%9", "x5")

And then another __asmeq() here for x6.

> +#ifdef REQUIRES_SEC
> +			".arch_extension sec\n"
> +#endif
> +			"smc	#0\n"
> +			: "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3)
> +			: "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4),
> +			  "r" (r5)

And then add r6 here as an input.

> +			: "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13",

And remove r6 from the clobber list.

> +			  "x14", "x15", "x16", "x17");
> +
> +	} while (r0 == QCOM_SCM_INTERRUPTED);
> +
> +	if (ret1)
> +		*ret1 = r1;
> +	if (ret2)
> +		*ret2 = r2;
> +	if (ret3)
> +		*ret3 = r3;
> +
> +	return r0;
> +}
> +

Here's a totally untested patch for that.
Signed-off-by: Stephen Boyd

----8<-----
diff --git a/drivers/firmware/qcom_scm-64.c b/drivers/firmware/qcom_scm-64.c
index a95fd9b5d576..8f7e65ff524c 100644
--- a/drivers/firmware/qcom_scm-64.c
+++ b/drivers/firmware/qcom_scm-64.c
@@ -114,6 +114,7 @@ int __qcom_scm_call_armv8_64(u64 x0, u64 x1, u64 x2, u64 x3, u64 x4, u64 x5,
 	register u64 r3 asm("r3") = x3;
 	register u64 r4 asm("r4") = x4;
 	register u64 r5 asm("r5") = x5;
+	register u64 r6 asm("r6") = 0;
 
 	do {
 		asm volatile(
@@ -127,14 +128,15 @@ int __qcom_scm_call_armv8_64(u64 x0, u64 x1, u64 x2, u64 x3, u64 x4, u64 x5,
 			__asmeq("%7", "x3")
 			__asmeq("%8", "x4")
 			__asmeq("%9", "x5")
+			__asmeq("%10", "x6")
 #ifdef REQUIRES_SEC
 			".arch_extension sec\n"
 #endif
 			"smc	#0\n"
 			: "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3)
 			: "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4),
-			  "r" (r5)
-			: "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13",
+			  "r" (r5), "r" (r6)
+			: "x7", "x8", "x9", "x10", "x11", "x12", "x13",
 			  "x14", "x15", "x16", "x17");
 	} while (r0 == QCOM_SCM_INTERRUPTED);
 
@@ -157,6 +159,7 @@ int __qcom_scm_call_armv8_32(u32 w0, u32 w1, u32 w2, u32 w3, u32 w4, u32 w5,
 	register u32 r3 asm("r3") = w3;
 	register u32 r4 asm("r4") = w4;
 	register u32 r5 asm("r5") = w5;
+	register u32 r6 asm("r6") = 0;
 
 	do {
 		asm volatile(
@@ -170,14 +173,15 @@ int __qcom_scm_call_armv8_32(u32 w0, u32 w1, u32 w2, u32 w3, u32 w4, u32 w5,
 			__asmeq("%7", "x3")
 			__asmeq("%8", "x4")
 			__asmeq("%9", "x5")
+			__asmeq("%10", "x6")
 #ifdef REQUIRES_SEC
 			".arch_extension sec\n"
 #endif
 			"smc	#0\n"
 			: "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3)
 			: "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4),
-			  "r" (r5)
-			: "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13",
+			  "r" (r5), "r" (r6)
+			: "x7", "x8", "x9", "x10", "x11", "x12", "x13",
 			  "x14", "x15", "x16", "x17");
 
 	} while (r0 == QCOM_SCM_INTERRUPTED);
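
For reference, the 64-bit call with the patch applied would read roughly as
below. This is only a sketch assembled from the quoted code plus the diff
above, and is just as untested as the patch itself; __asmeq() and
QCOM_SCM_INTERRUPTED come from the surrounding driver.

int __qcom_scm_call_armv8_64(u64 x0, u64 x1, u64 x2, u64 x3, u64 x4, u64 x5,
			     u64 *ret1, u64 *ret2, u64 *ret3)
{
	register u64 r0 asm("r0") = x0;
	register u64 r1 asm("r1") = x1;
	register u64 r2 asm("r2") = x2;
	register u64 r3 asm("r3") = x3;
	register u64 r4 asm("r4") = x4;
	register u64 r5 asm("r5") = x5;
	/* per the review: x6 must be explicitly zeroed before the smc */
	register u64 r6 asm("r6") = 0;

	do {
		asm volatile(
			__asmeq("%0", "x0")
			__asmeq("%1", "x1")
			__asmeq("%2", "x2")
			__asmeq("%3", "x3")
			__asmeq("%4", "x0")
			__asmeq("%5", "x1")
			__asmeq("%6", "x2")
			__asmeq("%7", "x3")
			__asmeq("%8", "x4")
			__asmeq("%9", "x5")
			__asmeq("%10", "x6")
#ifdef REQUIRES_SEC
			".arch_extension sec\n"
#endif
			"smc	#0\n"
			: "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3)
			/* x6 is now an input operand ... */
			: "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4),
			  "r" (r5), "r" (r6)
			/* ... so it drops out of the clobber list */
			: "x7", "x8", "x9", "x10", "x11", "x12", "x13",
			  "x14", "x15", "x16", "x17");
	} while (r0 == QCOM_SCM_INTERRUPTED);

	if (ret1)
		*ret1 = r1;
	if (ret2)
		*ret2 = r2;
	if (ret3)
		*ret3 = r3;

	return r0;
}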