From patchwork Thu May 28 16:11:20 2015
X-Patchwork-Submitter: Kumar Gala
X-Patchwork-Id: 6499911
From: Kumar Gala
To: linux-arm-msm@vger.kernel.org, Mark Rutland
Cc: catalin.marinas@arm.com, will.deacon@arm.com, linux-kernel@vger.kernel.org,
    arm@kernel.org, Kumar Gala, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6] firmware: qcom: scm: Add support for ARM64 SoCs
Date: Thu, 28 May 2015 11:11:20 -0500
Message-Id: <1432829480-14156-1-git-send-email-galak@codeaurora.org>

Add an implementation of the SCM interface that works on ARM64 SoCs. This
is used for things like determining whether the system has HDCP support.
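For illustration, a consumer of this interface could probe for HDCP support
roughly as sketched below. The wrapper name qcom_hdcp_supported() is
hypothetical (it is not part of this patch), and the sketch assumes that a
positive value returned by __qcom_scm_is_call_available() means the call is
implemented by the firmware:

	/* Hypothetical helper: ask the secure world whether the HDCP
	 * service/command pair used by this patch is implemented. */
	static bool qcom_hdcp_supported(void)
	{
		int avail;

		avail = __qcom_scm_is_call_available(QCOM_SCM_SVC_HDCP,
						     QCOM_SCM_CMD_HDCP);

		/* Negative values are errors, zero means "not available". */
		return avail > 0;
	}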
Signed-off-by: Kumar Gala
---
* v6:
  - Added comment about HDCP usage
  - Folded in HDCP SCM call
  - implement boot interfaces as -EINVAL
* v5:
  - use common error defines from qcom_scm.h
  - removed R*_STR defines
* v4:
  - Folded in change to qcom_scm_cpu_power_down to remove HOTPLUG flag from Lina.

 arch/arm64/Kconfig             |   1 +
 drivers/firmware/Makefile      |   4 +
 drivers/firmware/qcom_scm-64.c | 406 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 411 insertions(+)
 create mode 100644 drivers/firmware/qcom_scm-64.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4269dba..8878800 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -190,6 +190,7 @@ config ARCH_MEDIATEK
 config ARCH_QCOM
 	bool "Qualcomm Platforms"
 	select PINCTRL
+	select QCOM_SCM
 	help
 	  This enables support for the ARMv8 based Qualcomm chipsets.
 
diff --git a/drivers/firmware/Makefile b/drivers/firmware/Makefile
index 3001f1a..c79751a 100644
--- a/drivers/firmware/Makefile
+++ b/drivers/firmware/Makefile
@@ -12,7 +12,11 @@ obj-$(CONFIG_ISCSI_IBFT_FIND)	+= iscsi_ibft_find.o
 obj-$(CONFIG_ISCSI_IBFT)	+= iscsi_ibft.o
 obj-$(CONFIG_FIRMWARE_MEMMAP)	+= memmap.o
 obj-$(CONFIG_QCOM_SCM)		+= qcom_scm.o
+ifdef CONFIG_64BIT
+obj-$(CONFIG_QCOM_SCM)		+= qcom_scm-64.o
+else
 obj-$(CONFIG_QCOM_SCM)		+= qcom_scm-32.o
+endif
 CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1)
 
 obj-$(CONFIG_GOOGLE_FIRMWARE)	+= google/
 
diff --git a/drivers/firmware/qcom_scm-64.c b/drivers/firmware/qcom_scm-64.c
new file mode 100644
index 0000000..90e0fa3
--- /dev/null
+++ b/drivers/firmware/qcom_scm-64.c
@@ -0,0 +1,406 @@
+/* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+
+#include "qcom_scm.h"
+
+#define QCOM_SCM_SIP_FNID(s, c) (((((s) & 0xFF) << 8) | ((c) & 0xFF)) | 0x02000000)
+
+#define MAX_QCOM_SCM_ARGS 10
+#define MAX_QCOM_SCM_RETS 3
+
+#define QCOM_SCM_ARGS_IMPL(num, a, b, c, d, e, f, g, h, i, j, ...) (\
+			(((a) & 0xff) << 4) | \
+			(((b) & 0xff) << 6) | \
+			(((c) & 0xff) << 8) | \
+			(((d) & 0xff) << 10) | \
+			(((e) & 0xff) << 12) | \
+			(((f) & 0xff) << 14) | \
+			(((g) & 0xff) << 16) | \
+			(((h) & 0xff) << 18) | \
+			(((i) & 0xff) << 20) | \
+			(((j) & 0xff) << 22) | \
+			(num & 0xffff))
+
+#define QCOM_SCM_ARGS(...) \
+	QCOM_SCM_ARGS_IMPL(__VA_ARGS__, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
+
+/**
+ * struct qcom_scm_desc
+ * @arginfo:	Metadata describing the arguments in args[]
+ * @args:	The array of arguments for the secure syscall
+ * @ret:	The values returned by the secure syscall
+ * @extra_arg_buf: The buffer containing extra arguments
+		   (that don't fit in available registers)
+ * @x5:	The 4th argument to the secure syscall or physical address of
+	extra_arg_buf
+ */
+struct qcom_scm_desc {
+	u32 arginfo;
+	u64 args[MAX_QCOM_SCM_ARGS];
+	u64 ret[MAX_QCOM_SCM_RETS];
+
+	/* private */
+	void *extra_arg_buf;
+	u64 x5;
+};
+
+
+#define QCOM_SCM_EBUSY		-55
+#define QCOM_SCM_V2_EBUSY	-12
+
+static DEFINE_MUTEX(qcom_scm_lock);
+
+#define QCOM_SCM_EBUSY_WAIT_MS 30
+#define QCOM_SCM_EBUSY_MAX_RETRY 20
+
+#define N_EXT_QCOM_SCM_ARGS 7
+#define FIRST_EXT_ARG_IDX 3
+#define SMC_ATOMIC_SYSCALL 31
+#define N_REGISTER_ARGS (MAX_QCOM_SCM_ARGS - N_EXT_QCOM_SCM_ARGS + 1)
+#define SMC64_MASK 0x40000000
+#define SMC_ATOMIC_MASK 0x80000000
+
+static int qcom_scm_remap_error(int err)
+{
+	switch (err) {
+	case QCOM_SCM_ERROR:
+		return -EIO;
+	case QCOM_SCM_EINVAL_ADDR:
+	case QCOM_SCM_EINVAL_ARG:
+		return -EINVAL;
+	case QCOM_SCM_EOPNOTSUPP:
+		return -EOPNOTSUPP;
+	case QCOM_SCM_ENOMEM:
+		return -ENOMEM;
+	case QCOM_SCM_EBUSY:
+		return QCOM_SCM_EBUSY;
+	case QCOM_SCM_V2_EBUSY:
+		return QCOM_SCM_V2_EBUSY;
+	}
+	return -EINVAL;
+}
+
+int __qcom_scm_call_armv8_64(u64 x0, u64 x1, u64 x2, u64 x3, u64 x4, u64 x5,
+			     u64 *ret1, u64 *ret2, u64 *ret3)
+{
+	register u64 r0 asm("r0") = x0;
+	register u64 r1 asm("r1") = x1;
+	register u64 r2 asm("r2") = x2;
+	register u64 r3 asm("r3") = x3;
+	register u64 r4 asm("r4") = x4;
+	register u64 r5 asm("r5") = x5;
+
+	do {
+		asm volatile(
+			__asmeq("%0", "x0")
+			__asmeq("%1", "x1")
+			__asmeq("%2", "x2")
+			__asmeq("%3", "x3")
+			__asmeq("%4", "x0")
+			__asmeq("%5", "x1")
+			__asmeq("%6", "x2")
+			__asmeq("%7", "x3")
+			__asmeq("%8", "x4")
+			__asmeq("%9", "x5")
+#ifdef REQUIRES_SEC
+			".arch_extension sec\n"
+#endif
+			"smc #0\n"
+			: "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3)
+			: "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4),
+			  "r" (r5)
+			: "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13",
+			  "x14", "x15", "x16", "x17");
+	} while (r0 == QCOM_SCM_INTERRUPTED);
+
+	if (ret1)
+		*ret1 = r1;
+	if (ret2)
+		*ret2 = r2;
+	if (ret3)
+		*ret3 = r3;
+
+	return r0;
+}
+
+int __qcom_scm_call_armv8_32(u32 w0, u32 w1, u32 w2, u32 w3, u32 w4, u32 w5,
+			     u64 *ret1, u64 *ret2, u64 *ret3)
+{
+	register u32 r0 asm("r0") = w0;
+	register u32 r1 asm("r1") = w1;
+	register u32 r2 asm("r2") = w2;
+	register u32 r3 asm("r3") = w3;
+	register u32 r4 asm("r4") = w4;
+	register u32 r5 asm("r5") = w5;
+
+	do {
+		asm volatile(
+			__asmeq("%0", "x0")
+			__asmeq("%1", "x1")
+			__asmeq("%2", "x2")
+			__asmeq("%3", "x3")
+			__asmeq("%4", "x0")
+			__asmeq("%5", "x1")
+			__asmeq("%6", "x2")
+			__asmeq("%7", "x3")
+			__asmeq("%8", "x4")
+			__asmeq("%9", "x5")
+#ifdef REQUIRES_SEC
+			".arch_extension sec\n"
+#endif
+			"smc #0\n"
+			: "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3)
+			: "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4),
+			  "r" (r5)
+			: "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13",
+			  "x14", "x15", "x16", "x17");
+
+	} while (r0 == QCOM_SCM_INTERRUPTED);
+
+	if (ret1)
+		*ret1 = r1;
+	if (ret2)
+		*ret2 = r2;
+	if (ret3)
+		*ret3 = r3;
+
+	return r0;
+}
+
+struct qcom_scm_extra_arg {
+	union {
+		u32 args32[N_EXT_QCOM_SCM_ARGS];
+		u64 args64[N_EXT_QCOM_SCM_ARGS];
+	};
+};
+
+static enum qcom_scm_interface_version {
+	QCOM_SCM_UNKNOWN,
+	QCOM_SCM_LEGACY,
+	QCOM_SCM_ARMV8_32,
+	QCOM_SCM_ARMV8_64,
+} qcom_scm_version = QCOM_SCM_UNKNOWN;
+
+/* This will be set to specify SMC32 or SMC64 */
+static u32 qcom_scm_version_mask;
+
+/*
+ * If there are more than N_REGISTER_ARGS, allocate a buffer and place
+ * the additional arguments in it. The extra argument buffer will be
+ * pointed to by X5.
+ */
+static int allocate_extra_arg_buffer(struct qcom_scm_desc *desc, gfp_t flags)
+{
+	int i, j;
+	struct qcom_scm_extra_arg *argbuf;
+	int arglen = desc->arginfo & 0xf;
+	size_t argbuflen = PAGE_ALIGN(sizeof(struct qcom_scm_extra_arg));
+
+	desc->x5 = desc->args[FIRST_EXT_ARG_IDX];
+
+	if (likely(arglen <= N_REGISTER_ARGS)) {
+		desc->extra_arg_buf = NULL;
+		return 0;
+	}
+
+	argbuf = kzalloc(argbuflen, flags);
+	if (!argbuf) {
+		pr_err("qcom_scm_call: failed to alloc mem for extended argument buffer\n");
+		return -ENOMEM;
+	}
+
+	desc->extra_arg_buf = argbuf;
+
+	j = FIRST_EXT_ARG_IDX;
+	if (qcom_scm_version == QCOM_SCM_ARMV8_64)
+		for (i = 0; i < N_EXT_QCOM_SCM_ARGS; i++)
+			argbuf->args64[i] = desc->args[j++];
+	else
+		for (i = 0; i < N_EXT_QCOM_SCM_ARGS; i++)
+			argbuf->args32[i] = desc->args[j++];
+	desc->x5 = virt_to_phys(argbuf);
+	__flush_dcache_area(argbuf, argbuflen);
+
+	return 0;
+}
+
+/**
+ * qcom_scm_call() - Invoke a syscall in the secure world
+ * @svc_id: service identifier
+ * @cmd_id: command identifier
+ * @fn_id: The function ID for this syscall
+ * @desc: Descriptor structure containing arguments and return values
+ *
+ * Sends a command to the SCM and waits for the command to finish processing.
+ * This should *only* be called in pre-emptible context.
+ *
+ * A note on cache maintenance:
+ * Note that any buffers that are expected to be accessed by the secure world
+ * must be flushed before invoking qcom_scm_call and invalidated in the cache
+ * immediately after qcom_scm_call returns. An important point that must be noted
+ * is that on ARMV8 architectures, invalidation actually also causes a dirty
+ * cache line to be cleaned (flushed + unset-dirty-bit). Therefore it is of
+ * paramount importance that the buffer be flushed before invoking qcom_scm_call,
+ * even if you don't care about the contents of that buffer.
+ *
+ * Note that cache maintenance on the argument buffer (desc->args) is taken care
+ * of by qcom_scm_call; however, callers are responsible for any other cached
+ * buffers passed over to the secure world.
+*/
+static int qcom_scm_call(u32 svc_id, u32 cmd_id, struct qcom_scm_desc *desc)
+{
+	int arglen = desc->arginfo & 0xf;
+	int ret, retry_count = 0;
+	u32 fn_id = QCOM_SCM_SIP_FNID(svc_id, cmd_id);
+	u64 x0;
+
+	ret = allocate_extra_arg_buffer(desc, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	x0 = fn_id | qcom_scm_version_mask;
+
+	do {
+		mutex_lock(&qcom_scm_lock);
+
+		desc->ret[0] = desc->ret[1] = desc->ret[2] = 0;
+
+		pr_debug("qcom_scm_call: func id %#llx, args: %#x, %#llx, %#llx, %#llx, %#llx\n",
+			x0, desc->arginfo, desc->args[0], desc->args[1],
+			desc->args[2], desc->x5);
+
+		if (qcom_scm_version == QCOM_SCM_ARMV8_64)
+			ret = __qcom_scm_call_armv8_64(x0, desc->arginfo,
+						  desc->args[0], desc->args[1],
+						  desc->args[2], desc->x5,
+						  &desc->ret[0], &desc->ret[1],
+						  &desc->ret[2]);
+		else
+			ret = __qcom_scm_call_armv8_32(x0, desc->arginfo,
+						  desc->args[0], desc->args[1],
+						  desc->args[2], desc->x5,
+						  &desc->ret[0], &desc->ret[1],
+						  &desc->ret[2]);
+		mutex_unlock(&qcom_scm_lock);
+
+		if (ret == QCOM_SCM_V2_EBUSY)
+			msleep(QCOM_SCM_EBUSY_WAIT_MS);
+	} while (ret == QCOM_SCM_V2_EBUSY &&
+		 (retry_count++ < QCOM_SCM_EBUSY_MAX_RETRY));
+
+	if (ret < 0)
+		pr_err("qcom_scm_call failed: func id %#llx, arginfo: %#x, args: %#llx, %#llx, %#llx, %#llx, ret: %d, syscall returns: %#llx, %#llx, %#llx\n",
+			x0, desc->arginfo, desc->args[0], desc->args[1],
+			desc->args[2], desc->x5, ret, desc->ret[0],
+			desc->ret[1], desc->ret[2]);
+
+	if (arglen > N_REGISTER_ARGS)
+		kfree(desc->extra_arg_buf);
+	if (ret < 0)
+		return qcom_scm_remap_error(ret);
+	return 0;
+}
+
+int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
+{
+	return -EINVAL;
+}
+
+int __qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus)
+{
+	return -EINVAL;
+}
+
+void __qcom_scm_cpu_power_down(u32 flags)
+{
+	return;
+}
+
+int __qcom_scm_is_call_available(u32 svc_id, u32 cmd_id)
+{
+	int ret;
+	struct qcom_scm_desc desc = {0};
+
+	desc.arginfo = QCOM_SCM_ARGS(1);
+	desc.args[0] = QCOM_SCM_SIP_FNID(svc_id, cmd_id);
+
+	ret = qcom_scm_call(QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD, &desc);
+
+	if (ret)
+		return ret;
+
+	return desc.ret[0];
+}
+
+int __qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp)
+{
+	int ret, i, j;
+	struct qcom_scm_desc desc = {0};
+
+	if (req_cnt > QCOM_SCM_HDCP_MAX_REQ_CNT)
+		return -ERANGE;
+
+	for (i = 0, j = 0; i < req_cnt; i++) {
+		desc.args[j++] = req[i].addr;
+		desc.args[j++] = req[i].val;
+	}
+	desc.arginfo = QCOM_SCM_ARGS(j);
+
+	ret = qcom_scm_call(QCOM_SCM_SVC_HDCP, QCOM_SCM_CMD_HDCP, &desc);
+	*resp = desc.ret[0];
+
+	return ret;
+}
+
+static int __init qcom_scm_init(void)
+{
+	int ret;
+	u64 ret1 = 0, x0;
+
+	/* First try a SMC64 call */
+	qcom_scm_version = QCOM_SCM_ARMV8_64;
+	x0 = QCOM_SCM_SIP_FNID(QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD);
+	x0 |= SMC_ATOMIC_MASK;
+	ret = __qcom_scm_call_armv8_64(x0 | SMC64_MASK, QCOM_SCM_ARGS(1), x0, 0, 0, 0,
+				       &ret1, NULL, NULL);
+	if (ret || !ret1) {
+		/* Try SMC32 call */
+		ret1 = 0;
+		ret = __qcom_scm_call_armv8_32(x0, QCOM_SCM_ARGS(1), x0, 0, 0,
+					       0, &ret1, NULL, NULL);
+		if (ret || !ret1)
+			qcom_scm_version = QCOM_SCM_LEGACY;
+		else
+			qcom_scm_version = QCOM_SCM_ARMV8_32;
+	} else
+		qcom_scm_version_mask = SMC64_MASK;
+
+	pr_debug("qcom_scm_call: qcom_scm version is %x, mask is %x\n",
+		 qcom_scm_version, qcom_scm_version_mask);
+
+	return 0;
+}
+early_initcall(qcom_scm_init);
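
As a rough sketch (not part of the patch) of how a new call inside
qcom_scm-64.c would use the descriptor machinery above: the service and
command IDs and the function qcom_scm_example_call() below are made-up
placeholders, not real firmware interfaces.

	#define HYP_SVC_ID	0x42	/* hypothetical service ID */
	#define HYP_CMD_ID	0x01	/* hypothetical command ID */

	static int qcom_scm_example_call(u64 addr, u64 size, u64 *out)
	{
		struct qcom_scm_desc desc = {0};
		int ret;

		/*
		 * Two value arguments: QCOM_SCM_ARGS(2) packs the count into
		 * arginfo and the arguments travel in registers (x2/x3).
		 * Only calls with more than N_REGISTER_ARGS arguments spill
		 * into the kzalloc'd extra-argument buffer whose physical
		 * address is passed in x5.
		 */
		desc.arginfo = QCOM_SCM_ARGS(2);
		desc.args[0] = addr;
		desc.args[1] = size;

		ret = qcom_scm_call(HYP_SVC_ID, HYP_CMD_ID, &desc);
		if (ret)
			return ret;

		if (out)
			*out = desc.ret[0];
		return 0;
	}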