From patchwork Mon Aug  1 21:12:32 2022
X-Patchwork-Submitter: Elliot Berman <quic_eberman@quicinc.com>
X-Patchwork-Id: 12934052
From: Elliot Berman <quic_eberman@quicinc.com>
To: Bjorn Andersson, Lorenzo Pieralisi, Sudeep Holla, Marc Zyngier
Cc: Elliot Berman, Murali Nalajala, Trilok Soni, Srivatsa Vaddagiri,
    Carl van Schaik, Andy Gross, Rob Herring, Krzysztof Kozlowski,
    Jonathan Corbet, Will Deacon, Catalin Marinas
Subject: [PATCH v2 03/11] arm64: gunyah: Add Gunyah hypercalls ABI
Date: Mon, 1 Aug 2022 14:12:32 -0700
Message-ID: <20220801211240.597859-4-quic_eberman@quicinc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220801211240.597859-1-quic_eberman@quicinc.com>
References: <20220801211240.597859-1-quic_eberman@quicinc.com>
Add initial support for performing Gunyah hypercalls. The arm64 ABI for
Gunyah hypercalls generally follows the SMC Calling Convention.

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
A brief, hypothetical usage sketch of arch_gh_hypercall() follows the patch.

 MAINTAINERS                     |   1 +
 arch/arm64/include/asm/gunyah.h | 134 ++++++++++++++++++++++++++++++++
 2 files changed, 135 insertions(+)
 create mode 100644 arch/arm64/include/asm/gunyah.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 0cd12ea6c11c..02f97ac90cdf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8743,6 +8743,7 @@ L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml
 F:	Documentation/virt/gunyah/
+F:	arch/arm64/include/asm/gunyah.h
 
 HABANALABS PCI DRIVER
 M:	Oded Gabbay
diff --git a/arch/arm64/include/asm/gunyah.h b/arch/arm64/include/asm/gunyah.h
new file mode 100644
index 000000000000..4820e9389f40
--- /dev/null
+++ b/arch/arm64/include/asm/gunyah.h
@@ -0,0 +1,134 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#ifndef __ASM_GUNYAH_H
+#define __ASM_GUNYAH_H
+
+#include <linux/arm-smccc.h>
+#include <linux/types.h>
+
+#define GH_CALL_TYPE_PLATFORM_CALL	0
+#define GH_CALL_TYPE_HYPERCALL		2
+#define GH_CALL_TYPE_SERVICE		3
+#define GH_CALL_TYPE_SHIFT		14
+#define GH_CALL_FUNCTION_NUM_MASK	0x3fff
+
+#define GH_SERVICE(fn) ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_32, \
+					  ARM_SMCCC_OWNER_VENDOR_HYP, \
+					  (GH_CALL_TYPE_SERVICE << GH_CALL_TYPE_SHIFT) \
+					  | ((fn) & GH_CALL_FUNCTION_NUM_MASK))
+
+#define GH_HYPERCALL(fn) ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64, \
+					    ARM_SMCCC_OWNER_VENDOR_HYP, \
+					    (GH_CALL_TYPE_HYPERCALL << GH_CALL_TYPE_SHIFT) \
+					    | ((fn) & GH_CALL_FUNCTION_NUM_MASK))
+
+#define ___gh_count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
+
+#define __gh_count_args(...) \
+	___gh_count_args(_, ## __VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1, 0)
+
+#define __gh_skip_0(...)			__VA_ARGS__
+#define __gh_skip_1(a, ...)			__VA_ARGS__
+#define __gh_skip_2(a, b, ...)			__VA_ARGS__
+#define __gh_skip_3(a, b, c, ...)		__VA_ARGS__
+#define __gh_skip_4(a, b, c, d, ...)		__VA_ARGS__
+#define __gh_skip_5(a, b, c, d, e, ...)		__VA_ARGS__
+#define __gh_skip_6(a, b, c, d, e, f, ...)	__VA_ARGS__
+#define __gh_skip_7(a, b, c, d, e, f, g, ...)	__VA_ARGS__
+#define __gh_skip_8(a, b, c, d, e, f, g, h, ...) __VA_ARGS__
+#define __gh_to_res(nargs, ...)	__gh_skip_ ## nargs (__VA_ARGS__)
+
+#define __gh_declare_arg_0(...)
+
+#define __gh_declare_arg_1(arg1, ...)			\
+	.a1 = (arg1)
+
+#define __gh_declare_arg_2(arg1, arg2, ...)		\
+	__gh_declare_arg_1(arg1),			\
+	.a2 = (arg2)
+
+#define __gh_declare_arg_3(arg1, arg2, arg3, ...)	\
+	__gh_declare_arg_2(arg1, arg2),			\
+	.a3 = (arg3)
+
+#define __gh_declare_arg_4(arg1, arg2, arg3, arg4, ...)	\
+	__gh_declare_arg_3(arg1, arg2, arg3),		\
+	.a4 = (arg4)
+
+#define __gh_declare_arg_5(arg1, arg2, arg3, arg4, arg5, ...)	\
+	__gh_declare_arg_4(arg1, arg2, arg3, arg4),	\
+	.a5 = (arg5)
+
+#define __gh_declare_arg_6(arg1, arg2, arg3, arg4, arg5, arg6, ...)	\
+	__gh_declare_arg_5(arg1, arg2, arg3, arg4, arg5),	\
+	.a6 = (arg6)
+
+#define __gh_declare_arg_7(arg1, arg2, arg3, arg4, arg5, arg6, arg7, ...)	\
+	__gh_declare_arg_6(arg1, arg2, arg3, arg4, arg5, arg6),	\
+	.a7 = (arg7)
+
+#define __gh_declare_arg_8(arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, ...)	\
+	__gh_declare_arg_7(arg1, arg2, arg3, arg4, arg5, arg6, arg7),	\
+	.a8 = (arg8)
+
+#define ___gh_declare_args(nargs)	__gh_declare_arg_ ## nargs
+#define __gh_declare_args(nargs)	___gh_declare_args(nargs)
+#define _gh_declare_args(nargs, ...)	__gh_declare_args(nargs)(__VA_ARGS__)
+
+#define __gh_assign_res_0(...)
+
+#define __gh_assign_res_1(r1)	\
+	r1 = __res.a0
+
+#define __gh_assign_res_2(r1, r2)	\
+	__gh_assign_res_1(r1);	\
+	r2 = __res.a1
+
+#define __gh_assign_res_3(r1, r2, r3)	\
+	__gh_assign_res_2(r1, r2);	\
+	r3 = __res.a2
+
+#define __gh_assign_res_4(r1, r2, r3, r4)	\
+	__gh_assign_res_3(r1, r2, r3);	\
+	r4 = __res.a3
+
+#define __gh_assign_res_5(r1, r2, r3, r4, r5)	\
+	__gh_assign_res_4(r1, r2, r3, r4);	\
+	r5 = __res.a4
+
+#define __gh_assign_res_6(r1, r2, r3, r4, r5, r6)	\
+	__gh_assign_res_5(r1, r2, r3, r4, r5);	\
+	r6 = __res.a5
+
+#define __gh_assign_res_7(r1, r2, r3, r4, r5, r6, r7)	\
+	__gh_assign_res_6(r1, r2, r3, r4, r5, r6);	\
+	r7 = __res.a6
+
+#define __gh_assign_res_8(r1, r2, r3, r4, r5, r6, r7, r8)	\
+	__gh_assign_res_7(r1, r2, r3, r4, r5, r6, r7);	\
+	r8 = __res.a7
+
+#define ___gh_assign_res(nargs)	__gh_assign_res_ ## nargs
+#define __gh_assign_res(nargs)	___gh_assign_res(nargs)
+#define _gh_assign_res(...)	__gh_assign_res(__gh_count_args(__VA_ARGS__))(__VA_ARGS__)
+
+/**
+ * arch_gh_hypercall() - Performs an AArch64-specific call into the hypervisor using the Gunyah ABI
+ * @hcall_num: Hypercall function ID to invoke
+ * @nargs: Number of input arguments
+ * @...: First nargs are the input arguments. Remaining arguments are output variables.
+ */
+#define arch_gh_hypercall(hcall_num, nargs, ...)			\
+	do {								\
+		struct arm_smccc_1_2_regs __res;			\
+		struct arm_smccc_1_2_regs __args = {			\
+			.a0 = hcall_num,				\
+			_gh_declare_args(nargs, __VA_ARGS__)		\
+		};							\
+		arm_smccc_1_2_hvc(&__args, &__res);			\
+		_gh_assign_res(__gh_to_res(nargs, __VA_ARGS__));	\
+	} while (0)
+
+#endif
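
For readers less familiar with the SMC Calling Convention, the function IDs
built by GH_SERVICE()/GH_HYPERCALL() land in the vendor-specific hypervisor
service range: ARM_SMCCC_CALL_VAL() puts the fast-call bit at bit 31, the
32/64-bit convention at bit 30, the owner (ARM_SMCCC_OWNER_VENDOR_HYP) in
bits 29:24, and the low 16 bits carry the Gunyah call type plus function
number. A worked expansion, using a made-up function number 0x12 purely for
illustration:

	GH_HYPERCALL(0x12)
	  = ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64,
			       ARM_SMCCC_OWNER_VENDOR_HYP,
			       (GH_CALL_TYPE_HYPERCALL << GH_CALL_TYPE_SHIFT) | 0x12)
	  = (1 << 31) | (1 << 30) | (6 << 24) | ((2 << 14) | 0x12)
	  = 0xc6008012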
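
To make the argument/result plumbing concrete, here is a minimal usage
sketch. The wrapper name, the function number 0x12, and the parameter
meanings are invented for illustration only and are not part of the Gunyah
ABI. The first nargs variadic arguments are loaded into x1..xN of the HVC;
each remaining argument must be an assignable lvalue and is filled from the
response registers starting at x0 (__res.a0):

/* Hypothetical example; 0x12 is not a real Gunyah function number. */
#define GH_HYPERCALL_EXAMPLE		GH_HYPERCALL(0x12)

static inline u64 gh_example_call(u64 addr, u64 size)
{
	u64 resp = 0;

	/*
	 * nargs == 2: addr -> x1, size -> x2. The trailing lvalue 'resp'
	 * is assigned from the first response register (__res.a0) after
	 * arm_smccc_1_2_hvc() returns.
	 */
	arch_gh_hypercall(GH_HYPERCALL_EXAMPLE, 2, addr, size, resp);

	return resp;
}

Because the macro expands into designated initializers for the input
registers and plain assignments for the outputs, output arguments have to be
lvalues (a local variable or a dereferenced pointer), not arbitrary
expressions.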