From patchwork Tue Jun 11 18:46:36 2019
From: Volodymyr Babchuk
To: "xen-devel@lists.xenproject.org"
Date: Tue, 11 Jun 2019 18:46:36 +0000
Message-ID: <20190611184541.7281-7-volodymyr_babchuk@epam.com>
References: <20190611184541.7281-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20190611184541.7281-1-volodymyr_babchuk@epam.com>
Subject: [Xen-devel] [PATCH v6 06/10] xen/arm: optee: add support for RPC SHM buffers
Cc: tee-dev@lists.linaro.org, Julien Grall, Stefano Stabellini, Volodymyr Babchuk

OP-TEE usually uses the same idea with command buffers (see the
previous commit) to issue RPC requests. The problem is that initially
it has no buffer where it can write a request, so the first RPC request
it makes is special: it asks the normal world to allocate a shared
buffer for the other RPC requests. Usually this buffer is allocated
only once for every OP-TEE thread and remains allocated until the guest
shuts down. The guest can ask OP-TEE to disable RPC buffer caching; in
that case OP-TEE will ask the guest to allocate/free a buffer for each
RPC.
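(Not part of the patch, just an illustration for reviewers.) The buffer
address and the cookie travel between OP-TEE and the normal world as
pairs of 32-bit register halves (r1/r2 and r4/r5 in the mediator code
below). A minimal standalone sketch of that packing, assuming helpers
shaped like the regpair_to_uint64()/uint64_to_regpair() ones used in
optee.c; the reg_t typedef and the sample values are invented:

/*
 * Standalone sketch (not Xen code): how a 64-bit value is split into
 * and rebuilt from two 32-bit register halves.  The helper shapes and
 * the reg_t typedef are assumptions for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t reg_t;                 /* stand-in for register_t */

static uint64_t regpair_to_u64(reg_t hi, reg_t lo)
{
    return ((uint64_t)hi << 32) | lo;
}

static void u64_to_regpair(reg_t *hi, reg_t *lo, uint64_t val)
{
    *hi = val >> 32;
    *lo = (uint32_t)val;
}

int main(void)
{
    /* Hypothetical buffer address and cookie returned by the guest. */
    uint64_t addr = 0x8f6b2000, cookie = 0xdeadbeefcafeULL;
    reg_t r1, r2, r4, r5;

    u64_to_regpair(&r1, &r2, addr);     /* r1/r2 carry the address */
    u64_to_regpair(&r4, &r5, cookie);   /* r4/r5 carry the cookie */

    printf("addr   %#llx\n", (unsigned long long)regpair_to_u64(r1, r2));
    printf("cookie %#llx\n", (unsigned long long)regpair_to_u64(r4, r5));

    return 0;
}

The mediator does the same reassembly in handle_rpc_func_alloc() below
before validating and pinning the page.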
The mediator needs to pin this shared buffer to make sure that the page
will not be freed while it is shared with OP-TEE.

The life cycle of this buffer is controlled by OP-TEE: it asks the
guest to create the buffer and it asks it to free it. So there is not
much sense in limiting the number of those buffers, because we already
limit the number of concurrent standard calls, and preventing RPC
buffer allocation would impair OP-TEE functionality.

Those buffers can be freed in two ways: either OP-TEE issues an
OPTEE_SMC_RPC_FUNC_FREE RPC request, or the guest disables buffer
caching by calling the OPTEE_SMC_DISABLE_SHM_CACHE function. In the
latter case OP-TEE returns the cookie of the SHM buffer it just freed.

OP-TEE expects this RPC buffer to have a size of
OPTEE_MSG_NONCONTIG_PAGE_SIZE, which equals 4096, and to be aligned to
the same size. So, basically, it expects one 4K page from the guest.
This is the same as Xen's PAGE_SIZE.

Signed-off-by: Volodymyr Babchuk
Acked-by: Julien Grall
---
All the patches to optee.c should be merged together. They were split
to ease review, but they depend heavily on each other.

Changes from v4:
 - handle_rpc_func_alloc() now calls do_call_with_arg() directly

Changes from v3:
 - Removed the MAX_RPC_SHMS constant. Now this value depends on the
   number of OP-TEE threads
 - Various formatting fixes
 - Added checks for guest memory type

Changes from v2:
 - Added a check to ensure that a guest does not return two SHM buffers
   with the same cookie
 - Fixed coding style
 - Store RPC parameters during RPC return to make sure that the guest
   will not change them during call continuation
---
 xen/arch/arm/tee/optee.c | 149 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 145 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index f092492849..175789fb00 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -81,9 +81,17 @@ struct optee_std_call {
     register_t rpc_params[2];
 };
 
+/* Pre-allocated SHM buffer for RPC commands */
+struct shm_rpc {
+    struct list_head list;
+    struct page_info *guest_page;
+    uint64_t cookie;
+};
+
 /* Domain context */
 struct optee_domain {
     struct list_head call_list;
+    struct list_head shm_rpc_list;
     atomic_t call_count;
     spinlock_t lock;
 };
@@ -158,6 +166,7 @@ static int optee_domain_init(struct domain *d)
     }
 
     INIT_LIST_HEAD(&ctx->call_list);
+    INIT_LIST_HEAD(&ctx->shm_rpc_list);
     atomic_set(&ctx->call_count, 0);
     spin_lock_init(&ctx->lock);
 
@@ -199,7 +208,11 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx)
     struct optee_std_call *call;
     int count;
 
-    /* Make sure that guest does not execute more than max_optee_threads */
+    /*
+     * Make sure that guest does not execute more than max_optee_threads.
+     * This also indirectly limits number of RPC SHM buffers, because OP-TEE
+     * allocates one such buffer per standard call.
+     */
     count = atomic_add_unless(&ctx->call_count, 1, max_optee_threads);
     if ( count == max_optee_threads )
         return ERR_PTR(-ENOSPC);
@@ -294,10 +307,80 @@ static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
     spin_unlock(&ctx->lock);
 }
 
+static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
+                                                gfn_t gfn, uint64_t cookie)
+{
+    struct shm_rpc *shm_rpc, *shm_rpc_tmp;
+
+    shm_rpc = xzalloc(struct shm_rpc);
+    if ( !shm_rpc )
+        return ERR_PTR(-ENOMEM);
+
+    /* This page will be shared with OP-TEE, so we need to pin it. */
+    shm_rpc->guest_page = get_domain_ram_page(gfn);
+    if ( !shm_rpc->guest_page )
+        goto err;
+
+    shm_rpc->cookie = cookie;
+
+    spin_lock(&ctx->lock);
+    /* Check if there is existing SHM with the same cookie. */
+    list_for_each_entry( shm_rpc_tmp, &ctx->shm_rpc_list, list )
+    {
+        if ( shm_rpc_tmp->cookie == cookie )
+        {
+            spin_unlock(&ctx->lock);
+            gdprintk(XENLOG_WARNING, "Guest tries to use the same RPC SHM cookie %lx\n",
+                     cookie);
+            goto err;
+        }
+    }
+
+    list_add_tail(&shm_rpc->list, &ctx->shm_rpc_list);
+    spin_unlock(&ctx->lock);
+
+    return shm_rpc;
+
+err:
+    if ( shm_rpc->guest_page )
+        put_page(shm_rpc->guest_page);
+    xfree(shm_rpc);
+
+    return ERR_PTR(-EINVAL);
+}
+
+static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
+{
+    struct shm_rpc *shm_rpc;
+    bool found = false;
+
+    spin_lock(&ctx->lock);
+
+    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+    {
+        if ( shm_rpc->cookie == cookie )
+        {
+            found = true;
+            list_del(&shm_rpc->list);
+            break;
+        }
+    }
+    spin_unlock(&ctx->lock);
+
+    if ( !found )
+        return;
+
+    ASSERT(shm_rpc->guest_page);
+    put_page(shm_rpc->guest_page);
+
+    xfree(shm_rpc);
+}
+
 static int optee_relinquish_resources(struct domain *d)
 {
     struct arm_smccc_res resp;
     struct optee_std_call *call, *call_tmp;
+    struct shm_rpc *shm_rpc, *shm_rpc_tmp;
     struct optee_domain *ctx = d->arch.tee;
 
     if ( !ctx )
@@ -314,6 +397,16 @@ static int optee_relinquish_resources(struct domain *d)
     if ( hypercall_preempt_check() )
         return -ERESTART;
 
+    /*
+     * Number of these buffers also depends on max_optee_threads, so
+     * check the comment above.
+     */
+    list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
+        free_shm_rpc(ctx, shm_rpc->cookie);
+
+    if ( hypercall_preempt_check() )
+        return -ERESTART;
+
     /*
      * Inform OP-TEE that domain is shutting down. This is
      * also a fast SMC call, like OPTEE_SMC_VM_CREATED, so
@@ -328,6 +421,7 @@ static int optee_relinquish_resources(struct domain *d)
 
     ASSERT(!spin_is_locked(&ctx->lock));
     ASSERT(!atomic_read(&ctx->call_count));
+    ASSERT(list_empty(&ctx->shm_rpc_list));
 
     XFREE(d->arch.tee);
 
@@ -587,6 +681,48 @@ err:
  * request from OP-TEE and wished to resume the interrupted standard
  * call.
  */
+static void handle_rpc_func_alloc(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs,
+                                  struct optee_std_call *call)
+{
+    struct shm_rpc *shm_rpc;
+    register_t r1, r2;
+    paddr_t ptr = regpair_to_uint64(get_user_reg(regs, 1),
+                                    get_user_reg(regs, 2));
+    uint64_t cookie = regpair_to_uint64(get_user_reg(regs, 4),
+                                        get_user_reg(regs, 5));
+
+    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
+    {
+        gdprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
+        /*
+         * OP-TEE is waiting for a response to the RPC. We can't just
+         * return error to the guest. We need to provide some invalid
+         * value to OP-TEE, so it can handle error on its side.
+         */
+        ptr = 0;
+        goto out;
+    }
+
+    shm_rpc = allocate_and_pin_shm_rpc(ctx, gaddr_to_gfn(ptr), cookie);
+    if ( IS_ERR(shm_rpc) )
+    {
+        gdprintk(XENLOG_WARNING, "Failed to allocate shm_rpc object: %ld\n",
+                 PTR_ERR(shm_rpc));
+        ptr = 0;
+    }
+    else
+        ptr = page_to_maddr(shm_rpc->guest_page);
+
+out:
+    uint64_to_regpair(&r1, &r2, ptr);
+
+    do_call_with_arg(ctx, call, regs, OPTEE_SMC_CALL_RETURN_FROM_RPC, r1, r2,
+                     get_user_reg(regs, 3),
+                     get_user_reg(regs, 4),
+                     get_user_reg(regs, 5));
+}
+
 static void handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
 {
     struct optee_std_call *call;
@@ -610,11 +746,15 @@ static void handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
     switch ( call->rpc_op )
     {
     case OPTEE_SMC_RPC_FUNC_ALLOC:
-        /* TODO: Add handling */
-        break;
+        handle_rpc_func_alloc(ctx, regs, call);
+        return;
     case OPTEE_SMC_RPC_FUNC_FREE:
+    {
+        uint64_t cookie = regpair_to_uint64(call->rpc_params[0],
+                                            call->rpc_params[1]);
+        free_shm_rpc(ctx, cookie);
         break;
+    }
     case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
         break;
     case OPTEE_SMC_RPC_FUNC_CMD:
@@ -720,6 +860,7 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
                           OPTEE_CLIENT_ID(current->domain), &resp);
         set_user_reg(regs, 0, resp.a0);
         if ( resp.a0 == OPTEE_SMC_RETURN_OK ) {
+            free_shm_rpc(ctx, regpair_to_uint64(resp.a1, resp.a2));
             set_user_reg(regs, 1, resp.a1);
             set_user_reg(regs, 2, resp.a2);
         }
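
(Again, not part of the patch, only an aside for review.) The cookie
bookkeeping above boils down to: keep one list entry per RPC SHM
buffer, refuse a duplicate cookie, and free by the cookie OP-TEE hands
back. Below is a compilable userspace miniature of that logic with
invented names; the real code additionally pins the guest page with
get_domain_ram_page() and serializes everything on the per-domain lock.

/*
 * Standalone illustration (not Xen code) of cookie-keyed tracking of
 * RPC SHM buffers: duplicate cookies are rejected and freeing is keyed
 * by the cookie.  Page pinning and locking are deliberately omitted.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct shm_rpc {
    struct shm_rpc *next;
    uint64_t cookie;
};

static struct shm_rpc *shm_rpc_list;

static struct shm_rpc *shm_rpc_alloc(uint64_t cookie)
{
    struct shm_rpc *shm;

    /* Refuse a cookie that is already tracked. */
    for ( shm = shm_rpc_list; shm; shm = shm->next )
        if ( shm->cookie == cookie )
            return NULL;

    shm = calloc(1, sizeof(*shm));
    if ( !shm )
        return NULL;

    shm->cookie = cookie;
    shm->next = shm_rpc_list;
    shm_rpc_list = shm;

    return shm;
}

static bool shm_rpc_free(uint64_t cookie)
{
    struct shm_rpc **pprev = &shm_rpc_list, *shm;

    for ( shm = shm_rpc_list; shm; pprev = &shm->next, shm = shm->next )
    {
        if ( shm->cookie == cookie )
        {
            *pprev = shm->next;     /* unlink, then release */
            free(shm);
            return true;
        }
    }

    return false;                   /* unknown cookie: nothing to do */
}

int main(void)
{
    printf("alloc 0x100: %s\n", shm_rpc_alloc(0x100) ? "ok" : "refused");
    printf("alloc 0x100: %s\n", shm_rpc_alloc(0x100) ? "ok" : "refused");
    printf("free  0x100: %s\n", shm_rpc_free(0x100) ? "freed" : "not found");
    printf("free  0x100: %s\n", shm_rpc_free(0x100) ? "freed" : "not found");

    return 0;
}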