From patchwork Fri Aug 23 18:48:50 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Volodymyr Babchuk
X-Patchwork-Id: 11112229
From: Volodymyr Babchuk
To: "xen-devel@lists.xenproject.org"
Date: Fri, 23 Aug 2019 18:48:50 +0000
Message-ID: <20190823184826.14525-5-volodymyr_babchuk@epam.com>
References: <20190823184826.14525-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20190823184826.14525-1-volodymyr_babchuk@epam.com>
Subject: [Xen-devel] [PATCH 4/5] xen/arm: optee: handle share buffer translation error
Cc: "tee-dev@lists.linaro.org", Julien Grall, Stefano Stabellini,
    Volodymyr Babchuk

There is a possible case when OP-TEE asks the guest to allocate a
shared buffer, but Xen for some reason can't translate the buffer's
addresses. In this situation we should do two things:

1. Tell the guest to free the allocated buffer, so there will be no
   memory leak in the guest.

2. Tell OP-TEE that the buffer allocation failed.

To ask the guest to free the allocated buffer we do the same thing
OP-TEE does: issue an RPC request. This is done by filling the request
buffer (luckily, we can reuse the same buffer that OP-TEE used to issue
the original request) and then returning to the guest with a special
return code.

We then need to handle the next call from the guest in a special way:
as the RPC was issued by Xen, not by OP-TEE, it should be handled by
Xen. Basically, this is the mechanism to preempt the OP-TEE mediator;
a small illustrative model of this flow follows the diff below.

The same mechanism can be used in the future to preempt the mediator
during translation of large (>512 pages) shared buffers.

Signed-off-by: Volodymyr Babchuk
---
 xen/arch/arm/tee/optee.c | 167 +++++++++++++++++++++++++++++++--------
 1 file changed, 136 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 3ce6e7fa55..4eebc60b62 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -96,6 +96,11 @@
                                   OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \
                                   OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
 
+enum optee_call_state {
+    OPTEEM_CALL_NORMAL = 0,
+    OPTEEM_CALL_XEN_RPC,
+};
+
 static unsigned int __read_mostly max_optee_threads;
 
 /*
@@ -112,6 +117,9 @@ struct optee_std_call {
     paddr_t guest_arg_ipa;
     int optee_thread_id;
     int rpc_op;
+    /* Saved buffer type for the last buffer allocate request */
+    unsigned int rpc_buffer_type;
+    enum optee_call_state state;
     uint64_t rpc_data_cookie;
     bool in_flight;
     register_t rpc_params[2];
@@ -299,6 +307,7 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx)
 
     call->optee_thread_id = -1;
     call->in_flight = true;
+    call->state = OPTEEM_CALL_NORMAL;
 
     spin_lock(&ctx->lock);
     list_add_tail(&call->list, &ctx->call_list);
@@ -1075,6 +1084,10 @@ static int handle_rpc_return(struct optee_domain *ctx,
         ret = -ERESTART;
     }
 
+    /* Save the buffer type in case we will want to free it */
+    if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
+        call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a;
+
     unmap_domain_page(shm_rpc->xen_arg);
 }
 
@@ -1239,18 +1252,102 @@ err:
     return;
 }
 
+/*
+ * Prepare an RPC request to free the shared buffer in the same way
+ * as OP-TEE does.
+ *
+ * Return values:
+ *  true  - successfully prepared the RPC request
+ *  false - there was an error
+ */
+static bool issue_rpc_cmd_free(struct optee_domain *ctx,
+                               struct cpu_user_regs *regs,
+                               struct optee_std_call *call,
+                               struct shm_rpc *shm_rpc,
+                               uint64_t cookie)
+{
+    register_t r1, r2;
+
+    /* In case the guest forgets to update it with a meaningful value */
+    shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+    shm_rpc->xen_arg->cmd = OPTEE_RPC_CMD_SHM_FREE;
+    shm_rpc->xen_arg->num_params = 1;
+    shm_rpc->xen_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
+    shm_rpc->xen_arg->params[0].u.value.a = call->rpc_buffer_type;
+    shm_rpc->xen_arg->params[0].u.value.b = cookie;
+
+    if ( access_guest_memory_by_ipa(current->domain,
+                                    gfn_to_gaddr(shm_rpc->gfn),
+                                    shm_rpc->xen_arg,
+                                    OPTEE_MSG_GET_ARG_SIZE(1),
+                                    true) )
+    {
+        /*
+         * Well, this is quite bad. We have an error in the error path.
+         * This can happen only if the guest behaves badly, so all
+         * we can do is to return an error to OP-TEE and leave the
+         * guest's memory leaked.
+         */
+        shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+        shm_rpc->xen_arg->num_params = 0;
+
+        return false;
+    }
+
+    uint64_to_regpair(&r1, &r2, shm_rpc->cookie);
+
+    call->state = OPTEEM_CALL_XEN_RPC;
+    call->rpc_op = OPTEE_SMC_RPC_FUNC_CMD;
+    call->rpc_params[0] = r1;
+    call->rpc_params[1] = r2;
+    call->optee_thread_id = get_user_reg(regs, 3);
+
+    set_user_reg(regs, 0, OPTEE_SMC_RETURN_RPC_CMD);
+    set_user_reg(regs, 1, r1);
+    set_user_reg(regs, 2, r2);
+
+    return true;
+}
+
+/* Handles the return from a Xen-issued RPC */
+static void handle_xen_rpc_return(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs,
+                                  struct optee_std_call *call,
+                                  struct shm_rpc *shm_rpc)
+{
+    call->state = OPTEEM_CALL_NORMAL;
+
+    /*
+     * Right now we have only one reason to be here: we asked the guest
+     * to free the shared buffer and it did so. Now we can tell OP-TEE
+     * that the buffer allocation failed.
+     */
+
+    /*
+     * We are not checking the return value from the guest because we
+     * assume that OPTEE_RPC_CMD_SHM_FREE never fails.
+     */
+
+    shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+    shm_rpc->xen_arg->num_params = 0;
+}
+
 /*
  * This function is called when guest is finished processing RPC
  * request from OP-TEE and wished to resume the interrupted standard
  * call.
+ *
+ * Return values:
+ *  false - there was an error, do not call OP-TEE
+ *  true  - success, proceed as normal
  */
-static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
+static bool handle_rpc_cmd_alloc(struct optee_domain *ctx,
                                  struct cpu_user_regs *regs,
                                  struct optee_std_call *call,
                                  struct shm_rpc *shm_rpc)
 {
     if ( shm_rpc->xen_arg->ret || shm_rpc->xen_arg->num_params != 1 )
-        return;
+        return true;
 
     if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
                                               OPTEE_MSG_ATTR_NONCONTIG) )
@@ -1258,7 +1355,7 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
         gdprintk(XENLOG_WARNING,
                  "Invalid attrs for shared mem buffer: %"PRIx64"\n",
                  shm_rpc->xen_arg->params[0].attr);
-        return;
+        return true;
     }
 
     /* Free pg list for buffer */
@@ -1274,21 +1371,14 @@
     {
         call->rpc_data_cookie = 0;
         /*
-         * Okay, so there was problem with guest's buffer and we need
-         * to tell about this to OP-TEE.
-         */
-        shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
-        shm_rpc->xen_arg->num_params = 0;
-
-        /*
-         * TODO: With current implementation, OP-TEE will not issue
-         * RPC to free this buffer. Guest and OP-TEE will be out of
-         * sync: guest believes that it provided buffer to OP-TEE,
-         * while OP-TEE thinks of opposite. Ideally, we need to
-         * emulate RPC with OPTEE_MSG_RPC_CMD_SHM_FREE command.
+         * We are unable to translate the guest's buffer, so we need to
+         * tell the guest to free it before returning the error to OP-TEE.
          */
-        gprintk(XENLOG_WARNING,
-                "translate_noncontig() failed, OP-TEE/guest state is out of sync.\n");
+        return !issue_rpc_cmd_free(ctx, regs, call, shm_rpc,
+                                   shm_rpc->xen_arg->params[0].u.tmem.shm_ref);
     }
+
+    return true;
 }
 
 static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
@@ -1338,22 +1428,37 @@ static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
         goto out;
     }
 
-    switch (shm_rpc->xen_arg->cmd)
+    if ( call->state == OPTEEM_CALL_NORMAL )
     {
-    case OPTEE_RPC_CMD_GET_TIME:
-    case OPTEE_RPC_CMD_WAIT_QUEUE:
-    case OPTEE_RPC_CMD_SUSPEND:
-        break;
-    case OPTEE_RPC_CMD_SHM_ALLOC:
-        handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc);
-        break;
-    case OPTEE_RPC_CMD_SHM_FREE:
-        free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
-        if ( call->rpc_data_cookie == shm_rpc->xen_arg->params[0].u.value.b )
-            call->rpc_data_cookie = 0;
-        break;
-    default:
-        break;
+        switch (shm_rpc->xen_arg->cmd)
+        {
+        case OPTEE_RPC_CMD_GET_TIME:
+        case OPTEE_RPC_CMD_WAIT_QUEUE:
+        case OPTEE_RPC_CMD_SUSPEND:
+            break;
+        case OPTEE_RPC_CMD_SHM_ALLOC:
+            if ( !handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc) )
+            {
+                /* We failed to translate the buffer, report back to the guest */
+                unmap_domain_page(shm_rpc->xen_arg);
+                put_std_call(ctx, call);
+
+                return;
+            }
+            break;
+        case OPTEE_RPC_CMD_SHM_FREE:
+            free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
+            if ( call->rpc_data_cookie ==
+                 shm_rpc->xen_arg->params[0].u.value.b )
+                call->rpc_data_cookie = 0;
+            break;
+        default:
+            break;
+        }
+    }
+    else
+    {
+        handle_xen_rpc_return(ctx, regs, call, shm_rpc);
     }
 
 out:
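As promised above, here is a minimal, self-contained model of the
preemption flow this patch introduces. It is illustrative only: every
name in it (call_state, std_call, fail_translation, guest_rpc_return)
is invented for this sketch and is not the patch's actual Xen code,
which operates on struct optee_std_call and guest registers instead.
It only shows the two-state idea: a translation failure makes Xen issue
its own SHM_FREE RPC to the guest, and the guest's next return is then
consumed by Xen (which reports the allocation failure to OP-TEE) rather
than forwarded to OP-TEE as a normal RPC result.

/*
 * Illustrative model only -- NOT Xen code. Mimics the two-state
 * machine added by this patch (OPTEEM_CALL_NORMAL / OPTEEM_CALL_XEN_RPC).
 */
#include <stdio.h>

enum call_state { CALL_NORMAL, CALL_XEN_RPC };

struct std_call {
    enum call_state state;  /* who owns the RPC currently in flight */
};

/* Xen can't translate the buffer: ask the guest to free it and mark
 * the in-flight RPC as Xen-issued (the preemption step). */
static void fail_translation(struct std_call *call)
{
    printf("to guest:  RPC_CMD_SHM_FREE (issued by Xen)\n");
    call->state = CALL_XEN_RPC;
}

/* The guest resumes the interrupted call after serving the RPC. */
static void guest_rpc_return(struct std_call *call)
{
    if (call->state == CALL_XEN_RPC)
    {
        /* The RPC was Xen's own: consume it and fail the allocation. */
        call->state = CALL_NORMAL;
        printf("to OP-TEE: SHM_ALLOC failed (TEEC_ERROR_GENERIC)\n");
    }
    else
    {
        printf("to OP-TEE: forward the guest's RPC result\n");
    }
}

int main(void)
{
    struct std_call call = { CALL_NORMAL };

    fail_translation(&call);  /* Xen preempts the mediator */
    guest_rpc_return(&call);  /* guest freed the buffer; back to normal */

    return 0;
}

The key design point, mirrored from the patch, is that the state flag
lives in the per-call structure, so each standard call can be preempted
independently without any global bookkeeping.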