From patchwork Wed Sep 18 18:51:02 2019
X-Patchwork-Submitter: Volodymyr Babchuk
X-Patchwork-Id: 11151149
From: Volodymyr Babchuk
To: "xen-devel@lists.xenproject.org"
Date: Wed, 18 Sep 2019 18:51:02 +0000
Message-ID: <20190918185041.22738-5-volodymyr_babchuk@epam.com>
References: <20190918185041.22738-1-volodymyr_babchuk@epam.com>
In-Reply-To: <20190918185041.22738-1-volodymyr_babchuk@epam.com>
Subject: [Xen-devel] [PATCH v2 4/6] xen/arm: optee: handle shared buffer translation error
List-Id: Xen developer discussion
Cc: "tee-dev@lists.linaro.org", Julien Grall, Stefano Stabellini,
 Volodymyr Babchuk

There is a possible scenario where OP-TEE asks the guest to allocate a
shared buffer, but Xen for some reason can't translate the buffer's
addresses. In this situation we should do two things:

1. Tell the guest to free the allocated buffer, so the guest does not
   leak memory.

2. Tell OP-TEE that the buffer allocation failed.

To ask the guest to free the allocated buffer we do the same thing
OP-TEE itself does - issue an RPC request. This is done by filling the
request buffer (luckily, we can reuse the same buffer that OP-TEE used
for the original request) and then returning to the guest with a
special return code. We then need to handle the next call from the
guest in a special way: as the RPC was issued by Xen, not by OP-TEE,
it should be handled by Xen. Basically, this is the mechanism for
preempting the OP-TEE mediator.

The same mechanism can be used in the future to preempt the mediator
during translation of large (>512 pages) shared buffers.

Signed-off-by: Volodymyr Babchuk
---
Changes from v1:
 - Renamed OPTEEM_CALL_* to OPTEE_CALL_*
 - Fixed comments
 - Added ASSERT() in handle_xen_rpc_return()
---
 xen/arch/arm/tee/optee.c | 172 ++++++++++++++++++++++++++++++++-------
 1 file changed, 141 insertions(+), 31 deletions(-)
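For illustration only (not part of the patch): the preemption flow added
below boils down to a small per-call state machine. The standalone C
sketch that follows is a hypothetical model - all demo_* names are made
up, and only the two call states mirror the real code - showing how a
Xen-issued RPC reply is consumed by Xen itself, while replies to
OP-TEE-issued RPCs are forwarded as before:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum demo_call_state {
    DEMO_CALL_NORMAL,   /* pending RPC (if any) was issued by OP-TEE */
    DEMO_CALL_XEN_RPC,  /* pending RPC was issued by Xen itself */
};

struct demo_std_call {
    enum demo_call_state state;
    uint64_t rpc_data_cookie;   /* cookie of the buffer we asked to free */
};

/* Translation failed: ask the guest to free the buffer, like OP-TEE would. */
static void demo_issue_rpc_cmd_free(struct demo_std_call *call, uint64_t cookie)
{
    call->state = DEMO_CALL_XEN_RPC;
    call->rpc_data_cookie = cookie;
    printf("to guest:  RPC SHM_FREE for cookie %#llx\n",
           (unsigned long long)cookie);
}

/* Guest resumed the call: route the RPC reply depending on who issued it. */
static void demo_handle_rpc_return(struct demo_std_call *call)
{
    if ( call->state == DEMO_CALL_XEN_RPC )
    {
        /* Xen issued this RPC: consume the reply, report failure to OP-TEE */
        call->state = DEMO_CALL_NORMAL;
        printf("to OP-TEE: SHM_ALLOC for cookie %#llx failed\n",
               (unsigned long long)call->rpc_data_cookie);
    }
    else
    {
        /* Normal case: forward the guest's RPC reply to OP-TEE */
        printf("to OP-TEE: forward guest's RPC reply\n");
    }
}

int main(void)
{
    struct demo_std_call call = { .state = DEMO_CALL_NORMAL };

    demo_issue_rpc_cmd_free(&call, 0x1234);  /* buffer translation failed */
    demo_handle_rpc_return(&call);           /* guest freed its buffer */
    demo_handle_rpc_return(&call);           /* back to the normal path */

    return 0;
}

In the real mediator the second demo_handle_rpc_return() corresponds to
handle_rpc_cmd() taking the OPTEE_CALL_NORMAL branch again.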
diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 88be959819..0a3205f9e8 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -98,6 +98,11 @@
                               OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \
                               OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
 
+enum optee_call_state {
+    OPTEE_CALL_NORMAL,
+    OPTEE_CALL_XEN_RPC,
+};
+
 static unsigned int __read_mostly max_optee_threads;
 
 /*
@@ -114,6 +119,9 @@ struct optee_std_call {
     paddr_t guest_arg_ipa;
     int optee_thread_id;
     int rpc_op;
+    /* Saved buffer type for the current buffer allocate request */
+    unsigned int rpc_buffer_type;
+    enum optee_call_state state;
     uint64_t rpc_data_cookie;
     bool in_flight;
     register_t rpc_params[2];
@@ -301,6 +309,7 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx)
 
     call->optee_thread_id = -1;
     call->in_flight = true;
+    call->state = OPTEE_CALL_NORMAL;
 
     spin_lock(&ctx->lock);
     list_add_tail(&call->list, &ctx->call_list);
@@ -1086,6 +1095,10 @@
         ret = -ERESTART;
     }
 
+    /* Save the buffer type in case we need to free it later */
+    if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
+        call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a;
+
     unmap_domain_page(shm_rpc->xen_arg);
 }
 
@@ -1250,18 +1263,107 @@ err:
     return;
 }
 
+/*
+ * Prepare an RPC request to free the shared buffer in the same way
+ * as OP-TEE does.
+ *
+ * Return values:
+ *  true  - successfully prepared RPC request
+ *  false - there was an error
+ */
+static bool issue_rpc_cmd_free(struct optee_domain *ctx,
+                               struct cpu_user_regs *regs,
+                               struct optee_std_call *call,
+                               struct shm_rpc *shm_rpc,
+                               uint64_t cookie)
+{
+    register_t r1, r2;
+
+    /* In case the guest forgets to update it with a meaningful value */
+    shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+    shm_rpc->xen_arg->cmd = OPTEE_RPC_CMD_SHM_FREE;
+    shm_rpc->xen_arg->num_params = 1;
+    shm_rpc->xen_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
+    shm_rpc->xen_arg->params[0].u.value.a = call->rpc_buffer_type;
+    shm_rpc->xen_arg->params[0].u.value.b = cookie;
+
+    if ( access_guest_memory_by_ipa(current->domain,
+                                    gfn_to_gaddr(shm_rpc->gfn),
+                                    shm_rpc->xen_arg,
+                                    OPTEE_MSG_GET_ARG_SIZE(1),
+                                    true) )
+    {
+        /*
+         * Well, this is quite bad. We have an error in the error
+         * path. This can happen only if the guest behaves badly, so
+         * all we can do is return an error to OP-TEE and leave the
+         * guest's memory leaked. We have already freed all resources
+         * allocated for this buffer, but the guest will never receive
+         * the OPTEE_RPC_CMD_SHM_FREE request, so it will not know
+         * that it can release the allocated buffer.
+         */
+        shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+        shm_rpc->xen_arg->num_params = 0;
+
+        return false;
+    }
+
+    uint64_to_regpair(&r1, &r2, shm_rpc->cookie);
+
+    call->state = OPTEE_CALL_XEN_RPC;
+    call->rpc_op = OPTEE_SMC_RPC_FUNC_CMD;
+    call->rpc_params[0] = r1;
+    call->rpc_params[1] = r2;
+    call->optee_thread_id = get_user_reg(regs, 3);
+
+    set_user_reg(regs, 0, OPTEE_SMC_RETURN_RPC_CMD);
+    set_user_reg(regs, 1, r1);
+    set_user_reg(regs, 2, r2);
+
+    return true;
+}
+
+/* Handles return from Xen-issued RPC */
+static void handle_xen_rpc_return(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs,
+                                  struct optee_std_call *call,
+                                  struct shm_rpc *shm_rpc)
+{
+    call->state = OPTEE_CALL_NORMAL;
+
+    /*
+     * Right now we have only one reason to be here - we asked the
+     * guest to free a shared buffer and it did so. Now we can tell
+     * OP-TEE that the buffer allocation failed. We store only the
+     * type of the RPC return, not the exact command, so this is the
+     * only check we can perform here.
+     */
+    ASSERT(call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD);
+
+    /*
+     * We do not check the return value from the guest because we
+     * assume that OPTEE_RPC_CMD_SHM_FREE never fails.
+     */
+    shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
+    shm_rpc->xen_arg->num_params = 0;
+}
+
 /*
  * This function is called when guest is finished processing RPC
  * request from OP-TEE and wished to resume the interrupted standard
  * call.
+ *
+ * Return values:
+ *  false - there was an error, do not call OP-TEE
+ *  true  - success, proceed as normal
  */
-static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
+static bool handle_rpc_cmd_alloc(struct optee_domain *ctx,
                                  struct cpu_user_regs *regs,
                                  struct optee_std_call *call,
                                  struct shm_rpc *shm_rpc)
 {
     if ( shm_rpc->xen_arg->ret || shm_rpc->xen_arg->num_params != 1 )
-        return;
+        return true;
 
     if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
                                               OPTEE_MSG_ATTR_NONCONTIG) )
@@ -1269,7 +1371,7 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
         gdprintk(XENLOG_WARNING,
                  "Invalid attrs for shared mem buffer: %"PRIx64"\n",
                  shm_rpc->xen_arg->params[0].attr);
-        return;
+        return true;
     }
 
     /* Free pg list for buffer */
@@ -1285,21 +1387,14 @@
     {
         call->rpc_data_cookie = 0;
         /*
-         * Okay, so there was problem with guest's buffer and we need
-         * to tell about this to OP-TEE.
-         */
-        shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
-        shm_rpc->xen_arg->num_params = 0;
-        /*
-         * TODO: With current implementation, OP-TEE will not issue
-         * RPC to free this buffer. Guest and OP-TEE will be out of
-         * sync: guest believes that it provided buffer to OP-TEE,
-         * while OP-TEE thinks of opposite. Ideally, we need to
-         * emulate RPC with OPTEE_MSG_RPC_CMD_SHM_FREE command.
+         * We are unable to translate the guest's buffer, so we need to
+         * tell the guest to free it before reporting an error to OP-TEE.
          */
-        gprintk(XENLOG_WARNING,
-                "translate_noncontig() failed, OP-TEE/guest state is out of sync.\n");
+        return !issue_rpc_cmd_free(ctx, regs, call, shm_rpc,
+                                   shm_rpc->xen_arg->params[0].u.tmem.shm_ref);
     }
+
+    return true;
 }
 
 static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
@@ -1349,22 +1444,37 @@ static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
         goto out;
     }
 
-    switch (shm_rpc->xen_arg->cmd)
+    if ( call->state == OPTEE_CALL_NORMAL )
     {
-    case OPTEE_RPC_CMD_GET_TIME:
-    case OPTEE_RPC_CMD_WAIT_QUEUE:
-    case OPTEE_RPC_CMD_SUSPEND:
-        break;
-    case OPTEE_RPC_CMD_SHM_ALLOC:
-        handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc);
-        break;
-    case OPTEE_RPC_CMD_SHM_FREE:
-        free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
-        if ( call->rpc_data_cookie == shm_rpc->xen_arg->params[0].u.value.b )
-            call->rpc_data_cookie = 0;
-        break;
-    default:
-        break;
+        switch (shm_rpc->xen_arg->cmd)
+        {
+        case OPTEE_RPC_CMD_GET_TIME:
+        case OPTEE_RPC_CMD_WAIT_QUEUE:
+        case OPTEE_RPC_CMD_SUSPEND:
+            break;
+        case OPTEE_RPC_CMD_SHM_ALLOC:
+            if ( !handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc) )
+            {
+                /* We failed to translate the buffer, report back to the guest */
+                unmap_domain_page(shm_rpc->xen_arg);
+                put_std_call(ctx, call);
+
+                return;
+            }
+            break;
+        case OPTEE_RPC_CMD_SHM_FREE:
+            free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
+            if ( call->rpc_data_cookie ==
+                 shm_rpc->xen_arg->params[0].u.value.b )
+                call->rpc_data_cookie = 0;
+            break;
+        default:
+            break;
+        }
+    }
+    else
+    {
+        handle_xen_rpc_return(ctx, regs, call, shm_rpc);
     }
 
 out: