Message ID: 20190823184826.14525-5-volodymyr_babchuk@epam.com (mailing list archive)
State: New, archived
Series: arch/arm: optee: fix TODOs and remove "experimental" status
Hi Volodymyr, On 8/23/19 7:48 PM, Volodymyr Babchuk wrote: > There is a case possible, when OP-TEE asks guest to allocate shared > buffer, but Xen for some reason can't translate buffer's addresses. In > this situation we should do two things: > > 1. Tell guest to free allocated buffer, so there will be no memory > leak for guest. > > 2. Tell OP-TEE that buffer allocation failed. > > To ask guest to free allocated buffer we should perform the same > thing, as OP-TEE does - issue RPC request. This is done by filling > request buffer (luckily we can reuse the same buffer, that OP-TEE used > to issue original request) and then return to guest with special > return code. > > Then we need to handle next call from guest in a special way: as RPC > was issued by Xen, not by OP-TEE, it should be handled by Xen. > Basically, this is the mechanism to preempt OP-TEE mediator. > > The same mechanism can be used in the future to preempt mediator > during translation large (>512 pages) shared buffers. > > Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com> > --- > xen/arch/arm/tee/optee.c | 167 +++++++++++++++++++++++++++++++-------- > 1 file changed, 136 insertions(+), 31 deletions(-) > > diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c > index 3ce6e7fa55..4eebc60b62 100644 > --- a/xen/arch/arm/tee/optee.c > +++ b/xen/arch/arm/tee/optee.c > @@ -96,6 +96,11 @@ > OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \ > OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) > > +enum optee_call_state { > + OPTEEM_CALL_NORMAL = 0, enum always start counting at 0. Also, looking at the code, it does not seem you need to know the value. Right? > + OPTEEM_CALL_XEN_RPC, I am a bit confused, the enum is called optee_call_state but all the enum are prefixed with OPTEEM_CALL_. Why the discrepancy? > +}; > + > static unsigned int __read_mostly max_optee_threads; > > /* > @@ -112,6 +117,9 @@ struct optee_std_call { > paddr_t guest_arg_ipa; > int optee_thread_id; > int rpc_op; > + /* Saved buffer type for the last buffer allocate request */ Looking at the code, it feels to me you are saving the buffer type for the current command and not the last. Did I miss anything? > + unsigned int rpc_buffer_type; > + enum optee_call_state state; > uint64_t rpc_data_cookie; > bool in_flight; > register_t rpc_params[2]; > @@ -299,6 +307,7 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx) > > call->optee_thread_id = -1; > call->in_flight = true; > + call->state = OPTEEM_CALL_NORMAL; > > spin_lock(&ctx->lock); > list_add_tail(&call->list, &ctx->call_list); > @@ -1075,6 +1084,10 @@ static int handle_rpc_return(struct optee_domain *ctx, > ret = -ERESTART; > } > > + /* Save the buffer type in case we will want to free it */ > + if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC ) > + call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a; > + > unmap_domain_page(shm_rpc->xen_arg); > } > > @@ -1239,18 +1252,102 @@ err: > return; > } > > +/* > + * Prepare RPC request to free shared buffer in the same way, as > + * OP-TEE does this. 
> + * > + * Return values: > + * true - successfully prepared RPC request > + * false - there was an error > + */ > +static bool issue_rpc_cmd_free(struct optee_domain *ctx, > + struct cpu_user_regs *regs, > + struct optee_std_call *call, > + struct shm_rpc *shm_rpc, > + uint64_t cookie) > +{ > + register_t r1, r2; > + > + /* In case if guest will forget to update it with meaningful value */ > + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; > + shm_rpc->xen_arg->cmd = OPTEE_RPC_CMD_SHM_FREE; > + shm_rpc->xen_arg->num_params = 1; > + shm_rpc->xen_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; > + shm_rpc->xen_arg->params[0].u.value.a = call->rpc_buffer_type; > + shm_rpc->xen_arg->params[0].u.value.b = cookie; > + > + if ( access_guest_memory_by_ipa(current->domain, > + gfn_to_gaddr(shm_rpc->gfn), > + shm_rpc->xen_arg, > + OPTEE_MSG_GET_ARG_SIZE(1), > + true) ) > + { > + /* > + * Well, this is quite bad. We have error in error path. > + * This can happen only if guest behaves badly, so all > + * we can do is to return error to OP-TEE and leave > + * guest's memory leaked. Could you expand a bit more what you mean by "guest's memory leaked"? What the state of the page from Xen PoV? I.e. is there any reference taken by the OP-TEE mediator? Will the page be freed once the guest is destroyed?... > + */ > + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; > + shm_rpc->xen_arg->num_params = 0; > + > + return false; > + } > + > + uint64_to_regpair(&r1, &r2, shm_rpc->cookie); > + > + call->state = OPTEEM_CALL_XEN_RPC; > + call->rpc_op = OPTEE_SMC_RPC_FUNC_CMD; > + call->rpc_params[0] = r1; > + call->rpc_params[1] = r2; > + call->optee_thread_id = get_user_reg(regs, 3); > + > + set_user_reg(regs, 0, OPTEE_SMC_RETURN_RPC_CMD); > + set_user_reg(regs, 1, r1); > + set_user_reg(regs, 2, r2); > + > + return true; > +} > + > +/* Handles return from Xen-issued RPC */ > +static void handle_xen_rpc_return(struct optee_domain *ctx, > + struct cpu_user_regs *regs, > + struct optee_std_call *call, > + struct shm_rpc *shm_rpc) > +{ > + call->state = OPTEEM_CALL_NORMAL; > + > + /* > + * Right now we have only one reason to be there - we asked guest > + * to free shared buffer and it did it. Now we can tell OP-TEE that > + * buffer allocation failed. > + */ Should we add an ASSERT to ensure the command is the one we expect? > + > + /* > + * We are not checking return value from a guest because we assume > + * that OPTEE_RPC_CMD_SHM_FREE newer fails. s/newer/never/ > + */ > + > + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; > + shm_rpc->xen_arg->num_params = 0; > +} > + > /* > * This function is called when guest is finished processing RPC > * request from OP-TEE and wished to resume the interrupted standard > * call. 
> + * > + * Return values: > + * false - there was an error, do not call OP-TEE > + * true - success, proceed as normal > */ > -static void handle_rpc_cmd_alloc(struct optee_domain *ctx, > +static bool handle_rpc_cmd_alloc(struct optee_domain *ctx, > struct cpu_user_regs *regs, > struct optee_std_call *call, > struct shm_rpc *shm_rpc) > { > if ( shm_rpc->xen_arg->ret || shm_rpc->xen_arg->num_params != 1 ) > - return; > + return true; > > if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT | > OPTEE_MSG_ATTR_NONCONTIG) ) > @@ -1258,7 +1355,7 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx, > gdprintk(XENLOG_WARNING, > "Invalid attrs for shared mem buffer: %"PRIx64"\n", > shm_rpc->xen_arg->params[0].attr); > - return; > + return true; > } > > /* Free pg list for buffer */ > @@ -1274,21 +1371,14 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx, > { > call->rpc_data_cookie = 0; > /* > - * Okay, so there was problem with guest's buffer and we need > - * to tell about this to OP-TEE. > - */ > - shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; > - shm_rpc->xen_arg->num_params = 0; > - /* > - * TODO: With current implementation, OP-TEE will not issue > - * RPC to free this buffer. Guest and OP-TEE will be out of > - * sync: guest believes that it provided buffer to OP-TEE, > - * while OP-TEE thinks of opposite. Ideally, we need to > - * emulate RPC with OPTEE_MSG_RPC_CMD_SHM_FREE command. > + * We are unable to translate guest's buffer, so we need tell guest > + * to free it, before returning error to OP-TEE. Do you mean "reporting" instead of "returning"? Also s/error/an error/ > */ > - gprintk(XENLOG_WARNING, > - "translate_noncontig() failed, OP-TEE/guest state is out of sync.\n"); > + return !issue_rpc_cmd_free(ctx, regs, call, shm_rpc, > + shm_rpc->xen_arg->params[0].u.tmem.shm_ref); > } > + > + return true; > } > > static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs, > @@ -1338,22 +1428,37 @@ static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs, > goto out; > } > > - switch (shm_rpc->xen_arg->cmd) > + if ( call->state == OPTEEM_CALL_NORMAL ) > { > - case OPTEE_RPC_CMD_GET_TIME: > - case OPTEE_RPC_CMD_WAIT_QUEUE: > - case OPTEE_RPC_CMD_SUSPEND: > - break; > - case OPTEE_RPC_CMD_SHM_ALLOC: > - handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc); > - break; > - case OPTEE_RPC_CMD_SHM_FREE: > - free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b); > - if ( call->rpc_data_cookie == shm_rpc->xen_arg->params[0].u.value.b ) > - call->rpc_data_cookie = 0; > - break; > - default: > - break; > + switch (shm_rpc->xen_arg->cmd) > + { > + case OPTEE_RPC_CMD_GET_TIME: > + case OPTEE_RPC_CMD_WAIT_QUEUE: > + case OPTEE_RPC_CMD_SUSPEND: > + break; > + case OPTEE_RPC_CMD_SHM_ALLOC: > + if ( !handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc) ) > + { > + /* We failed to translate buffer, report back to guest */ > + unmap_domain_page(shm_rpc->xen_arg); > + put_std_call(ctx, call); > + > + return; > + } > + break; > + case OPTEE_RPC_CMD_SHM_FREE: > + free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b); > + if ( call->rpc_data_cookie == > + shm_rpc->xen_arg->params[0].u.value.b ) > + call->rpc_data_cookie = 0; > + break; > + default: > + break; > + } > + } > + else > + { > + handle_xen_rpc_return(ctx, regs, call, shm_rpc); > } > > out: > Cheers,
Julien Grall writes: > Hi Volodymyr, > > On 8/23/19 7:48 PM, Volodymyr Babchuk wrote: >> There is a case possible, when OP-TEE asks guest to allocate shared >> buffer, but Xen for some reason can't translate buffer's addresses. In >> this situation we should do two things: >> >> 1. Tell guest to free allocated buffer, so there will be no memory >> leak for guest. >> >> 2. Tell OP-TEE that buffer allocation failed. >> >> To ask guest to free allocated buffer we should perform the same >> thing, as OP-TEE does - issue RPC request. This is done by filling >> request buffer (luckily we can reuse the same buffer, that OP-TEE used >> to issue original request) and then return to guest with special >> return code. >> >> Then we need to handle next call from guest in a special way: as RPC >> was issued by Xen, not by OP-TEE, it should be handled by Xen. >> Basically, this is the mechanism to preempt OP-TEE mediator. >> >> The same mechanism can be used in the future to preempt mediator >> during translation large (>512 pages) shared buffers. >> >> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com> >> --- >> xen/arch/arm/tee/optee.c | 167 +++++++++++++++++++++++++++++++-------- >> 1 file changed, 136 insertions(+), 31 deletions(-) >> >> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c >> index 3ce6e7fa55..4eebc60b62 100644 >> --- a/xen/arch/arm/tee/optee.c >> +++ b/xen/arch/arm/tee/optee.c >> @@ -96,6 +96,11 @@ >> OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \ >> OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) >> +enum optee_call_state { >> + OPTEEM_CALL_NORMAL = 0, > > enum always start counting at 0. Also, looking at the code, it does > not seem you need to know the value. Right? Yep. This is a bad habit. Will remove. > >> + OPTEEM_CALL_XEN_RPC, > > I am a bit confused, the enum is called optee_call_state but all the > enum are prefixed with OPTEEM_CALL_. Why the discrepancy? Because I'm bad at naming things :) OPTEEM_CALL_STATE_XEN_RPC looks too long. But you are right, so I'll rename the enum values. Unless, you have a better idea for this. > >> +}; >> + >> static unsigned int __read_mostly max_optee_threads; >> /* >> @@ -112,6 +117,9 @@ struct optee_std_call { >> paddr_t guest_arg_ipa; >> int optee_thread_id; >> int rpc_op; >> + /* Saved buffer type for the last buffer allocate request */ > > Looking at the code, it feels to me you are saving the buffer type for > the current command and not the last. Did I miss anything? Yes, right. Will rename. >> + unsigned int rpc_buffer_type; >> + enum optee_call_state state; >> uint64_t rpc_data_cookie; >> bool in_flight; >> register_t rpc_params[2]; >> @@ -299,6 +307,7 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx) >> call->optee_thread_id = -1; >> call->in_flight = true; >> + call->state = OPTEEM_CALL_NORMAL; >> spin_lock(&ctx->lock); >> list_add_tail(&call->list, &ctx->call_list); >> @@ -1075,6 +1084,10 @@ static int handle_rpc_return(struct optee_domain *ctx, >> ret = -ERESTART; >> } >> + /* Save the buffer type in case we will want to free it >> */ >> + if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC ) >> + call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a; >> + >> unmap_domain_page(shm_rpc->xen_arg); >> } >> @@ -1239,18 +1252,102 @@ err: >> return; >> } >> +/* >> + * Prepare RPC request to free shared buffer in the same way, as >> + * OP-TEE does this. 
>> + * >> + * Return values: >> + * true - successfully prepared RPC request >> + * false - there was an error >> + */ >> +static bool issue_rpc_cmd_free(struct optee_domain *ctx, >> + struct cpu_user_regs *regs, >> + struct optee_std_call *call, >> + struct shm_rpc *shm_rpc, >> + uint64_t cookie) >> +{ >> + register_t r1, r2; >> + >> + /* In case if guest will forget to update it with meaningful value */ >> + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; >> + shm_rpc->xen_arg->cmd = OPTEE_RPC_CMD_SHM_FREE; >> + shm_rpc->xen_arg->num_params = 1; >> + shm_rpc->xen_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; >> + shm_rpc->xen_arg->params[0].u.value.a = call->rpc_buffer_type; >> + shm_rpc->xen_arg->params[0].u.value.b = cookie; >> + >> + if ( access_guest_memory_by_ipa(current->domain, >> + gfn_to_gaddr(shm_rpc->gfn), >> + shm_rpc->xen_arg, >> + OPTEE_MSG_GET_ARG_SIZE(1), >> + true) ) >> + { >> + /* >> + * Well, this is quite bad. We have error in error path. >> + * This can happen only if guest behaves badly, so all >> + * we can do is to return error to OP-TEE and leave >> + * guest's memory leaked. > > Could you expand a bit more what you mean by "guest's memory leaked"? There will be memory leak somewhere in the guest. Yes, looks like it is misleading... What I mean, is that OP-TEE requests guest to allocate some memory. Guest does not know, when OP-TEE finishes using this memory, so guest will free the memory only by OP-TEE's request. We can't emulate this request in current circumstances, so guest will keep part of own memory reserved for OP-TEE infinitely. > What the state of the page from Xen PoV? From Xen point of view all will be perfectly fine. > I.e. is there any reference > taken by the OP-TEE mediator? Will the page be freed once the guest is > destroyed?... As I said, it has nothing to do with the page as Xen it sees. Mediator will call put_page() prior to entering this function. So, no Xen resources are used. > >> + */ >> + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; >> + shm_rpc->xen_arg->num_params = 0; >> + >> + return false; >> + } >> + >> + uint64_to_regpair(&r1, &r2, shm_rpc->cookie); >> + >> + call->state = OPTEEM_CALL_XEN_RPC; >> + call->rpc_op = OPTEE_SMC_RPC_FUNC_CMD; >> + call->rpc_params[0] = r1; >> + call->rpc_params[1] = r2; >> + call->optee_thread_id = get_user_reg(regs, 3); >> + >> + set_user_reg(regs, 0, OPTEE_SMC_RETURN_RPC_CMD); >> + set_user_reg(regs, 1, r1); >> + set_user_reg(regs, 2, r2); >> + >> + return true; >> +} >> + >> +/* Handles return from Xen-issued RPC */ >> +static void handle_xen_rpc_return(struct optee_domain *ctx, >> + struct cpu_user_regs *regs, >> + struct optee_std_call *call, >> + struct shm_rpc *shm_rpc) >> +{ >> + call->state = OPTEEM_CALL_NORMAL; >> + >> + /* >> + * Right now we have only one reason to be there - we asked guest >> + * to free shared buffer and it did it. Now we can tell OP-TEE that >> + * buffer allocation failed. >> + */ > > Should we add an ASSERT to ensure the command is the one we expect? It is strange, that it is missing, actually. Looks like I forgot to add it. But, looking at xen-error-handling, maybe BOG_ON() would be better? >> + >> + /* >> + * We are not checking return value from a guest because we assume >> + * that OPTEE_RPC_CMD_SHM_FREE newer fails. > > s/newer/never/ Oops. Thank you. 
>> + */ >> + >> + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; >> + shm_rpc->xen_arg->num_params = 0; >> +} >> + >> /* >> * This function is called when guest is finished processing RPC >> * request from OP-TEE and wished to resume the interrupted standard >> * call. >> + * >> + * Return values: >> + * false - there was an error, do not call OP-TEE >> + * true - success, proceed as normal >> */ >> -static void handle_rpc_cmd_alloc(struct optee_domain *ctx, >> +static bool handle_rpc_cmd_alloc(struct optee_domain *ctx, >> struct cpu_user_regs *regs, >> struct optee_std_call *call, >> struct shm_rpc *shm_rpc) >> { >> if ( shm_rpc->xen_arg->ret || shm_rpc->xen_arg->num_params != 1 ) >> - return; >> + return true; >> if ( shm_rpc->xen_arg->params[0].attr != >> (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT | >> OPTEE_MSG_ATTR_NONCONTIG) ) >> @@ -1258,7 +1355,7 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx, >> gdprintk(XENLOG_WARNING, >> "Invalid attrs for shared mem buffer: %"PRIx64"\n", >> shm_rpc->xen_arg->params[0].attr); >> - return; >> + return true; >> } >> /* Free pg list for buffer */ >> @@ -1274,21 +1371,14 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx, >> { >> call->rpc_data_cookie = 0; >> /* >> - * Okay, so there was problem with guest's buffer and we need >> - * to tell about this to OP-TEE. >> - */ >> - shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; >> - shm_rpc->xen_arg->num_params = 0; >> - /* >> - * TODO: With current implementation, OP-TEE will not issue >> - * RPC to free this buffer. Guest and OP-TEE will be out of >> - * sync: guest believes that it provided buffer to OP-TEE, >> - * while OP-TEE thinks of opposite. Ideally, we need to >> - * emulate RPC with OPTEE_MSG_RPC_CMD_SHM_FREE command. >> + * We are unable to translate guest's buffer, so we need tell guest >> + * to free it, before returning error to OP-TEE. > > Do you mean "reporting" instead of "returning"? Yes, I do. > Also s/error/an error/ Sure. Thank you. -- Volodymyr Babchuk at EPAM
Hi Volodymyr, On 9/11/19 7:32 PM, Volodymyr Babchuk wrote: > > Julien Grall writes: > >> Hi Volodymyr, >> >> On 8/23/19 7:48 PM, Volodymyr Babchuk wrote: >>> There is a case possible, when OP-TEE asks guest to allocate shared >>> buffer, but Xen for some reason can't translate buffer's addresses. In >>> this situation we should do two things: >>> >>> 1. Tell guest to free allocated buffer, so there will be no memory >>> leak for guest. >>> >>> 2. Tell OP-TEE that buffer allocation failed. >>> >>> To ask guest to free allocated buffer we should perform the same >>> thing, as OP-TEE does - issue RPC request. This is done by filling >>> request buffer (luckily we can reuse the same buffer, that OP-TEE used >>> to issue original request) and then return to guest with special >>> return code. >>> >>> Then we need to handle next call from guest in a special way: as RPC >>> was issued by Xen, not by OP-TEE, it should be handled by Xen. >>> Basically, this is the mechanism to preempt OP-TEE mediator. >>> >>> The same mechanism can be used in the future to preempt mediator >>> during translation large (>512 pages) shared buffers. >>> >>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com> >>> --- >>> xen/arch/arm/tee/optee.c | 167 +++++++++++++++++++++++++++++++-------- >>> 1 file changed, 136 insertions(+), 31 deletions(-) >>> >>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c >>> index 3ce6e7fa55..4eebc60b62 100644 >>> --- a/xen/arch/arm/tee/optee.c >>> +++ b/xen/arch/arm/tee/optee.c >>> @@ -96,6 +96,11 @@ >>> OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \ >>> OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) >>> +enum optee_call_state { >>> + OPTEEM_CALL_NORMAL = 0, >> >> enum always start counting at 0. Also, looking at the code, it does >> not seem you need to know the value. Right? > Yep. This is a bad habit. Will remove. > >> >>> + OPTEEM_CALL_XEN_RPC, >> >> I am a bit confused, the enum is called optee_call_state but all the >> enum are prefixed with OPTEEM_CALL_. Why the discrepancy? > Because I'm bad at naming things :) > > OPTEEM_CALL_STATE_XEN_RPC looks too long. But you are right, so I'll > rename the enum values. Unless, you have a better idea for this. My point was not about adding _STATE to the enum values but the fact you call the enum optee but the value OPTEEM (note the extra M in the later). So my only request here is to call the enum opteem_call_state or prefix all the enum value with OPTEE. > >> >>> +}; >>> + >>> static unsigned int __read_mostly max_optee_threads; >>> /* >>> @@ -112,6 +117,9 @@ struct optee_std_call { >>> paddr_t guest_arg_ipa; >>> int optee_thread_id; >>> int rpc_op; >>> + /* Saved buffer type for the last buffer allocate request */ >> >> Looking at the code, it feels to me you are saving the buffer type for >> the current command and not the last. Did I miss anything? > Yes, right. Will rename. 
> >>> + unsigned int rpc_buffer_type; >>> + enum optee_call_state state; >>> uint64_t rpc_data_cookie; >>> bool in_flight; >>> register_t rpc_params[2]; >>> @@ -299,6 +307,7 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx) >>> call->optee_thread_id = -1; >>> call->in_flight = true; >>> + call->state = OPTEEM_CALL_NORMAL; >>> spin_lock(&ctx->lock); >>> list_add_tail(&call->list, &ctx->call_list); >>> @@ -1075,6 +1084,10 @@ static int handle_rpc_return(struct optee_domain *ctx, >>> ret = -ERESTART; >>> } >>> + /* Save the buffer type in case we will want to free it >>> */ >>> + if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC ) >>> + call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a; >>> + >>> unmap_domain_page(shm_rpc->xen_arg); >>> } >>> @@ -1239,18 +1252,102 @@ err: >>> return; >>> } >>> +/* >>> + * Prepare RPC request to free shared buffer in the same way, as >>> + * OP-TEE does this. >>> + * >>> + * Return values: >>> + * true - successfully prepared RPC request >>> + * false - there was an error >>> + */ >>> +static bool issue_rpc_cmd_free(struct optee_domain *ctx, >>> + struct cpu_user_regs *regs, >>> + struct optee_std_call *call, >>> + struct shm_rpc *shm_rpc, >>> + uint64_t cookie) >>> +{ >>> + register_t r1, r2; >>> + >>> + /* In case if guest will forget to update it with meaningful value */ >>> + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; >>> + shm_rpc->xen_arg->cmd = OPTEE_RPC_CMD_SHM_FREE; >>> + shm_rpc->xen_arg->num_params = 1; >>> + shm_rpc->xen_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; >>> + shm_rpc->xen_arg->params[0].u.value.a = call->rpc_buffer_type; >>> + shm_rpc->xen_arg->params[0].u.value.b = cookie; >>> + >>> + if ( access_guest_memory_by_ipa(current->domain, >>> + gfn_to_gaddr(shm_rpc->gfn), >>> + shm_rpc->xen_arg, >>> + OPTEE_MSG_GET_ARG_SIZE(1), >>> + true) ) >>> + { >>> + /* >>> + * Well, this is quite bad. We have error in error path. >>> + * This can happen only if guest behaves badly, so all >>> + * we can do is to return error to OP-TEE and leave >>> + * guest's memory leaked. >> >> Could you expand a bit more what you mean by "guest's memory leaked"? > There will be memory leak somewhere in the guest. Yes, looks > like it is misleading... > > What I mean, is that OP-TEE requests guest to allocate some > memory. Guest does not know, when OP-TEE finishes using this memory, so > guest will free the memory only by OP-TEE's request. We can't emulate > this request in current circumstances, so guest will keep part of own > memory reserved for OP-TEE infinitely. > >> What the state of the page from Xen PoV? > From Xen point of view all will be perfectly fine. > >> I.e. is there any reference >> taken by the OP-TEE mediator? Will the page be freed once the guest is >> destroyed?... > As I said, it has nothing to do with the page as Xen it sees. Mediator > will call put_page() prior to entering this function. So, no Xen > resources are used. It makes sense, Thank you for the explanation. Please update the comment accordingly. 
> >> >>> + */ >>> + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; >>> + shm_rpc->xen_arg->num_params = 0; >>> + >>> + return false; >>> + } >>> + >>> + uint64_to_regpair(&r1, &r2, shm_rpc->cookie); >>> + >>> + call->state = OPTEEM_CALL_XEN_RPC; >>> + call->rpc_op = OPTEE_SMC_RPC_FUNC_CMD; >>> + call->rpc_params[0] = r1; >>> + call->rpc_params[1] = r2; >>> + call->optee_thread_id = get_user_reg(regs, 3); >>> + >>> + set_user_reg(regs, 0, OPTEE_SMC_RETURN_RPC_CMD); >>> + set_user_reg(regs, 1, r1); >>> + set_user_reg(regs, 2, r2); >>> + >>> + return true; >>> +} >>> + >>> +/* Handles return from Xen-issued RPC */ >>> +static void handle_xen_rpc_return(struct optee_domain *ctx, >>> + struct cpu_user_regs *regs, >>> + struct optee_std_call *call, >>> + struct shm_rpc *shm_rpc) >>> +{ >>> + call->state = OPTEEM_CALL_NORMAL; >>> + >>> + /* >>> + * Right now we have only one reason to be there - we asked guest >>> + * to free shared buffer and it did it. Now we can tell OP-TEE that >>> + * buffer allocation failed. >>> + */ >> >> Should we add an ASSERT to ensure the command is the one we expect? > It is strange, that it is missing, actually. Looks like I forgot to add > it. But, looking at xen-error-handling, maybe BOG_ON() would be better? The documentation in xen-error-handling needs some update. IIRC George had a patch for updating the documentation on the mailing list. BUG_ON() (and BUG()) should only be used if this is an error the hypervisor can't recover. I am actually slowly go through the tree and removing those who are in the guest path as some could be triggered on new revision of the architecture :(. In this case, this is in guest path and an error case. If something has been missed and the guest may trigger the BUG_ON(). While this is a DOS, this is still not desirable. So there are three solutions: 1) Crash the guest 2) Add an ASSERT() 3) Print a warning This is an error path so 2) might be less desirable if we don't do full coverage of the code in debug mode. Cheers,
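To make the two remaining requests concrete, here is a minimal sketch of how the affected hunks might look in a follow-up revision (the OPTEE_CALL_* value names and the warning text are assumptions, not taken from any posted patch): the enum values get the same prefix as the enum tag, and handle_xen_rpc_return() checks for the one command Xen can be waiting for, warning (option 3 above) rather than BUG()-ing in a guest-reachable path; option 1 would call domain_crash(current->domain) instead.

enum optee_call_state {
    OPTEE_CALL_NORMAL,
    OPTEE_CALL_XEN_RPC,
};

/* Handles return from Xen-issued RPC (sketch only, assumed naming) */
static void handle_xen_rpc_return(struct optee_domain *ctx,
                                  struct cpu_user_regs *regs,
                                  struct optee_std_call *call,
                                  struct shm_rpc *shm_rpc)
{
    call->state = OPTEE_CALL_NORMAL;

    /*
     * The only RPC Xen issues so far is OPTEE_RPC_CMD_SHM_FREE, so a
     * different command here means the guest misbehaved. Warn instead
     * of BUG()-ing, since this path is reachable by the guest.
     */
    if ( shm_rpc->xen_arg->cmd != OPTEE_RPC_CMD_SHM_FREE )
        gdprintk(XENLOG_WARNING,
                 "Unexpected RPC cmd %u after Xen-issued SHM_FREE\n",
                 shm_rpc->xen_arg->cmd);

    /* Tell OP-TEE that the original buffer allocation failed. */
    shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC;
    shm_rpc->xen_arg->num_params = 0;
}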
diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c index 3ce6e7fa55..4eebc60b62 100644 --- a/xen/arch/arm/tee/optee.c +++ b/xen/arch/arm/tee/optee.c @@ -96,6 +96,11 @@ OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \ OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) +enum optee_call_state { + OPTEEM_CALL_NORMAL = 0, + OPTEEM_CALL_XEN_RPC, +}; + static unsigned int __read_mostly max_optee_threads; /* @@ -112,6 +117,9 @@ struct optee_std_call { paddr_t guest_arg_ipa; int optee_thread_id; int rpc_op; + /* Saved buffer type for the last buffer allocate request */ + unsigned int rpc_buffer_type; + enum optee_call_state state; uint64_t rpc_data_cookie; bool in_flight; register_t rpc_params[2]; @@ -299,6 +307,7 @@ static struct optee_std_call *allocate_std_call(struct optee_domain *ctx) call->optee_thread_id = -1; call->in_flight = true; + call->state = OPTEEM_CALL_NORMAL; spin_lock(&ctx->lock); list_add_tail(&call->list, &ctx->call_list); @@ -1075,6 +1084,10 @@ static int handle_rpc_return(struct optee_domain *ctx, ret = -ERESTART; } + /* Save the buffer type in case we will want to free it */ + if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC ) + call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a; + unmap_domain_page(shm_rpc->xen_arg); } @@ -1239,18 +1252,102 @@ err: return; } +/* + * Prepare RPC request to free shared buffer in the same way, as + * OP-TEE does this. + * + * Return values: + * true - successfully prepared RPC request + * false - there was an error + */ +static bool issue_rpc_cmd_free(struct optee_domain *ctx, + struct cpu_user_regs *regs, + struct optee_std_call *call, + struct shm_rpc *shm_rpc, + uint64_t cookie) +{ + register_t r1, r2; + + /* In case if guest will forget to update it with meaningful value */ + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; + shm_rpc->xen_arg->cmd = OPTEE_RPC_CMD_SHM_FREE; + shm_rpc->xen_arg->num_params = 1; + shm_rpc->xen_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + shm_rpc->xen_arg->params[0].u.value.a = call->rpc_buffer_type; + shm_rpc->xen_arg->params[0].u.value.b = cookie; + + if ( access_guest_memory_by_ipa(current->domain, + gfn_to_gaddr(shm_rpc->gfn), + shm_rpc->xen_arg, + OPTEE_MSG_GET_ARG_SIZE(1), + true) ) + { + /* + * Well, this is quite bad. We have error in error path. + * This can happen only if guest behaves badly, so all + * we can do is to return error to OP-TEE and leave + * guest's memory leaked. + */ + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; + shm_rpc->xen_arg->num_params = 0; + + return false; + } + + uint64_to_regpair(&r1, &r2, shm_rpc->cookie); + + call->state = OPTEEM_CALL_XEN_RPC; + call->rpc_op = OPTEE_SMC_RPC_FUNC_CMD; + call->rpc_params[0] = r1; + call->rpc_params[1] = r2; + call->optee_thread_id = get_user_reg(regs, 3); + + set_user_reg(regs, 0, OPTEE_SMC_RETURN_RPC_CMD); + set_user_reg(regs, 1, r1); + set_user_reg(regs, 2, r2); + + return true; +} + +/* Handles return from Xen-issued RPC */ +static void handle_xen_rpc_return(struct optee_domain *ctx, + struct cpu_user_regs *regs, + struct optee_std_call *call, + struct shm_rpc *shm_rpc) +{ + call->state = OPTEEM_CALL_NORMAL; + + /* + * Right now we have only one reason to be there - we asked guest + * to free shared buffer and it did it. Now we can tell OP-TEE that + * buffer allocation failed. + */ + + /* + * We are not checking return value from a guest because we assume + * that OPTEE_RPC_CMD_SHM_FREE newer fails. 
+ */ + + shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; + shm_rpc->xen_arg->num_params = 0; +} + /* * This function is called when guest is finished processing RPC * request from OP-TEE and wished to resume the interrupted standard * call. + * + * Return values: + * false - there was an error, do not call OP-TEE + * true - success, proceed as normal */ -static void handle_rpc_cmd_alloc(struct optee_domain *ctx, +static bool handle_rpc_cmd_alloc(struct optee_domain *ctx, struct cpu_user_regs *regs, struct optee_std_call *call, struct shm_rpc *shm_rpc) { if ( shm_rpc->xen_arg->ret || shm_rpc->xen_arg->num_params != 1 ) - return; + return true; if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT | OPTEE_MSG_ATTR_NONCONTIG) ) @@ -1258,7 +1355,7 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx, gdprintk(XENLOG_WARNING, "Invalid attrs for shared mem buffer: %"PRIx64"\n", shm_rpc->xen_arg->params[0].attr); - return; + return true; } /* Free pg list for buffer */ @@ -1274,21 +1371,14 @@ static void handle_rpc_cmd_alloc(struct optee_domain *ctx, { call->rpc_data_cookie = 0; /* - * Okay, so there was problem with guest's buffer and we need - * to tell about this to OP-TEE. - */ - shm_rpc->xen_arg->ret = TEEC_ERROR_GENERIC; - shm_rpc->xen_arg->num_params = 0; - /* - * TODO: With current implementation, OP-TEE will not issue - * RPC to free this buffer. Guest and OP-TEE will be out of - * sync: guest believes that it provided buffer to OP-TEE, - * while OP-TEE thinks of opposite. Ideally, we need to - * emulate RPC with OPTEE_MSG_RPC_CMD_SHM_FREE command. + * We are unable to translate guest's buffer, so we need tell guest + * to free it, before returning error to OP-TEE. */ - gprintk(XENLOG_WARNING, - "translate_noncontig() failed, OP-TEE/guest state is out of sync.\n"); + return !issue_rpc_cmd_free(ctx, regs, call, shm_rpc, + shm_rpc->xen_arg->params[0].u.tmem.shm_ref); } + + return true; } static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs, @@ -1338,22 +1428,37 @@ static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs, goto out; } - switch (shm_rpc->xen_arg->cmd) + if ( call->state == OPTEEM_CALL_NORMAL ) { - case OPTEE_RPC_CMD_GET_TIME: - case OPTEE_RPC_CMD_WAIT_QUEUE: - case OPTEE_RPC_CMD_SUSPEND: - break; - case OPTEE_RPC_CMD_SHM_ALLOC: - handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc); - break; - case OPTEE_RPC_CMD_SHM_FREE: - free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b); - if ( call->rpc_data_cookie == shm_rpc->xen_arg->params[0].u.value.b ) - call->rpc_data_cookie = 0; - break; - default: - break; + switch (shm_rpc->xen_arg->cmd) + { + case OPTEE_RPC_CMD_GET_TIME: + case OPTEE_RPC_CMD_WAIT_QUEUE: + case OPTEE_RPC_CMD_SUSPEND: + break; + case OPTEE_RPC_CMD_SHM_ALLOC: + if ( !handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc) ) + { + /* We failed to translate buffer, report back to guest */ + unmap_domain_page(shm_rpc->xen_arg); + put_std_call(ctx, call); + + return; + } + break; + case OPTEE_RPC_CMD_SHM_FREE: + free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b); + if ( call->rpc_data_cookie == + shm_rpc->xen_arg->params[0].u.value.b ) + call->rpc_data_cookie = 0; + break; + default: + break; + } + } + else + { + handle_xen_rpc_return(ctx, regs, call, shm_rpc); } out:
There is a possible case when OP-TEE asks the guest to allocate a shared buffer, but Xen for some reason can't translate the buffer's addresses. In this situation we should do two things:

1. Tell the guest to free the allocated buffer, so there is no memory leak in the guest.

2. Tell OP-TEE that the buffer allocation failed.

To ask the guest to free the allocated buffer we do the same thing OP-TEE does: issue an RPC request. This is done by filling the request buffer (luckily we can reuse the same buffer that OP-TEE used to issue the original request) and then returning to the guest with a special return code.

We then need to handle the next call from the guest in a special way: as the RPC was issued by Xen, not by OP-TEE, it should be handled by Xen. Basically, this is the mechanism to preempt the OP-TEE mediator.

The same mechanism can be used in the future to preempt the mediator during translation of large (>512 pages) shared buffers.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/tee/optee.c | 167 +++++++++++++++++++++++++++++++--------
 1 file changed, 136 insertions(+), 31 deletions(-)
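For readers skimming the thread, the state machine this description relies on can be boiled down to the following toy model (plain standalone C, not Xen code; all names are illustrative only): a standard call stays in the normal state until Xen itself injects an RPC, after which the guest's next return is consumed by Xen instead of being forwarded to OP-TEE.

#include <stdio.h>

enum call_state { CALL_NORMAL, CALL_XEN_RPC };

struct std_call { enum call_state state; };

/* Xen failed to translate the buffer: emulate OP-TEE's SHM_FREE RPC. */
static void issue_xen_rpc_free(struct std_call *call)
{
    call->state = CALL_XEN_RPC;
    printf("-> guest: RPC SHM_FREE (issued by Xen, not OP-TEE)\n");
}

/* The guest returned from an RPC: decide who consumes the result. */
static void handle_rpc_return(struct std_call *call)
{
    if (call->state == CALL_NORMAL) {
        printf("forward RPC result to OP-TEE as usual\n");
    } else {
        call->state = CALL_NORMAL;
        printf("consume RPC in Xen, report allocation failure to OP-TEE\n");
    }
}

int main(void)
{
    struct std_call call = { CALL_NORMAL };

    issue_xen_rpc_free(&call);
    handle_rpc_return(&call);   /* handled by Xen: the mediator was preempted */

    return 0;
}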