Message ID: 20190823184826.14525-3-volodymyr_babchuk@epam.com (mailing list archive)
State: New, archived
Series: arch/arm: optee: fix TODOs and remove "experimental" status
Hi Volodymyr,

On 8/23/19 7:48 PM, Volodymyr Babchuk wrote:
> Now we have limit for one shared buffer size, so we can be sure that
> one call to free_optee_shm_buf() will not free all
> MAX_TOTAL_SMH_BUF_PG pages at once. Thus, we now can check for
> hypercall_preempt_check() in the loop inside
> optee_relinquish_resources() and this will ensure that we are not
> missing preemption.

I am not sure I understand the correlation between the two sentences.
Even if previously the guest could pin up to MAX_TOTAL_SHM_BUF_PG in
one call, a well-behaved guest would tend to make multiple calls, and
therefore preemption would have been useful.

So I think the commit message needs some rewording.

>
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
>  xen/arch/arm/tee/optee.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index f4fa8a7758..a84ffa3089 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -634,14 +634,14 @@ static int optee_relinquish_resources(struct domain *d)
>      if ( hypercall_preempt_check() )
>          return -ERESTART;
>
> -    /*
> -     * TODO: Guest can pin up to MAX_TOTAL_SMH_BUF_PG pages and all of
> -     * them will be put in this loop. It is worth considering to
> -     * check for preemption inside the loop.
> -     */
>      list_for_each_entry_safe( optee_shm_buf, optee_shm_buf_tmp,
>                                &ctx->optee_shm_buf_list, list )
> +    {
> +        if ( hypercall_preempt_check() )

So on the first iteration, you will check preemption twice (once
before the loop and once just after entering it).
hypercall_preempt_check() is not entirely free on Arm because of the
implementation of vgic_vcpu_pending_irq(), so preventing a pointless
call would be nice.

In this case, the hypercall_preempt_check() before the loop could be
dropped.

> +            return -ERESTART;
> +
>          free_optee_shm_buf(ctx, optee_shm_buf->cookie);
> +    }
>
>      if ( hypercall_preempt_check() )
>          return -ERESTART;
>

Cheers,
Julien Grall writes:

> Hi Volodymyr,
>
> On 8/23/19 7:48 PM, Volodymyr Babchuk wrote:
>> Now we have limit for one shared buffer size, so we can be sure that
>> one call to free_optee_shm_buf() will not free all
>> MAX_TOTAL_SMH_BUF_PG pages at once. Thus, we now can check for
>> hypercall_preempt_check() in the loop inside
>> optee_relinquish_resources() and this will ensure that we are not
>> missing preemption.
>
> I am not sure to understand the correlation between the two
> sentences. Even if previously the guest could pin up to
> MAX_TOTAL_SHM_BUF_PG in one call, a well-behaved guest would result to
> do multiple calls and therefore preemption would have been useful.

Looks like now I don't understand you.

I'm talking about shared buffers. We have limited a shared buffer to
some reasonable size. There are no bad- or well-behaving guests in
this context, because a guest can't share one big buffer in multiple
calls. In other words, if a guest *needs* to share a 512MB buffer with
OP-TEE, it will be forced to do this in one call. But we are
forbidding big buffers right now.

optee_relinquish_resources() is called during domain destruction. At
this time we can have a number of still-living shared buffers, each of
which is no bigger than 512 pages. Thanks to this, we can call
hypercall_preempt_check() only in optee_relinquish_resources(), but
not in free_optee_shm_buf().

If we allow a guest to register a bigger buffer, then we will be
forced to check for preemption in free_optee_shm_buf() as well.

This is what I meant in the commit message.

> So I think the commit message needs some rewording.
Probably yes...
>>
>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>> ---
>>  xen/arch/arm/tee/optee.c | 10 +++++-----
>>  1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> index f4fa8a7758..a84ffa3089 100644
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -634,14 +634,14 @@ static int optee_relinquish_resources(struct domain *d)
>>      if ( hypercall_preempt_check() )
>>          return -ERESTART;
>>
>> -    /*
>> -     * TODO: Guest can pin up to MAX_TOTAL_SMH_BUF_PG pages and all of
>> -     * them will be put in this loop. It is worth considering to
>> -     * check for preemption inside the loop.
>> -     */
>>      list_for_each_entry_safe( optee_shm_buf, optee_shm_buf_tmp,
>>                                &ctx->optee_shm_buf_list, list )
>> +    {
>> +        if ( hypercall_preempt_check() )
>
> So on the first iteration, you will check preemption twice (once
> before the loop and once just after entering it).
> hypercall_preempt_check() is not entirely free on Arm because of the
> implementation of vgic_vcpu_pending_irq(), so preventing a pointless
> call would be nice.
>
> In this case, the hypercall_preempt_check() before the loop could be
> dropped.

Yes, you are right.

>
>> +            return -ERESTART;
>> +
>>          free_optee_shm_buf(ctx, optee_shm_buf->cookie);
>> +    }
>>
>>      if ( hypercall_preempt_check() )
>>          return -ERESTART;
>>
>
> Cheers,

-- 
Volodymyr Babchuk at EPAM
Hi Volodymyr,

On 9/11/19 7:53 PM, Volodymyr Babchuk wrote:
>
> Julien Grall writes:
>
>> Hi Volodymyr,
>>
>> On 8/23/19 7:48 PM, Volodymyr Babchuk wrote:
>>> Now we have limit for one shared buffer size, so we can be sure that
>>> one call to free_optee_shm_buf() will not free all
>>> MAX_TOTAL_SMH_BUF_PG pages at once. Thus, we now can check for
>>> hypercall_preempt_check() in the loop inside
>>> optee_relinquish_resources() and this will ensure that we are not
>>> missing preemption.
>>
>> I am not sure to understand the correlation between the two
>> sentences. Even if previously the guest could pin up to
>> MAX_TOTAL_SHM_BUF_PG in one call, a well-behaved guest would result to
>> do multiple calls and therefore preemption would have been useful.
>
> Looks like now I don't understand you.
>
> I'm talking about shared buffers. We have limited shared buffer to some
> reasonable size. There is bad- or well-behaving guests in this context,
> because guest can't share one big buffer in multiple calls. In other
> worlds, if guest *needs* to share 512MB buffer with OP-TEE, it will be
> forced to do this in one call. But we are forbidding big buffers right
> now.
>
> optee_relinquish_resources() is called during domain destruction. At
> this time we can have a number of still living shared buffers, each of
> one is no bigger than 512 pages. Thanks to this, we can call
> hypercall_preempt_check() only in optee_relinquish_resources(), but not
> in free_optee_shm_buf().

I understand what you mean, however my point is that this patch does
not depend on the previous patch. Even if this patch goes alone, you
will improve things for well-behaved guests. For an ill-behaved guest,
the problem will stay the same, so no change.

> If we will allow guest to register bigger buffer, than we will be forced
> to check for preemption in free_optee_shm_buf() as well.

Well yes, however this patch would still be useful independently of
the size of the buffer.

Cheers,
Julien Grall writes:

> Hi Volodymyr,
>
> On 9/11/19 7:53 PM, Volodymyr Babchuk wrote:
>>
>> Julien Grall writes:
>>
>>> Hi Volodymyr,
>>>
>>> On 8/23/19 7:48 PM, Volodymyr Babchuk wrote:
>>>> Now we have limit for one shared buffer size, so we can be sure that
>>>> one call to free_optee_shm_buf() will not free all
>>>> MAX_TOTAL_SMH_BUF_PG pages at once. Thus, we now can check for
>>>> hypercall_preempt_check() in the loop inside
>>>> optee_relinquish_resources() and this will ensure that we are not
>>>> missing preemption.
>>>
>>> I am not sure to understand the correlation between the two
>>> sentences. Even if previously the guest could pin up to
>>> MAX_TOTAL_SHM_BUF_PG in one call, a well-behaved guest would result to
>>> do multiple calls and therefore preemption would have been useful.
>>
>> Looks like now I don't understand you.
>>
>> I'm talking about shared buffers. We have limited shared buffer to some
>> reasonable size. There is bad- or well-behaving guests in this context,
>> because guest can't share one big buffer in multiple calls. In other
>> worlds, if guest *needs* to share 512MB buffer with OP-TEE, it will be
>> forced to do this in one call. But we are forbidding big buffers right
>> now.
>>
>> optee_relinquish_resources() is called during domain destruction. At
>> this time we can have a number of still living shared buffers, each of
>> one is no bigger than 512 pages. Thanks to this, we can call
>> hypercall_preempt_check() only in optee_relinquish_resources(), but not
>> in free_optee_shm_buf().
>
> I understand what you mean, however my point is that this patch does
> not dependent of the previous patch. Even if this patch goes alone,
> you will improve well-behaved guest. For ill-behaved guest, the
> problem will stay the same so no change.
>

Ah, I see now. Okay, I'll rework the commit description.
diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index f4fa8a7758..a84ffa3089 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -634,14 +634,14 @@ static int optee_relinquish_resources(struct domain *d)
     if ( hypercall_preempt_check() )
         return -ERESTART;
 
-    /*
-     * TODO: Guest can pin up to MAX_TOTAL_SMH_BUF_PG pages and all of
-     * them will be put in this loop. It is worth considering to
-     * check for preemption inside the loop.
-     */
     list_for_each_entry_safe( optee_shm_buf, optee_shm_buf_tmp,
                               &ctx->optee_shm_buf_list, list )
+    {
+        if ( hypercall_preempt_check() )
+            return -ERESTART;
+
         free_optee_shm_buf(ctx, optee_shm_buf->cookie);
+    }
 
     if ( hypercall_preempt_check() )
         return -ERESTART;
Now we have a limit on the size of one shared buffer, so we can be
sure that one call to free_optee_shm_buf() will not free all
MAX_TOTAL_SMH_BUF_PG pages at once. Thus, we can now call
hypercall_preempt_check() in the loop inside
optee_relinquish_resources(), and this will ensure that we are not
missing preemption.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/tee/optee.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)