[2/2,RFC,v2,with,seqcount] reservation: add support for read-only access using rcu

Message ID 5347A9FD.2070706@vmware.com (mailing list archive)
State New, archived

Commit Message

Thomas Hellstrom April 11, 2014, 8:38 a.m. UTC
Hi, Maarten.

Here I believe we encounter a lot of locking inconsistencies.

First, it seems you're using a number of pointers as RCU pointers without
annotating them as such and without using the correct RCU
macros when assigning those pointers.

Some pointers (like the pointers in the shared fence list) are used both
as RCU pointers (in dma_buf_poll(), for example) and as pointers
considered protected by the seqlock
(in reservation_object_get_fences_rcu()), which I believe is OK, but then
the pointers must
be assigned using the correct RCU macros. In the memcpy in
reservation_object_get_fences_rcu() we might get away with an
ugly typecast, but only with a verbose comment that the pointers are
considered protected by the seqlock at that location.

So I've updated (attached) the headers with proper __rcu annotations and
locking comments according to how the pointers are being used in the
various reading functions.
I believe that if we want to get rid of this, we need to validate those
pointers using the seqlock as well.
The annotations will generate a lot of sparse warnings in the places that need
rcu_dereference()
rcu_assign_pointer()
rcu_dereference_protected()
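
As an illustration of how the annotated accesses would look with the field
names from the attached headers (made-up helpers, a sketch only, not code
from the patch):

#include <linux/rcupdate.h>
#include <linux/reservation.h>

/* Reader side, inside rcu_read_lock() and outside the ww_mutex. */
static struct reservation_object_list *
example_get_list_rcu(struct reservation_object *obj)
{
	return rcu_dereference(obj->fence);
}

/* Holder of obj->lock; the constant condition mirrors the attached patch. */
static struct reservation_object_list *
example_get_list_locked(struct reservation_object *obj)
{
	return rcu_dereference_protected(obj->fence, 1); /* obj->lock held */
}

/* Writer publishing a freshly initialized list while holding obj->lock. */
static void example_set_list(struct reservation_object *obj,
			     struct reservation_object_list *list)
{
	rcu_assign_pointer(obj->fence, list);
}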

With this I think we can get rid of all the ACCESS_ONCE() macros: it's not
needed when the rcu_x() macros are used, and
it's never needed for the members protected by the seqlock (provided
that the seq is tested). The only place where I think that's
*not* the case is the krealloc in reservation_object_get_fences_rcu().
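
To illustrate the point, a reader that tests the seq needs no ACCESS_ONCE()
of its own. A rough sketch (not the patch's code; the helper name is made up):

#include <linux/fence.h>
#include <linux/rcupdate.h>
#include <linux/reservation.h>
#include <linux/seqlock.h>

/* Sketch: snapshot and reference the exclusive fence without ACCESS_ONCE(). */
static struct fence *example_get_excl_rcu(struct reservation_object *obj)
{
	struct fence *excl = NULL;
	unsigned seq;

	rcu_read_lock();
	do {
		if (excl)		/* reference taken in a retried round */
			fence_put(excl);

		seq = read_seqcount_begin(&obj->seq);
		excl = obj->fence_excl;	/* plain load, validated by the seq */
		if (excl && !fence_get_rcu(excl))
			excl = NULL;	/* refcount already hit zero */
	} while (read_seqcount_retry(&obj->seq, seq));
	rcu_read_unlock();

	return excl;
}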

Also I have some more comments in the
reservation_object_get_fences_rcu() function below:


On 04/10/2014 05:00 PM, Maarten Lankhorst wrote:
> op 10-04-14 13:08, Thomas Hellstrom schreef:
>> On 04/10/2014 12:07 PM, Maarten Lankhorst wrote:
>>> Hey,
>>>
>>> op 10-04-14 10:46, Thomas Hellstrom schreef:
>>>> Hi!
>>>>
>>>> Ugh. This became more complicated than I thought, but I'm OK with
>>>> moving
>>>> TTM over to fence while we sort out
>>>> how / if we're going to use this.
>>>>
>>>> While reviewing, it struck me that this is kind of error-prone, and
>>>> hard
>>>> to follow since we're operating on a structure that may be
>>>> continually updated under us, needing a lot of RCU-specific macros and
>>>> barriers.
>>> Yeah, but with the exception of dma_buf_poll I don't think there is
>>> anything else
>>> outside drivers/base/reservation.c that has to deal with rcu.
>>>
>>>> Also the rcu wait appears to not complete until there are no busy
>>>> fences
>>>> left (new ones can be added while we wait) rather than
>>>> waiting on a snapshot of busy fences.
>>> This has been by design, because 'wait for bo idle' type of functions
>>> only care
>>> if the bo is completely idle or not.
>> No, not when using RCU, because the bo may be busy again before the
>> function returns :)
>> Complete idleness can only be guaranteed if holding the reservation, or
>> otherwise making sure
>> that no new rendering is submitted to the buffer, so it's an overkill to
>> wait for complete idleness here.
> You're probably right, but it makes waiting a lot easier if I don't
> have to deal with memory allocations. :P
>>> It would be easy to make a snapshot even without seqlocks, just copy
>>> reservation_object_test_signaled_rcu to return a shared list if
>>> test_all is set, or return pointer to exclusive otherwise.
>>>
>>>> I wonder if these issues can be addressed by having a function that
>>>> provides a snapshot of all busy fences: This can be accomplished
>>>> either by including the exclusive fence in the fence_list structure
>>>> and
>>>> allocate a new such structure each time it is updated. The RCU reader
>>>> could then just make a copy of the current fence_list structure
>>>> pointed
>>>> to by &obj->fence, but I'm not sure we want to reallocate *each*
>>>> time we
>>>> update the fence pointer.
>>> No, the most common operation is updating fence pointers, which is why
>>> the current design makes that cheap. It's also why doing rcu reads is
>>> more expensive.
>>>> The other approach uses a seqlock to obtain a consistent snapshot, and
>>>> I've attached an incomplete outline, and I'm not 100% whether it's
>>>> OK to
>>>> combine RCU and seqlocks in this way...
>>>>
>>>> Both these approaches have the benefit of hiding the RCU
>>>> snapshotting in
>>>> a single function, that can then be used by any waiting
>>>> or polling function.
>>>>
>>> I think the middle way with using seqlocks to protect the fence_excl
>>> pointer and shared list combination,
>>> and using RCU to protect the refcounts for fences and the availability
>>> of the list could work for our usecase
>>> and might remove a bunch of memory barriers. But yeah that depends on
>>> layering rcu and seqlocks.
>>> No idea if that is allowed. But I suppose it is.
>>>
>>> Also, you're being overly paranoid with seqlock reading, we would only
>>> need something like this:
>>>
>>> rcu_read_lock()
>>>      preempt_disable()
>>>      seq = read_seqcount_begin()
>>>      read fence_excl, shared_count = ACCESS_ONCE(fence->shared_count)
>>>      copy shared to a struct.
>>>      if (read_seqcount_retry()) { unlock and retry }
>>>    preempt_enable();
>>>    use fence_get_rcu() to bump refcount on everything, if that fails
>>> unlock, put, and retry
>>> rcu_read_unlock()
>>>
>>> But the shared list would still need to be RCU'd, to make sure we're
>>> not reading freed garbage.
>> Ah, OK.
>> But I think we should use rcu inside seqcount, because
>> read_seqcount_begin() may spin for a long time if there are
>> many writers. Also I don't think the preempt_disable() is needed for
>> read_seq critical sections, other than that it might
>> decrease the risk of retries.
>>
> Reading the seqlock code makes me suspect that's the case too. The
> lockdep code calls
> local_irq_disable, so it's probably safe without preemption disabled.
>
> ~Maarten
>
> I like the ability to avoid allocating memory, so I kept
> reservation_object_wait_timeout_rcu() mostly
> the way it was. This code appears to fail on nouveau when using the
> shared members,
> but I'm not completely sure whether the error is in nouveau or this
> code yet.
>
> --8<--------
> [RFC v2] reservation: add support for read-only access using rcu
>
> This adds 4 more functions to deal with rcu.
>
> reservation_object_get_fences_rcu() will obtain the list of shared
> and exclusive fences without obtaining the ww_mutex.
>
> reservation_object_wait_timeout_rcu() will wait on all fences of the
> reservation_object, without obtaining the ww_mutex.
>
> reservation_object_test_signaled_rcu() will test if all fences of the
> reservation_object are signaled without using the ww_mutex.
>
> reservation_object_get_excl() is added because touching the fence_excl
> member directly will trigger a sparse warning.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>
> diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
> index d89a98d2c37b..ca6ef0c4b358 100644
> --- a/drivers/base/dma-buf.c
> +++ b/drivers/base/dma-buf.c
>
> +int reservation_object_get_fences_rcu(struct reservation_object *obj,
> +                      struct fence **pfence_excl,
> +                      unsigned *pshared_count,
> +                      struct fence ***pshared)
> +{
> +    unsigned shared_count = 0;
> +    unsigned retry = 1;
> +    struct fence **shared = NULL, *fence_excl = NULL;
> +    int ret = 0;
> +
> +    while (retry) {
> +        struct reservation_object_list *fobj;
> +        unsigned seq, retry;

You're shadowing retry?


> +
> +        seq = read_seqcount_begin(&obj->seq);
> +
> +        rcu_read_lock();
> +
> +        fobj = ACCESS_ONCE(obj->fence);
> +        if (fobj) {
> +            struct fence **nshared;
> +
> +            shared_count = ACCESS_ONCE(fobj->shared_count);
> +            nshared = krealloc(shared, sizeof(*shared) *
> shared_count, GFP_KERNEL);

krealloc inside rcu_read_lock(). Better to put this first in the loop.

>
> +            if (!nshared) {
> +                ret = -ENOMEM;
> +                shared_count = retry = 0;
> +                goto unlock;
> +            }
> +            shared = nshared;
> +            memcpy(shared, fobj->shared, sizeof(*shared) *
> shared_count);
> +        } else
> +            shared_count = 0;
> +        fence_excl = obj->fence_excl;
> +
> +        retry = read_seqcount_retry(&obj->seq, seq);
> +        if (retry)
> +            goto unlock;
> +
> +        if (!fence_excl || fence_get_rcu(fence_excl)) {
> +            unsigned i;
> +
> +            for (i = 0; i < shared_count; ++i) {
> +                if (fence_get_rcu(shared[i]))
> +                    continue;
> +
> +                /* uh oh, refcount failed, abort and retry */
> +                while (i--)
> +                    fence_put(shared[i]);
> +
> +                if (fence_excl) {
> +                    fence_put(fence_excl);
> +                    fence_excl = NULL;
> +                }
> +
> +                retry = 1;
> +                break;
> +            }
> +        } else
> +            retry = 1;
> +
> +unlock:
> +        rcu_read_unlock();
> +    }
> +    *pshared_count = shared_count;
> +    if (shared_count)
> +        *pshared = shared;
> +    else {
> +        *pshared = NULL;
> +        kfree(shared);
> +    }
> +    *pfence_excl = fence_excl;
> +
> +    return ret;
> +}
> +EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
> +

Thanks,
Thomas

Comments

Maarten Lankhorst April 11, 2014, 9:24 a.m. UTC | #1
On 11-04-14 10:38, Thomas Hellstrom wrote:
> Hi, Maarten.
>
> Here I believe we encounter a lot of locking inconsistencies.
>
> First, it seems you're use a number of pointers as RCU pointers without
> annotating them as such and use the correct rcu
> macros when assigning those pointers.
>
> Some pointers (like the pointers in the shared fence list) are both used
> as RCU pointers (in dma_buf_poll()) for example,
> or considered protected by the seqlock
> (reservation_object_get_fences_rcu()), which I believe is OK, but then
> the pointers must
> be assigned using the correct rcu macros. In the memcpy in
> reservation_object_get_fences_rcu() we might get away with an
> ugly typecast, but with a verbose comment that the pointers are
> considered protected by the seqlock at that location.
>
> So I've updated (attached) the headers with proper __rcu annotation and
> locking comments according to how they are being used in the various
> reading functions.
> I believe if we want to get rid of this we need to validate those
> pointers using the seqlock as well.
> This will generate a lot of sparse warnings in those places needing
> rcu_dereference()
> rcu_assign_pointer()
> rcu_dereference_protected()
>
> With this I think we can get rid of all ACCESS_ONCE macros: It's not
> needed when the rcu_x() macros are used, and
> it's never needed for the members protected by the seqlock, (provided
> that the seq is tested). The only place where I think that's
> *not* the case is at the krealloc in reservation_object_get_fences_rcu().
>
> Also I have some more comments in the
> reservation_object_get_fences_rcu() function below:
I felt that the barriers needed for RCU were already provided by checking the seqcount.
But looking at rcu_dereference() it seems harmless to add it in more places; it handles
the ACCESS_ONCE() and barrier() for us.

We could probably get away with using RCU_INIT_POINTER on the writer side,
because the smp_wmb is already done by arranging seqcount updates correctly.
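
A sketch of what that could look like for the replace path (made-up helper
name; the ordering comment marks the assumption, and the old list's fences
are assumed to have been transferred to the new list already):

#include <linux/rcupdate.h>
#include <linux/reservation.h>
#include <linux/seqlock.h>

/*
 * Sketch: publish a new, fully initialized shared list while holding
 * obj->lock. write_seqcount_begin() issues an smp_wmb() after bumping
 * the sequence, which orders the earlier initialization of *new_list
 * before the pointer store below, so RCU_INIT_POINTER() is sufficient.
 */
static void example_replace_list(struct reservation_object *obj,
				 struct reservation_object_list *new_list)
{
	struct reservation_object_list *old_list =
		rcu_dereference_protected(obj->fence, 1); /* obj->lock held */

	preempt_disable();
	write_seqcount_begin(&obj->seq);
	RCU_INIT_POINTER(obj->fence, new_list);
	write_seqcount_end(&obj->seq);
	preempt_enable();

	if (old_list)
		kfree_rcu(old_list, rcu);
}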

> diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
> index d89a98d2c37b..ca6ef0c4b358 100644
> --- a/drivers/base/dma-buf.c
> +++ b/drivers/base/dma-buf.c
>
> +int reservation_object_get_fences_rcu(struct reservation_object *obj,
> +                      struct fence **pfence_excl,
> +                      unsigned *pshared_count,
> +                      struct fence ***pshared)
> +{
> +    unsigned shared_count = 0;
> +    unsigned retry = 1;
> +    struct fence **shared = NULL, *fence_excl = NULL;
> +    int ret = 0;
> +
> +    while (retry) {
> +        struct reservation_object_list *fobj;
> +        unsigned seq, retry;
> You're shadowing retry?
Oops.
>
>> +
>> +        seq = read_seqcount_begin(&obj->seq);
>> +
>> +        rcu_read_lock();
>> +
>> +        fobj = ACCESS_ONCE(obj->fence);
>> +        if (fobj) {
>> +            struct fence **nshared;
>> +
>> +            shared_count = ACCESS_ONCE(fobj->shared_count);
>> +            nshared = krealloc(shared, sizeof(*shared) *
>> shared_count, GFP_KERNEL);
> krealloc inside rcu_read_lock(). Better to put this first in the loop.
Except that shared_count isn't known until the rcu_read_lock is taken.
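
One way around that constraint, as a sketch only (made-up helper, not the
posted code): peek at the count under RCU, drop the read lock for the
allocation, and let the existing seqcount retry catch any change.

#include <linux/rcupdate.h>
#include <linux/reservation.h>
#include <linux/slab.h>

/*
 * Sketch: size the snapshot array without calling krealloc() under
 * rcu_read_lock(). Any growth of the list after the peek is caught by
 * the caller's read_seqcount_retry() and simply leads to another round.
 */
static struct fence **example_resize_snapshot(struct reservation_object *obj,
					      struct fence **shared,
					      unsigned *pmax)
{
	struct reservation_object_list *fobj;
	struct fence **nshared;
	unsigned count;

	rcu_read_lock();
	fobj = rcu_dereference(obj->fence);
	count = fobj ? fobj->shared_count : 0;
	rcu_read_unlock();

	if (count <= *pmax)
		return shared;

	nshared = krealloc(shared, count * sizeof(*shared), GFP_KERNEL);
	if (!nshared)
		return NULL;	/* caller still owns and frees 'shared' */

	*pmax = count;
	return nshared;
}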
> Thanks,
> Thomas
~Maarten
Thomas Hellstrom April 11, 2014, 10:11 a.m. UTC | #2
On 04/11/2014 11:24 AM, Maarten Lankhorst wrote:
> op 11-04-14 10:38, Thomas Hellstrom schreef:
>> Hi, Maarten.
>>
>> Here I believe we encounter a lot of locking inconsistencies.
>>
>> First, it seems you're use a number of pointers as RCU pointers without
>> annotating them as such and use the correct rcu
>> macros when assigning those pointers.
>>
>> Some pointers (like the pointers in the shared fence list) are both used
>> as RCU pointers (in dma_buf_poll()) for example,
>> or considered protected by the seqlock
>> (reservation_object_get_fences_rcu()), which I believe is OK, but then
>> the pointers must
>> be assigned using the correct rcu macros. In the memcpy in
>> reservation_object_get_fences_rcu() we might get away with an
>> ugly typecast, but with a verbose comment that the pointers are
>> considered protected by the seqlock at that location.
>>
>> So I've updated (attached) the headers with proper __rcu annotation and
>> locking comments according to how they are being used in the various
>> reading functions.
>> I believe if we want to get rid of this we need to validate those
>> pointers using the seqlock as well.
>> This will generate a lot of sparse warnings in those places needing
>> rcu_dereference()
>> rcu_assign_pointer()
>> rcu_dereference_protected()
>>
>> With this I think we can get rid of all ACCESS_ONCE macros: It's not
>> needed when the rcu_x() macros are used, and
>> it's never needed for the members protected by the seqlock, (provided
>> that the seq is tested). The only place where I think that's
>> *not* the case is at the krealloc in
>> reservation_object_get_fences_rcu().
>>
>> Also I have some more comments in the
>> reservation_object_get_fences_rcu() function below:
> I felt that the barriers needed for rcu were already provided by
> checking the seqcount lock.
> But looking at rcu_dereference makes it seem harmless to add it in
> more places, it handles
> the ACCESS_ONCE and barrier() for us.

And it makes the code more maintainable and helps sparse do a lot of
checking for us. I guess
we can tolerate a couple of extra barriers for that.

>
> We could probably get away with using RCU_INIT_POINTER on the writer
> side,
> because the smp_wmb is already done by arranging seqcount updates
> correctly.

Hmm. yes, probably. At least in the replace function. I think if we do
it in other places, we should add comments as to where
the smp_wmb() is located, for future reference.


Also, I saw a couple of places where you're checking the shared
pointers but not checking for NULL pointers, which I guess may
happen if shared_count and the pointers are not in full sync?

Thanks,
/Thomas


>
>> diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
>> index d89a98d2c37b..ca6ef0c4b358 100644
>> --- a/drivers/base/dma-buf.c
>> +++ b/drivers/base/dma-buf.c
>>
>> +int reservation_object_get_fences_rcu(struct reservation_object *obj,
>> +                      struct fence **pfence_excl,
>> +                      unsigned *pshared_count,
>> +                      struct fence ***pshared)
>> +{
>> +    unsigned shared_count = 0;
>> +    unsigned retry = 1;
>> +    struct fence **shared = NULL, *fence_excl = NULL;
>> +    int ret = 0;
>> +
>> +    while (retry) {
>> +        struct reservation_object_list *fobj;
>> +        unsigned seq, retry;
>> You're shadowing retry?
> Oops.
>>
>>> +
>>> +        seq = read_seqcount_begin(&obj->seq);
>>> +
>>> +        rcu_read_lock();
>>> +
>>> +        fobj = ACCESS_ONCE(obj->fence);
>>> +        if (fobj) {
>>> +            struct fence **nshared;
>>> +
>>> +            shared_count = ACCESS_ONCE(fobj->shared_count);
>>> +            nshared = krealloc(shared, sizeof(*shared) *
>>> shared_count, GFP_KERNEL);
>> krealloc inside rcu_read_lock(). Better to put this first in the loop.
> Except that shared_count isn't known until the rcu_read_lock is taken.
>> Thanks,
>> Thomas
> ~Maarten
Maarten Lankhorst April 11, 2014, 6:09 p.m. UTC | #3
On 11-04-14 12:11, Thomas Hellstrom wrote:
> On 04/11/2014 11:24 AM, Maarten Lankhorst wrote:
>> op 11-04-14 10:38, Thomas Hellstrom schreef:
>>> Hi, Maarten.
>>>
>>> Here I believe we encounter a lot of locking inconsistencies.
>>>
>>> First, it seems you're use a number of pointers as RCU pointers without
>>> annotating them as such and use the correct rcu
>>> macros when assigning those pointers.
>>>
>>> Some pointers (like the pointers in the shared fence list) are both used
>>> as RCU pointers (in dma_buf_poll()) for example,
>>> or considered protected by the seqlock
>>> (reservation_object_get_fences_rcu()), which I believe is OK, but then
>>> the pointers must
>>> be assigned using the correct rcu macros. In the memcpy in
>>> reservation_object_get_fences_rcu() we might get away with an
>>> ugly typecast, but with a verbose comment that the pointers are
>>> considered protected by the seqlock at that location.
>>>
>>> So I've updated (attached) the headers with proper __rcu annotation and
>>> locking comments according to how they are being used in the various
>>> reading functions.
>>> I believe if we want to get rid of this we need to validate those
>>> pointers using the seqlock as well.
>>> This will generate a lot of sparse warnings in those places needing
>>> rcu_dereference()
>>> rcu_assign_pointer()
>>> rcu_dereference_protected()
>>>
>>> With this I think we can get rid of all ACCESS_ONCE macros: It's not
>>> needed when the rcu_x() macros are used, and
>>> it's never needed for the members protected by the seqlock, (provided
>>> that the seq is tested). The only place where I think that's
>>> *not* the case is at the krealloc in
>>> reservation_object_get_fences_rcu().
>>>
>>> Also I have some more comments in the
>>> reservation_object_get_fences_rcu() function below:
>> I felt that the barriers needed for rcu were already provided by
>> checking the seqcount lock.
>> But looking at rcu_dereference makes it seem harmless to add it in
>> more places, it handles
>> the ACCESS_ONCE and barrier() for us.
> And it makes the code more maintainable, and helps sparse doing a lot of
> checking for us. I guess
> we can tolerate a couple of extra barriers for that.
>
>> We could probably get away with using RCU_INIT_POINTER on the writer
>> side,
>> because the smp_wmb is already done by arranging seqcount updates
>> correctly.
> Hmm. yes, probably. At least in the replace function. I think if we do
> it in other places, we should add comments as to where
> the smp_wmb() is located, for future reference.
>
>
> Also  I saw in a couple of places where you're checking the shared
> pointers, you're not checking for NULL pointers, which I guess may
> happen if shared_count and pointers are not in full sync?
>
No, because shared_count is protected by the seqcount. I only allow appending to the array, so when
shared_count is validated by the seqcount it means that the [0...shared_count) indexes are valid and non-NULL.
What could happen though is that the fence at a specific index is updated with another one from the same
context, but that's harmless.
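
For reference, a sketch of the append path that keeps that invariant
(made-up helper; assumes the caller already made sure shared_count < shared_max):

#include <linux/fence.h>
#include <linux/rcupdate.h>
#include <linux/reservation.h>
#include <linux/seqlock.h>

/*
 * Sketch: append a shared fence while holding obj->lock. Both stores
 * happen inside the write-side seqcount section, so a reader whose
 * read_seqcount_retry() passes sees a non-NULL pointer for every index
 * below the shared_count it read.
 */
static void example_append_shared(struct reservation_object *obj,
				  struct fence *fence)
{
	struct reservation_object_list *fobj =
		rcu_dereference_protected(obj->fence, 1); /* obj->lock held */
	unsigned idx = fobj->shared_count;

	fence_get(fence);

	preempt_disable();
	write_seqcount_begin(&obj->seq);
	RCU_INIT_POINTER(fobj->shared[idx], fence);
	fobj->shared_count = idx + 1;
	write_seqcount_end(&obj->seq);
	preempt_enable();
}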

~Maarten
Thomas Hellstrom April 11, 2014, 7:30 p.m. UTC | #4
Hi!

On 04/11/2014 08:09 PM, Maarten Lankhorst wrote:
> op 11-04-14 12:11, Thomas Hellstrom schreef:
>> On 04/11/2014 11:24 AM, Maarten Lankhorst wrote:
>>> op 11-04-14 10:38, Thomas Hellstrom schreef:
>>>> Hi, Maarten.
>>>>
>>>> Here I believe we encounter a lot of locking inconsistencies.
>>>>
>>>> First, it seems you're use a number of pointers as RCU pointers
>>>> without
>>>> annotating them as such and use the correct rcu
>>>> macros when assigning those pointers.
>>>>
>>>> Some pointers (like the pointers in the shared fence list) are both
>>>> used
>>>> as RCU pointers (in dma_buf_poll()) for example,
>>>> or considered protected by the seqlock
>>>> (reservation_object_get_fences_rcu()), which I believe is OK, but then
>>>> the pointers must
>>>> be assigned using the correct rcu macros. In the memcpy in
>>>> reservation_object_get_fences_rcu() we might get away with an
>>>> ugly typecast, but with a verbose comment that the pointers are
>>>> considered protected by the seqlock at that location.
>>>>
>>>> So I've updated (attached) the headers with proper __rcu annotation
>>>> and
>>>> locking comments according to how they are being used in the various
>>>> reading functions.
>>>> I believe if we want to get rid of this we need to validate those
>>>> pointers using the seqlock as well.
>>>> This will generate a lot of sparse warnings in those places needing
>>>> rcu_dereference()
>>>> rcu_assign_pointer()
>>>> rcu_dereference_protected()
>>>>
>>>> With this I think we can get rid of all ACCESS_ONCE macros: It's not
>>>> needed when the rcu_x() macros are used, and
>>>> it's never needed for the members protected by the seqlock, (provided
>>>> that the seq is tested). The only place where I think that's
>>>> *not* the case is at the krealloc in
>>>> reservation_object_get_fences_rcu().
>>>>
>>>> Also I have some more comments in the
>>>> reservation_object_get_fences_rcu() function below:
>>> I felt that the barriers needed for rcu were already provided by
>>> checking the seqcount lock.
>>> But looking at rcu_dereference makes it seem harmless to add it in
>>> more places, it handles
>>> the ACCESS_ONCE and barrier() for us.
>> And it makes the code more maintainable, and helps sparse doing a lot of
>> checking for us. I guess
>> we can tolerate a couple of extra barriers for that.
>>
>>> We could probably get away with using RCU_INIT_POINTER on the writer
>>> side,
>>> because the smp_wmb is already done by arranging seqcount updates
>>> correctly.
>> Hmm. yes, probably. At least in the replace function. I think if we do
>> it in other places, we should add comments as to where
>> the smp_wmb() is located, for future reference.
>>
>>
>> Also  I saw in a couple of places where you're checking the shared
>> pointers, you're not checking for NULL pointers, which I guess may
>> happen if shared_count and pointers are not in full sync?
>>
> No, because shared_count is protected with seqcount. I only allow
> appending to the array, so when
> shared_count is validated by seqcount it means that the
> [0...shared_count) indexes are valid and non-null.
> What could happen though is that the fence at a specific index is
> updated with another one from the same
> context, but that's harmless.

Hmm.
Shouldn't we have a way to clean signaled fences from reservation
objects? Perhaps when we attach a new fence, or after a wait with
ww_mutex held? Otherwise we'd have a lot of completely unused fence
objects hanging around for no reason. I don't think we need to be as
picky as TTM, but I think we should do something?

/Thomas



>
> ~Maarten
Thomas Hellstrom April 11, 2014, 7:35 p.m. UTC | #5
On 04/11/2014 08:09 PM, Maarten Lankhorst wrote:
> op 11-04-14 12:11, Thomas Hellstrom schreef:
>> On 04/11/2014 11:24 AM, Maarten Lankhorst wrote:
>>> op 11-04-14 10:38, Thomas Hellstrom schreef:
>>>> Hi, Maarten.
>>>>
>>>> Here I believe we encounter a lot of locking inconsistencies.
>>>>
>>>> First, it seems you're use a number of pointers as RCU pointers
>>>> without
>>>> annotating them as such and use the correct rcu
>>>> macros when assigning those pointers.
>>>>
>>>> Some pointers (like the pointers in the shared fence list) are both
>>>> used
>>>> as RCU pointers (in dma_buf_poll()) for example,
>>>> or considered protected by the seqlock
>>>> (reservation_object_get_fences_rcu()), which I believe is OK, but then
>>>> the pointers must
>>>> be assigned using the correct rcu macros. In the memcpy in
>>>> reservation_object_get_fences_rcu() we might get away with an
>>>> ugly typecast, but with a verbose comment that the pointers are
>>>> considered protected by the seqlock at that location.
>>>>
>>>> So I've updated (attached) the headers with proper __rcu annotation
>>>> and
>>>> locking comments according to how they are being used in the various
>>>> reading functions.
>>>> I believe if we want to get rid of this we need to validate those
>>>> pointers using the seqlock as well.
>>>> This will generate a lot of sparse warnings in those places needing
>>>> rcu_dereference()
>>>> rcu_assign_pointer()
>>>> rcu_dereference_protected()
>>>>
>>>> With this I think we can get rid of all ACCESS_ONCE macros: It's not
>>>> needed when the rcu_x() macros are used, and
>>>> it's never needed for the members protected by the seqlock, (provided
>>>> that the seq is tested). The only place where I think that's
>>>> *not* the case is at the krealloc in
>>>> reservation_object_get_fences_rcu().
>>>>
>>>> Also I have some more comments in the
>>>> reservation_object_get_fences_rcu() function below:
>>> I felt that the barriers needed for rcu were already provided by
>>> checking the seqcount lock.
>>> But looking at rcu_dereference makes it seem harmless to add it in
>>> more places, it handles
>>> the ACCESS_ONCE and barrier() for us.
>> And it makes the code more maintainable, and helps sparse doing a lot of
>> checking for us. I guess
>> we can tolerate a couple of extra barriers for that.
>>
>>> We could probably get away with using RCU_INIT_POINTER on the writer
>>> side,
>>> because the smp_wmb is already done by arranging seqcount updates
>>> correctly.
>> Hmm. yes, probably. At least in the replace function. I think if we do
>> it in other places, we should add comments as to where
>> the smp_wmb() is located, for future reference.
>>
>>
>> Also  I saw in a couple of places where you're checking the shared
>> pointers, you're not checking for NULL pointers, which I guess may
>> happen if shared_count and pointers are not in full sync?
>>
> No, because shared_count is protected with seqcount. I only allow
> appending to the array, so when
> shared_count is validated by seqcount it means that the
> [0...shared_count) indexes are valid and non-null.
> What could happen though is that the fence at a specific index is
> updated with another one from the same
> context, but that's harmless.
>

Hmm, doesn't attaching an exclusive fence clear all shared fence
pointers from under a reader?

/Thomas





> ~Maarten
Maarten Lankhorst April 14, 2014, 7:04 a.m. UTC | #6
On 11-04-14 21:30, Thomas Hellstrom wrote:
> Hi!
>
> On 04/11/2014 08:09 PM, Maarten Lankhorst wrote:
>> op 11-04-14 12:11, Thomas Hellstrom schreef:
>>> On 04/11/2014 11:24 AM, Maarten Lankhorst wrote:
>>>> op 11-04-14 10:38, Thomas Hellstrom schreef:
>>>>> Hi, Maarten.
>>>>>
>>>>> Here I believe we encounter a lot of locking inconsistencies.
>>>>>
>>>>> First, it seems you're use a number of pointers as RCU pointers
>>>>> without
>>>>> annotating them as such and use the correct rcu
>>>>> macros when assigning those pointers.
>>>>>
>>>>> Some pointers (like the pointers in the shared fence list) are both
>>>>> used
>>>>> as RCU pointers (in dma_buf_poll()) for example,
>>>>> or considered protected by the seqlock
>>>>> (reservation_object_get_fences_rcu()), which I believe is OK, but then
>>>>> the pointers must
>>>>> be assigned using the correct rcu macros. In the memcpy in
>>>>> reservation_object_get_fences_rcu() we might get away with an
>>>>> ugly typecast, but with a verbose comment that the pointers are
>>>>> considered protected by the seqlock at that location.
>>>>>
>>>>> So I've updated (attached) the headers with proper __rcu annotation
>>>>> and
>>>>> locking comments according to how they are being used in the various
>>>>> reading functions.
>>>>> I believe if we want to get rid of this we need to validate those
>>>>> pointers using the seqlock as well.
>>>>> This will generate a lot of sparse warnings in those places needing
>>>>> rcu_dereference()
>>>>> rcu_assign_pointer()
>>>>> rcu_dereference_protected()
>>>>>
>>>>> With this I think we can get rid of all ACCESS_ONCE macros: It's not
>>>>> needed when the rcu_x() macros are used, and
>>>>> it's never needed for the members protected by the seqlock, (provided
>>>>> that the seq is tested). The only place where I think that's
>>>>> *not* the case is at the krealloc in
>>>>> reservation_object_get_fences_rcu().
>>>>>
>>>>> Also I have some more comments in the
>>>>> reservation_object_get_fences_rcu() function below:
>>>> I felt that the barriers needed for rcu were already provided by
>>>> checking the seqcount lock.
>>>> But looking at rcu_dereference makes it seem harmless to add it in
>>>> more places, it handles
>>>> the ACCESS_ONCE and barrier() for us.
>>> And it makes the code more maintainable, and helps sparse doing a lot of
>>> checking for us. I guess
>>> we can tolerate a couple of extra barriers for that.
>>>
>>>> We could probably get away with using RCU_INIT_POINTER on the writer
>>>> side,
>>>> because the smp_wmb is already done by arranging seqcount updates
>>>> correctly.
>>> Hmm. yes, probably. At least in the replace function. I think if we do
>>> it in other places, we should add comments as to where
>>> the smp_wmb() is located, for future reference.
>>>
>>>
>>> Also  I saw in a couple of places where you're checking the shared
>>> pointers, you're not checking for NULL pointers, which I guess may
>>> happen if shared_count and pointers are not in full sync?
>>>
>> No, because shared_count is protected with seqcount. I only allow
>> appending to the array, so when
>> shared_count is validated by seqcount it means that the
>> [0...shared_count) indexes are valid and non-null.
>> What could happen though is that the fence at a specific index is
>> updated with another one from the same
>> context, but that's harmless.
> Hmm.
> Shouldn't we have a way to clean signaled fences from reservation
> objects? Perhaps when we attach a new fence, or after a wait with
> ww_mutex held? Otherwise we'd have a lot of completely unused fence
> objects hanging around for no reason. I don't think we need to be as
> picky as TTM, but I think we should do something?
>
Calling reservation_object_add_excl_fence() with a NULL fence works; I do this in ttm_bo_wait().
It requires the ww_mutex to be held.
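
So the cleanup could be as small as something like this (sketch only; the
idle check / wait itself is omitted):

#include <linux/reservation.h>
#include <linux/ww_mutex.h>

/*
 * Sketch: once the object is known to be idle, attaching a NULL
 * exclusive fence drops both the exclusive and all shared fences.
 * The reservation ww_mutex must be held.
 */
static void example_drop_fences(struct reservation_object *obj)
{
	int ret;

	ret = ww_mutex_lock(&obj->lock, NULL);	/* cannot deadlock-fail with a NULL ctx */
	if (!ret) {
		reservation_object_add_excl_fence(obj, NULL);
		ww_mutex_unlock(&obj->lock);
	}
}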

~Maarten
Maarten Lankhorst April 14, 2014, 7:42 a.m. UTC | #7
On 11-04-14 21:35, Thomas Hellstrom wrote:
> On 04/11/2014 08:09 PM, Maarten Lankhorst wrote:
>> op 11-04-14 12:11, Thomas Hellstrom schreef:
>>> On 04/11/2014 11:24 AM, Maarten Lankhorst wrote:
>>>> op 11-04-14 10:38, Thomas Hellstrom schreef:
>>>>> Hi, Maarten.
>>>>>
>>>>> Here I believe we encounter a lot of locking inconsistencies.
>>>>>
>>>>> First, it seems you're use a number of pointers as RCU pointers
>>>>> without
>>>>> annotating them as such and use the correct rcu
>>>>> macros when assigning those pointers.
>>>>>
>>>>> Some pointers (like the pointers in the shared fence list) are both
>>>>> used
>>>>> as RCU pointers (in dma_buf_poll()) for example,
>>>>> or considered protected by the seqlock
>>>>> (reservation_object_get_fences_rcu()), which I believe is OK, but then
>>>>> the pointers must
>>>>> be assigned using the correct rcu macros. In the memcpy in
>>>>> reservation_object_get_fences_rcu() we might get away with an
>>>>> ugly typecast, but with a verbose comment that the pointers are
>>>>> considered protected by the seqlock at that location.
>>>>>
>>>>> So I've updated (attached) the headers with proper __rcu annotation
>>>>> and
>>>>> locking comments according to how they are being used in the various
>>>>> reading functions.
>>>>> I believe if we want to get rid of this we need to validate those
>>>>> pointers using the seqlock as well.
>>>>> This will generate a lot of sparse warnings in those places needing
>>>>> rcu_dereference()
>>>>> rcu_assign_pointer()
>>>>> rcu_dereference_protected()
>>>>>
>>>>> With this I think we can get rid of all ACCESS_ONCE macros: It's not
>>>>> needed when the rcu_x() macros are used, and
>>>>> it's never needed for the members protected by the seqlock, (provided
>>>>> that the seq is tested). The only place where I think that's
>>>>> *not* the case is at the krealloc in
>>>>> reservation_object_get_fences_rcu().
>>>>>
>>>>> Also I have some more comments in the
>>>>> reservation_object_get_fences_rcu() function below:
>>>> I felt that the barriers needed for rcu were already provided by
>>>> checking the seqcount lock.
>>>> But looking at rcu_dereference makes it seem harmless to add it in
>>>> more places, it handles
>>>> the ACCESS_ONCE and barrier() for us.
>>> And it makes the code more maintainable, and helps sparse doing a lot of
>>> checking for us. I guess
>>> we can tolerate a couple of extra barriers for that.
>>>
>>>> We could probably get away with using RCU_INIT_POINTER on the writer
>>>> side,
>>>> because the smp_wmb is already done by arranging seqcount updates
>>>> correctly.
>>> Hmm. yes, probably. At least in the replace function. I think if we do
>>> it in other places, we should add comments as to where
>>> the smp_wmb() is located, for future reference.
>>>
>>>
>>> Also  I saw in a couple of places where you're checking the shared
>>> pointers, you're not checking for NULL pointers, which I guess may
>>> happen if shared_count and pointers are not in full sync?
>>>
>> No, because shared_count is protected with seqcount. I only allow
>> appending to the array, so when
>> shared_count is validated by seqcount it means that the
>> [0...shared_count) indexes are valid and non-null.
>> What could happen though is that the fence at a specific index is
>> updated with another one from the same
>> context, but that's harmless.
>>
> Hmm, doesn't attaching an exclusive fence clear all shared fence
> pointers from under a reader?
>
No, for exactly that reason it only resets shared_count to 0. This is harmless because the shared fence pointers
stay valid long enough thanks to RCU-delayed freeing, and fence_get_rcu() will fail once the refcount has
dropped to zero. That check is enough to prevent errors, so there's no need to explicitly clear the fence
pointers.
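
In other words, a single per-pointer check on the read side is enough; a
sketch of the idea, using fence_get_rcu() from the attached header (the
helper itself is made up):

#include <linux/fence.h>
#include <linux/rcupdate.h>

/*
 * Sketch: a stale shared[] entry is harmless. The fence memory is only
 * freed after a grace period, so the pointer can still be dereferenced
 * inside rcu_read_lock(), and fence_get_rcu() (kref_get_unless_zero())
 * rejects a fence whose refcount already dropped to zero.
 */
static struct fence *example_ref_shared_slot(struct fence __rcu **slot)
{
	struct fence *fence;

	rcu_read_lock();
	fence = rcu_dereference(*slot);
	if (fence && !fence_get_rcu(fence))
		fence = NULL;	/* already dead; caller retries or skips it */
	rcu_read_unlock();

	return fence;
}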

~Maarten
Thomas Hellstrom April 14, 2014, 7:45 a.m. UTC | #8
On 04/14/2014 09:42 AM, Maarten Lankhorst wrote:
> op 11-04-14 21:35, Thomas Hellstrom schreef:
>> On 04/11/2014 08:09 PM, Maarten Lankhorst wrote:
>>> op 11-04-14 12:11, Thomas Hellstrom schreef:
>>>> On 04/11/2014 11:24 AM, Maarten Lankhorst wrote:
>>>>> op 11-04-14 10:38, Thomas Hellstrom schreef:
>>>>>> Hi, Maarten.
>>>>>>
>>>>>> Here I believe we encounter a lot of locking inconsistencies.
>>>>>>
>>>>>> First, it seems you're use a number of pointers as RCU pointers
>>>>>> without
>>>>>> annotating them as such and use the correct rcu
>>>>>> macros when assigning those pointers.
>>>>>>
>>>>>> Some pointers (like the pointers in the shared fence list) are both
>>>>>> used
>>>>>> as RCU pointers (in dma_buf_poll()) for example,
>>>>>> or considered protected by the seqlock
>>>>>> (reservation_object_get_fences_rcu()), which I believe is OK, but
>>>>>> then
>>>>>> the pointers must
>>>>>> be assigned using the correct rcu macros. In the memcpy in
>>>>>> reservation_object_get_fences_rcu() we might get away with an
>>>>>> ugly typecast, but with a verbose comment that the pointers are
>>>>>> considered protected by the seqlock at that location.
>>>>>>
>>>>>> So I've updated (attached) the headers with proper __rcu annotation
>>>>>> and
>>>>>> locking comments according to how they are being used in the various
>>>>>> reading functions.
>>>>>> I believe if we want to get rid of this we need to validate those
>>>>>> pointers using the seqlock as well.
>>>>>> This will generate a lot of sparse warnings in those places needing
>>>>>> rcu_dereference()
>>>>>> rcu_assign_pointer()
>>>>>> rcu_dereference_protected()
>>>>>>
>>>>>> With this I think we can get rid of all ACCESS_ONCE macros: It's not
>>>>>> needed when the rcu_x() macros are used, and
>>>>>> it's never needed for the members protected by the seqlock,
>>>>>> (provided
>>>>>> that the seq is tested). The only place where I think that's
>>>>>> *not* the case is at the krealloc in
>>>>>> reservation_object_get_fences_rcu().
>>>>>>
>>>>>> Also I have some more comments in the
>>>>>> reservation_object_get_fences_rcu() function below:
>>>>> I felt that the barriers needed for rcu were already provided by
>>>>> checking the seqcount lock.
>>>>> But looking at rcu_dereference makes it seem harmless to add it in
>>>>> more places, it handles
>>>>> the ACCESS_ONCE and barrier() for us.
>>>> And it makes the code more maintainable, and helps sparse doing a
>>>> lot of
>>>> checking for us. I guess
>>>> we can tolerate a couple of extra barriers for that.
>>>>
>>>>> We could probably get away with using RCU_INIT_POINTER on the writer
>>>>> side,
>>>>> because the smp_wmb is already done by arranging seqcount updates
>>>>> correctly.
>>>> Hmm. yes, probably. At least in the replace function. I think if we do
>>>> it in other places, we should add comments as to where
>>>> the smp_wmb() is located, for future reference.
>>>>
>>>>
>>>> Also  I saw in a couple of places where you're checking the shared
>>>> pointers, you're not checking for NULL pointers, which I guess may
>>>> happen if shared_count and pointers are not in full sync?
>>>>
>>> No, because shared_count is protected with seqcount. I only allow
>>> appending to the array, so when
>>> shared_count is validated by seqcount it means that the
>>> [0...shared_count) indexes are valid and non-null.
>>> What could happen though is that the fence at a specific index is
>>> updated with another one from the same
>>> context, but that's harmless.
>>>
>> Hmm, doesn't attaching an exclusive fence clear all shared fence
>> pointers from under a reader?
>>
> No, for that reason. It only resets shared_count to 0.

Ah. OK. I guess I didn't read the code carefully enough.

Thanks,
Thomas



>
> ~Maarten

Patch

diff --git a/include/linux/fence.h b/include/linux/fence.h
index 8499ace..33a265d 100644
--- a/include/linux/fence.h
+++ b/include/linux/fence.h
@@ -200,10 +200,13 @@  static inline void fence_get(struct fence *fence)
  */
 static inline struct fence *fence_get_rcu(struct fence *fence)
 {
-	struct fence *f = ACCESS_ONCE(fence);
+	/*
+	 * Either we make the function operate on __rcu pointers
+	 * or remove ACCESS_ONCE
+	 */
 
-	if (kref_get_unless_zero(&f->refcount))
-		return f;
+	if (kref_get_unless_zero(&fence->refcount))
+		return fence;
 	else
 		return NULL;
 }
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index d6e1f62..ab586a6 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -50,16 +50,26 @@  extern struct lock_class_key reservation_seqcount_class;
 
 struct reservation_object_list {
 	struct rcu_head rcu;
+	/* Protected by reservation_object::seq */
 	u32 shared_count, shared_max;
-	struct fence *shared[];
+	/* 
+	 * Immutable. Individual pointers in the array are protected
+	 * by reservation_object::seq and rcu. Hence while assigning those
+	 * pointers, rcu_assign_pointer is needed. When reading them
+	 * inside the seqlock, you may use rcu_dereference_protected().
+	 */
+	struct fence __rcu *shared[];
 };
 
 struct reservation_object {
 	struct ww_mutex lock;
 	seqcount_t seq;
 
+	/* protected by @seq */
 	struct fence *fence_excl;
-	struct reservation_object_list *fence;
+	/* rcu protected by @lock */
+	struct reservation_object_list __rcu *fence;
+	/* Protected by @lock */
 	struct reservation_object_list *staged;
 };
 
@@ -109,7 +119,7 @@  reservation_object_get_list(struct reservation_object *obj)
 {
 	reservation_object_assert_held(obj);
 
-	return obj->fence;
+	return rcu_dereference_protected(obj->fence, 1);
 }
 
 static inline struct fence *