Message ID | 20190722215340.3071-2-ilina@codeaurora.org (mailing list archive) |
---|---|
State | Not Applicable, archived |
Series | [V2,1/4] drivers: qcom: rpmh-rsc: simplify TCS locking |
Quoting Lina Iyer (2019-07-22 14:53:38) > Avoid locking in the interrupt context to improve latency. Since we > don't lock in the interrupt context, it is possible that we now could > race with the DRV_CONTROL register that writes the enable register and > cleared by the interrupt handler. For fire-n-forget requests, the > interrupt may be raised as soon as the TCS is triggered and the IRQ > handler may clear the enable bit before the DRV_CONTROL is read back. > > Use the non-sync variant when enabling the TCS register to avoid reading > back a value that may been cleared because the interrupt handler ran > immediately after triggering the TCS. > > Signed-off-by: Lina Iyer <ilina@codeaurora.org> > --- I have to read this patch carefully. The commit text isn't convincing me that it is actually safe to make this change. It mostly talks about the performance improvements and how we need to fix __tcs_trigger(), which is good, but I was hoping to be convinced that not grabbing the lock here is safe. How do we ensure that drv->tcs_in_use is cleared before we call tcs_write() and try to look for a free bit? Isn't it possible that we'll get into a situation where the bitmap is all used up but the hardware has just received an interrupt and is going to clear out a bit and then an rpmh write fails with -EBUSY? > drivers/soc/qcom/rpmh-rsc.c | 4 +--- > 1 file changed, 1 insertion(+), 3 deletions(-) > > diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c > index 5ede8d6de3ad..694ba881624e 100644 > --- a/drivers/soc/qcom/rpmh-rsc.c > +++ b/drivers/soc/qcom/rpmh-rsc.c > @@ -242,9 +242,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p) > write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0); > write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, i, 0); > write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i)); > - spin_lock(&drv->lock); > clear_bit(i, drv->tcs_in_use); > - spin_unlock(&drv->lock); > if (req) > rpmh_tx_done(req, err); > } > @@ -304,7 +302,7 @@ static void __tcs_trigger(struct rsc_drv *drv, int tcs_id) > enable = TCS_AMC_MODE_ENABLE; > write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); > enable |= TCS_AMC_MODE_TRIGGER; > - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); > + write_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, enable); > } > > static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote: >Quoting Lina Iyer (2019-07-22 14:53:38) >> Avoid locking in the interrupt context to improve latency. Since we >> don't lock in the interrupt context, it is possible that we now could >> race with the DRV_CONTROL register that writes the enable register and >> cleared by the interrupt handler. For fire-n-forget requests, the >> interrupt may be raised as soon as the TCS is triggered and the IRQ >> handler may clear the enable bit before the DRV_CONTROL is read back. >> >> Use the non-sync variant when enabling the TCS register to avoid reading >> back a value that may been cleared because the interrupt handler ran >> immediately after triggering the TCS. >> >> Signed-off-by: Lina Iyer <ilina@codeaurora.org> >> --- > >I have to read this patch carefully. The commit text isn't convincing me >that it is actually safe to make this change. It mostly talks about the >performance improvements and how we need to fix __tcs_trigger(), which >is good, but I was hoping to be convinced that not grabbing the lock >here is safe. > >How do we ensure that drv->tcs_in_use is cleared before we call >tcs_write() and try to look for a free bit? Isn't it possible that we'll >get into a situation where the bitmap is all used up but the hardware >has just received an interrupt and is going to clear out a bit and then >an rpmh write fails with -EBUSY? > If we have a situation where there are no available free bits, we retry and that is part of the function. Since we have only 2 TCSes avaialble to write to the hardware and there could be multiple requests coming in, it is a very common situation. We try and acquire the drv->lock and if there are free TCS available and if available mark them busy and send our requests. If there are none available, we keep retrying. >> drivers/soc/qcom/rpmh-rsc.c | 4 +--- >> 1 file changed, 1 insertion(+), 3 deletions(-) >> >> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c >> index 5ede8d6de3ad..694ba881624e 100644 >> --- a/drivers/soc/qcom/rpmh-rsc.c >> +++ b/drivers/soc/qcom/rpmh-rsc.c >> @@ -242,9 +242,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p) >> write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0); >> write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, i, 0); >> write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i)); >> - spin_lock(&drv->lock); >> clear_bit(i, drv->tcs_in_use); >> - spin_unlock(&drv->lock); >> if (req) >> rpmh_tx_done(req, err); >> } >> @@ -304,7 +302,7 @@ static void __tcs_trigger(struct rsc_drv *drv, int tcs_id) >> enable = TCS_AMC_MODE_ENABLE; >> write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); >> enable |= TCS_AMC_MODE_TRIGGER; >> - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); >> + write_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, enable); >> } >> >> static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
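For readers following along, the retry Lina describes is the -EBUSY loop in the send path. A condensed sketch of rpmh_rsc_send_data() from the driver of this era (message validation and the exact printk are omitted, and details may differ by kernel version):

int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
{
        int ret;

        do {
                /*
                 * tcs_write() takes drv->lock, scans drv->tcs_in_use for a
                 * free TCS and returns -EBUSY when every TCS is marked busy.
                 */
                ret = tcs_write(drv, msg);
                if (ret == -EBUSY) {
                        pr_info_ratelimited("TCS busy, retrying RPMH message send\n");
                        udelay(10);
                }
        } while (ret == -EBUSY);

        return ret;
}

So the -EBUSY case Stephen worries about is not fatal: the caller simply spins until the interrupt handler frees a bit in drv->tcs_in_use.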
Quoting Lina Iyer (2019-07-24 07:52:51) > On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote: > >Quoting Lina Iyer (2019-07-22 14:53:38) > >> Avoid locking in the interrupt context to improve latency. Since we > >> don't lock in the interrupt context, it is possible that we now could > >> race with the DRV_CONTROL register that writes the enable register and > >> cleared by the interrupt handler. For fire-n-forget requests, the > >> interrupt may be raised as soon as the TCS is triggered and the IRQ > >> handler may clear the enable bit before the DRV_CONTROL is read back. > >> > >> Use the non-sync variant when enabling the TCS register to avoid reading > >> back a value that may been cleared because the interrupt handler ran > >> immediately after triggering the TCS. > >> > >> Signed-off-by: Lina Iyer <ilina@codeaurora.org> > >> --- > > > >I have to read this patch carefully. The commit text isn't convincing me > >that it is actually safe to make this change. It mostly talks about the > >performance improvements and how we need to fix __tcs_trigger(), which > >is good, but I was hoping to be convinced that not grabbing the lock > >here is safe. > > > >How do we ensure that drv->tcs_in_use is cleared before we call > >tcs_write() and try to look for a free bit? Isn't it possible that we'll > >get into a situation where the bitmap is all used up but the hardware > >has just received an interrupt and is going to clear out a bit and then > >an rpmh write fails with -EBUSY? > > > If we have a situation where there are no available free bits, we retry > and that is part of the function. Since we have only 2 TCSes avaialble > to write to the hardware and there could be multiple requests coming in, > it is a very common situation. We try and acquire the drv->lock and if > there are free TCS available and if available mark them busy and send > our requests. If there are none available, we keep retrying. > Ok. I wonder if we need some sort of barriers here too, like an smp_mb__after_atomic()? That way we can make sure that the write to clear the bit is seen by another CPU that could be spinning forever waiting for that bit to be cleared? Before this change the spinlock would be guaranteed to make these barriers for us, but now that doesn't seem to be the case. I really hope that this whole thing can be changed to be a mutex though, in which case we can use the bit_wait() API, etc. to put tasks to sleep while RPMh is processing things.
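Concretely, the barrier Stephen suggests would sit right after the clear_bit() in the hunk above. A sketch of what that part of tcs_tx_done() would look like with it (this is the reviewer's suggestion, not the posted patch):

                write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0);
                write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, i, 0);
                write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i));
                clear_bit(i, drv->tcs_in_use);
                /*
                 * clear_bit() is atomic but does not imply a memory barrier;
                 * pair it with smp_mb__after_atomic() so a CPU spinning in
                 * the send path is guaranteed to observe the freed bit.
                 */
                smp_mb__after_atomic();
                if (req)
                        rpmh_tx_done(req, err);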
On Wed, Jul 24 2019 at 13:38 -0600, Stephen Boyd wrote: >Quoting Lina Iyer (2019-07-24 07:52:51) >> On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote: >> >Quoting Lina Iyer (2019-07-22 14:53:38) >> >> Avoid locking in the interrupt context to improve latency. Since we >> >> don't lock in the interrupt context, it is possible that we now could >> >> race with the DRV_CONTROL register that writes the enable register and >> >> cleared by the interrupt handler. For fire-n-forget requests, the >> >> interrupt may be raised as soon as the TCS is triggered and the IRQ >> >> handler may clear the enable bit before the DRV_CONTROL is read back. >> >> >> >> Use the non-sync variant when enabling the TCS register to avoid reading >> >> back a value that may been cleared because the interrupt handler ran >> >> immediately after triggering the TCS. >> >> >> >> Signed-off-by: Lina Iyer <ilina@codeaurora.org> >> >> --- >> > >> >I have to read this patch carefully. The commit text isn't convincing me >> >that it is actually safe to make this change. It mostly talks about the >> >performance improvements and how we need to fix __tcs_trigger(), which >> >is good, but I was hoping to be convinced that not grabbing the lock >> >here is safe. >> > >> >How do we ensure that drv->tcs_in_use is cleared before we call >> >tcs_write() and try to look for a free bit? Isn't it possible that we'll >> >get into a situation where the bitmap is all used up but the hardware >> >has just received an interrupt and is going to clear out a bit and then >> >an rpmh write fails with -EBUSY? >> > >> If we have a situation where there are no available free bits, we retry >> and that is part of the function. Since we have only 2 TCSes avaialble >> to write to the hardware and there could be multiple requests coming in, >> it is a very common situation. We try and acquire the drv->lock and if >> there are free TCS available and if available mark them busy and send >> our requests. If there are none available, we keep retrying. >> > >Ok. I wonder if we need some sort of barriers here too, like an >smp_mb__after_atomic()? That way we can make sure that the write to >clear the bit is seen by another CPU that could be spinning forever >waiting for that bit to be cleared? Before this change the spinlock >would be guaranteed to make these barriers for us, but now that doesn't >seem to be the case. I really hope that this whole thing can be changed >to be a mutex though, in which case we can use the bit_wait() API, etc. >to put tasks to sleep while RPMh is processing things. > We have drivers that want to send requests in atomic contexts and therefore mutex locks would not work. --Lina
Hi, On Wed, Jul 24, 2019 at 1:36 PM Lina Iyer <ilina@codeaurora.org> wrote: > > On Wed, Jul 24 2019 at 13:38 -0600, Stephen Boyd wrote: > >Quoting Lina Iyer (2019-07-24 07:52:51) > >> On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote: > >> >Quoting Lina Iyer (2019-07-22 14:53:38) > >> >> Avoid locking in the interrupt context to improve latency. Since we > >> >> don't lock in the interrupt context, it is possible that we now could > >> >> race with the DRV_CONTROL register that writes the enable register and > >> >> cleared by the interrupt handler. For fire-n-forget requests, the > >> >> interrupt may be raised as soon as the TCS is triggered and the IRQ > >> >> handler may clear the enable bit before the DRV_CONTROL is read back. > >> >> > >> >> Use the non-sync variant when enabling the TCS register to avoid reading > >> >> back a value that may been cleared because the interrupt handler ran > >> >> immediately after triggering the TCS. > >> >> > >> >> Signed-off-by: Lina Iyer <ilina@codeaurora.org> > >> >> --- > >> > > >> >I have to read this patch carefully. The commit text isn't convincing me > >> >that it is actually safe to make this change. It mostly talks about the > >> >performance improvements and how we need to fix __tcs_trigger(), which > >> >is good, but I was hoping to be convinced that not grabbing the lock > >> >here is safe. > >> > > >> >How do we ensure that drv->tcs_in_use is cleared before we call > >> >tcs_write() and try to look for a free bit? Isn't it possible that we'll > >> >get into a situation where the bitmap is all used up but the hardware > >> >has just received an interrupt and is going to clear out a bit and then > >> >an rpmh write fails with -EBUSY? > >> > > >> If we have a situation where there are no available free bits, we retry > >> and that is part of the function. Since we have only 2 TCSes avaialble > >> to write to the hardware and there could be multiple requests coming in, > >> it is a very common situation. We try and acquire the drv->lock and if > >> there are free TCS available and if available mark them busy and send > >> our requests. If there are none available, we keep retrying. > >> > > > >Ok. I wonder if we need some sort of barriers here too, like an > >smp_mb__after_atomic()? That way we can make sure that the write to > >clear the bit is seen by another CPU that could be spinning forever > >waiting for that bit to be cleared? Before this change the spinlock > >would be guaranteed to make these barriers for us, but now that doesn't > >seem to be the case. I really hope that this whole thing can be changed > >to be a mutex though, in which case we can use the bit_wait() API, etc. > >to put tasks to sleep while RPMh is processing things. > > > We have drivers that want to send requests in atomic contexts and > therefore mutex locks would not work. Jumping in without reading all the context, but I saw this fly by and it seemed odd. If I'm way off base then please ignore... Can you give more details? Why are these drivers in atomic contexts? If they are in atomic contexts because they are running in the context of an interrupt then your next patch in the series isn't so correct. Also: when people submit requests in atomic context are they always submitting an asynchronous request? In that case we could (presumably) just use a spinlock to protect the queue of async requests and a mutex for everything else? -Doug
On Wed, Jul 24 2019 at 17:28 -0600, Doug Anderson wrote: >Hi, > >On Wed, Jul 24, 2019 at 1:36 PM Lina Iyer <ilina@codeaurora.org> wrote: >> >> On Wed, Jul 24 2019 at 13:38 -0600, Stephen Boyd wrote: >> >Quoting Lina Iyer (2019-07-24 07:52:51) >> >> On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote: >> >> >Quoting Lina Iyer (2019-07-22 14:53:38) >> >> >> Avoid locking in the interrupt context to improve latency. Since we >> >> >> don't lock in the interrupt context, it is possible that we now could >> >> >> race with the DRV_CONTROL register that writes the enable register and >> >> >> cleared by the interrupt handler. For fire-n-forget requests, the >> >> >> interrupt may be raised as soon as the TCS is triggered and the IRQ >> >> >> handler may clear the enable bit before the DRV_CONTROL is read back. >> >> >> >> >> >> Use the non-sync variant when enabling the TCS register to avoid reading >> >> >> back a value that may been cleared because the interrupt handler ran >> >> >> immediately after triggering the TCS. >> >> >> >> >> >> Signed-off-by: Lina Iyer <ilina@codeaurora.org> >> >> >> --- >> >> > >> >> >I have to read this patch carefully. The commit text isn't convincing me >> >> >that it is actually safe to make this change. It mostly talks about the >> >> >performance improvements and how we need to fix __tcs_trigger(), which >> >> >is good, but I was hoping to be convinced that not grabbing the lock >> >> >here is safe. >> >> > >> >> >How do we ensure that drv->tcs_in_use is cleared before we call >> >> >tcs_write() and try to look for a free bit? Isn't it possible that we'll >> >> >get into a situation where the bitmap is all used up but the hardware >> >> >has just received an interrupt and is going to clear out a bit and then >> >> >an rpmh write fails with -EBUSY? >> >> > >> >> If we have a situation where there are no available free bits, we retry >> >> and that is part of the function. Since we have only 2 TCSes avaialble >> >> to write to the hardware and there could be multiple requests coming in, >> >> it is a very common situation. We try and acquire the drv->lock and if >> >> there are free TCS available and if available mark them busy and send >> >> our requests. If there are none available, we keep retrying. >> >> >> > >> >Ok. I wonder if we need some sort of barriers here too, like an >> >smp_mb__after_atomic()? That way we can make sure that the write to >> >clear the bit is seen by another CPU that could be spinning forever >> >waiting for that bit to be cleared? Before this change the spinlock >> >would be guaranteed to make these barriers for us, but now that doesn't >> >seem to be the case. I really hope that this whole thing can be changed >> >to be a mutex though, in which case we can use the bit_wait() API, etc. >> >to put tasks to sleep while RPMh is processing things. >> > >> We have drivers that want to send requests in atomic contexts and >> therefore mutex locks would not work. > >Jumping in without reading all the context, but I saw this fly by and >it seemed odd. If I'm way off base then please ignore... > >Can you give more details? Why are these drivers in atomic contexts? >If they are in atomic contexts because they are running in the context >of an interrupt then your next patch in the series isn't so correct. > >Also: when people submit requests in atomic context are they always >submitting an asynchronous request? In that case we could >(presumably) just use a spinlock to protect the queue of async >requests and a mutex for everything else? 
Yes, drivers only make async requests in interrupt contexts. They cannot use the sync variants. The async and sync variants are streamlined into the same code path. Hence the use of spinlocks instead of mutexes through the critical path. --Lina
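For context on how the async and sync variants share one code path: both end up in the same __rpmh_write() helper in rpmh.c, which is why the locking underneath has to work from atomic context. A rough sketch, condensed from the rpmh core of this era (argument checks and the request-caching path are omitted, so treat the bodies as an approximation):

/* async: callable from atomic context, returns once the request is queued */
int rpmh_write_async(const struct device *dev, enum rpmh_state state,
                     const struct tcs_cmd *cmd, u32 n)
{
        struct rpmh_request *rpm_msg;

        rpm_msg = __get_rpmh_msg_async(state, cmd, n);
        if (IS_ERR(rpm_msg))
                return PTR_ERR(rpm_msg);

        return __rpmh_write(dev, state, rpm_msg);
}

/* sync: same send path, but the caller then blocks on a completion that
 * rpmh_tx_done() signals from the tx-done interrupt */
int rpmh_write(const struct device *dev, enum rpmh_state state,
               const struct tcs_cmd *cmd, u32 n)
{
        DECLARE_COMPLETION_ONSTACK(compl);
        DEFINE_RPMH_MSG_ONSTACK(dev, state, &compl, rpm_msg);
        int ret;

        memcpy(rpm_msg.cmd, cmd, n * sizeof(*cmd));
        rpm_msg.msg.num_cmds = n;

        ret = __rpmh_write(dev, state, &rpm_msg);
        if (ret)
                return ret;

        ret = wait_for_completion_timeout(&compl, RPMH_TIMEOUT_MS);
        return (ret > 0) ? 0 : -ETIMEDOUT;
}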
Hi, On Thu, Jul 25, 2019 at 8:18 AM Lina Iyer <ilina@codeaurora.org> wrote: > > On Wed, Jul 24 2019 at 17:28 -0600, Doug Anderson wrote: > >Hi, > > > >On Wed, Jul 24, 2019 at 1:36 PM Lina Iyer <ilina@codeaurora.org> wrote: > >> > >> On Wed, Jul 24 2019 at 13:38 -0600, Stephen Boyd wrote: > >> >Quoting Lina Iyer (2019-07-24 07:52:51) > >> >> On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote: > >> >> >Quoting Lina Iyer (2019-07-22 14:53:38) > >> >> >> Avoid locking in the interrupt context to improve latency. Since we > >> >> >> don't lock in the interrupt context, it is possible that we now could > >> >> >> race with the DRV_CONTROL register that writes the enable register and > >> >> >> cleared by the interrupt handler. For fire-n-forget requests, the > >> >> >> interrupt may be raised as soon as the TCS is triggered and the IRQ > >> >> >> handler may clear the enable bit before the DRV_CONTROL is read back. > >> >> >> > >> >> >> Use the non-sync variant when enabling the TCS register to avoid reading > >> >> >> back a value that may been cleared because the interrupt handler ran > >> >> >> immediately after triggering the TCS. > >> >> >> > >> >> >> Signed-off-by: Lina Iyer <ilina@codeaurora.org> > >> >> >> --- > >> >> > > >> >> >I have to read this patch carefully. The commit text isn't convincing me > >> >> >that it is actually safe to make this change. It mostly talks about the > >> >> >performance improvements and how we need to fix __tcs_trigger(), which > >> >> >is good, but I was hoping to be convinced that not grabbing the lock > >> >> >here is safe. > >> >> > > >> >> >How do we ensure that drv->tcs_in_use is cleared before we call > >> >> >tcs_write() and try to look for a free bit? Isn't it possible that we'll > >> >> >get into a situation where the bitmap is all used up but the hardware > >> >> >has just received an interrupt and is going to clear out a bit and then > >> >> >an rpmh write fails with -EBUSY? > >> >> > > >> >> If we have a situation where there are no available free bits, we retry > >> >> and that is part of the function. Since we have only 2 TCSes avaialble > >> >> to write to the hardware and there could be multiple requests coming in, > >> >> it is a very common situation. We try and acquire the drv->lock and if > >> >> there are free TCS available and if available mark them busy and send > >> >> our requests. If there are none available, we keep retrying. > >> >> > >> > > >> >Ok. I wonder if we need some sort of barriers here too, like an > >> >smp_mb__after_atomic()? That way we can make sure that the write to > >> >clear the bit is seen by another CPU that could be spinning forever > >> >waiting for that bit to be cleared? Before this change the spinlock > >> >would be guaranteed to make these barriers for us, but now that doesn't > >> >seem to be the case. I really hope that this whole thing can be changed > >> >to be a mutex though, in which case we can use the bit_wait() API, etc. > >> >to put tasks to sleep while RPMh is processing things. > >> > > >> We have drivers that want to send requests in atomic contexts and > >> therefore mutex locks would not work. > > > >Jumping in without reading all the context, but I saw this fly by and > >it seemed odd. If I'm way off base then please ignore... > > > >Can you give more details? Why are these drivers in atomic contexts? > >If they are in atomic contexts because they are running in the context > >of an interrupt then your next patch in the series isn't so correct. 
> > > >Also: when people submit requests in atomic context are they always > >submitting an asynchronous request? In that case we could > >(presumably) just use a spinlock to protect the queue of async > >requests and a mutex for everything else? > Yes, drivers only make async requests in interrupt contexts. So correct me if I'm off base, but you're saying that drivers make requests in interrupt contexts even after your whole series and that's why you're using spinlocks instead of mutexes. ...but then in patch #3 in your series you say: > Switch over from using _irqsave/_irqrestore variants since we no longer > race with a lock from the interrupt handler. Those seem like contradictions. What happens if someone is holding the lock, then an interrupt fires, then the interrupt routine wants to do an async request. Boom, right? > They cannot > use the sync variants. The async and sync variants are streamlined into > the same code path. Hence the use of spinlocks instead of mutexes > through the critical path. I will perhaps defer to Stephen who was the one thinking that a mutex would be a big win here. ...but if a mutex truly is a big win then it doesn't seem like it'd be that hard to have a linked list (protected by a spinlock) and then some type of async worker that: 1. Grab the spinlock, pops one element off the linked list, release the spinlock 2. Grab the mutex, send the one element, release the mutex 3. Go back to step #1. This will keep the spinlock held for as little time as possible. -Doug
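A rough illustration of the split Doug outlines, shown purely as a hypothetical sketch: none of these symbols (queued_req, async_q, send_one_request, and so on) exist in the rpmh code; it only shows the "spinlock for the queue, mutex for the send" shape.

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <soc/qcom/tcs.h>

struct queued_req {                             /* hypothetical wrapper */
        struct list_head node;
        const struct tcs_request *msg;
};

static void send_one_request(const struct tcs_request *msg);   /* hypothetical */

static LIST_HEAD(async_q);
static DEFINE_SPINLOCK(async_q_lock);           /* protects async_q only */
static DEFINE_MUTEX(send_lock);                 /* serializes the actual sends */

static void async_send_work(struct work_struct *work)
{
        struct queued_req *req;

        for (;;) {
                /* 1. pop one element off the list under the spinlock */
                spin_lock_irq(&async_q_lock);
                req = list_first_entry_or_null(&async_q, struct queued_req, node);
                if (req)
                        list_del(&req->node);
                spin_unlock_irq(&async_q_lock);
                if (!req)
                        break;

                /* 2. send that one element under the mutex (may sleep) */
                mutex_lock(&send_lock);
                send_one_request(req->msg);
                mutex_unlock(&send_lock);
                kfree(req);
                /* 3. loop back to step 1 until the list is empty */
        }
}
static DECLARE_WORK(async_send_worker, async_send_work);

/* enqueue from atomic/IRQ context: only the spinlock is taken here */
static void queue_async_request(struct queued_req *req)
{
        unsigned long flags;

        spin_lock_irqsave(&async_q_lock, flags);
        list_add_tail(&req->node, &async_q);
        spin_unlock_irqrestore(&async_q_lock, flags);
        schedule_work(&async_send_worker);
}

The spinlock is only ever held for a list operation, so atomic callers never wait on a sleeping sender.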
On Thu, Jul 25 2019 at 09:44 -0600, Doug Anderson wrote: >Hi, > >On Thu, Jul 25, 2019 at 8:18 AM Lina Iyer <ilina@codeaurora.org> wrote: >> >> On Wed, Jul 24 2019 at 17:28 -0600, Doug Anderson wrote: >> >Hi, >> > >> >On Wed, Jul 24, 2019 at 1:36 PM Lina Iyer <ilina@codeaurora.org> wrote: >> >> >> >> On Wed, Jul 24 2019 at 13:38 -0600, Stephen Boyd wrote: >> >> >Quoting Lina Iyer (2019-07-24 07:52:51) >> >> >> On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote: >> >> >> >Quoting Lina Iyer (2019-07-22 14:53:38) >> >> >> >> Avoid locking in the interrupt context to improve latency. Since we >> >> >> >> don't lock in the interrupt context, it is possible that we now could >> >> >> >> race with the DRV_CONTROL register that writes the enable register and >> >> >> >> cleared by the interrupt handler. For fire-n-forget requests, the >> >> >> >> interrupt may be raised as soon as the TCS is triggered and the IRQ >> >> >> >> handler may clear the enable bit before the DRV_CONTROL is read back. >> >> >> >> >> >> >> >> Use the non-sync variant when enabling the TCS register to avoid reading >> >> >> >> back a value that may been cleared because the interrupt handler ran >> >> >> >> immediately after triggering the TCS. >> >> >> >> >> >> >> >> Signed-off-by: Lina Iyer <ilina@codeaurora.org> >> >> >> >> --- >> >> >> > >> >> >> >I have to read this patch carefully. The commit text isn't convincing me >> >> >> >that it is actually safe to make this change. It mostly talks about the >> >> >> >performance improvements and how we need to fix __tcs_trigger(), which >> >> >> >is good, but I was hoping to be convinced that not grabbing the lock >> >> >> >here is safe. >> >> >> > >> >> >> >How do we ensure that drv->tcs_in_use is cleared before we call >> >> >> >tcs_write() and try to look for a free bit? Isn't it possible that we'll >> >> >> >get into a situation where the bitmap is all used up but the hardware >> >> >> >has just received an interrupt and is going to clear out a bit and then >> >> >> >an rpmh write fails with -EBUSY? >> >> >> > >> >> >> If we have a situation where there are no available free bits, we retry >> >> >> and that is part of the function. Since we have only 2 TCSes avaialble >> >> >> to write to the hardware and there could be multiple requests coming in, >> >> >> it is a very common situation. We try and acquire the drv->lock and if >> >> >> there are free TCS available and if available mark them busy and send >> >> >> our requests. If there are none available, we keep retrying. >> >> >> >> >> > >> >> >Ok. I wonder if we need some sort of barriers here too, like an >> >> >smp_mb__after_atomic()? That way we can make sure that the write to >> >> >clear the bit is seen by another CPU that could be spinning forever >> >> >waiting for that bit to be cleared? Before this change the spinlock >> >> >would be guaranteed to make these barriers for us, but now that doesn't >> >> >seem to be the case. I really hope that this whole thing can be changed >> >> >to be a mutex though, in which case we can use the bit_wait() API, etc. >> >> >to put tasks to sleep while RPMh is processing things. >> >> > >> >> We have drivers that want to send requests in atomic contexts and >> >> therefore mutex locks would not work. >> > >> >Jumping in without reading all the context, but I saw this fly by and >> >it seemed odd. If I'm way off base then please ignore... >> > >> >Can you give more details? Why are these drivers in atomic contexts? 
>> >If they are in atomic contexts because they are running in the context >> >of an interrupt then your next patch in the series isn't so correct. >> > >> >Also: when people submit requests in atomic context are they always >> >submitting an asynchronous request? In that case we could >> >(presumably) just use a spinlock to protect the queue of async >> >requests and a mutex for everything else? >> Yes, drivers only make async requests in interrupt contexts. > >So correct me if I'm off base, but you're saying that drivers make >requests in interrupt contexts even after your whole series and that's >why you're using spinlocks instead of mutexes. ...but then in patch >#3 in your series you say: > >> Switch over from using _irqsave/_irqrestore variants since we no longer >> race with a lock from the interrupt handler. > >Those seem like contradictions. What happens if someone is holding >the lock, then an interrupt fires, then the interrupt routine wants to >do an async request. Boom, right? > The interrupt routine is handled by the driver and only completes the waiting object (for sync requests). No other requests can be made from our interrupt handler. >> They cannot >> use the sync variants. The async and sync variants are streamlined into >> the same code path. Hence the use of spinlocks instead of mutexes >> through the critical path. > >I will perhaps defer to Stephen who was the one thinking that a mutex >would be a big win here. ...but if a mutex truly is a big win then it >doesn't seem like it'd be that hard to have a linked list (protected >by a spinlock) and then some type of async worker that: > >1. Grab the spinlock, pops one element off the linked list, release the spinlock >2. Grab the mutex, send the one element, release the mutex This would be a problem when the request is made from an irq handler. We want to keep things simple and quick. >3. Go back to step #1. > >This will keep the spinlock held for as little time as possible.
Quoting Lina Iyer (2019-07-29 12:01:39) > On Thu, Jul 25 2019 at 09:44 -0600, Doug Anderson wrote: > >On Thu, Jul 25, 2019 at 8:18 AM Lina Iyer <ilina@codeaurora.org> wrote: > >> > >> On Wed, Jul 24 2019 at 17:28 -0600, Doug Anderson wrote: > >> > > >> >Jumping in without reading all the context, but I saw this fly by and > >> >it seemed odd. If I'm way off base then please ignore... > >> > > >> >Can you give more details? Why are these drivers in atomic contexts? > >> >If they are in atomic contexts because they are running in the context > >> >of an interrupt then your next patch in the series isn't so correct. > >> > > >> >Also: when people submit requests in atomic context are they always > >> >submitting an asynchronous request? In that case we could > >> >(presumably) just use a spinlock to protect the queue of async > >> >requests and a mutex for everything else? > >> Yes, drivers only make async requests in interrupt contexts. > > > >So correct me if I'm off base, but you're saying that drivers make > >requests in interrupt contexts even after your whole series and that's > >why you're using spinlocks instead of mutexes. ...but then in patch > >#3 in your series you say: > > > >> Switch over from using _irqsave/_irqrestore variants since we no longer > >> race with a lock from the interrupt handler. > > > >Those seem like contradictions. What happens if someone is holding > >the lock, then an interrupt fires, then the interrupt routine wants to > >do an async request. Boom, right? > > > The interrupt routine is handled by the driver and only completes the > waiting object (for sync requests). No other requests can be made from > our interrupt handler. The question is more if an interrupt handler for some consumer driver can call into this code and make an async request. Is that possible? If so, the concern is that the driver's interrupt handler can run and try to grab the lock on a CPU that already holds the lock in a non-irq disabled context. This would lead to a deadlock while the CPU servicing the interrupt waits for the lock held by another task that's been interrupted. > > >> They cannot > >> use the sync variants. The async and sync variants are streamlined into > >> the same code path. Hence the use of spinlocks instead of mutexes > >> through the critical path. > > > >I will perhaps defer to Stephen who was the one thinking that a mutex > >would be a big win here. ...but if a mutex truly is a big win then it > >doesn't seem like it'd be that hard to have a linked list (protected > >by a spinlock) and then some type of async worker that: > > > >1. Grab the spinlock, pops one element off the linked list, release the spinlock > >2. Grab the mutex, send the one element, release the mutex > This would be a problem when the request is made from an irq handler. We > want to keep things simple and quick. > Is the problem that you want to use RPMh code from deep within the idle thread? As part of some sort of CPU idle driver for qcom platforms? The way this discussion is going it sounds like nothing is standing in the way of a design that use a kthread to pump messages off a queue of messages that is protected by a spinlock. The kthread would be woken up by the sync or async write to continue to pump messages out until the queue is empty.
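To make Stephen's kthread idea concrete, a minimal sketch of the shape he describes (all names here are hypothetical, not driver code): the queue is spinlock-protected so sync and async writers, including atomic ones, can enqueue and wake the thread, and the thread pumps messages until the queue is empty.

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct queued_msg {                     /* hypothetical wrapper */
        struct list_head node;
        /* payload / completion pointer would live here */
};

static void send_to_rsc(struct queued_msg *msg);        /* hypothetical, may sleep */

static LIST_HEAD(msg_q);
static DEFINE_SPINLOCK(msg_q_lock);     /* protects msg_q */
static DECLARE_WAIT_QUEUE_HEAD(msg_q_wait);

/* started once with: kthread_run(rpmh_pump_thread, NULL, "rpmh-pump"); */
static int rpmh_pump_thread(void *data)
{
        struct queued_msg *msg;

        while (!kthread_should_stop()) {
                /* racy emptiness check is fine: the pop below rechecks under the lock */
                wait_event_interruptible(msg_q_wait,
                                         !list_empty(&msg_q) || kthread_should_stop());

                for (;;) {      /* pump messages until the queue is empty */
                        spin_lock_irq(&msg_q_lock);
                        msg = list_first_entry_or_null(&msg_q, struct queued_msg, node);
                        if (msg)
                                list_del(&msg->node);
                        spin_unlock_irq(&msg_q_lock);
                        if (!msg)
                                break;
                        send_to_rsc(msg);
                }
        }
        return 0;
}

/* sync and async writers, including atomic ones, just enqueue and wake */
static void enqueue_and_kick(struct queued_msg *msg)
{
        unsigned long flags;

        spin_lock_irqsave(&msg_q_lock, flags);
        list_add_tail(&msg->node, &msg_q);
        spin_unlock_irqrestore(&msg_q_lock, flags);
        wake_up(&msg_q_wait);
}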
On Mon, Jul 29 2019 at 14:56 -0600, Stephen Boyd wrote: >Quoting Lina Iyer (2019-07-29 12:01:39) >> On Thu, Jul 25 2019 at 09:44 -0600, Doug Anderson wrote: >> >On Thu, Jul 25, 2019 at 8:18 AM Lina Iyer <ilina@codeaurora.org> wrote: >> >> >> >> On Wed, Jul 24 2019 at 17:28 -0600, Doug Anderson wrote: >> >> > >> >> >Jumping in without reading all the context, but I saw this fly by and >> >> >it seemed odd. If I'm way off base then please ignore... >> >> > >> >> >Can you give more details? Why are these drivers in atomic contexts? >> >> >If they are in atomic contexts because they are running in the context >> >> >of an interrupt then your next patch in the series isn't so correct. >> >> > >> >> >Also: when people submit requests in atomic context are they always >> >> >submitting an asynchronous request? In that case we could >> >> >(presumably) just use a spinlock to protect the queue of async >> >> >requests and a mutex for everything else? >> >> Yes, drivers only make async requests in interrupt contexts. >> > >> >So correct me if I'm off base, but you're saying that drivers make >> >requests in interrupt contexts even after your whole series and that's >> >why you're using spinlocks instead of mutexes. ...but then in patch >> >#3 in your series you say: >> > >> >> Switch over from using _irqsave/_irqrestore variants since we no longer >> >> race with a lock from the interrupt handler. >> > >> >Those seem like contradictions. What happens if someone is holding >> >the lock, then an interrupt fires, then the interrupt routine wants to >> >do an async request. Boom, right? >> > >> The interrupt routine is handled by the driver and only completes the >> waiting object (for sync requests). No other requests can be made from >> our interrupt handler. > >The question is more if an interrupt handler for some consumer driver >can call into this code and make an async request. Is that possible? If >so, the concern is that the driver's interrupt handler can run and try >to grab the lock on a CPU that already holds the lock in a non-irq >disabled context. This would lead to a deadlock while the CPU servicing >the interrupt waits for the lock held by another task that's been >interrupted. > Hmm.. this patch will cause that issue, since we remove the irqsave aspects of the locking. Let me give that a thought. >> >> >> They cannot >> >> use the sync variants. The async and sync variants are streamlined into >> >> the same code path. Hence the use of spinlocks instead of mutexes >> >> through the critical path. >> > >> >I will perhaps defer to Stephen who was the one thinking that a mutex >> >would be a big win here. ...but if a mutex truly is a big win then it >> >doesn't seem like it'd be that hard to have a linked list (protected >> >by a spinlock) and then some type of async worker that: >> > >> >1. Grab the spinlock, pops one element off the linked list, release the spinlock >> >2. Grab the mutex, send the one element, release the mutex >> This would be a problem when the request is made from an irq handler. We >> want to keep things simple and quick. >> > >Is the problem that you want to use RPMh code from deep within the idle >thread? As part of some sort of CPU idle driver for qcom platforms? The >way this discussion is going it sounds like nothing is standing in the >way of a design that use a kthread to pump messages off a queue of >messages that is protected by a spinlock. The kthread would be woken up >by the sync or async write to continue to pump messages out until the >queue is empty. 
>
While it is true that we want to use RPMH in the cpuidle driver, it's just that we had threads and all of that in our downstream 845 code, and it complicated the whole setup a bit too much for our liking and did not help debugging either. I would rather not get all that back into the driver.
--Lina
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 5ede8d6de3ad..694ba881624e 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -242,9 +242,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
 		write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0);
 		write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, i, 0);
 		write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i));
-		spin_lock(&drv->lock);
 		clear_bit(i, drv->tcs_in_use);
-		spin_unlock(&drv->lock);
 		if (req)
 			rpmh_tx_done(req, err);
 	}
@@ -304,7 +302,7 @@ static void __tcs_trigger(struct rsc_drv *drv, int tcs_id)
 	enable = TCS_AMC_MODE_ENABLE;
 	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
 	enable |= TCS_AMC_MODE_TRIGGER;
-	write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
+	write_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, enable);
 }
 
 static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
Avoid locking in the interrupt context to improve latency. Since we
don't lock in the interrupt context, it is possible that we could now
race on the DRV_CONTROL register, which writes the enable register and
is cleared by the interrupt handler. For fire-n-forget requests, the
interrupt may be raised as soon as the TCS is triggered and the IRQ
handler may clear the enable bit before the DRV_CONTROL is read back.

Use the non-sync variant when enabling the TCS register to avoid reading
back a value that may have been cleared because the interrupt handler ran
immediately after triggering the TCS.

Signed-off-by: Lina Iyer <ilina@codeaurora.org>
---
 drivers/soc/qcom/rpmh-rsc.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)