Message ID | 20231201014101.15802-3-quic_parellan@quicinc.com |
---|---|
State | New, archived |
Series | Stabilize use of vblank_refcount |
On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote: > Add a missing mutex lock to control vblank irq. Thus prevent race > conditions when registering/unregistering the irq callback. > I'm guessing that the mutex is needed because vblank_refcount, while being an atomic_t, doesn't actually provide any protection during concurrency? I also tried to follow the calls backwards, but I'm uncertain how you end up here concurrently. When wrapped in proper mutual exclusion, can't vblank_refcount just be turned into an "int"...given that you're not actually able to rely on it's atomic behavior anyways... So, please rewrite the commit message with a detailed description of how the concurrency happens, and please review if vblank_refcount should be an atomic at all... > v2: Slightly changed wording of commit message > v3: Mistakenly did not change wording in last version. It is done now. > > Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com> > --- > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 6 ++++++ > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h | 6 ++++++ > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c | 2 ++ > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c | 2 ++ > 4 files changed, 16 insertions(+) > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c > index 1cf7ff6caff4e..19ff7d1d5ccad 100644 > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c > @@ -119,6 +119,8 @@ enum dpu_enc_rc_states { > * Virtual encoder defers as much as possible to the physical encoders. > * Virtual encoder registers itself with the DRM Framework as the encoder. > * @base: drm_encoder base class for registration with DRM > + * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder > + * for IRQ purposes I think this protects vblank_refcount, so state that instead of the vague "for IRQ purposes". > * @enc_spinlock: Virtual-Encoder-Wide Spin Lock for IRQ purposes > * @enabled: True if the encoder is active, protected by enc_lock > * @num_phys_encs: Actual number of physical encoders contained. > @@ -166,6 +168,7 @@ enum dpu_enc_rc_states { > */ > struct dpu_encoder_virt { > struct drm_encoder base; > + struct mutex vblank_ctl_lock; > spinlock_t enc_spinlock; > > bool enabled; > @@ -2255,6 +2258,7 @@ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc, > phys_params.dpu_kms = dpu_kms; > phys_params.parent = &dpu_enc->base; > phys_params.enc_spinlock = &dpu_enc->enc_spinlock; > + phys_params.vblank_ctl_lock = &dpu_enc->vblank_ctl_lock; > > WARN_ON(disp_info->num_of_h_tiles < 1); > > @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev, > dpu_enc->enabled = false; > mutex_init(&dpu_enc->enc_lock); > mutex_init(&dpu_enc->rc_lock); > + mutex_init(&dpu_enc->vblank_ctl_lock); Is this somehow propagated to multiple different dpu_encoder_phys instances, or why do you need to initialize it here and pass the pointer through 2 different intermediate structures before assigning it to phys_enc->vblank_ctl_lock below? > > ret = dpu_encoder_setup_display(dpu_enc, dpu_kms, disp_info); > if (ret) > @@ -2495,6 +2500,7 @@ void dpu_encoder_phys_init(struct dpu_encoder_phys *phys_enc, > phys_enc->dpu_kms = p->dpu_kms; > phys_enc->split_role = p->split_role; > phys_enc->enc_spinlock = p->enc_spinlock; > + phys_enc->vblank_ctl_lock = p->vblank_ctl_lock; Could you not just mutex_init() the one and only vblank_ctl_lock here? 
> phys_enc->enable_state = DPU_ENC_DISABLED; > > atomic_set(&phys_enc->vblank_refcount, 0); > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h > index 6f04c3d56e77c..5691bf6b82ee6 100644 > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h > @@ -155,6 +155,8 @@ enum dpu_intr_idx { > * @hw_wb: Hardware interface to the wb registers > * @dpu_kms: Pointer to the dpu_kms top level > * @cached_mode: DRM mode cached at mode_set time, acted on in enable > + * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder > + * for IRQ purposes Same here. > * @enabled: Whether the encoder has enabled and running a mode > * @split_role: Role to play in a split-panel configuration > * @intf_mode: Interface mode > @@ -183,6 +185,7 @@ struct dpu_encoder_phys { > struct dpu_hw_wb *hw_wb; > struct dpu_kms *dpu_kms; > struct drm_display_mode cached_mode; > + struct mutex *vblank_ctl_lock; > enum dpu_enc_split_role split_role; > enum dpu_intf_mode intf_mode; > spinlock_t *enc_spinlock; > @@ -253,6 +256,8 @@ struct dpu_encoder_phys_cmd { > * @split_role: Role to play in a split-panel configuration > * @hw_intf: Hardware interface to the intf registers > * @hw_wb: Hardware interface to the wb registers > + * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder > + * for IRQ purposes And here... Regards, Bjorn > * @enc_spinlock: Virtual-Encoder-Wide Spin Lock for IRQ purposes > */ > struct dpu_enc_phys_init_params { > @@ -261,6 +266,7 @@ struct dpu_enc_phys_init_params { > enum dpu_enc_split_role split_role; > struct dpu_hw_intf *hw_intf; > struct dpu_hw_wb *hw_wb; > + struct mutex *vblank_ctl_lock; > spinlock_t *enc_spinlock; > }; > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c > index 25babfe1f001a..dcf1f6a18ad6e 100644 > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c > @@ -244,6 +244,7 @@ static int dpu_encoder_phys_cmd_control_vblank_irq( > return -EINVAL; > } > > + mutex_lock(phys_enc->vblank_ctl_lock); > refcount = atomic_read(&phys_enc->vblank_refcount); > > /* Slave encoders don't report vblank */ > @@ -275,6 +276,7 @@ static int dpu_encoder_phys_cmd_control_vblank_irq( > } > > end: > + mutex_unlock(phys_enc->vblank_ctl_lock); > if (ret) { > DRM_ERROR("vblank irq err id:%u pp:%d ret:%d, enable %s/%d\n", > DRMID(phys_enc->parent), > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c > index 8e905d7267f9f..87bb49763785d 100644 > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c > @@ -364,6 +364,7 @@ static int dpu_encoder_phys_vid_control_vblank_irq( > int ret = 0; > int refcount; > > + mutex_lock(phys_enc->vblank_ctl_lock); > refcount = atomic_read(&phys_enc->vblank_refcount); > > /* Slave encoders don't report vblank */ > @@ -394,6 +395,7 @@ static int dpu_encoder_phys_vid_control_vblank_irq( > } > > end: > + mutex_unlock(phys_enc->vblank_ctl_lock); > if (ret) { > DRM_ERROR("failed: id:%u intf:%d ret:%d enable:%d refcnt:%d\n", > DRMID(phys_enc->parent), > -- > 2.41.0 > >
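To make the concern concrete: an atomic_t only makes each individual operation atomic, so a read of vblank_refcount followed by a decision to register or unregister the callback is still a check-then-act sequence that two callers can interleave. Once the whole sequence runs under vblank_ctl_lock, the counter no longer has to be atomic, which is what Bjorn is hinting at. A minimal sketch of that alternative, using hypothetical fake_* names rather than the real driver code:

/*
 * Illustrative only: with the mutex covering the whole read-decide-update
 * sequence, a plain int is sufficient.
 */
struct fake_phys_enc {
	struct mutex *vblank_ctl_lock;	/* shared with the other phys encoder */
	int vblank_refcount;
};

static int fake_control_vblank_irq(struct fake_phys_enc *phys, bool enable)
{
	int ret = 0;

	mutex_lock(phys->vblank_ctl_lock);
	if (enable && phys->vblank_refcount++ == 0)
		ret = fake_register_vblank_callback(phys);	/* hypothetical helper */
	else if (!enable && --phys->vblank_refcount == 0)
		ret = fake_unregister_vblank_callback(phys);	/* hypothetical helper */
	mutex_unlock(phys->vblank_ctl_lock);

	return ret;
}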
On Fri, 1 Dec 2023 at 05:47, Bjorn Andersson <quic_bjorande@quicinc.com> wrote: > > On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote: > > Add a missing mutex lock to control vblank irq. Thus prevent race > > conditions when registering/unregistering the irq callback. > > > > I'm guessing that the mutex is needed because vblank_refcount, while > being an atomic_t, doesn't actually provide any protection during > concurrency? > > I also tried to follow the calls backwards, but I'm uncertain how you > end up here concurrently. > > When wrapped in proper mutual exclusion, can't vblank_refcount just be > turned into an "int"...given that you're not actually able to rely on > it's atomic behavior anyways... > > > So, please rewrite the commit message with a detailed description of how > the concurrency happens, and please review if vblank_refcount should be > an atomic at all... > > > v2: Slightly changed wording of commit message > > v3: Mistakenly did not change wording in last version. It is done now. > > > > Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com> > > --- > > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 6 ++++++ > > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h | 6 ++++++ > > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c | 2 ++ > > drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c | 2 ++ > > 4 files changed, 16 insertions(+) > > > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c > > index 1cf7ff6caff4e..19ff7d1d5ccad 100644 > > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c > > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c > > @@ -119,6 +119,8 @@ enum dpu_enc_rc_states { > > * Virtual encoder defers as much as possible to the physical encoders. > > * Virtual encoder registers itself with the DRM Framework as the encoder. > > * @base: drm_encoder base class for registration with DRM > > + * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder > > + * for IRQ purposes > > I think this protects vblank_refcount, so state that instead of the > vague "for IRQ purposes". > > > * @enc_spinlock: Virtual-Encoder-Wide Spin Lock for IRQ purposes > > * @enabled: True if the encoder is active, protected by enc_lock > > * @num_phys_encs: Actual number of physical encoders contained. > > @@ -166,6 +168,7 @@ enum dpu_enc_rc_states { > > */ > > struct dpu_encoder_virt { > > struct drm_encoder base; > > + struct mutex vblank_ctl_lock; > > spinlock_t enc_spinlock; > > > > bool enabled; > > @@ -2255,6 +2258,7 @@ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc, > > phys_params.dpu_kms = dpu_kms; > > phys_params.parent = &dpu_enc->base; > > phys_params.enc_spinlock = &dpu_enc->enc_spinlock; > > + phys_params.vblank_ctl_lock = &dpu_enc->vblank_ctl_lock; > > > > WARN_ON(disp_info->num_of_h_tiles < 1); > > > > @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev, > > dpu_enc->enabled = false; > > mutex_init(&dpu_enc->enc_lock); > > mutex_init(&dpu_enc->rc_lock); > > + mutex_init(&dpu_enc->vblank_ctl_lock); > > Is this somehow propagated to multiple different dpu_encoder_phys > instances, or why do you need to initialize it here and pass the pointer > through 2 different intermediate structures before assigning it to > phys_enc->vblank_ctl_lock below? Yes, there can be two phys_enc instances for a single encoder, so this part is fine. 
> > > > > ret = dpu_encoder_setup_display(dpu_enc, dpu_kms, disp_info); > > if (ret) > > @@ -2495,6 +2500,7 @@ void dpu_encoder_phys_init(struct dpu_encoder_phys *phys_enc, > > phys_enc->dpu_kms = p->dpu_kms; > > phys_enc->split_role = p->split_role; > > phys_enc->enc_spinlock = p->enc_spinlock; > > + phys_enc->vblank_ctl_lock = p->vblank_ctl_lock; > > Could you not just mutex_init() the one and only vblank_ctl_lock here? > > > phys_enc->enable_state = DPU_ENC_DISABLED; > > > > atomic_set(&phys_enc->vblank_refcount, 0); > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h > > index 6f04c3d56e77c..5691bf6b82ee6 100644 > > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h > > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h > > @@ -155,6 +155,8 @@ enum dpu_intr_idx { > > * @hw_wb: Hardware interface to the wb registers > > * @dpu_kms: Pointer to the dpu_kms top level > > * @cached_mode: DRM mode cached at mode_set time, acted on in enable > > + * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder > > + * for IRQ purposes > > Same here. > > > * @enabled: Whether the encoder has enabled and running a mode > > * @split_role: Role to play in a split-panel configuration > > * @intf_mode: Interface mode > > @@ -183,6 +185,7 @@ struct dpu_encoder_phys { > > struct dpu_hw_wb *hw_wb; > > struct dpu_kms *dpu_kms; > > struct drm_display_mode cached_mode; > > + struct mutex *vblank_ctl_lock; > > enum dpu_enc_split_role split_role; > > enum dpu_intf_mode intf_mode; > > spinlock_t *enc_spinlock; > > @@ -253,6 +256,8 @@ struct dpu_encoder_phys_cmd { > > * @split_role: Role to play in a split-panel configuration > > * @hw_intf: Hardware interface to the intf registers > > * @hw_wb: Hardware interface to the wb registers > > + * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder > > + * for IRQ purposes > > And here... 
> > Regards, > Bjorn > > > * @enc_spinlock: Virtual-Encoder-Wide Spin Lock for IRQ purposes > > */ > > struct dpu_enc_phys_init_params { > > @@ -261,6 +266,7 @@ struct dpu_enc_phys_init_params { > > enum dpu_enc_split_role split_role; > > struct dpu_hw_intf *hw_intf; > > struct dpu_hw_wb *hw_wb; > > + struct mutex *vblank_ctl_lock; > > spinlock_t *enc_spinlock; > > }; > > > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c > > index 25babfe1f001a..dcf1f6a18ad6e 100644 > > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c > > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c > > @@ -244,6 +244,7 @@ static int dpu_encoder_phys_cmd_control_vblank_irq( > > return -EINVAL; > > } > > > > + mutex_lock(phys_enc->vblank_ctl_lock); > > refcount = atomic_read(&phys_enc->vblank_refcount); > > > > /* Slave encoders don't report vblank */ > > @@ -275,6 +276,7 @@ static int dpu_encoder_phys_cmd_control_vblank_irq( > > } > > > > end: > > + mutex_unlock(phys_enc->vblank_ctl_lock); > > if (ret) { > > DRM_ERROR("vblank irq err id:%u pp:%d ret:%d, enable %s/%d\n", > > DRMID(phys_enc->parent), > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c > > index 8e905d7267f9f..87bb49763785d 100644 > > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c > > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c > > @@ -364,6 +364,7 @@ static int dpu_encoder_phys_vid_control_vblank_irq( > > int ret = 0; > > int refcount; > > > > + mutex_lock(phys_enc->vblank_ctl_lock); > > refcount = atomic_read(&phys_enc->vblank_refcount); > > > > /* Slave encoders don't report vblank */ > > @@ -394,6 +395,7 @@ static int dpu_encoder_phys_vid_control_vblank_irq( > > } > > > > end: > > + mutex_unlock(phys_enc->vblank_ctl_lock); > > if (ret) { > > DRM_ERROR("failed: id:%u intf:%d ret:%d enable:%d refcnt:%d\n", > > DRMID(phys_enc->parent), > > -- > > 2.41.0 > > > >
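For context on the layout Dmitry is describing: the lock is owned by the virtual encoder and each of its (up to two) physical encoders only holds a pointer to it, while keeping its own refcount. In terms of the fields the patch adds:

struct dpu_encoder_virt {
	struct mutex vblank_ctl_lock;	/* one lock, embedded in the virtual encoder */
	/* ... */
};

struct dpu_encoder_phys {
	struct mutex *vblank_ctl_lock;	/* both phys encoders point at this same lock */
	atomic_t vblank_refcount;	/* but each phys encoder keeps its own refcount */
	/* ... */
};

That is why the pointer is threaded through dpu_enc_phys_init_params before it lands in dpu_encoder_phys.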
On Fri, Dec 01, 2023 at 10:34:50AM +0200, Dmitry Baryshkov wrote:
> On Fri, 1 Dec 2023 at 05:47, Bjorn Andersson <quic_bjorande@quicinc.com> wrote:
> > On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote:
[..]
> > > @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,
> > > dpu_enc->enabled = false;
> > > mutex_init(&dpu_enc->enc_lock);
> > > mutex_init(&dpu_enc->rc_lock);
> > > + mutex_init(&dpu_enc->vblank_ctl_lock);
> >
> > Is this somehow propagated to multiple different dpu_encoder_phys
> > instances, or why do you need to initialize it here and pass the pointer
> > through 2 different intermediate structures before assigning it to
> > phys_enc->vblank_ctl_lock below?
>
> Yes, there can be two phys_enc instances for a single encoder, so this
> part is fine.
>

Thanks for the clarification, Dmitry. Sounds like it makes sense then.

But if I read the code correctly, the two instances will have separate
vblank_refcount copies, and the dpu_core_irq_*() interface does mutual
exclusion within. So why do we need shared mutual exclusion between the
two? (This is where a proper description of the problem in the commit
message would have been very helpful)

Regards,
Bjorn
On 12/1/2023 8:22 AM, Bjorn Andersson wrote:
> On Fri, Dec 01, 2023 at 10:34:50AM +0200, Dmitry Baryshkov wrote:
>> On Fri, 1 Dec 2023 at 05:47, Bjorn Andersson <quic_bjorande@quicinc.com> wrote:
>>> On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote:
> [..]
>>>> @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,
>>>> dpu_enc->enabled = false;
>>>> mutex_init(&dpu_enc->enc_lock);
>>>> mutex_init(&dpu_enc->rc_lock);
>>>> + mutex_init(&dpu_enc->vblank_ctl_lock);
>>>
>>> Is this somehow propagated to multiple different dpu_encoder_phys
>>> instances, or why do you need to initialize it here and pass the pointer
>>> through 2 different intermediate structures before assigning it to
>>> phys_enc->vblank_ctl_lock below?
>>
>> Yes, there can be two phys_enc instances for a single encoder, so this
>> part is fine.
>>
>
> Thanks for the clarification, Dmitry. Sounds like it make sense then.
>
> But, if I read the code correctly the two instances will have separate
> vblank_refcount copies, and the dpu_core_irq_*() interface does mutual
> exclusion within. So why do we need shared mutual exclusion between the
> two? (This is where a proper description of the problem in the commit
> message would have been very helpful)
>

Are you suggesting we just have one vblank_ctl_lock per encoder and not
have one vblank_ctl_lock per phys encoder? I cannot think of a
display-specific reason for that other than just the SW layout.

The reason it's like this today is that control_vblank_irq is an encoder
phys op because it does different things based on the type of encoder.
Because it's an encoder phys op, it has the vblank_ctl_lock at the phys
structure and not the encoder one. It's just more about how the phys op
is defined: each phys op operates on its phys's structure.

Generally, if we have one encoder with two physical encoders we bail out
early for the other encoder anyway, so this is mostly a no-op for the
slave phys encoder. Please take a look at the return point below.

715         /* Slave encoders don't report vblank */
716         if (!sde_encoder_phys_vid_is_master(phys_enc))
717                 goto end;
718

So technically it's still providing protection for the same phys encoder,
but the catch is that control_vblank_irq can get called from different
threads, hence we need exclusion.

> Regards,
> Bjorn
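Condensed, the shape Abhinav is describing looks like this after the patch (the locking and the bail-out come from the quoted vid/cmd hunks; the master check is shown with a placeholder name, and the elided middle summarizes what Bjorn notes below, namely that the IRQ callback is registered on the first reference and unregistered on the last):

	mutex_lock(phys_enc->vblank_ctl_lock);
	refcount = atomic_read(&phys_enc->vblank_refcount);

	/* Slave encoders don't report vblank */
	if (!is_master(phys_enc))	/* placeholder for the real master check */
		goto end;

	/*
	 * Master only: register the vblank IRQ callback when the count first
	 * becomes non-zero, unregister it when it drops back to zero.
	 */
	/* ... */

end:
	mutex_unlock(phys_enc->vblank_ctl_lock);

So on the slave the function takes and releases the lock without doing anything, and the real protection is for the master's refcount, which can be updated from more than one thread.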
On Fri, Dec 01, 2023 at 11:43:36AM -0800, Abhinav Kumar wrote: > > > On 12/1/2023 8:22 AM, Bjorn Andersson wrote: > > On Fri, Dec 01, 2023 at 10:34:50AM +0200, Dmitry Baryshkov wrote: > > > On Fri, 1 Dec 2023 at 05:47, Bjorn Andersson <quic_bjorande@quicinc.com> wrote: > > > > On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote: > > [..] > > > > > @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev, > > > > > dpu_enc->enabled = false; > > > > > mutex_init(&dpu_enc->enc_lock); > > > > > mutex_init(&dpu_enc->rc_lock); > > > > > + mutex_init(&dpu_enc->vblank_ctl_lock); > > > > > > > > Is this somehow propagated to multiple different dpu_encoder_phys > > > > instances, or why do you need to initialize it here and pass the pointer > > > > through 2 different intermediate structures before assigning it to > > > > phys_enc->vblank_ctl_lock below? > > > > > > Yes, there can be two phys_enc instances for a single encoder, so this > > > part is fine. > > > > > > > Thanks for the clarification, Dmitry. Sounds like it make sense then. > > > > But, if I read the code correctly the two instances will have separate > > vblank_refcount copies, and the dpu_core_irq_*() interface does mutual > > exclusion within. So why do we need shared mutual exclusion between the > > two? (This is where a proper description of the problem in the commit > > message would have been very helpful) > > > > Are you suggesting we just have one vblank_ctl_lock per encoder and not have > one vblank_ctl_lock per phys encoder? I cannot think of a display specific > reason for that other than just the SW layout. > > The reason its like this today is that control_vblank_irq is an encoder phys > op because it does different things based on the type of encoder. > > Because its an encoder phys op, it has the vblank_ctl_lock at the phys > structure and not the encoder one. > > Its just more about how the phys op is defined that each phys op operates on > its phys's structure. > > Generally, if we have one encoder with two physical encoders we anyways bail > out early for the other encoder so this is mostly a no-op for the slave phys > encoder. > > Please take a look at below return point. > > 715 /* Slave encoders don't report vblank */ > 716 if (!sde_encoder_phys_vid_is_master(phys_enc)) > 717 goto end; > 718 > > So technically its still providing protection for the same phys encoder but > the catch is this control_vblank_irq can get called from different threads > hence we need exclusion. > The way I understand the code is that the atomic is used to refcount when to enable/disable the interrupt, and the new lock protects this refcount during concurrent updates. I have no concerns with this part. What I'm seeing is that the refcount it per phys_enc, and as such there would be no reason to have a common mutex to protect the two independent refcounts. But I'm probably misunderstanding something here... Regards, Bjorn
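One way to see why the mutex matters even though the counter itself is atomic: the counter transition and the register/unregister call are two separate steps, and keeping those two in sync is what needs mutual exclusion. A hypothetical interleaving (not taken from a reported trace), assuming as described above that enable registers the callback on a 0 -> 1 transition and disable unregisters it on 1 -> 0:

/*
 *   Thread A (disable)                    Thread B (enable)
 *   sees the count drop 1 -> 0
 *                                         sees the count rise 0 -> 1
 *                                         registers the vblank callback
 *   unregisters the vblank callback
 *
 * End state: the count says one user, but the callback is gone. With
 * vblank_ctl_lock held across the whole read/modify/register sequence,
 * the count and the registration state can no longer diverge like this.
 */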
On 12/3/2023 7:31 PM, Bjorn Andersson wrote: > On Fri, Dec 01, 2023 at 11:43:36AM -0800, Abhinav Kumar wrote: >> >> >> On 12/1/2023 8:22 AM, Bjorn Andersson wrote: >>> On Fri, Dec 01, 2023 at 10:34:50AM +0200, Dmitry Baryshkov wrote: >>>> On Fri, 1 Dec 2023 at 05:47, Bjorn Andersson <quic_bjorande@quicinc.com> wrote: >>>>> On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote: >>> [..] >>>>>> @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev, >>>>>> dpu_enc->enabled = false; >>>>>> mutex_init(&dpu_enc->enc_lock); >>>>>> mutex_init(&dpu_enc->rc_lock); >>>>>> + mutex_init(&dpu_enc->vblank_ctl_lock); >>>>> >>>>> Is this somehow propagated to multiple different dpu_encoder_phys >>>>> instances, or why do you need to initialize it here and pass the pointer >>>>> through 2 different intermediate structures before assigning it to >>>>> phys_enc->vblank_ctl_lock below? >>>> >>>> Yes, there can be two phys_enc instances for a single encoder, so this >>>> part is fine. >>>> >>> >>> Thanks for the clarification, Dmitry. Sounds like it make sense then. >>> >>> But, if I read the code correctly the two instances will have separate >>> vblank_refcount copies, and the dpu_core_irq_*() interface does mutual >>> exclusion within. So why do we need shared mutual exclusion between the >>> two? (This is where a proper description of the problem in the commit >>> message would have been very helpful) >>> >> >> Are you suggesting we just have one vblank_ctl_lock per encoder and not have >> one vblank_ctl_lock per phys encoder? I cannot think of a display specific >> reason for that other than just the SW layout. >> >> The reason its like this today is that control_vblank_irq is an encoder phys >> op because it does different things based on the type of encoder. >> >> Because its an encoder phys op, it has the vblank_ctl_lock at the phys >> structure and not the encoder one. >> >> Its just more about how the phys op is defined that each phys op operates on >> its phys's structure. >> >> Generally, if we have one encoder with two physical encoders we anyways bail >> out early for the other encoder so this is mostly a no-op for the slave phys >> encoder. >> >> Please take a look at below return point. >> >> 715 /* Slave encoders don't report vblank */ >> 716 if (!sde_encoder_phys_vid_is_master(phys_enc)) >> 717 goto end; >> 718 >> >> So technically its still providing protection for the same phys encoder but >> the catch is this control_vblank_irq can get called from different threads >> hence we need exclusion. >> > > The way I understand the code is that the atomic is used to refcount > when to enable/disable the interrupt, and the new lock protects this > refcount during concurrent updates. I have no concerns with this part. > Correct. > > What I'm seeing is that the refcount it per phys_enc, and as such there > would be no reason to have a common mutex to protect the two independent > refcounts. > > But I'm probably misunderstanding something here... > There is no reason to have a common mutex to protect the two independent refcounts. In fact, there is no need to even have two independent refcounts because whenever we have one encoder with two physical encoders, we use only the master physical encoder for vblanks like I pointed above. The only reason we have it like this is because today the vblank_refcount is part of phys_enc so the mutex handle is also now a part of it. 
In fact, there is no need to even have two independent refcounts because
whenever we have one encoder with two physical encoders, we use only the
master physical encoder for vblanks like I pointed above.

The only reason we have it like this is because today the vblank_refcount
is part of phys_enc so the mutex handle is also now a part of it.

Do you think if we move both the mutex and the vblank_refcount to the
dpu_encoder from the dpu_encoder_phys and maintain the mutex at that
level it will be less confusing for you?

I am fine with that.

> Regards,
> Bjorn
>
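For comparison, the alternative Abhinav is offering would pull both pieces up into the virtual encoder, roughly like this (a sketch of the proposal only, not a tested change):

struct dpu_encoder_virt {
	struct mutex vblank_ctl_lock;	/* as in the patch */
	int vblank_refcount;		/* one count per encoder; could stay atomic_t,
					 * but a plain int suffices if it is only ever
					 * touched under the lock */
	/* ... */
};

The phys-level control_vblank_irq ops would then reach up to the parent encoder for both fields instead of keeping a per-instance refcount.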
On Mon, Dec 04, 2023 at 11:22:24AM -0800, Abhinav Kumar wrote: > > > On 12/3/2023 7:31 PM, Bjorn Andersson wrote: > > On Fri, Dec 01, 2023 at 11:43:36AM -0800, Abhinav Kumar wrote: > > > > > > > > > On 12/1/2023 8:22 AM, Bjorn Andersson wrote: > > > > On Fri, Dec 01, 2023 at 10:34:50AM +0200, Dmitry Baryshkov wrote: > > > > > On Fri, 1 Dec 2023 at 05:47, Bjorn Andersson <quic_bjorande@quicinc.com> wrote: > > > > > > On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote: > > > > [..] > > > > > > > @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev, > > > > > > > dpu_enc->enabled = false; > > > > > > > mutex_init(&dpu_enc->enc_lock); > > > > > > > mutex_init(&dpu_enc->rc_lock); > > > > > > > + mutex_init(&dpu_enc->vblank_ctl_lock); > > > > > > > > > > > > Is this somehow propagated to multiple different dpu_encoder_phys > > > > > > instances, or why do you need to initialize it here and pass the pointer > > > > > > through 2 different intermediate structures before assigning it to > > > > > > phys_enc->vblank_ctl_lock below? > > > > > > > > > > Yes, there can be two phys_enc instances for a single encoder, so this > > > > > part is fine. > > > > > > > > > > > > > Thanks for the clarification, Dmitry. Sounds like it make sense then. > > > > > > > > But, if I read the code correctly the two instances will have separate > > > > vblank_refcount copies, and the dpu_core_irq_*() interface does mutual > > > > exclusion within. So why do we need shared mutual exclusion between the > > > > two? (This is where a proper description of the problem in the commit > > > > message would have been very helpful) > > > > > > > > > > Are you suggesting we just have one vblank_ctl_lock per encoder and not have > > > one vblank_ctl_lock per phys encoder? I cannot think of a display specific > > > reason for that other than just the SW layout. > > > > > > The reason its like this today is that control_vblank_irq is an encoder phys > > > op because it does different things based on the type of encoder. > > > > > > Because its an encoder phys op, it has the vblank_ctl_lock at the phys > > > structure and not the encoder one. > > > > > > Its just more about how the phys op is defined that each phys op operates on > > > its phys's structure. > > > > > > Generally, if we have one encoder with two physical encoders we anyways bail > > > out early for the other encoder so this is mostly a no-op for the slave phys > > > encoder. > > > > > > Please take a look at below return point. > > > > > > 715 /* Slave encoders don't report vblank */ > > > 716 if (!sde_encoder_phys_vid_is_master(phys_enc)) > > > 717 goto end; > > > 718 > > > > > > So technically its still providing protection for the same phys encoder but > > > the catch is this control_vblank_irq can get called from different threads > > > hence we need exclusion. > > > > > > > The way I understand the code is that the atomic is used to refcount > > when to enable/disable the interrupt, and the new lock protects this > > refcount during concurrent updates. I have no concerns with this part. > > > > Correct. > > > > > What I'm seeing is that the refcount it per phys_enc, and as such there > > would be no reason to have a common mutex to protect the two independent > > refcounts. > > > > But I'm probably misunderstanding something here... > > > > There is no reason to have a common mutex to protect the two independent > refcounts. 
> In fact, there is no need to even have two independent refcounts
> because whenever we have one encoder with two physical encoders, we use only
> the master physical encoder for vblanks like I pointed above.
>
> The only reason we have it like this is because today the vblank_refcount is
> part of phys_enc so the mutex handle is also now a part of it.
>
> Do you think if we move both the mutex and the vblank_refcount to the
> dpu_encoder from the dpu_encoder_phys and maintain the mutex at that level
> it will be less confusing for you?
>

The two functions operate on dpu_encoder_phys objects, and as you say
above the two instances don't need to be handled under shared mutual
exclusion.

Moving the serialization mechanism to dpu_encoder seems like it would
create an entanglement, for the sake of making the lock common. If
nothing else this would act as documentation to me that the two
functions are intertwined somehow.

I was rather hoping that we'd move the mutex_init() to
dpu_encoder_phys_init() and avoid passing a reference around in
unrelated parts of the code just to set up the sharing, if that's not
necessary.

Regards,
Bjorn

> I am fine with that.
>
> > Regards,
> > Bjorn
> >
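Sketched for concreteness, Bjorn's counter-proposal would make the lock per-phys and drop the init-params plumbing entirely (hypothetical; Abhinav explains in the next message why he does not think this works):

void dpu_encoder_phys_init(struct dpu_encoder_phys *phys_enc,
			   struct dpu_enc_phys_init_params *p)
{
	/* ... existing assignments from *p ... */

	/* vblank_ctl_lock embedded in dpu_encoder_phys instead of a pointer
	 * handed down from the virtual encoder */
	mutex_init(&phys_enc->vblank_ctl_lock);
	atomic_set(&phys_enc->vblank_refcount, 0);
}

The trade-off is two locks per dual-phys encoder instead of one shared lock, which is what the following reply objects to.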
On 12/5/2023 7:51 PM, Bjorn Andersson wrote: > On Mon, Dec 04, 2023 at 11:22:24AM -0800, Abhinav Kumar wrote: >> >> >> On 12/3/2023 7:31 PM, Bjorn Andersson wrote: >>> On Fri, Dec 01, 2023 at 11:43:36AM -0800, Abhinav Kumar wrote: >>>> >>>> >>>> On 12/1/2023 8:22 AM, Bjorn Andersson wrote: >>>>> On Fri, Dec 01, 2023 at 10:34:50AM +0200, Dmitry Baryshkov wrote: >>>>>> On Fri, 1 Dec 2023 at 05:47, Bjorn Andersson <quic_bjorande@quicinc.com> wrote: >>>>>>> On Thu, Nov 30, 2023 at 05:40:55PM -0800, Paloma Arellano wrote: >>>>> [..] >>>>>>>> @@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev, >>>>>>>> dpu_enc->enabled = false; >>>>>>>> mutex_init(&dpu_enc->enc_lock); >>>>>>>> mutex_init(&dpu_enc->rc_lock); >>>>>>>> + mutex_init(&dpu_enc->vblank_ctl_lock); >>>>>>> >>>>>>> Is this somehow propagated to multiple different dpu_encoder_phys >>>>>>> instances, or why do you need to initialize it here and pass the pointer >>>>>>> through 2 different intermediate structures before assigning it to >>>>>>> phys_enc->vblank_ctl_lock below? >>>>>> >>>>>> Yes, there can be two phys_enc instances for a single encoder, so this >>>>>> part is fine. >>>>>> >>>>> >>>>> Thanks for the clarification, Dmitry. Sounds like it make sense then. >>>>> >>>>> But, if I read the code correctly the two instances will have separate >>>>> vblank_refcount copies, and the dpu_core_irq_*() interface does mutual >>>>> exclusion within. So why do we need shared mutual exclusion between the >>>>> two? (This is where a proper description of the problem in the commit >>>>> message would have been very helpful) >>>>> >>>> >>>> Are you suggesting we just have one vblank_ctl_lock per encoder and not have >>>> one vblank_ctl_lock per phys encoder? I cannot think of a display specific >>>> reason for that other than just the SW layout. >>>> >>>> The reason its like this today is that control_vblank_irq is an encoder phys >>>> op because it does different things based on the type of encoder. >>>> >>>> Because its an encoder phys op, it has the vblank_ctl_lock at the phys >>>> structure and not the encoder one. >>>> >>>> Its just more about how the phys op is defined that each phys op operates on >>>> its phys's structure. >>>> >>>> Generally, if we have one encoder with two physical encoders we anyways bail >>>> out early for the other encoder so this is mostly a no-op for the slave phys >>>> encoder. >>>> >>>> Please take a look at below return point. >>>> >>>> 715 /* Slave encoders don't report vblank */ >>>> 716 if (!sde_encoder_phys_vid_is_master(phys_enc)) >>>> 717 goto end; >>>> 718 >>>> >>>> So technically its still providing protection for the same phys encoder but >>>> the catch is this control_vblank_irq can get called from different threads >>>> hence we need exclusion. >>>> >>> >>> The way I understand the code is that the atomic is used to refcount >>> when to enable/disable the interrupt, and the new lock protects this >>> refcount during concurrent updates. I have no concerns with this part. >>> >> >> Correct. >> >>> >>> What I'm seeing is that the refcount it per phys_enc, and as such there >>> would be no reason to have a common mutex to protect the two independent >>> refcounts. >>> >>> But I'm probably misunderstanding something here... >>> >> >> There is no reason to have a common mutex to protect the two independent >> refcounts. 
>> In fact, there is no need to even have two independent refcounts
>> because whenever we have one encoder with two physical encoders, we use only
>> the master physical encoder for vblanks like I pointed above.
>>
>> The only reason we have it like this is because today the vblank_refcount is
>> part of phys_enc so the mutex handle is also now a part of it.
>>
>> Do you think if we move both the mutex and the vblank_refcount to the
>> dpu_encoder from the dpu_encoder_phys and maintain the mutex at that level
>> it will be less confusing for you?
>>
>
> The two functions operate on dpu_encoder_phys objects, and as you say
> above the two instances doesn't need to be handled under shared mutual
> exclusion.
>
> Moving the serialization mechanism to dpu_encoder seems like it would
> create an entanglement, for the sake of making the lock common. If
> nothing else this would act as documentation to me that the two
> functions are intertwined somehow.
>
> I was rather hoping that we'd move the mutex_init() to
> dpu_encoder_phys_init() and avoid passing a reference around in
> unrelated parts of the code just to set up the sharing, if that's not
> necessary.
>
> Regards,
> Bjorn
>

In principle we need only one mutex per encoder and not one per encoder
phys. Hence the two phys encoders hold a handle to the same mutex. If
having it that way is confusing you for some reason, I offered the
alternative. Otherwise I think what we have is enough and correct. We can
update the commit text and the doc around the mutex to explain what it
does.

I don't think your suggestion will work. dpu_encoder_phys_init() will be
called once per phys encoder, so it can get called twice. Are you
suggesting we have one mutex per phys encoder? That's not necessary, and
it's not correct either. If not, perhaps you can post something for us
to review your idea.

>> I am fine with that.
>>
>>> Regards,
>>> Bjorn
>>>
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 1cf7ff6caff4e..19ff7d1d5ccad 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -119,6 +119,8 @@ enum dpu_enc_rc_states {
  * Virtual encoder defers as much as possible to the physical encoders.
  * Virtual encoder registers itself with the DRM Framework as the encoder.
  * @base: drm_encoder base class for registration with DRM
+ * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder
+ *			for IRQ purposes
  * @enc_spinlock: Virtual-Encoder-Wide Spin Lock for IRQ purposes
  * @enabled: True if the encoder is active, protected by enc_lock
  * @num_phys_encs: Actual number of physical encoders contained.
@@ -166,6 +168,7 @@ enum dpu_enc_rc_states {
  */
 struct dpu_encoder_virt {
 	struct drm_encoder base;
+	struct mutex vblank_ctl_lock;
 	spinlock_t enc_spinlock;
 
 	bool enabled;
@@ -2255,6 +2258,7 @@ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc,
 	phys_params.dpu_kms = dpu_kms;
 	phys_params.parent = &dpu_enc->base;
 	phys_params.enc_spinlock = &dpu_enc->enc_spinlock;
+	phys_params.vblank_ctl_lock = &dpu_enc->vblank_ctl_lock;
 
 	WARN_ON(disp_info->num_of_h_tiles < 1);
 
@@ -2386,6 +2390,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,
 	dpu_enc->enabled = false;
 	mutex_init(&dpu_enc->enc_lock);
 	mutex_init(&dpu_enc->rc_lock);
+	mutex_init(&dpu_enc->vblank_ctl_lock);
 
 	ret = dpu_encoder_setup_display(dpu_enc, dpu_kms, disp_info);
 	if (ret)
@@ -2495,6 +2500,7 @@ void dpu_encoder_phys_init(struct dpu_encoder_phys *phys_enc,
 	phys_enc->dpu_kms = p->dpu_kms;
 	phys_enc->split_role = p->split_role;
 	phys_enc->enc_spinlock = p->enc_spinlock;
+	phys_enc->vblank_ctl_lock = p->vblank_ctl_lock;
 	phys_enc->enable_state = DPU_ENC_DISABLED;
 
 	atomic_set(&phys_enc->vblank_refcount, 0);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
index 6f04c3d56e77c..5691bf6b82ee6 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
@@ -155,6 +155,8 @@ enum dpu_intr_idx {
  * @hw_wb: Hardware interface to the wb registers
  * @dpu_kms: Pointer to the dpu_kms top level
  * @cached_mode: DRM mode cached at mode_set time, acted on in enable
+ * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder
+ *			for IRQ purposes
  * @enabled: Whether the encoder has enabled and running a mode
  * @split_role: Role to play in a split-panel configuration
  * @intf_mode: Interface mode
@@ -183,6 +185,7 @@ struct dpu_encoder_phys {
 	struct dpu_hw_wb *hw_wb;
 	struct dpu_kms *dpu_kms;
 	struct drm_display_mode cached_mode;
+	struct mutex *vblank_ctl_lock;
 	enum dpu_enc_split_role split_role;
 	enum dpu_intf_mode intf_mode;
 	spinlock_t *enc_spinlock;
@@ -253,6 +256,8 @@ struct dpu_encoder_phys_cmd {
  * @split_role: Role to play in a split-panel configuration
  * @hw_intf: Hardware interface to the intf registers
  * @hw_wb: Hardware interface to the wb registers
+ * @vblank_ctl_lock: Vblank ctl mutex lock to protect physical encoder
+ *			for IRQ purposes
  * @enc_spinlock: Virtual-Encoder-Wide Spin Lock for IRQ purposes
  */
 struct dpu_enc_phys_init_params {
@@ -261,6 +266,7 @@ struct dpu_enc_phys_init_params {
 	enum dpu_enc_split_role split_role;
 	struct dpu_hw_intf *hw_intf;
 	struct dpu_hw_wb *hw_wb;
+	struct mutex *vblank_ctl_lock;
 	spinlock_t *enc_spinlock;
 };
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
index 25babfe1f001a..dcf1f6a18ad6e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
@@ -244,6 +244,7 @@ static int dpu_encoder_phys_cmd_control_vblank_irq(
 		return -EINVAL;
 	}
 
+	mutex_lock(phys_enc->vblank_ctl_lock);
 	refcount = atomic_read(&phys_enc->vblank_refcount);
 
 	/* Slave encoders don't report vblank */
@@ -275,6 +276,7 @@ static int dpu_encoder_phys_cmd_control_vblank_irq(
 	}
 
 end:
+	mutex_unlock(phys_enc->vblank_ctl_lock);
 	if (ret) {
 		DRM_ERROR("vblank irq err id:%u pp:%d ret:%d, enable %s/%d\n",
 			  DRMID(phys_enc->parent),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
index 8e905d7267f9f..87bb49763785d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
@@ -364,6 +364,7 @@ static int dpu_encoder_phys_vid_control_vblank_irq(
 	int ret = 0;
 	int refcount;
 
+	mutex_lock(phys_enc->vblank_ctl_lock);
 	refcount = atomic_read(&phys_enc->vblank_refcount);
 
 	/* Slave encoders don't report vblank */
@@ -394,6 +395,7 @@ static int dpu_encoder_phys_vid_control_vblank_irq(
 	}
 
 end:
+	mutex_unlock(phys_enc->vblank_ctl_lock);
 	if (ret) {
 		DRM_ERROR("failed: id:%u intf:%d ret:%d enable:%d refcnt:%d\n",
 			  DRMID(phys_enc->parent),
Add a missing mutex lock to control vblank irq. Thus prevent race
conditions when registering/unregistering the irq callback.

v2: Slightly changed wording of commit message
v3: Mistakenly did not change wording in last version. It is done now.

Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c          | 6 ++++++
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h     | 6 ++++++
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c | 2 ++
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c | 2 ++
 4 files changed, 16 insertions(+)