
[6/6] OMAPDSS: HDMI: Create platform device to support audio

Message ID 1350350839-30408-7-git-send-email-ricardo.neri@ti.com (mailing list archive)
State New, archived

Commit Message

Ricardo Neri Oct. 16, 2012, 1:27 a.m. UTC
Creating the accessory devices, such as audio, from the HDMI driver
allows us to regard HDMI as a single entity with audio and display
functionality. This intends to follow the design of MFD drivers, in
which a single entity handles the creation of the accessory devices.
Such devices are then used by domain-specific drivers; audio in this
case.

Also, this is in line with the DT implementation of HDMI, in which we will
have a single node to describe this feature of the OMAP SoC.

Signed-off-by: Ricardo Neri <ricardo.neri@ti.com>
---
 drivers/video/omap2/dss/hdmi.c |   68 ++++++++++++++++++++++++++++++++++++++++
 1 files changed, 68 insertions(+), 0 deletions(-)

Comments

Peter Ujfalusi Oct. 16, 2012, 9:30 a.m. UTC | #1
On 10/16/2012 03:27 AM, Ricardo Neri wrote:
> Creating the accessory devices, such as audio, from the HDMI driver
> allows us to regard HDMI as a single entity with audio and display
> functionality. This intends to follow the design of MFD drivers, in
> which a single entity handles the creation of the accessory devices.
> Such devices are then used by domain-specific drivers; audio in this
> case.
> 
> Also, this is in line with the DT implementation of HDMI, in which we will
> have a single node to describe this feature of the OMAP SoC.

...

> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].start = res->start;
> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].end = res->end;
> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].flags = IORESOURCE_MEM;
> +
> +	res = platform_get_resource(hdmi.pdev, IORESOURCE_DMA, 0);
> +	if (!res) {
> +		DSSERR("can't get IORESOURCE_DMA HDMI\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Pass this resource to audio_pdev */
> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].start = res->start;
> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].end = res->end;
> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].flags = IORESOURCE_DMA;
> +
> +	/* create platform device for HDMI audio driver */
> +	hdmi.audio_pdev = platform_device_register_simple(
> +							  "omap_hdmi_audio",
> +							  -1, hdmi_aud_res,
> +							   ARRAY_SIZE(hdmi_aud_res));

Should you also update arch/arm/mach-omap2/devices.c so it does not register
the same device?
When we do not boot with DT, devices.c will create the same device earlier
(without pdata), which will prevent this device from being created and, in
the end, will prevent the omap_hdmi_audio driver from probing due to missing
pdata...

> +	if (IS_ERR(hdmi.audio_pdev)) {
> +		DSSERR("Can't instantiate hdmi-audio\n");
> +		return PTR_ERR(hdmi.audio_pdev);
> +	}
> +
> +	return 0;
> +}
> +
Ricardo Neri Oct. 16, 2012, 11:11 a.m. UTC | #2
Hi Peter,

Thanks for reviewing!

On 10/16/2012 04:30 AM, Péter Ujfalusi wrote:
> On 10/16/2012 03:27 AM, Ricardo Neri wrote:
>> Creating the accessory devices, such as audio, from the HDMI driver
>> allows us to regard HDMI as a single entity with audio and display
>> functionality. This intends to follow the design of MFD drivers, in
>> which a single entity handles the creation of the accessory devices.
>> Such devices are then used by domain-specific drivers; audio in this
>> case.
>>
>> Also, this is in line with the DT implementation of HDMI, in which we will
>> have a single node to describe this feature of the OMAP SoC.
>
> ...
>
>> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].start = res->start;
>> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].end = res->end;
>> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].flags = IORESOURCE_MEM;
>> +
>> +	res = platform_get_resource(hdmi.pdev, IORESOURCE_DMA, 0);
>> +	if (!res) {
>> +		DSSERR("can't get IORESOURCE_DMA HDMI\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Pass this resource to audio_pdev */
>> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].start = res->start;
>> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].end = res->end;
>> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].flags = IORESOURCE_DMA;
>> +
>> +	/* create platform device for HDMI audio driver */
>> +	hdmi.audio_pdev = platform_device_register_simple(
>> +							  "omap_hdmi_audio",
>> +							  -1, hdmi_aud_res,
>> +							   ARRAY_SIZE(hdmi_aud_res));
>
> Should you also update arch/arm/mach-omap2/devices.c so it does not register
> the same device?
> When we do not boot with DT, devices.c will create the same device earlier
> (without pdata), which will prevent this device from being created and, in
> the end, will prevent the omap_hdmi_audio driver from probing due to missing
> pdata...

Yes, I already have a set of patches to remove the device creation from
devices.c. I decided to send this patch set first to see if Tomi and the
reviewers are OK with it. Once they are accepted I will send the updates
to devices.c and ASoC.

BR

Ricardo
>
>> +	if (IS_ERR(hdmi.audio_pdev)) {
>> +		DSSERR("Can't instantiate hdmi-audio\n");
>> +		return PTR_ERR(hdmi.audio_pdev);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>
Tomi Valkeinen Oct. 22, 2012, 7:40 a.m. UTC | #3
On 2012-10-16 04:27, Ricardo Neri wrote:
> Creating the accessory devices, such as audio, from the HDMI driver
> allows us to regard HDMI as a single entity with audio and display
> functionality. This intends to follow the design of MFD drivers, in
> which a single entity handles the creation of the accessory devices.
> Such devices are then used by domain-specific drivers; audio in this
> case.
> 
> Also, this is in line with the DT implementation of HDMI, in which we will
> have a single node to describe this feature of the OMAP SoC.
> 
> Signed-off-by: Ricardo Neri <ricardo.neri@ti.com>
> ---
>  drivers/video/omap2/dss/hdmi.c |   68 ++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 68 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/video/omap2/dss/hdmi.c b/drivers/video/omap2/dss/hdmi.c
> index e5be0a5..c62c5ab 100644
> --- a/drivers/video/omap2/dss/hdmi.c
> +++ b/drivers/video/omap2/dss/hdmi.c
> @@ -60,6 +60,9 @@
>  static struct {
>  	struct mutex lock;
>  	struct platform_device *pdev;
> +#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
> +	struct platform_device *audio_pdev;
> +#endif
>  
>  	struct hdmi_ip_data ip_data;
>  
> @@ -73,6 +76,13 @@ static struct {
>  	struct omap_dss_output output;
>  } hdmi;
>  
> +#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
> +#define HDMI_AUDIO_MEM_RESOURCE 0
> +#define HDMI_AUDIO_DMA_RESOURCE 1

I don't see much point with these definitions. They are hdmi.c internal,
so the audio driver can't use them, and so they aren't really fixed.

> +static struct resource hdmi_aud_res[2];

Did you check if the platform_device_register does a copy of these? If
it does, this can be local to the probe function.

> +#endif
> +
> +
>  /*
>   * Logic for the below structure :
>   * user enters the CEA or VESA timings by specifying the HDMI/DVI code.
> @@ -765,6 +775,50 @@ static void hdmi_put_clocks(void)
>  }
>  
>  #if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
> +static int hdmi_probe_audio(struct platform_device *pdev)
> +{
> +	struct resource *res;
> +
> +	hdmi.audio_pdev = ERR_PTR(-EINVAL);
> +
> +	res = platform_get_resource(hdmi.pdev, IORESOURCE_MEM, 0);
> +	if (!res) {
> +		DSSERR("can't get IORESOURCE_MEM HDMI\n");
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * Pass this resource to audio_pdev.
> +	 * Audio drivers should not remap it
> +	 */
> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].start = res->start;
> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].end = res->end;
> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].flags = IORESOURCE_MEM;
> +
> +	res = platform_get_resource(hdmi.pdev, IORESOURCE_DMA, 0);
> +	if (!res) {
> +		DSSERR("can't get IORESOURCE_DMA HDMI\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Pass this resource to audio_pdev */
> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].start = res->start;
> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].end = res->end;
> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].flags = IORESOURCE_DMA;
> +
> +	/* create platform device for HDMI audio driver */
> +	hdmi.audio_pdev = platform_device_register_simple(
> +							  "omap_hdmi_audio",
> +							  -1, hdmi_aud_res,
> +							   ARRAY_SIZE(hdmi_aud_res));
> +	if (IS_ERR(hdmi.audio_pdev)) {
> +		DSSERR("Can't instantiate hdmi-audio\n");
> +		return PTR_ERR(hdmi.audio_pdev);
> +	}
> +
> +	return 0;
> +}

So, how will this work? All the audio related functions will be removed
from the (video) hdmi driver, and the audio driver will access the
registers independently? The audio driver will still need to access the
video parts, right?

I feel a bit uneasy about giving the same ioremapped register space to
two independent drivers... If we could split the registers to video and
audio parts, each driver only ioremapping their respective registers,
it'd be much better.

 Tomi
Ricardo Neri Oct. 23, 2012, 12:48 a.m. UTC | #4
Hi Tomi,

Thanks for reviewing!

On 10/22/2012 02:40 AM, Tomi Valkeinen wrote:
> On 2012-10-16 04:27, Ricardo Neri wrote:
>> Creating the accessory devices, such as audio, from the HDMI driver
>> allows us to regard HDMI as a single entity with audio and display
>> functionality. This intends to follow the design of MFD drivers, in
>> which a single entity handles the creation of the accessory devices.
>> Such devices are then used by domain-specific drivers; audio in this
>> case.
>>
>> Also, this is in line with the DT implementation of HDMI, in which we will
>> have a single node to describe this feature of the OMAP SoC.
>>
>> Signed-off-by: Ricardo Neri <ricardo.neri@ti.com>
>> ---
>>   drivers/video/omap2/dss/hdmi.c |   68 ++++++++++++++++++++++++++++++++++++++++
>>   1 files changed, 68 insertions(+), 0 deletions(-)
>>
>> diff --git a/drivers/video/omap2/dss/hdmi.c b/drivers/video/omap2/dss/hdmi.c
>> index e5be0a5..c62c5ab 100644
>> --- a/drivers/video/omap2/dss/hdmi.c
>> +++ b/drivers/video/omap2/dss/hdmi.c
>> @@ -60,6 +60,9 @@
>>   static struct {
>>   	struct mutex lock;
>>   	struct platform_device *pdev;
>> +#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
>> +	struct platform_device *audio_pdev;
>> +#endif
>>
>>   	struct hdmi_ip_data ip_data;
>>
>> @@ -73,6 +76,13 @@ static struct {
>>   	struct omap_dss_output output;
>>   } hdmi;
>>
>> +#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
>> +#define HDMI_AUDIO_MEM_RESOURCE 0
>> +#define HDMI_AUDIO_DMA_RESOURCE 1
>
> I don't see much point with these definitions. They are hdmi.c internal,
> so the audio driver can't use them, and so they aren't really fixed.

I just thought it could make the code more readable; but if the 
resources array is going to be local, then they are not helpful.
>
>> +static struct resource hdmi_aud_res[2];
>
> Did you check if the platform_device_register does a copy of these? If
> it does, this can be local to the probe function.

Thanks! I checked, platform_device_register does the copy. I will put it 
as a local variable.
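
(For illustration, a minimal sketch of the probe helper with the resource
array on the stack, assuming, as confirmed above, that
platform_device_register_simple() copies the resources. It mirrors the patch
below; only the local, zeroed array is new.)

static int hdmi_probe_audio(struct platform_device *pdev)
{
	struct resource *res;
	struct resource aud_res[2];

	/* the platform core copies these, so a local array is enough */
	memset(aud_res, 0, sizeof(aud_res));

	hdmi.audio_pdev = ERR_PTR(-EINVAL);

	res = platform_get_resource(hdmi.pdev, IORESOURCE_MEM, 0);
	if (!res) {
		DSSERR("can't get IORESOURCE_MEM HDMI\n");
		return -EINVAL;
	}

	/* audio drivers should not remap this; they only need the port address */
	aud_res[0].start = res->start;
	aud_res[0].end = res->end;
	aud_res[0].flags = IORESOURCE_MEM;

	res = platform_get_resource(hdmi.pdev, IORESOURCE_DMA, 0);
	if (!res) {
		DSSERR("can't get IORESOURCE_DMA HDMI\n");
		return -EINVAL;
	}

	aud_res[1].start = res->start;
	aud_res[1].end = res->end;
	aud_res[1].flags = IORESOURCE_DMA;

	hdmi.audio_pdev = platform_device_register_simple("omap_hdmi_audio",
							  -1, aud_res,
							  ARRAY_SIZE(aud_res));
	if (IS_ERR(hdmi.audio_pdev)) {
		DSSERR("Can't instantiate hdmi-audio\n");
		return PTR_ERR(hdmi.audio_pdev);
	}

	return 0;
}
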
>
>> +#endif
>> +
>> +
>>   /*
>>    * Logic for the below structure :
>>    * user enters the CEA or VESA timings by specifying the HDMI/DVI code.
>> @@ -765,6 +775,50 @@ static void hdmi_put_clocks(void)
>>   }
>>
>>   #if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
>> +static int hdmi_probe_audio(struct platform_device *pdev)
>> +{
>> +	struct resource *res;
>> +
>> +	hdmi.audio_pdev = ERR_PTR(-EINVAL);
>> +
>> +	res = platform_get_resource(hdmi.pdev, IORESOURCE_MEM, 0);
>> +	if (!res) {
>> +		DSSERR("can't get IORESOURCE_MEM HDMI\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/*
>> +	 * Pass this resource to audio_pdev.
>> +	 * Audio drivers should not remap it
>> +	 */
>> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].start = res->start;
>> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].end = res->end;
>> +	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].flags = IORESOURCE_MEM;
>> +
>> +	res = platform_get_resource(hdmi.pdev, IORESOURCE_DMA, 0);
>> +	if (!res) {
>> +		DSSERR("can't get IORESOURCE_DMA HDMI\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Pass this resource to audio_pdev */
>> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].start = res->start;
>> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].end = res->end;
>> +	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].flags = IORESOURCE_DMA;
>> +
>> +	/* create platform device for HDMI audio driver */
>> +	hdmi.audio_pdev = platform_device_register_simple(
>> +							  "omap_hdmi_audio",
>> +							  -1, hdmi_aud_res,
>> +							   ARRAY_SIZE(hdmi_aud_res));
>> +	if (IS_ERR(hdmi.audio_pdev)) {
>> +		DSSERR("Can't instantiate hdmi-audio\n");
>> +		return PTR_ERR(hdmi.audio_pdev);
>> +	}
>> +
>> +	return 0;
>> +}
>
> So, how will this work? All the audio related functions will be removed
> from the (video) hdmi driver, and the audio driver will access the
> registers independently? The audio driver will still need to access the
> video parts, right?
That could be a new approach, but the idea here is to continue having an 
omapdss audio interface for audio drivers to use.

The root problem that I am trying to address is that the omapdss audio 
interface does not have functionality for DMA transfer of audio samples 
to the HDMI IP. Also, I am not sure how that could be done without 
duplicating the functionality that ASoC already provides.
>
> I feel a bit uneasy about giving the same ioremapped register space to
> two independent drivers... If we could split the registers to video and
> audio parts, each driver only ioremapping their respective registers,
> it'd be much better.

Fwiw, the audio drivers (at least my audio drivers) will not ioremap.
They will just take the DMA request number and port. Maybe splitting the
register space into audio and video is not practical, as we would end up
having many tiny address spaces.

BR,

Ricardo
>
>   Tomi
>
>
Tomi Valkeinen Oct. 23, 2012, 9:37 a.m. UTC | #5
On 2012-10-23 03:48, Ricardo Neri wrote:

>>> +#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
>>> +#define HDMI_AUDIO_MEM_RESOURCE 0
>>> +#define HDMI_AUDIO_DMA_RESOURCE 1
>>
>> I don't see much point with these definitions. They are hdmi.c internal,
>> so the audio driver can't use them, and so they aren't really fixed.
> 
> I just thought it could make the code more readable; but if the
> resources array is going to be local, then they are not helpful.

My point was that if the defines are hdmi.c internal, you need to add the
same defines into the audio code also in order to use them. And then
we'd have the same defines in two places.

Or, if audio code doesn't need them to parse the resources, then they
aren't really relevant here either, as you are just adding two resources
to the array, and their order is not important.

>> So, how will this work? All the audio related functions will be removed
>> from the (video) hdmi driver, and the audio driver will access the
>> registers independently? The audio driver will still need to access the
>> video parts, right?
> That could be a new approach, but the idea here is to continue having an
> omapdss audio interface for audio drivers to use.

Ok. Do you have a git tree with the audio code working with this
approach? Or can you just copy paste a few lines showing how the audio
driver uses this. It'd be easier to understand by seeing that side of
the code also.

The audio uses sDMA for the transfer?

> The root problem that I am trying to address is that the omapdss audio
> interface does not have functionality for DMA transfer of audio samples
> to the HDMI IP. Also, I am not sure how that could be done without
> duplicating the functionality that ASoC already provides.

Ok. But the audio driver still needs access to the HDMI registers? I'm
not worried about passing the DMA resource; the video side doesn't use that.
But the video side uses the registers, and both having the same ioremapped
area could possibly lead to both writing to the same register. Or perhaps
not the same register, but still doing conflicting things at the hw
level at the same time.

>> I feel a bit uneasy about giving the same ioremapped register space to
>> two independent drivers... If we could split the registers to video and
>> audio parts, each driver only ioremapping their respective registers,
>> it'd be much better.
> 
>> Fwiw, the audio drivers (at least my audio drivers) will not ioremap.
>> They will just take the DMA request number and port. Maybe splitting the
>> register space into audio and video is not practical, as we would end up
>> having many tiny address spaces.

Yes, if there's no clear HDMI block division for video and audio, then
it doesn't sound good to split them up if we'd have lots of small
address spaces.

What registers does the audio side need to access? Why are some of the
registers accessed via the hdmi driver API, and some directly? I imagine
it'd be better to do either one of those, but not both.

 Tomi
Ricardo Neri Oct. 23, 2012, 3:42 p.m. UTC | #6
On 10/23/2012 04:37 AM, Tomi Valkeinen wrote:
> On 2012-10-23 03:48, Ricardo Neri wrote:
>
>>>> +#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
>>>> +#define HDMI_AUDIO_MEM_RESOURCE 0
>>>> +#define HDMI_AUDIO_DMA_RESOURCE 1
>>>
>>> I don't see much point with these definitions. They are hdmi.c internal,
>>> so the audio driver can't use them, and so they aren't really fixed.
>>
>> I just thought it could make the code more readable; but if the
>> resources array is going to be local, then they are not helpful.
>
> My point was that if the defines are hdmi.c internal, you need to add the
> same defines into the audio code also in order to use them. And then
> we'd have the same defines in two places.
>
> Or, if audio code doesn't need them to parse the resources, then they
> aren't really relevant here either, as you are just adding two resources
> to the array, and their order is not important.

Oh OK. So they are not needed at all.
>
>>> So, how will this work? All the audio related functions will be removed
>>> from the (video) hdmi driver, and the audio driver will access the
>>> registers independently? The audio driver will still need to access the
>>> video parts, right?
>> That could be a new approach, but the idea here is to continue having an
>> omapdss audio interface for audio drivers to use.
>
> Ok. Do you have a git tree with the audio code working with this
> approach? Or can you just copy paste a few lines showing how the audio
> driver uses this. It'd be easier to understand by seeing that side of
> the code also.

Here is the code:

static __devinit int omap_hdmi_probe(struct platform_device *pdev)
{
	...

	hdmi_rsrc = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!hdmi_rsrc) {
		dev_err(&pdev->dev, "Cannot obtain IORESOURCE_MEM");
		return -ENODEV;
	}

	hdmi_data->dma_params.port_addr =  hdmi_rsrc->start
		+ OMAP_HDMI_AUDIO_DMA_PORT;

	hdmi_rsrc = platform_get_resource(pdev, IORESOURCE_DMA, 0);
	if (!hdmi_rsrc) {
		dev_err(&pdev->dev, "Cannot obtain IORESOURCE_DMA");
		return -ENODEV;
	}

	hdmi_data->dma_params.dma_req =  hdmi_rsrc->start;
	hdmi_data->dma_params.name = "HDMI playback";

	...
}

You can also take a look here:
git://gitorious.org/omap-audio/linux-audio.git ricardon/topic/for-3.8-hdmi_rename_devs

at sound/soc/omap/omap-hdmi.c

or directly here:

http://gitorious.org/omap-audio/linux-audio/blobs/ricardon/topic/for-3.8-hdmi_rename_devs/sound/soc/omap/omap-hdmi.c
>
> The audio uses sDMA for the transfer?

Yes, it does.
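
(Sketch only: how a DAI driver can hand those dma_params to the generic OMAP
PCM code, which then programs the sDMA transfer. This assumes the omap-pcm
platform driver and its struct omap_pcm_dma_data; names such as hdmi_priv and
hdmi_dai_startup are illustrative.)

#include <sound/soc.h>
#include "omap-pcm.h"	/* struct omap_pcm_dma_data */

struct hdmi_priv {
	struct omap_pcm_dma_data dma_params;
};

static int hdmi_dai_startup(struct snd_pcm_substream *substream,
			    struct snd_soc_dai *dai)
{
	struct hdmi_priv *priv = snd_soc_dai_get_drvdata(dai);

	/* the OMAP PCM platform driver picks this up and sets up sDMA */
	snd_soc_dai_set_dma_data(dai, substream, &priv->dma_params);

	return 0;
}
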
>
>> The root problem that I am trying to address is that the omapdss audio
>> interface does not have functionality for DMA transfer of audio samples
>> to the HDMI IP. Also, I am not sure how that could be done without
>> duplicating the functionality that ASoC already provides.
>
> Ok. But the audio driver still needs access to the HDMI registers? I'm
> not worried about passing the DMA resource. Video side doesn't use that.

The audio driver does not access the HDMI registers, nor does it ioremap
them. It relies solely on the OMAPDSS audio interface for audio
configuration, start and stop.
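
(To make that concrete, a hedged sketch of how an audio driver can drive HDMI
audio purely through the omapdss audio interface, i.e. the audio_* ops on
struct omap_dss_driver. The op names are those of the omapdss interface; the
wrapper itself is illustrative and error handling is trimmed.)

#include <video/omapdss.h>

static int example_start_hdmi_audio(struct omap_dss_device *dssdev,
				    struct omap_dss_audio *audio)
{
	int r;

	/* display off, or current timings without audio support */
	if (!dssdev->driver->audio_supported(dssdev))
		return -ENODEV;

	r = dssdev->driver->audio_config(dssdev, audio);
	if (r)
		return r;

	r = dssdev->driver->audio_enable(dssdev);
	if (r)
		return r;

	return dssdev->driver->audio_start(dssdev);
}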

> But the video side uses the registers, and both having the same ioremapped
> area could possibly lead to both writing to the same register. Or perhaps
> not the same register, but still doing conflicting things at the hw
> level at the same time.
Also, for things like display enable/disable, the audio driver relies on
the display driver. If the display is disabled or the current timings do
not support audio, audio will just not play.
>
>>> I feel a bit uneasy about giving the same ioremapped register space to
>>> two independent drivers... If we could split the registers to video and
>>> audio parts, each driver only ioremapping their respective registers,
>>> it'd be much better.
>>
>> Fwiw, the audio drivers (at least my audio drivers) will not ioremap.
>> They will just take the DMA request number and port. Maybe splitting the
>> register space into audio and video is not practical, as we would end up
>> having many tiny address spaces.
>
> Yes, if there's no clear HDMI block division for video and audio, then
> it doesn't sound good to split them up if we'd have lots of small
> address spaces.
>
> What registers does the audio side need to access?

It only needs access to the DMA audio data port. All other operations 
that the audio driver needs are done through the omapdss audio interface.

> Why are part of the
> registers accessed via the hdmi driver API, and some directly? I imagine
> it'd be better to do either one of those, but not both.

This is because the current omapdss audio interface has no functionality
to handle the DMA transfers for audio. Do you think it would be good to
explore implementing support for that? At this point it is not clear to me
how to do it without duplicating the functionality that ASoC already
provides.

BR,

Ricardo
>
>   Tomi
>
>
Tomi Valkeinen Oct. 23, 2012, 4:17 p.m. UTC | #7
On 2012-10-23 18:42, Ricardo Neri wrote:

>> What registers does the audio side need to access?
> 
> It only needs access to the DMA audio data port. All other operations
> that the audio driver needs are done through the omapdss audio interface.

Hmm, so the audio side only needs the address of one register, for the
audio data, and this address is given to sDMA? You have
OMAP_HDMI_AUDIO_DMA_PORT in the audio code, is it the HDMI_WP_AUDIO_DATA
register?

If so, you could pass only that one address, instead of the whole HDMI
register space?

My point here is that we should make it very clear what the audio side
can access. If we pass the whole HDMI register space, we should, in
theory, be prepared to cope with the audio driver changing any register
there. But if we pass only one register (or certain small ranges), we
know the rest of the registers are safe.

>> Why are part of the
>> registers accessed via the hdmi driver API, and some directly? I imagine
>> it'd be better to do either one of those, but not both.
> 
> This is because in the current omapdss audio interface we have no
> functionality to handle the DMA transfers for audio. Do you think it
> would be good to explore implementing support for that? At this point it
> is not clear for me how to do it without duplicating the functionality
> that ASoC provides for that.

No, if the common ASoC code provides functionality for this, we should
at least try to use it.

I was looking at ti_hdmi_4xxx_ip.h, searching for audio-related registers
by grepping for the "AUD" string. I don't know if it makes any sense
(you're better placed to say), but I think we could pass some of the audio
register ranges to the audio driver for use.

For example, the HDMI_WP_AUDIO_* registers are together, we could give
those to the audio driver if it makes sense. HDMI_CORE_AV_AUD* registers
are a bit scattered, but... I guess it wouldn't be difficult to pass
those also, there's still only a couple separate ranges.

But I also have no problem with having the hdmi audio API in the hdmi
video driver, as we do now. And I think we in any case need a few
functions, even if we would give the audio driver access to the AUD
registers.

 Tomi
Ricardo Neri Oct. 23, 2012, 5:21 p.m. UTC | #8
On 10/23/2012 11:17 AM, Tomi Valkeinen wrote:
> On 2012-10-23 18:42, Ricardo Neri wrote:
>
>>> What registers does the audio side need to access?
>>
>> It only needs access to the DMA audio data port. All other operations
>> that the audio driver needs are done through the omapdss audio interface.
>
> Hmm, so the audio side only needs the address of one register, for the
> audio data, and this address is given to sDMA? You have
> OMAP_HDMI_AUDIO_DMA_PORT in the audio code, is it the HDMI_WP_AUDIO_DATA
> register?
Yes, that is the register it needs.
>
> If so, you could pass only that one address, instead of the whole HDMI
> register space?
Yes, that could work. I thought about that, but the common HDMI driver
would have to know the IP-specific register, which it should not.
Perhaps the IP-specific register address can be passed by an IP-specific
function such as hdmi_get_audio_dma_port for the common HDMI driver to
populate the resource.

Btw, could this be another reason to convert the IP-specific libraries 
to drivers?
>
> My point here is that we should make it very clear what the audio side
> can access. If we pass the whole HDMI register space, we should, in
> theory, be prepared to cope with the audio driver changing any register
> there. But if we pass only one register (or certain small ranges), we
> know the rest of the registers are safe.

True.
>
>>> Why are part of the
>>> registers accessed via the hdmi driver API, and some directly? I imagine
>>> it'd be better to do either one of those, but not both.
>>
>> This is because in the current omapdss audio interface we have no
>> functionality to handle the DMA transfers for audio. Do you think it
>> would be good to explore implementing support for that? At this point it
>> is not clear for me how to do it without duplicating the functionality
>> that ASoC provides for that.
>
> No, if the common ASoC code provides functionality for this, we should
> at least try to use it.
>
> I was looking at the ti_hdmi_4xxx_ip.h, searching for audio related
> registers by searching "AUD" string. I don't know if it makes any sense
> (you're better to have a say on that), but I think we could pass some of
> the audio registers ranges to the audio driver for use.
>
> For example, the HDMI_WP_AUDIO_* registers are together, we could give
> those to the audio driver if it makes sense. HDMI_CORE_AV_AUD* registers
> are a bit scattered, but... I guess it wouldn't be difficult to pass
> those also, there's still only a couple separate ranges.

Even though this would allow our HDMI drivers to be more in line with
what other HDMI drivers do, things like power management and interrupts
are still handled by DSS, unlike on x86, for instance [1][2]. So the audio
drivers will still depend on DSS. Also, the register layout is different
for OMAP5, and the audio registers are even more scattered. Furthermore,
the common HDMI driver would have to know the IP-specific layout to know
which register spaces to expose to the audio driver (another reason to
have IP-specific drivers?). So I would vote for continuing to use the
omapdss audio interface.

>
> But I also have no problem with having the hdmi audio API in the hdmi
> video driver, as we do now. And I think we in any case need a few
> functions, even if we would give the audio driver access to the AUD
> registers.
Yes, interrupt handling, for instance.

BR,

Ricardo


[1]. http://www.spinics.net/lists/linux-omap/msg75969.html
[2]. http://www.spinics.net/lists/linux-omap/msg75968.html

>
>   Tomi
>
>
Tomi Valkeinen Oct. 24, 2012, 4:29 a.m. UTC | #9
On 2012-10-23 20:21, Ricardo Neri wrote:

>> If so, you could pass only that one address, instead of the whole HDMI
>> register space?
> Yes, that could work. I thought about that, but the common HDMI driver
> would have to know the IP-specific register, which it should not.

Argh, of course...

> Perhaps the IP-specific register address can be passed by a IP-specific
> function such as hdmi_get_audio_dma_port for the common HDMI driver to
> populate the resource.
> 
> Btw, could this be another reason to convert the IP-specific libraries
> to drivers?

Yes, I think it makes more and more sense to do that.

> Even though this would allow our HDMI drivers to be more inline with
> what other HDMI drivers do, things like power management and interrupts
> are still handled by DSS, unlike x86, for instance [1][2]. So the audio
> drivers will still depend on DSS. Also, the register layout is different
> for OMAP5 and audio registers are even more scattered. Furthermore, the
> common HDMI driver would have to know the IP-specific layout to know
> what register spaces expose to the audio driver (another reason to have
> IP-specific drivers?). So I would vote for continuing using the omapdss
> audio interface.

Okay.

I think your approach is ok for the time being. I don't like passing the
whole register space to the audio driver, but that's the best we can do
with the current driver.

Have you looked at converting to IP specific drivers? Any idea of the
effort? I'd like it to be done with the omap4 hdmi driver first, before
merging omap5 hdmi into the mainline, if at all possible.

 Tomi
Ricardo Neri Oct. 25, 2012, 2:31 p.m. UTC | #10
On 10/23/2012 11:29 PM, Tomi Valkeinen wrote:
> On 2012-10-23 20:21, Ricardo Neri wrote:
>
>>> If so, you could pass only that one address, instead of the whole HDMI
>>> register space?
>> Yes, that could work. I thought about that but the common HDMI driver
>> would have to know the the IP-specific register, which it should not.
>
> Argh, of course...
>
>> Perhaps the IP-specific register address can be passed by a IP-specific
>> function such as hdmi_get_audio_dma_port for the common HDMI driver to
>> populate the resource.
>>
>> Btw, could this be another reason to convert the IP-specific libraries
>> to drivers?
>
> Yes, I think it makes more and more sense to do that.
>
>> Even though this would allow our HDMI drivers to be more inline with
>> what other HDMI drivers do, things like power management and interrupts
>> are still handled by DSS, unlike x86, for instance [1][2]. So the audio
>> drivers will still depend on DSS. Also, the register layout is different
>> for OMAP5 and audio registers are even more scattered. Furthermore, the
>> common HDMI driver would have to know the IP-specific layout to know
>> what register spaces expose to the audio driver (another reason to have
>> IP-specific drivers?). So I would vote for continuing using the omapdss
>> audio interface.
>
> Okay.
>
> I think your approach is ok for the time being. I don't like passing the
> whole register space to the audio driver, but that's the best we can do
> with the current driver.

What about for now having a function in the IP library, called from the
common driver, to determine the address of the port? Something like [1]:

	res = platform_get_resource_byname(hdmi.pdev,
					   IORESOURCE_MEM, "hdmi_wp");

	aud_offset = hdmi.ip_data.ops->audio_get_dma_port_off();
	aud_res[0].start = res->start + aud_offset;
	aud_res[0].end = res->start + aud_offset + 3;


>
> Have you looked at converting to IP specific drivers? Any idea of the
> effort? I'd like it to be done with the omap4 hdmi driver first, before
> merging omap5 hdmi into the mainline, if at all possible.
>
As a first step, I have started implementing a separate driver for the
TPD12S015, as you suggested in the past. For converting the IP libraries
into drivers, I still don't see how to keep them independent of omapdss
so they can be reused by DaVinci platforms (but afaik they are not using
our libraries anyway). Maybe a thin compatibility layer for omapdss (the
hdmi_panel)? I still don't have a clear idea. :/

BR,

Ricardo

[1]. http://gitorious.org/omap-audio/linux-audio/blobs/ricardon/topic/3.7-hdmi-clean/drivers/video/omap2/dss/hdmi.c#line1098

>   Tomi
>
>
Tomi Valkeinen Oct. 25, 2012, 2:54 p.m. UTC | #11
On 2012-10-25 17:31, Ricardo Neri wrote:
> 
> 
> On 10/23/2012 11:29 PM, Tomi Valkeinen wrote:
>> On 2012-10-23 20:21, Ricardo Neri wrote:
>>
>>>> If so, you could pass only that one address, instead of the whole HDMI
>>>> register space?
>>> Yes, that could work. I thought about that, but the common HDMI driver
>>> would have to know the IP-specific register, which it should not.
>>
>> Argh, of course...
>>
>>> Perhaps the IP-specific register address can be passed by a IP-specific
>>> function such as hdmi_get_audio_dma_port for the common HDMI driver to
>>> populate the resource.
>>>
>>> Btw, could this be another reason to convert the IP-specific libraries
>>> to drivers?
>>
>> Yes, I think it makes more and more sense to do that.
>>
>>> Even though this would allow our HDMI drivers to be more inline with
>>> what other HDMI drivers do, things like power management and interrupts
>>> are still handled by DSS, unlike x86, for instance [1][2]. So the audio
>>> drivers will still depend on DSS. Also, the register layout is different
>>> for OMAP5 and audio registers are even more scattered. Furthermore, the
>>> common HDMI driver would have to know the IP-specific layout to know
>>> what register spaces expose to the audio driver (another reason to have
>>> IP-specific drivers?). So I would vote for continuing using the omapdss
>>> audio interface.
>>
>> Okay.
>>
>> I think your approach is ok for the time being. I don't like passing the
>> whole register space to the audio driver, but that's the best we can do
>> with the current driver.
> 
> What about for now having a function in the IP library to be called from
> the common driver to determine the address of the port? Something like[1]:
> 
>     res = platform_get_resource_byname(hdmi.pdev,
>                        IORESOURCE_MEM, "hdmi_wp");
> 
>     aud_offset = hdmi.ip_data.ops->audio_get_dma_port_off();
>     aud_res[0].start = res->start + aud_offset;
>     aud_res[0].end = res->start + aud_offset + 3;

Yep, I think that looks quite clean. I wonder if there's a macro or func
to calculate the end position... It'd be more readable to set start and
size, instead of start and end.
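
(There is one: DEFINE_RES_MEM(start, size) in <linux/ioport.h> builds a
resource from a start and a size, and resource_size() goes the other way.
Illustrative use for the snippet above, with the 4-byte port width from that
example; the helper name is made up.)

#include <linux/ioport.h>

static struct resource hdmi_audio_port_res(resource_size_t base,
					   resource_size_t aud_offset)
{
	/* .end is filled in as start + size - 1 by the macro */
	struct resource r = DEFINE_RES_MEM(base + aud_offset, 4);

	return r;
}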

>> Have you looked at converting to IP specific drivers? Any idea of the
>> effort? I'd like it to be done with the omap4 hdmi driver first, before
>> merging omap5 hdmi into the mainline, if at all possible.
>>
> As a first step, I have started implementing a separate driver for the
> TPD12S015 as you suggested in the past. For converting the IP libraries

I'm not sure you can do the TPD as a separate driver currently. Or, at
least not in the way I've imagined it. My thinking is that it should
be a "video entity" between OMAP's HDMI output and the HDMI monitor, but
we don't currently support these kinds of chains. That's why I suggested
just cleanly separating the TPD code into its own areas and prefixing the
funcs with tpd, etc.

What have you planned for it?

> into drivers, I still don't see how to keep them independent of omapdss
> to be reusable by DaVinci platforms (but afaik they are not using our
> libraries anyways). Maybe, a thin compatibility layer for omapdss (the
> hdmi_panel)? I still don't  have a clear idea. :/

Yeah, I think we need quite a bit of restructuring to convert the IP
libs to drivers. Or more precisely, redirections.

What I imagine we'd have is some kind of hdmi_ops struct, containing
function pointers for the operations. These calls would go to the IP
driver. The IP driver would create this struct at probe time and register
itself somewhere.

Then the hdmi panel driver would get this hdmi_ops pointer from the place
where it was registered, and use it to call the functions.
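
(A rough sketch of that redirection; all names below are hypothetical, not an
existing omapdss API.)

#include <linux/errno.h>

struct hdmi_ip_ops {
	int (*enable)(void);
	void (*disable)(void);
	unsigned long (*audio_get_dma_port_off)(void);
};

/* "registered somewhere": a single slot the IP driver fills at probe time */
static const struct hdmi_ip_ops *hdmi_ops;

int hdmi_register_ip_ops(const struct hdmi_ip_ops *ops)
{
	if (hdmi_ops)
		return -EBUSY;
	hdmi_ops = ops;
	return 0;
}

/* the hdmi panel driver then calls e.g. hdmi_ops->enable() */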

 Tomi
Ricardo Neri Oct. 26, 2012, 12:46 a.m. UTC | #12
On 10/25/2012 09:54 AM, Tomi Valkeinen wrote:
> On 2012-10-25 17:31, Ricardo Neri wrote:
>>
>>
>> On 10/23/2012 11:29 PM, Tomi Valkeinen wrote:
>>> On 2012-10-23 20:21, Ricardo Neri wrote:
>>>
>>>>> If so, you could pass only that one address, instead of the whole HDMI
>>>>> register space?
>>>> Yes, that could work. I thought about that but the common HDMI driver
>>>> would have to know the the IP-specific register, which it should not.
>>>
>>> Argh, of course...
>>>
>>>> Perhaps the IP-specific register address can be passed by a IP-specific
>>>> function such as hdmi_get_audio_dma_port for the common HDMI driver to
>>>> populate the resource.
>>>>
>>>> Btw, could this be another reason to convert the IP-specific libraries
>>>> to drivers?
>>>
>>> Yes, I think it makes more and more sense to do that.
>>>
>>>> Even though this would allow our HDMI drivers to be more inline with
>>>> what other HDMI drivers do, things like power management and interrupts
>>>> are still handled by DSS, unlike x86, for instance [1][2]. So the audio
>>>> drivers will still depend on DSS. Also, the register layout is different
>>>> for OMAP5 and audio registers are even more scattered. Furthermore, the
>>>> common HDMI driver would have to know the IP-specific layout to know
>>>> what register spaces expose to the audio driver (another reason to have
>>>> IP-specific drivers?). So I would vote for continuing using the omapdss
>>>> audio interface.
>>>
>>> Okay.
>>>
>>> I think your approach is ok for the time being. I don't like passing the
>>> whole register space to the audio driver, but that's the best we can do
>>> with the current driver.
>>
>> What about for now having a function in the IP library to be called from
>> the common driver to determine the address of the port? Something like[1]:
>>
>>      res = platform_get_resource_byname(hdmi.pdev,
>>                         IORESOURCE_MEM, "hdmi_wp");
>>
>>      aud_offset = hdmi.ip_data.ops->audio_get_dma_port_off();
>>      aud_res[0].start = res->start + aud_offset;
>>      aud_res[0].end = res->start + aud_offset + 3;
>
> Yep, I think that looks quite clean. I wonder if there's a macro or func
> to calculate the end position... It'd be more readable to set start and
> size, instead of start and end.
I'll check.

>
>>> Have you looked at converting to IP specific drivers? Any idea of the
>>> effort? I'd like it to be done with the omap4 hdmi driver first, before
>>> merging omap5 hdmi into the mainline, if at all possible.
>>>
>> As a first step, I have started implementing a separate driver for the
>> TPD12S015 as you suggested in the past. For converting the IP libraries
>
> I'm not sure you can do the TPD as a separate driver currently. Or, at
> least not in the way I've imagined it. My thinking is that it should
> be a "video entity" between OMAP's HDMI output and the HDMI monitor, but
> we don't currently support these kinds of chains. That's why I suggested
> just cleanly separating the TPD code into its own areas and prefixing the
> funcs with tpd, etc.
>
> What have you planned for it?

I was thinking that this driver would handle the LS_OE and CT_CP_HPD
GPIOs and the HPD interrupt. Thus, the HDMI driver would call
tpd_ls_oe(1) instead of gpio_set_value. For the HPD interrupt, it would
call the HDMI driver to put the PHY in the LDOON state. Of course, the
driver (or library, or separate functions?) would get its info from the
DT or board data.

The interrupt would have to be handled with care to ensure that the PHY is
never in TX when no cable is attached.

For the DDC signals, only take care of the pull-up resistor settings.

Thus, my view of connecting the OMAP HDMI IP to the HDMI monitor is just 
enabling the shifter or getting the HPD events.
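
(Sketch of what such helpers could look like; the struct and the two-argument
form are illustrative, with the GPIO numbers coming from DT or board data as
described above.)

#include <linux/gpio.h>
#include <linux/types.h>

struct tpd_data {
	int ct_cp_hpd_gpio;
	int ls_oe_gpio;
};

/* enable/disable the level shifter outputs */
static void tpd_ls_oe(struct tpd_data *tpd, bool enable)
{
	gpio_set_value_cansleep(tpd->ls_oe_gpio, enable);
}

/* gate the HPD line towards the connector */
static void tpd_ct_cp_hpd(struct tpd_data *tpd, bool enable)
{
	gpio_set_value_cansleep(tpd->ct_cp_hpd_gpio, enable);
}
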
>
>> into drivers, I still don't see how to keep them independent of omapdss
>> to be reusable by DaVinci platforms (but afaik they are not using our
>> libraries anyways). Maybe, a thin compatibility layer for omapdss (the
>> hdmi_panel)? I still don't  have a clear idea. :/
>
> Yeah, I think we need quite a bit of restructuring to convert the IP
> libs to drivers. Or more precisely, redirections.
>
> What I imagine we'd have is some kind of hdmi_ops struct, containing
> function pointers for the operations. These calls would go to IP driver.
> The IP driver would create this struct at probe time, and register
> itself somewhere.

Could we have a sort of HDMI core driver? It would serve as a hub for
the IP operations and a layer between the panel and the IP driver. The
IP driver would register with this core. Also, this core would handle
HDMI stuff that is specific to the protocol (e.g., N/CTS calculation,
Pclk vs. TMDS clock for deep color, video timings, etc.).
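
(One example of the protocol knowledge such a core could own: the HDMI spec
ties the Audio Clock Regeneration parameters together as
CTS = (f_TMDS * N) / (128 * fs). A worked illustration, with N = 6144 as
recommended for 48 kHz audio; the function name is made up.)

#include <linux/math64.h>

static u32 hdmi_example_cts(u32 tmds_hz, u32 n, u32 fs)
{
	return (u32)div_u64((u64)tmds_hz * n, 128 * fs);
}

/* e.g. hdmi_example_cts(148500000, 6144, 48000) == 148500 */
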
>
> Then hdmi panel driver would get this hdmi_ops pointer from the place
> where it was registered to, and use it to call the functions.

This would contain the omapdss-specific stuff, so that DaVinci can use
the HDMI core and IP drivers.

BR,

Ricardo
>
>   Tomi
>
>

Patch

diff --git a/drivers/video/omap2/dss/hdmi.c b/drivers/video/omap2/dss/hdmi.c
index e5be0a5..c62c5ab 100644
--- a/drivers/video/omap2/dss/hdmi.c
+++ b/drivers/video/omap2/dss/hdmi.c
@@ -60,6 +60,9 @@ 
 static struct {
 	struct mutex lock;
 	struct platform_device *pdev;
+#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
+	struct platform_device *audio_pdev;
+#endif
 
 	struct hdmi_ip_data ip_data;
 
@@ -73,6 +76,13 @@  static struct {
 	struct omap_dss_output output;
 } hdmi;
 
+#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
+#define HDMI_AUDIO_MEM_RESOURCE 0
+#define HDMI_AUDIO_DMA_RESOURCE 1
+static struct resource hdmi_aud_res[2];
+#endif
+
+
 /*
  * Logic for the below structure :
  * user enters the CEA or VESA timings by specifying the HDMI/DVI code.
@@ -765,6 +775,50 @@  static void hdmi_put_clocks(void)
 }
 
 #if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
+static int hdmi_probe_audio(struct platform_device *pdev)
+{
+	struct resource *res;
+
+	hdmi.audio_pdev = ERR_PTR(-EINVAL);
+
+	res = platform_get_resource(hdmi.pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		DSSERR("can't get IORESOURCE_MEM HDMI\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Pass this resource to audio_pdev.
+	 * Audio drivers should not remap it
+	 */
+	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].start = res->start;
+	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].end = res->end;
+	hdmi_aud_res[HDMI_AUDIO_MEM_RESOURCE].flags = IORESOURCE_MEM;
+
+	res = platform_get_resource(hdmi.pdev, IORESOURCE_DMA, 0);
+	if (!res) {
+		DSSERR("can't get IORESOURCE_DMA HDMI\n");
+		return -EINVAL;
+	}
+
+	/* Pass this resource to audio_pdev */
+	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].start = res->start;
+	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].end = res->end;
+	hdmi_aud_res[HDMI_AUDIO_DMA_RESOURCE].flags = IORESOURCE_DMA;
+
+	/* create platform device for HDMI audio driver */
+	hdmi.audio_pdev = platform_device_register_simple(
+							  "omap_hdmi_audio",
+							  -1, hdmi_aud_res,
+							   ARRAY_SIZE(hdmi_aud_res));
+	if (IS_ERR(hdmi.audio_pdev)) {
+		DSSERR("Can't instantiate hdmi-audio\n");
+		return PTR_ERR(hdmi.audio_pdev);
+	}
+
+	return 0;
+}
+
 int hdmi_compute_acr(u32 sample_freq, u32 *n, u32 *cts)
 {
 	u32 deep_color;
@@ -1044,6 +1098,11 @@  static int __init omapdss_hdmihw_probe(struct platform_device *pdev)
 		goto err_panel_init;
 	}
 
+#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
+	r = hdmi_probe_audio(pdev);
+	if (r)
+		goto err_audio_dev;
+#endif
 	dss_debugfs_create_file("hdmi", hdmi_dump_regs);
 
 	hdmi_init_output(pdev);
@@ -1052,6 +1111,10 @@  static int __init omapdss_hdmihw_probe(struct platform_device *pdev)
 
 	return 0;
 
+#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
+err_audio_dev:
+	hdmi_panel_exit();
+#endif
 err_panel_init:
 	hdmi_put_clocks();
 	return r;
@@ -1066,6 +1129,11 @@  static int __exit hdmi_remove_child(struct device *dev, void *data)
 
 static int __exit omapdss_hdmihw_remove(struct platform_device *pdev)
 {
+#if defined(CONFIG_OMAP4_DSS_HDMI_AUDIO)
+	if (!IS_ERR(hdmi.audio_pdev))
+		platform_device_unregister(hdmi.audio_pdev);
+#endif
+
 	device_for_each_child(&pdev->dev, NULL, hdmi_remove_child);
 
 	dss_unregister_child_devices(&pdev->dev);