[PATCHv7,2/5] ASoC: eukrea-tlv320: Add DT support.

Message ID 1382616832-23593-2-git-send-email-denis@eukrea.com (mailing list archive)
State New, archived

Commit Message

Denis Carikli Oct. 24, 2013, 12:13 p.m. UTC
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: devicetree@vger.kernel.org
Cc: Liam Girdwood <lgirdwood@gmail.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: alsa-devel@alsa-project.org
Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Eric Bénard <eric@eukrea.com>

Signed-off-by: Denis Carikli <denis@eukrea.com>
---
ChangeLog v6->v7:
- The cleanups went into another patch.
- The support for fsl_ssi doesn't depend on dt anymore.
- platform_name = "imx-ssi.0" is still set when in non-dt mode.
---
 .../devicetree/bindings/sound/eukrea-tlv320.txt    |   23 +++++
 sound/soc/fsl/Kconfig                              |    5 +-
 sound/soc/fsl/eukrea-tlv320.c                      |   88 ++++++++++++++++++--
 3 files changed, 107 insertions(+), 9 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/sound/eukrea-tlv320.txt

Comments

Kumar Gala Oct. 25, 2013, 3:39 a.m. UTC | #1
On Oct 24, 2013, at 7:13 AM, Denis Carikli wrote:

> Cc: Rob Herring <rob.herring@calxeda.com>
> Cc: Pawel Moll <pawel.moll@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Stephen Warren <swarren@wwwdotorg.org>
> Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
> Cc: devicetree@vger.kernel.org
> Cc: Liam Girdwood <lgirdwood@gmail.com>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: alsa-devel@alsa-project.org
> Cc: Sascha Hauer <kernel@pengutronix.de>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: Eric Bénard <eric@eukrea.com>
> 
> Signed-off-by: Denis Carikli <denis@eukrea.com>
> ---
> ChangeLog v6->v7:
> - The cleanups went into another patch.
> - The support for fsl_ssi doesn't depend on dt anymore.
> - platform_name = "imx-ssi.0" is still set when in non-dt mode.
> ---
> .../devicetree/bindings/sound/eukrea-tlv320.txt    |   23 +++++
> sound/soc/fsl/Kconfig                              |    5 +-
> sound/soc/fsl/eukrea-tlv320.c                      |   88 ++++++++++++++++++--
> 3 files changed, 107 insertions(+), 9 deletions(-)
> create mode 100644 Documentation/devicetree/bindings/sound/eukrea-tlv320.txt
> 
> diff --git a/Documentation/devicetree/bindings/sound/eukrea-tlv320.txt b/Documentation/devicetree/bindings/sound/eukrea-tlv320.txt
> new file mode 100644
> index 0000000..8791037
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/sound/eukrea-tlv320.txt
> @@ -0,0 +1,23 @@
> +Audio complex for Eukrea boards with tlv320aic23 codec.
> +
> +Required properties:
> +- compatible : "eukrea,eukrea-tlv320"
> +- model : The user-visible name of this sound complex.
> +- ssi-controller : The phandle of the SSI controller.
> +- audio-codec : The phandle of the tlv320aic23 audio codec.
> +- mux-int-port : The internal port of the i.MX audio muxer (AUDMUX).
> +- mux-ext-port : The external port of the i.MX audio muxer.

mux-int-port & mux-ext-port should probably be vendor prefixed. (I know there are examples in tree where this isn't done).
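
A vendor-prefixed form would presumably look something like this
(hypothetical; the prefix itself is only a suggestion):

	fsl,mux-int-port = <2>;
	fsl,mux-ext-port = <3>;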

> +
> +Note: The AUDMUX port numbering should start at 1, which is consistent with
> +the hardware manual.
> +
> +Example:
> +
> +	sound {
> +		compatible = "eukrea,eukrea-tlv320";
> +		model = "imx51-eukrea-tlv320aic23";
> +		ssi-controller = <&ssi2>;
> +		fsl,audio-codec = <&tlv320aic23>;

this is inconsistent with the binding description above, is it audio-codec or fsl,audio-codec?

> +		mux-int-port = <2>;
> +		mux-ext-port = <3>;
> +	};
Mark Brown Oct. 26, 2013, 11:48 a.m. UTC | #2
On Fri, Oct 25, 2013 at 08:15:13PM +0100, Grant Likely wrote:
> On Thu, 24 Oct 2013 14:13:49 +0200, Denis Carikli <denis@eukrea.com> wrote:

> > +- mux-int-port : The internal port of the i.MX audio muxer (AUDMUX).
> > +- mux-ext-port : The external port of the i.MX audio muxer.

> > +Note: The AUDMUX port numbering should start at 1, which is consistent with
> > +the hardware manual.

> Looks okay to me. I've not been following the sound bindings very
> closely. Are the described properties a common pattern for sound complex
> bindings?

They're standard for i.MX - it's a mux within the SoC.
Matt Sealey Oct. 29, 2013, 4:56 p.m. UTC | #3
On Sat, Oct 26, 2013 at 6:48 AM, Mark Brown <broonie@kernel.org> wrote:
> On Fri, Oct 25, 2013 at 08:15:13PM +0100, Grant Likely wrote:
>> On Thu, 24 Oct 2013 14:13:49 +0200, Denis Carikli <denis@eukrea.com> wrote:
>
>> > +- mux-int-port : The internal port of the i.MX audio muxer (AUDMUX).
>> > +- mux-ext-port : The external port of the i.MX audio muxer.
>
>> > +Note: The AUDMUX port numbering should start at 1, which is consistent with
>> > +the hardware manual.
>
>> Looks okay to me. I've not been following the sound bindings very
>> closely. Are the described properties a common pattern for sound complex
>> bindings?
>
> They're standard for i.MX - it's a mux within the SoC.

And these bindings all suck, which is not a fault of Eukrea. There
shouldn't be yet another duplication of the sound{} node though.

There is nothing specific to Eukrea in any of this. This is a TI audio
codec connected via i.MX SSI and routed through the SoC fabric by one
or another i.MX AUDMUX port.

Maybe this is a reasonable place to ask since I am holding off on
doing any audio support for the i.MX platform I want to push a DT for,
and I am still bashing my head over the logic in this. Why is this so
much more complicated than it needs to be? I can see codec specific
and Linux-specific things floating in trees. I can see redundancy in
property naming and misplacement of properties.. this whole thing
needs to be cleaned up because it's been done as "binding a Linux
driver into the device tree" and this is another example of the same
thing - taking the way ASoC is working right now and dumping platform
data externally.

The existing MXS and IMX SGTL5000 bindings are not SGTL5000 specific.
The sound node references a phandle for a codec - those properties
should be present in the codec node. In this vein, Eukrea's bindings
should not be codec or platform specific at all. This is a TI audio
codec being referenced in an SoC-specific audio fabric definition.

The better way of thinking here is to stop being so insular with
bindings. One binding for all these SoCs would suffice; codecs are
codec-specific and codec properties like audio routing from mixers to
pins should not be in the audio fabric definition. You can find the
codec from the existing binding; you can find the controller. One
thing it NEVER refers to is the audmux controller.. but it does
magically specify the internal and external ports required.. and all
the drivers (so far, sgtl5000 and wm8962) magically know what they're
doing here.

It is NOT some kind of undefined, unholy evil to reach outside of the
node you found the compatible property of to look elsewhere - the
fabric driver in this case has all the capability to dereference the
codec, controller, audiomux etc. nodes it's got phandles to, and find
the properties it needs to stuff into the handoff data for dai and
dapm. Sure, you can have a generic driver have some codec-specific
stuff in there where it has to be handled externally to a codec
driver.. This just shows there should be another level in the driver
subsystem (actually there sort of is already, isn't there?).

As it stands, neither imx-sgtl5000.c nor imx-wm8962.c does anything with
the controller or codec nodes except stuff them into a handoff data
structure for the rest of the subsystem to glue ASoC together.
eukrea-tlv320.c is the same way even in this patch and imx-mc13783 is
languishing..

They are essentially line-for-line identical except for a couple of hacks
here and there - things that are understandably outside of the control
of the codec driver itself.  Some of it could even be driven down to
lower levels of the subsystem.

What we should have is something simpler and more "realistic" vs.
hardware design, and imx-sgtl5000, imx-wm8962, imx-mc13783, and
eukrea-tlv320 should be merged into a single fabric driver with a
single all-encompassing and altogether simpler binding. Please don't
continue to layer crappier parts of the Linux abstractions into the
device tree. The device tree should present the structure and layout
of the devices - not of the OS subsystems involved. Parsing the device
tree at various points in the subsystems - even if you end up doing it
3 times over - is the correct way. Not to try and consolidate and
optimize your tree to make your drivers smaller.

As it stands this could be easily changed right now with a couple
tweaks; compatibility can be maintained with the older bindings, too.
I'll write it up (binding def at least, I can't fix the drivers in a
reasonable time, unfortunately) and throw it out tonight if we're all
agreeable..
Mark Brown Oct. 29, 2013, 5:44 p.m. UTC | #4
On Tue, Oct 29, 2013 at 11:56:54AM -0500, Matt Sealey wrote:

> Maybe this is a reasonable place to ask since I am holding off on
> doing any audio support for the i.MX platform I want to push a DT for,
> and I am still bashing my head over the logic in this. Why is this so
> much more complicated than it needs to be? I can see codec specific
> and Linux-specific things floating in trees. I can see redundancy in
> property naming and misplacement of properties.. this whole thing
> needs to be cleaned up because it's been done as "binding a Linux
> driver into the device tree" and this is another example of the same
> thing - taking the way ASoC is working right now and dumping platform
> data externally.

Can you be more specific about all these points please?  It's hard to
engage without concrete technical discussion which there is very little
of in your rather lengthy mail.  Please also take a look at the previous
discussions on this stuff and make sure you understand the needs of more
advanced audio subsystems like those found in smartphones.

> should not be codec or platform specific at all. This is a TI audio
> codec being referenced in an SoC-specific audio fabric definition.

Please take a look at Morimoto-san's work on the generic sound card if
you want to work on a generic card, it'd be good if some of the people
complaining about this stuff could help him work on that as he doesn't
seem to be getting any help or review from anyone.
Matt Sealey Oct. 30, 2013, 3:05 a.m. UTC | #5
On Tue, Oct 29, 2013 at 12:44 PM, Mark Brown <broonie@kernel.org> wrote:
> On Tue, Oct 29, 2013 at 11:56:54AM -0500, Matt Sealey wrote:
>
>> Maybe this is a reasonable place to ask since I am holding off on
>> doing any audio support for the i.MX platform I want to push a DT for,
>> and I am still bashing my head over the logic in this. Why is this so
>> much more complicated than it needs to be? I can see codec specific
>> and Linux-specific things floating in trees. I can see redundancy in
>> property naming and misplacement of properties.. this whole thing
>> needs to be cleaned up because it's been done as "binding a Linux
>> driver into the device tree" and this is another example of the same
>> thing - taking the way ASoC is working right now and dumping platform
>> data externally.
>
> Can you be more specific about all these points please?  It's hard to
> engage without concrete technical discussion which there is very little
> of in your rather lengthy mail.  Please also take a look at the previous
> discussions on this stuff and make sure you understand the needs of more
> advanced audio subsystems like those found in smartphones.

To be specific, there are several "braindeadisms" in the current
bindings for mxs-audio-sgtl5000 and imx-audio-sgtl5000 and
imx-audio-wm8962 existing bindings which are being dragged into
Eukrea's bindings.

None of the three bindings above do anything that is particularly
codec *OR* SoC-variant specific.

Now this patch comes in that defines eukrea-tlv320 as a special magic
binding but 90% of it is identical to the existing bindings. Still,
none of it is codec, SoC-variant or even Eukrea-specific.

From imx-audio-sgtl5000, the example is covering all the definitions
in the binding documented before it:

sound {
	compatible = "fsl,imx51-babbage-sgtl5000",
		     "fsl,imx-audio-sgtl5000";
	model = "imx51-babbage-sgtl5000";
	ssi-controller = <&ssi1>;
	audio-codec = <&sgtl5000>;
	audio-routing =
		"MIC_IN", "Mic Jack",
		"Mic Jack", "Mic Bias",
		"Headphone Jack", "HP_OUT";
	mux-int-port = <1>;
	mux-ext-port = <3>;
};

and..

sound {
	compatible = "fsl,imx6q-sabresd-wm8962",
		     "fsl,imx-audio-wm8962";
	model = "wm8962-audio";
	ssi-controller = <&ssi2>;
	audio-codec = <&codec>;
	audio-routing =
		"Headphone Jack", "HPOUTL",
		"Headphone Jack", "HPOUTR",
		"Ext Spk", "SPKOUTL",
		"Ext Spk", "SPKOUTR",
		"MICBIAS", "AMIC",
		"IN3R", "MICBIAS",
		"DMIC", "MICBIAS",
		"DMICDAT", "DMIC";
	mux-int-port = <2>;
	mux-ext-port = <3>;
};

and..

sound {
	compatible = "fsl,imx28-evk-sgtl5000",
		     "fsl,mxs-audio-sgtl5000";
	model = "imx28-evk-sgtl5000";
	saif-controllers = <&saif0 &saif1>;
	audio-codec = <&sgtl5000>;
};


Right, so here are the main braindeadisms that immediately jump out:

1) saif-controllers and ssi-controller could be a single property -
controllers - which lists the controllers. It will be obvious which kind
of controller it is by delving into that node referenced by the
phandle.

2) audio-codec is redundantly named as we're defining audio devices
here.. but it should/could be "codecs" since it should be an array of
phandles just like controllers (there's no reason you can't have
multiple codecs attached to the same i2s bus, it's just not entirely
common).
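
A merged form covering both 1) and 2) might look like this
(hypothetical sketch; the property names are only suggestions):

sound {
	controllers = <&ssi1>;
	codecs = <&sgtl5000>;
};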

3) the compatible properties define a specific board and codec style
which simply isn't used in the drivers, because there's no opportunity
to use it. The only reason 3 different compatible properties exist are
to probe the 3 different drivers which all do nearly exactly the same
thing - collect controllers, codecs, the routing information (for
power management, of all reasons..) and stuff them in a handoff
structure to allow ASoC to individually probe components.

4) The drivers themselves just call imx_audmux_v2_foo() with the
contents of the mux-int-port and mux-ext-port properties - this is
Linux subsystem layout dictated by quickly and poorly architected
drivers leaking into the device tree. This is almost entirely wrong
from the conceptual purpose of a device tree and how it defines
structure. You may be able to *assume* that given those properties,
this is what needs to be done, but you shouldn't be describing
hardware this way.

A lot of it comes from this weird concept that for every file in Linux
that can probe on it's own, a device tree needs to somehow define ONE
node per file, and define platform properties in that node. This comes
eventually from the platform_data concept the original drivers were
written against.

Writing device tree bindings is not about looking at an existing
driver, platform, or private data structure and creating a new node
for every one and shuffling the data into it. It is for describing
hardware. In this case, none of the bindings above describe hardware,
they describe the current driver structure in Linux - worse still,
they were describing it as that abstraction was being defined, which
also does not describe hardware, per se.

5) for some reason the drivers picking up these nodes do some really
strange things like use the "model" property to fill in the card name.
This is a perfectly valid case, but everyone has missed the
opportunity to give this an actual friendly name. If you get to a
"desktop" and open the mixer properties in PulseAudio,
"imx51-babbage-sgtl5000" is not a particularly useful name, nor is
"MXC_SPDIF" (from the BSP, I always hated this) or "imx-spdif". This
is purely a nitpick at how this ends up in userspace, but it's kind of
important as it shows nobody is really caring about how these items
get represented in the real system.

A more reasonable and correct binding for ALL of this hardware,
whereby it would be able to determine all of the current information
while allowing a single set of parsing, configuring, stuffing handoff
data into card and dai stuff for ASoC is an actual tree structure, not
a large unwieldy single node which defines everything as peer
properties.

Maybe;

codec@b {
    controllers = <&ssi2>;
    audio-sinks = <&connector1>, <&connector1>, <&mic_in>;
    audio-sources = <&hp_l>, <&hp_r>, <&connector2>;
}

ssi2@7000000 {
	// there's no reason a pinctrl node has to JUST contain one property?
	pinctrl-0 = <&audmux_ssi2_3_2>;
};

audmux_ssi2_3_2: as232 {
   fsl,pins = < ... >;
   fsl,audmux = < ... >;
};

sound {
	// if there IS anything to workaround that's hardware-specific...
	// we can still do something
	compatible = "fsl,imx51-audio", "fsl,imx-audio";
	// this is not a 'compatible' or 'device_type'; model is a freeform
	// string that ALSA happens to make user facing, so make it nice
	model = "Front Audio Ports";
	codecs = <&codec>;
	controllers = <&ssi2>;

	connectors {
		connector1: c1@1 {
			compatible = "rca,audio-jack";
			model = "Speakers";
		};
	};
};

Yes, that's a lot of work, and a lot of it describes fixed properties
of the codecs, but where those fixed features are not used on a board
design, they can be omitted..

The codec node could also maybe contain the audio-routing property (I
am not sure it ever turns out to really be 'card' specific), although
this is a complete SNAFU as well since none of the defined properties
match the hardware in the cases described above - they're defined by
picking what the existing Linux drivers used. "Ext Spk", for example,
in the SGTL5000 bindings is a Linuxism dragged into the tree. There
are many ways to define the actual hardware outputs depending on which
part of the docs you trust most, but with strings like "LINE_OUT" and
"Ext Spk" this is not only poorly documented in its intent, but
erroneous in describing actual hardware (if they're external pins/pads,
none of the names match the pad names in the electrical spec for the
SGTL5000, and a couple of the internal connections are also misnamed..).

A cursory look at the actual code, and it turns out the audio-routing
property - which nVidia's drivers call nvidia,audio-routing and TI's
call ti,audio-routing by the way, which says a lot about the property
definition in itself - is defined as pairs of strings for "sink" and
"source" and I would nitpick about the order of those to be honest -
but the naming is totally defined by the driver logic.

~~~
  Each entry is a pair of strings, the first being the connection's sink,
  the second being the connection's source. Valid names could be power
  supplies, SGTL5000 pins, and the jacks on the board:

  Power supplies:
   * Mic Bias

  SGTL5000 pins:
   * MIC_IN
   * LINE_IN
   * HP_OUT
   * LINE_OUT

  Board connectors:
   * Mic Jack
   * Line In Jack
   * Headphone Jack
   * Line Out Jack
   * Ext Spk
~~~

nVidia's definition of nvidia,audio-routing:

~~~
  Each entry is a pair of strings, the first being the connection's sink,
  the second being the connection's source. Valid names for sources and
  sinks are the WM8903's pins (documented in the WM8903 binding document),
  and the jacks on the board:
~~~

Note the difference here. The nVidia doc says where these valid
source and sink names come from. SGTL5000 does not (nor does WM8962
for that matter), it just lists a bunch in the "card" binding. So
these definitions are, as above, poorly documented and poorly defined,
and even if you make the assumption - they don't match the SGTL5000 or
WM8962 docs in the i.MX cases. Is the sink or source a freeform
string? Which one? In which cases?

Also, who says the connector on *my* board is called "Ext Spk"? On the
Efika MX, we actually call it the internal speaker because it's inside
the case. What if you had one connected to a 4- or 5-conductor jack
that supported stereo microphone on the same jack? Can you define a
sink and source twice (i.e. MIC_IN -> Headphone Jack and Headphone
Jack -> HP_OUT)? This isn't covered by ANY of the bindings, complex or
not (I can give a use case on a real board, but I don't have access to
it anymore) except in one real-life example. Those strings should be
freeform where the connector is board-specific, but they're actually
hardcoded into the drivers.
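
In the existing string-pair format that would presumably have to be
written as something like (hypothetical):

audio-routing =
	"Headphone Jack", "HP_OUT",
	"MIC_IN", "Headphone Jack";

but none of the bindings say whether the same jack may appear as both
a sink and a source like this.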

Every one of these is also seemingly going to need a custom set of
controls for an external amplifier (I have a use case on a real board
that I *do* have access to) or a headphone jack insertion gpio (TI use
ti,jack-det-gpio, I need one on my board too..) which, while it is
seemingly necessary to define them in the top level card driver under
Linux, doesn't describe the real connection at a hardware level. They
have been stuffed in the top level "card" node because of the former..

I notice almost 90% code duplication in the drivers that run off these
nodes: fsl/imx-sgtl5000.c, fsl/imx-mc13783.c, fsl/imx-wm8962.c, (now)
fsl/eukrea-tlv320.c, mxs/mxs-sgtl5000.c and, ironically, since it uses
the wm8962 codec but doesn't support device tree yet..
samsung/tobermory.c, which if it got support for device tree would
probably be, except for constant string definitions, line-for-line
identical to the imx one.

And duplicated functionality everywhere. I don't know how the i.MX
version of it ended up without a vendor prefix if ti and nvidia did
the same thing (in the case it got "standardized", how come nobody
updated the drivers to follow suit?)
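
Side by side, the same kind of link ends up spelled three different
ways (sketch; "sink"/"source" stand in for real pairs):

nvidia,audio-routing = "sink", "source";	// nVidia boards
ti,audio-routing = "sink", "source";		// TI boards
audio-routing = "sink", "source";		// i.MX boards, no prefix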

>> should not be codec or platform specific at all. This is a TI audio
>> codec being referenced in an SoC-specific audio fabric definition.
>
> Please take a look at Morimoto-san's work on the generic sound card if
> you want to work on a generic card, it'd be good if some of the people
> complaining about this stuff could help him work on that as he doesn't
> seem to be getting any help or review from anyone.

I'll have a good look at it some more if I can find anything more than
the file in Linus' current tree and its rather small history of
commits.. if anything new popped up it didn't hit my radar as I'm
apparently not subscribed to every Linux mailing list under the sun
(nor do I have the time to watch every one of them).

My understanding is that it should fix some of this, but what it also
seems to be doing is identifying some slightly weird design in the
ASoC framework as it goes:

in simple-card.c dai_init()
      ret = snd_soc_dai_set_sysclk(dai, 0, set->sysclk, 0);

in imx-sgtl5000.c dai_init()

    ret = snd_soc_dai_set_sysclk(rtd->codec_dai, SGTL5000_SYSCLK,
             data->clk_frequency, SND_SOC_CLOCK_IN);

In theory, the purpose of these functions is to notify the codec of a
change to one of its input clocks. In what world would the clk_id and
the direction be able to be 0 from simple-card if every driver we can
see here seems to need something very specific?

Why would the card driver need to set the sysclk in a very
codec-specific way like this *at init time*? Surely this gets hammered
into the codec as it starts up its first stream (and possibly torn
down by the upper layers afterwards, and informed again next stream)?

Same for setting the dai_fmt in the same places in both above drivers
(and all the rest).

Same for the bias level callbacks at the card level which are called
by abstraction from another part of the framework, but they nigh on
always set codec-level configuration items such as PLLs and sysclks in
most if not all cases. The codec has bias level callbacks too, which
get called after the card callbacks - I can't find a good example of
where snd_soc_card isn't ultimately dereferenced to snd_soc_dai or
snd_soc_codec member and then passed to lower level subsystems. They
never seem to do anything at the *card* level.

Such "one-time init" and weird layering is, I'm guessing and I haven't
looked too hard, a workaround for the framework not doing the right
thing while the drivers get up to scratch to be able to do the right
thing in the absence of some other functionality.. Surely the codec
should be able to query the other layers for this information at
runtime, or even set the "card" callbacks on codec probe? In whatever
situation, this information and structure shouldn't leak into device
tree bindings.

What we have right now is a single node trying to describe all the
properties and pointers, within itself, required to link these things
together in what ends up being a very Linux-specific way. What it
should be is a node which contains enough information to further walk
the tree and collect all the information based on some commonality
between other bindings (for instance, as above) without being scared
for it to link to a node and have that node contain a link to its
logical (if not tree) parent.

What I'm looking at now needs a hell of a lot of thought, and all I
can say right now with certainty is not "how it should be" (yet) but
that what we have right now doesn't cut it, and only goes so far as to
use device trees as a place to shove platform_data.

Matt Sealey <neko@bakuhatsu.net>
Matt Sealey Oct. 30, 2013, 5:19 a.m. UTC | #6
On Tue, Oct 29, 2013 at 12:44 PM, Mark Brown <broonie@kernel.org> wrote:
>
> Please take a look at Morimoto-san's work on the generic sound card if
> you want to work on a generic card, it'd be good if some of the people
> complaining about this stuff could help him work on that as he doesn't
> seem to be getting any help or review from anyone.

Okay I had another thought about it over a cup of tea and a biscuit.
One thing I'm missing is a description (with pictures PLEASE) of the
ASoC framework and how it really fits together, since I'm just going
from the bits I see and the code I've spent a couple hours browsing
with LXR and help from a git checkout.

We can come down to brass tacks on how we'd want to describe a board,
from a hardware/board design level which informs essentially what the
layout of the device tree will be, and who owns what property.

But first we have to decide: if we're the Linux ASoC framework, where
do we want to start parsing the tree, and what components do we need
to discover?

I think in the end, what ends up the most flexible way is if in pretty
much any situation you start with the codec or the controller, which
can have a phandle to each other.

On i.MX the audmux settings are a function of the controller and the
iomuxc configuration - audmux connects an SSI or ESAI from an internal
port to an external port, and the iomuxc connects the external port to
actual pads on the chip, connected on the board to the codec. So we
need a handle to the audmux controller and possibly an information
structure (at minimum, internal and external ports) to throw at it.
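
In DT terms that handle could be as small as a single property in the
controller node, something like (hypothetical property and cells):

&ssi2 {
	fsl,audmux = <&audmux 2 3>;	// mux phandle, internal port, external port
};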

The audio routing seems to be mostly a property of internal codec
specific features like mixers and DACs going to external ports -
headphones, speakers, microphone in. This would make it a codec
feature.

So the Linux driver structure is not being "catered to" in this sense,
since all we've got to configure the board is the fact that we're on an
imx51, we want to use ssi2 and there's an sgtl5000 codec somewhere.
There's some audio mux configuration going on (and io mux
configuration too) but this is under the purview of the controller.

How would a top level driver for Linux do this?

Well, I don't know how it'd actually get called to do it, maybe just
start off with the codec compatible property and then dereference the
phandle to the controller, and stuff the data in there. If all it has
to do is fill out card structure and register it, then probing the
codec, finding its controller, and putting in the information that
maps the OF compatible property for those to the name of the driver
(which is what it does now) is a table of properties vs. static
strings.. right? It would be up to the controller driver (fsl-ssi
here) to do pinctrl and audmux magic when initialized.

I am not entirely sure I understand why you'd need much more than
that, nobody (nvidia, ti, samsung) is defining more than a controller
and codec handle, and all of them do some variant of this:

~~~~

data->codec_clk = clk_get(&codec_dev->dev, NULL);
if (IS_ERR(data->codec_clk)) {
	ret = PTR_ERR(data->codec_clk);
	goto fail;
}

data->clk_frequency = clk_get_rate(data->codec_clk);

data->dai.name = "HiFi";		// not sure what effect this has
data->dai.stream_name = "HiFi";		// not sure what effect this has
// hardcoded into the codec driver, which is how we go find it??
data->dai.codec_dai_name = "sgtl5000";
data->dai.codec_of_node = codec_np;	// or do we find it from this?
data->dai.cpu_of_node = ssi_np;		// same question..?
data->dai.platform_of_node = ssi_np;	// or this?
// why bother having this if card->clk_frequency is set above
data->dai.init = &imx_sgtl5000_dai_init;
// and card->dai.dai_fmt is here and accessible from the snd_soc_codec structure?
data->dai.dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
	SND_SOC_DAIFMT_CBM_CFM;
data->card.dev = &pdev->dev;

// ends up in a PulseAudio dialog..
ret = snd_soc_of_parse_card_name(&data->card, "model");
if (ret)
	goto fail;
// fills out dapm sink/sources?
ret = snd_soc_of_parse_audio_routing(&data->card, "audio-routing");
if (ret)
	goto fail;

data->card.num_links = 1;
data->card.owner = THIS_MODULE;
data->card.dai_link = &data->dai;
data->card.dapm_widgets = imx_sgtl5000_dapm_widgets;
data->card.num_dapm_widgets = ARRAY_SIZE(imx_sgtl5000_dapm_widgets);
~~~~

As a hardware description, it works fine, matches what's in the SoC
and the connections on the board. Just look for a codec you know you
support and walk the tree.

The only REALLY configurable parts are the controller (cpu/platform)
and codec nodes, the codec_dai_name is used to probe the ACTUAL codec
driver, dai.init() is used to for some reason set the sysclk and
dai_fmt to the codec (.. but it's already there, in the card
structure, and every codec knows its parent card.. so someone explain
to me why simple-card needs to do these two things by function call?)

What is really missing is a logical, topological view of the audio
routing - which ASoC only really uses to do power management and
manage bias levels - and some of that also comes into play when you
think about physical connections like jacks (which may have detection
logic, no need to turn on that path if there's no device connected),
external amplifiers (which is essentially the same concept as jack
detection logic from the opposite side - you need to turn it on or
there's no sound, but no need to turn it on if there's no sound) and
those connectors - and what they really do in the design - are
important to describe. "imx-spdif" on my board is actually an
input to the HDMI codec, so in reality I want everyone outside in
userland to know this is "HDMI Audio Out" and especially not just bark
the driver name at them.
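
With the existing bindings that much is at least trivially fixable in
the DTS, e.g.:

	model = "HDMI Audio Out";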

3.5mm plugs, external amplifier circuits, speakers, routing to other
codecs, possibly with some kind of jack detection logic - these need
naming and pairing since they can't be hardcoded in the driver; who
knows what LINE_OUT is really connected to, but it may not be "Ext
Spk". It may seem laborious to do so, but in the end you can go look
at the way Marvell and Samsung did their pinctrl drivers - a lot of
tiny nodes with very little information in them. It's a clutter, but how
else would you parse out a configurable setup? Having audio routing be
defined like pinctrl (a handle to a node which contains properties
full of information, which may be entirely device-specific) seems to
make sense. If you need to define a route/path you can use phandles to
define the pairs and the content of the nodes at those phandles would
define the source/sink properly.
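
Purely as a sketch of that shape (all node and property names
hypothetical):

routes {
	route@0 {
		sink = <&hp_jack>;
		source = <&codec_hp_out>;
	};
};

where hp_jack and codec_hp_out would label nodes that describe the
jack and the codec output in their own right.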

I guess what I am saying is that having a top level "audio card
description" makes no sense from a device tree perspective, as long as
the bindings take into account being able to walk from a device you
can match "compatible" with, to every other node you need to reference
to make up a workable setup, you don't need ANOTHER node with ANOTHER
compatible property just to collect them all together.

Also: if the current sound node persists, it is missing a reg property
in all of the bindings. All nodes need reg properties, even if it's
@0, @1. If possible, whatever OS audio subsystem should take on the
reg property as the index of the card, otherwise you get logical
device re-ordering in userspace which pisses me off immensely when
ALSA pcm0 is HDMI on one boot and pcm1 is HDMI on another depending on
deferred probe or some driver init race. I am not sure how that'd get
resolved if the node goes away and the tree gets parsed from the codec
or controller outwards by phandle dereference..
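
i.e. something like (hypothetical):

sound@0 {
	compatible = "fsl,imx51-audio";
	reg = <0>;	// stable card index
};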

Still thinking about it anyway..
Mark Brown Oct. 30, 2013, 9:29 p.m. UTC | #7
On Tue, Oct 29, 2013 at 10:05:40PM -0500, Matt Sealey wrote:

> To be specific, there are several "braindeadisms" in the current
> bindings for mxs-audio-sgtl5000 and imx-audio-sgtl5000 and
> imx-audio-wm8962 existing bindings which are being dragged into
> Eukrea's bindings.

*Please* also try to be a bit more concise and focused, this mail is
very long and verbose for the amount of content in it.  As I said in my
previous e-mail please review the previous extensive discussions on
these topics and avoid going over old ground; similarly, like I said, if
you want to work on making things more generic that's great but please
work with the people doing this already.

Endlessly going over old ground and starting from scratch is not going
to help move anything forward.

> 1) saif-controllers and ssi-controller could be a single property -
> controllers - which lists the controllers. It will be obvious which kind
> of controller it is by delving into that node referenced by the
> phandle.

Picking a consistent name for this might be nice, yes.  Feel free to
send patches...

> 2) audio-codec is redundantly named as we're defining audio devices
> here.. but it should/could be "codecs" since it should be an array of
> phandles just like controllers (there's no reason you can't have
> multiple codecs attached to the same i2s bus, it's just not entirely
> common).

I don't think bikeshedding the name is a worthwhile activity.  

If you want to convert it into an array you're adding substantial
complication to both defining the link and the software required to
support it, it's definitely specialist hardware time and there's a lot
of ways that cat can be skinned.

> 3) the compatible properties define a specific board and codec style
> which simply isn't used in the drivers, because there's no opportunity
> to use it. The only reason 3 different compatible properties exist are
> to probe the 3 different drivers which all do nearly exactly the same
> thing - collect controllers, codecs, the routing information (for
> power management, of all reasons..) and stuff them in a handoff
> structure to allow ASoC to individually probe components.

To repeat yet again, if this concerns you please work with Morimoto-san
on his generic card driver.  There are some code differences with clock
setup for the devices which are encoded in the board definitions as
well, plus the external connections.

> 4) The drivers themselves just call imx_audmux_v2_foo() with the
> contents of the mux-int-port and mux-ext-port properties - this is
> Linux subsystem layout dictated by quickly and poorly architected

I don't think anyone is happy with the audmux code but nobody has much
interest in spending the time and effort on it so we're stuck just
typing the data in statically.  If the situation improves we can always
just ignore the properties in future.

> A lot of it comes from this weird concept that for every file in Linux
> that can probe on its own, a device tree needs to somehow define ONE
> node per file, and define platform properties in that node. This comes
> eventually from the platform_data concept the original drivers were
> written against.

> Writing device tree bindings is not about looking at an existing
> driver, platform, or private data structure and creating a new node

That's not what's going on here, let me once again ask you to review the
previous discussion on device tree bindings and make sure that you
understand the needs of advanced audio hardware.

> 5) for some reason the drivers picking up these nodes do some really
> strange things like use the "model" property to fill in the card name.
> This is a perfectly valid case, but everyone has missed the
> opportunity to give this an actual friendly name. If you get to a
> "desktop" and open the mixer properties in PulseAudio,
> "imx51-babbage-sgtl5000" is not a particularly useful name, nor is
> "MXC_SPDIF" (from the BSP, I always hated this) or "imx-spdif". This
> is purely a nitpick at how this ends up in userspace, but it's kind of
> important as it shows nobody is really caring about how these items
> get represented in the real system.

So submit patches changing people's model names in the DTSs for the
affected boards to be something you find more pleasing...  that's what
they're there for.  I imagine that the people using these boards aren't
using UIs that present the names to the user and hence didn't care.

In any case this doesn't seem to be anything to do with the device tree
bindings, if you want to complain about specific DTS files go and talk
to them.

> codec@b {
>     controllers = <&ssi2>;
>     audio-sinks = <&connector1>, <&connector1>, <&mic_in>;
>     audio-sources = <&hp_l>, <&hp_r>, <&connector2>;
> }

That's not going to scale well to boards that have inter CODEC analogue
links, we'd need a node for every single thing that could be a source
and sink on every device which seems tedious.

> match the hardware in the cases described above - they're defined by
> picking what the existing Linux drivers used. "Ext Spk", for example,
> in the SGTL5000 bindings is a Linuxism dragged into the tree. There
> are many ways to define the actual hardware outputs depending on which
> part of the docs you trust most, but with strings like "LINE_OUT" and
> "Ext Spk" this is not only poorly documented in its intent, but
> erroneous in describing actual hardware (if they're external pins/pads,
> none of the names match the pad names in the electrical spec for the
> SGTL5000, and a couple of the internal connections are also misnamed..).

No, you've not understood what these are.  They are not the pads on the
CODEC (as you can tell from the fact that the routing map connects the
pads on the CODEC to them...), they are external things on the boards
like transducers or jacks.

Feel free to send patches for the documentation if you feel that the
documentation is not adequate for your needs.  Again I don't think it's
terribly useful to spend much time bikeshedding the names; improving the
documentation and adding a set of standard names that seem useful might
be worthwhile.

> > Please take a look at Morimoto-san's work on the generic sound card if
> > you want to work on a generic card, it'd be good if some of the people
> > complaining about this stuff could help him work on that as he doesn't
> > seem to be getting any help or review from anyone.

> I'll have a good look at it some more if I can find anything more than
> the file in Linus' current tree and its rather small history of
> commits.. if anything new popped up it didn't hit my radar as I'm
> apparently not subscribed to every Linux mailing list under the sun
> (nor do I have the time to watch every one of them).

There are patches on the list to add device tree bindings to it, which
he's got precious little feedback on - this one e-mail from you is far
more feedback by volume than he's had up until now.

> In theory, the purpose of these functions is to notify the codec of a
> change to one of its input clocks. In what world would the clk_id and
> the direction be able to be 0 from simple-card if every driver we can
> see here seems to need something very specific?

We can't, one of the problems with this stuff is that clocking design
(especially for the advanced use cases) is not as simple or general as
one might desire for a bunch of totally sensible electrical engineering
and system design reasons.  When we get to the point of deploying simple
card widely it's just typing to define a standard constant that everyone
can use for whatever the default sensible clocks are, and of course
people working on simple card can do that as they're going.

> Why would the card driver need to set the sysclk in a very
> codec-specific way like this *at init time*? Surely this gets hammered
> into the codec as it starts up its first stream (and possibly torn
> down by the upper layers afterwards, and informed again next stream)?

A large proportion of systems (especially simple ones) have a fixed
master clock for audio, telling it what rate it's running at is
orthogonal to starting and stopping it (assuming that control even
exists).

> Same for setting the dai_fmt in the same places in both above drivers
> (and all the rest).

It is vanishingly rare (though possible and possibly even sensible in
extremely unusual situations) for the system to want to reconfigure the
DAI format dynamically at runtime.  No point in doing the same thing
over and over again...

> Same for the bias level callbacks at the card level which are called
> by abstraction from another part of the framework, but they nigh on
> always set codec-level configuration items such as PLLs and sysclks in
> most if not all cases. The codec has bias level callbacks too, which
> get called after the card callbacks - I can't find a good example of
> where snd_soc_card isn't ultimately dereferenced to snd_soc_dai or
> snd_soc_codec member and then passed to lower level subsystems. They
> never seem to do anything at the *card* level.

The particular decisions about what to do are a system integration
decision, the system integration is a card decision since devices are in
general flexible enough to support many system designs.
Matt Sealey Oct. 30, 2013, 10:29 p.m. UTC | #8
On Wed, Oct 30, 2013 at 4:29 PM, Mark Brown <broonie@kernel.org> wrote:
> On Tue, Oct 29, 2013 at 10:05:40PM -0500, Matt Sealey wrote:
>
>> To be specific, there are several "braindeadisms" in the current
>> bindings for mxs-audio-sgtl5000 and imx-audio-sgtl5000 and
>> imx-audio-wm8962 existing bindings which are being dragged into
>> Eukrea's bindings.
>
> *Please* also try to be a bit more concise and focused, this mail is
> very long and verbose for the amount of content in it.  As I said in my
> previous e-mail please review the previous extensive discussions on
> these topics and avoid going over old ground; similarly, like I said, if
> you want to work on making things more generic that's great but please
> work with the people doing this already.
>
> Endlessly going over old ground and starting from scratch is not going
> to help move anything forward.
>
>> 1) saif-controllers and ssi-controller could be a single property -
>> controllers - which lists the controllers. It will be obvious which kind
>> of controller it is by delving into that node referenced by the
>> phandle.
>
> Picking a consistent name for this might be nice, yes.  Feel free to
> send patches...

When - if - I have time.

> To repeat yet again, if this concerns you please work with Morimoto-san
> on his generic card driver.  There are some code differences with clock
> setup for the devices which are encoded in the board definitions as
> well, plus the external connections.

Well the main crux of the problem here is that there's been a node
defined with a special compatible property that requires a special
device driver to marshal and collect nodes referenced within it - and
most of these drivers do exactly the same thing, which is why generic
card drivers seem like a good idea.

Where those things are generic it is not only a problem of code
duplication but a bad idea in "creating a special node" to marshal
it. The "sound" node here, with this special compatible property,
doesn't really exist in real hardware design, it exists within the
Linux kernel to allow it to construct a "card" out of a controller,
link and codec. If your SoC has a controller, link and codec, you
should be able to find ONE of them (I'd say start at the codec if it's
the last gasp between a programmers' view of hardware and the outside
world) and then walk the tree to collect the other driver information
required to build the abstract structure Linux requires.

>> 4) The drivers themselves just call imx_audmux_v2_foo() with the
>> contents of the mux-int-port and mux-ext-port properties - this is
>> Linux subsystem layout dictated by quickly and poorly architected
>
> I don't think anyone is happy with the audmux code but nobody has much
> interest in spending the time and effort on it so we're stuck just
> typing the data in statically.  If the situation improves we can always
> just ignore the properties in future.

Well, I am going to fix about 8 other things first, then figure this
out. I want my device tree actually IN the upstream kernel first, but
so far the only things that haven't changed under me are UART and
regulators.. good job that's the bare minimum, right?

> That's not what's going on here, let me once again ask you to review the
> previous discussion on device tree bindings and make sure that you
> understand the needs of advanced audio hardware.

There's no existing example of this advanced audio hardware, no
discussion about what would be required, only a vague assertion that
to build this abstract card structure in Linux, this is the only way
to do it.

Device trees don't exist to allow you to create all kinds of
marshalling structures to keep your data in a nice, concise place and
then "optimize" your Linux driver probe process. If it takes forever
to go through the entire device tree following links, doing probe
deferrals, so be it - you're meant to be pulling a hardware
description and mapping it to your software, not pushing your software
description into the wild.

>> codec@b {
>>     controllers = <&ssi2>;
>>     audio-sinks = <&connector1>, <&connector1>, <&mic_in>;
>>     audio-sources = <&hp_l>, <&hp_r>, <&connector2>;
>> }
>
> That's not going to scale well to boards that have inter CODEC analogue
> links, we'd need a node for every single thing that could be a source
> and sink on every device which seems tedious.

Yes, but I am not sure having a single property with pairs of strings
makes any more sense - sure, you require a node for each, but how does
just requiring a static string for each do this any better? Now you
need a string for each, which has to match a driver in Linux.. what
about another OS?

>> match the hardware in the cases described above - they're defined by
>> picking what the existing Linux drivers used. "Ext Spk", for example,
>> in the SGTL5000 bindings is a Linuxism dragged into the tree. There
>> are many ways to define the actual hardware outputs depending on which
>> part of the docs you trust most, but with strings like "LINE_OUT" and
>> "Ext Spk" this is not only poorly documented in its intent, but
>> erroneous in describing actual hardware (if they're external pins/pads,
>> none of the names match the pad names in the electrical spec for the
>> SGTL5000, and a couple of the internal connections are also misnamed..).
>
> No, you've not understood what these are.  They are not the pads on the
> CODEC (as you can tell from the fact that the routing map connects the
> pads on the CODEC to them...), they are external things on the boards
> like transducers or jacks.

The binding documents for nVidia and TI name them after the pads on
the codec, and then give out a really quite small list of possible
devices to connect on the end - Ext Spk is external, but it's
connected to LINE_OUT. What is worrying is LINE_OUT isn't defined in
the docs at all - it is an arbitrary mishmash across all bindings,
where they don't follow the docs, don't follow the docs even though
they say they do, and where there are no docs, someone has come up
with some arbitrary values which may need expanding, which requires
more bindings and more updates to support.

What frustrates me is that it has gone through the usual "if I type
less characters then it is better" workflow of Linux developers, and
copy pasting it out of the existing driver into the binding and then
into the trees meant no typing at all, just two swift mouse movements
and a middle click. That's what we're dealing with here, little care
about what the best practises have been (since *1993* for crying out
loud) since this is a brand new concept, and just a quick way of
getting your driver to be 'compliant' with this new movement.

> Feel free to send patches for the documentation if you feel that the
> documentation is not adequate for your needs.  Again I don't think it's
> terribly useful to spend much time bikeshedding the names; improving the
> documentation and adding a set of standard names that seem useful might
> be worthwhile.

"Standard names" won't cover the more complex use cases. Where it's an
internal signal or mixer output, it has to follow the docs. That's 85%
of what is there already (the other 15% is what I am frustrated about
where it doesn't match at all, making cross-referencing the manuals a
pain in the backside). Where it's an external signal it should be a
phandle to something. But mixing strings and phandles in a property as
an array is a real shitty thing to do (we discussed this a while back,
it works in Forth, it is not great in C, and it is one place where FDT
writing is very different to OF DT designing).

May as well make them all phandles and describe them properly. The
jack detection - even registering a jack since this is another part of
ASoC - should be properties under a jack node. Not a card node.
Something just needs to round these up, it needs a place to start.
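
A sketch of what such a jack node might contain (all names
hypothetical):

hp_jack: headphone-jack {
	model = "Headphone Jack";
	detect-gpios = <&gpio2 5 0>;
};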

It is tedious but you only have to do it once per board - or even per
SoC or codec since a lot of system designers just use what was on the
reference design and throw away what they don't need - then never
again. This work should be rolled into design tools, synthesis,
simulation, mux configuration tools like Freescale's, prebuilt
software libraries one day.. and if it's not, shame on the design
tools vendors. Altera have one already that will generate a device
tree based on prefab IP blocks and connections on their board.

>> > Please take a look at Morimoto-san's work on the generic sound card if
>> > you want to work on a generic card, it'd be good if some of the people
>> > complaining about this stuff could help him work on that as he doesn't
>> > seem to be getting any help or review from anyone.
>
>> I'll have a good look at it some more if I can find anything more than
>> the file in Linus' current tree and it's rather small history of
>> commits.. if anything new popped up it didn't hit my radar as I'm
>> apparently not subscribed to every Linux mailing list under the sun
>> (nor do I have the time to watch every one of them).
>
> There are patches on the list to add device tree bindings to it, which
> he's got precious little feedback on - this one e-mail from you is far
> more feedback by volume than he's had up until now.

I will take a peek, but I've lost a bunch of mails the past few
months. If I find it on an archive I'll do my best.

>> In theory, the purpose of these functions is to notify the codec of a
>> change to one of its input clocks. In what world would the clk_id and
>> the direction be able to be 0 from simple-card if every driver we can
>> see here seems to need something very specific?
>
> We can't, one of the problems with this stuff is that clocking design
> (especially for the advanced use cases) is not as simple or general as
> one might desire for a bunch of totally sensible electrical engineering
> and system design reasons.  When we get to the point of deploying simple
> card widely it's just typing to define a standard constant that everyone
> can use for whatever the default sensible clocks are, and of course
> people working on simple card can do that as they're going.

I would have thought using the clock bindings, and judicious use of
clkdev internal to the codec or controller driver, would fix this.

This is the way it ends up for things like, say, i.MX graphics
(drivers/staging/imx-drm/ipu-v3/ipu-di.c:259 or so) because the clock
cannot be provided by device tree description as it is wonderfully
dynamic.

If the clock is entirely internal to the device and not sourced from
elsewhere, it should be defined and done this way in the driver. It
can pick up its parent or source from the DT, as the above does.
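
e.g. the codec node would just consume a clock via the common clock
binding, something like (the clock index here is an assumption):

codec: sgtl5000@a {
	compatible = "fsl,sgtl5000";
	reg = <0x0a>;
	clocks = <&clks 150>;	// external sys_mclk
};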

I am not sure I understand why this isn't done.. maybe the whole "but
if I unload my module what happens to my clock tree?!" discussion
comes into it. I am not sure if that got solved...?

>> Same for the bias level callbacks at the card level which are called
>> by abstraction from another part of the framework, but they nigh on
>> always set codec-level configuration items such as PLLs and sysclks in
>> most if not all cases. The codec has bias level callbacks too, which
>> get called after the card callbacks - I can't find a good example of
>> where snd_soc_card isn't ultimately dereferenced to snd_soc_dai or
>> snd_soc_codec member and then passed to lower level subsystems. They
>> never seem to do anything at the *card* level.
>
> The particular decisions about what to do are a system integration
> decision, the system integration is a card decision since devices are in
> general flexible enough to support many system designs.

Right but it's a card decision that ends up rattling its way down to
the codec. The card-level call resolves the codec and sets some
settings, the codec-level call does some other settings, in the end
this all ends up at the codec level by dereference.. so why would it
need to *be* defined at the card level in any of these cases? In all
of these cases, maybe not for all cases forever and ever, but the ones
we're looking at here where we have a goodly selection of codecs and
maybe 4 or 5 controllers and 2 or 3 implementations of each, there's
no reason for it.

This isn't really a DT issue but a really badly layered driver
approach that works for a bunch of systems, but *not these*.
generic/simple-card.c reproduces it anyway, which means if the driver
gets fixed, simple-card no longer acts as a proxy for it, and custom
code gets written.

I'll keep throwing some stuff around and we'll see what I come up
with. I definitely think my concise point is, though - the current
bindings define a Linux software abstraction to point at hardware, and
this shouldn't be the way it's done. The Linux code should deal with
the lack of hardware abstraction by parsing the tree in a different
way. The real thing to get our heads around is where we start, and I
think this has to be device_type = "sound" which has implications in
the original OF specs which make a lot of sense regarding marshalling
multiple components together.

Can you give any really good examples (block diagram?) of suitably
complex systems so I can get my head around what that parsing would be
like? Because none of the existing platforms go anywhere beyond just
linking a controller to a codec. I have ONE platform where I could
possibly link two codecs to the same controller. The Efika MX routes
the same I2S bus from SSI to the HDMI codec and to the SGTL5000, so
you can have HDMI audio from a single source. Selecting one or the
other is necessary or you get output on the speaker AND on the TV or
receiver which is odd (and ever-so-slightly delayed). It never got
realistically implemented because the rate for the SGTL5000 would need
to be locked to one rate and bit width to work properly (you cannot
change the rate going into the HDMI transmitter without tearing down
the display, since it just encodes it as it gets it, there is limited
resampling but not a lot).

Thanks,
Matt Sealey <neko@bakuhatsu.net>
Mark Brown Oct. 31, 2013, 1:28 a.m. UTC | #9
On Wed, Oct 30, 2013 at 05:29:31PM -0500, Matt Sealey wrote:
> On Wed, Oct 30, 2013 at 4:29 PM, Mark Brown <broonie@kernel.org> wrote:
> > On Tue, Oct 29, 2013 at 10:05:40PM -0500, Matt Sealey wrote:

> >> 1) saif-controllers and ssi-controller could be a single property -
> >> controllers - which lists the controllers. It will be obvious which kind
> >> of controller it is by delving into that node referenced by the
> >> phandle.

> > Picking a consistent name for this might be nice, yes.  Feel free to
> > send patches...

> When - if - I have time.

Please consider prioritising this over writing e-mails; talking about
device tree does often seem more popular than solving the harder
problems.

> > To repeat yet again, if this concerns you please work with Morimoto-san
> > on his generic card driver.  There are some code differences with clock
> > setup for the devices which are encoded in the board definitions as
> > well, plus the external connections.

> Well the main crux of the problem here is that there's been a node
> defined with a special compatible property that requires a special
> device driver to marshall and collect nodes referenced within it - and
> most of these drivers do exactly the same thing, which is why generic
> card drivers seem like a good idea.

The missing bit with generic card drivers has been anyone other than
Morimoto-san working on it; I don't think anyone thinks that is a bad
idea for the simple cases.

> Where those things are generic it is not only a problem of code
> duplication but a bad idea in "creating a special node" to marshall
> it. The "sound" node here, with this special compatible property,
> doesn't really exist in real hardware design, it exists within the
> Linux kernel to allow it to construct a "card" out of a controller,

Please, as I have repeatedly asked you, go back and look at the previous
discussions on this topic and familiarise yourself with the needs of
advanced audio subsystems.  This has been covered ad nauseam in the
past, there's no need to go through it all over again.

> > That's not what's going on here, let me once again ask you to review the
> > previous discussion on device tree bindings and make sure that you
> > understand the needs of advanced audio hardware.

> There's no existing example of this advanced audio hardware, no
> discussion about what would be required, only a vague assertion that
> to build this abstract card structure in Linux, this is the only way
> to do it.

That is a rather strongly worded statement which doesn't appear to
correspond to reality; for example, there are a number of Linux-based
smartphones in production, many if not most of which have complex audio
needs and components with lots of flexibility.

Please do the research I have been asking you to do and/or contribute
code, this will be less time consuming for all of us.

> > That's not going to scale well to boards that have inter CODEC analogue
> > links, we'd need a node for every single thing that could be a source
> > and sink on every device which seems tedious.

> Yes, but I am not sure having a single property with pairs of strings
> makes any more sense - sure, you require a node for each, but how does
> just requiring a static string for each do this any better? Now you
> need a string for each, which has to match a driver in Linux.. what
> about another OS?

Look at where the names come from...

> > We can't, one of the problems with this stuff is that clocking design
> > (especially for the advanced use cases) is not as simple or general as
> > one might desire for a bunch of totally sensible electrical engineering
> > and system design reasons.  When we get to the point of deploying simple
> > card widely it's just typing to define a standard constant that everyone
> > can use for whatever the default sensible clocks are, and of course
> > people working on simple card can do that as they're going.

> I would have thought using the clock bindings, and judicious use of
> clkdev internal to the codec or controller driver, would fix this.

The clock configuration bindings don't currently exist, won't cover the
dynamic cases and won't help non-DT platforms.  Remember also that we
don't even have a clock API of any kind on a good proportion of
platforms at the minute, and don't have the common clock API on some of
the rest.

> this all ends up at the codec level by dereference.. so why would it
> need to *be* defined at the card level in any of these cases? In all

This is the sort of information that you should be able to discover if
you review the previous discussions.

> of these cases, maybe not for all cases forever and ever, but the ones
> we're looking at here where we have a goodly selection of codecs and
> maybe 4 or 5 controllers and 2 or 3 implementations of each, there's
> no reason for it.

One of the devices you're looking at here is WM8962 - it does actually
have the ability to do a bunch of the things that make decisions about
clocking interesting.
Matt Sealey Oct. 31, 2013, 3:32 p.m. UTC | #10
On Wed, Oct 30, 2013 at 8:28 PM, Mark Brown <broonie@kernel.org> wrote:
> On Wed, Oct 30, 2013 at 05:29:31PM -0500, Matt Sealey wrote:
>> On Wed, Oct 30, 2013 at 4:29 PM, Mark Brown <broonie@kernel.org> wrote:
>> > On Tue, Oct 29, 2013 at 10:05:40PM -0500, Matt Sealey wrote:
>
>> > Picking a consistent name for this might be nice, yes.  Feel free to
>> > send patches...
>
>> When - if - I have time.
>
> Please consider prioritising this over writing e-mails; talking about
> device tree does often seem more popular than solving the harder
> problems.

If it doesn't get talked about, what happens is people copy paste crap
from unreviewed bindings or just make things up and throw them at the
tree. If there's no discussion about it, how do people find the
information they need? (There's no documentation, and no.. I don't feel
like writing any because it mostly already exists. Device trees aren't
something that appeared a year and a half ago; they aren't new
technology.)

Right now I don't think I can actually come up with a binding for
these things; what I can say is that I can throw something simple out
here which would:

* Cover the SGTL5000, WM8962 and TLV320 cases for i.MX
* Be a good framework for future device support and parsing
* Not solve any "hardcoded driver strings" issue
* Reduce functional driver code
* Increase the amount of DT parsing that gets done (a healthy
trade-off for the above)

Am I going to cover all cases? No idea. I don't have the right
information, it's not in the 'previous discussions' I could review,
there are no block diagrams, and I can't traipse through 150 codec and
controller specs to figure it out if you want patches.

>> Yes, but I am not sure having a single property with pairs of strings
>> makes any more sense - sure, you require a node for each, but how does
>> just requiring a static string for each do this any better? Now you
>> need a string for each, which has to match a driver in Linux.. what
>> about another OS?
>
> Look at where the names come from...

What I get is...

audio-routing = "sink", "source", "sink", "source", and so on.

Either the sink or the source could be an internal component (and
therefore probably named in the codec or controller manual) or an
external one (headphone jack, internal speaker, magic codec link).
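
To be concrete about the semantics: each consecutive pair becomes one
sink/source entry in a DAPM route table. A minimal sketch of the C
equivalent (the jack name is invented; LHPOUT/RHPOUT are tlv320aic23
output pins):

	static const struct snd_soc_dapm_route board_routes[] = {
		/* sink, control, source */
		{ "Headphone Jack", NULL, "LHPOUT" },
		{ "Headphone Jack", NULL, "RHPOUT" },
	};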

What I don't get is...

Why some of those internal names don't match the manuals despite the
binding saying they do. It's because if you go through the history of
the driver, they were hardcoded into the driver originally, or someone
took the prefix off a preprocessor macro and dumped it in the tree, or
threw underscores in or took them out, or didn't read the manual
properly. This can slide as long as the bindings are consistent per
codec binding, and my worry is that they are only consistent because
there are <= 2 bindings per codec and, where it's >1, the second is a
copy-paste of the first.

Why the external names seem to be hardcoded into the drivers - the
behavior of those external names (what they control) is also fixed
per-driver in the default definitions of the widgets. What
audio-routing describes is the links between hardcoded audio paths in
the DAPM domain (which may be some of the above) and the fixed names
of some pre-defined DAPM widgets, which have fixed types in the
driver.
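
To illustrate the point with a minimal sketch (names invented, not
lifted from any specific driver): the widget names and their types
live in C, and only the connections come from the tree, so every
string in audio-routing has to match either one of these widget names
or a codec/controller pin name:

	static const struct snd_soc_dapm_widget board_widgets[] = {
		SND_SOC_DAPM_HP("Headphone Jack", NULL),
		SND_SOC_DAPM_MIC("Mic Jack", NULL),
	};

	/* at probe time, with the card's dapm_widgets pointing at the
	 * array above */
	ret = snd_soc_of_parse_audio_routing(&board_card, "audio-routing");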

While it wouldn't necessarily be ASoC/DAPM-specific, the binding here
makes no attempt whatsoever to define what these might be, or even
their purpose - the tree properties have no 'knowledge' built into
them except to allow Linux to do some associative array tricks. Adding
flags to say this is a headphone, this is a microphone, this is a
named mixer control would end up being Linux-specific (perhaps even
version-specific). But no attempt is made to move the information on
the audio paths into the tree so that all 'cards' can eventually just
probe for their connections.

As it stands, the SGTL5000 is so simple it only has microphone in (and
bias out for the 32-pin package), headphone out, line in and line out.

Thing is, line out doesn't always mean an external speaker is
connected there, nor need there be a jack at all on the headphone
socket (who knows..), but that description is hardcoded in the 'card'
driver. Which means that for every SoC and every codec, implemented on
each board, there needs to be a magic card driver. Sure, this is fine,
but this isn't how a board is designed - around a magic concept of a
collection of audio components as a single, isolated entity that needs
to be described in a 'collection' node. This is how Linux is designed
(the concept of a "sound card" is almost redundant in the consumer PC
space too) and how Linux wants to parse it, because it maps 1:1 to the
way its driver model works. That's not right.

>> I would have thought using the clock bindings, and judicious use of
>> clkdev internal to the codec or controller driver, would fix this.
>
> The clock configuration bindings don't currently exist, won't cover the
> dynamic cases and won't help non-DT platforms.
>  Remember also that we
> don't even have a clock API of any kind on a good proportion of
> platforms at the minute, and don't have the common clock API on some
> of the rest.

Yergh.

>> this all ends up at the codec level by dereference.. so why would it
>> need to *be* defined at the card level in any of these cases? In all
>
> This is the sort of information that you should be able to discover if
> you review the previous discussions.

Reviewing, not finding the information I really need here..

>> of these cases, maybe not for all cases forever and ever, but the ones
>> we're looking at here where we have a goodly selection of codecs and
>> maybe 4 or 5 controllers and 2 or 3 implementations of each, there's
>> no reason for it.
>
> One of the devices you're looking at here is WM8962 - it does actually
> have the ability to do a bunch of the things that make decisions about
> clocking interesting.

I understand why the Linux driver model here does it this way; what I
am asking is why the current driver implementations do it this way.
Why define bias level settings for audio paths at the "card" level
(thus creating a need for a special card driver in these
implementations) when snd_soc_dapm_set_bias_level calls:

card->set_bias_level
codec->set_bias_level
card->set_bias_level_post

All of which exist here. There are levels of indirection for a good
reason, I understand that, but in this implementation
card->set_bias_level calls functions which do purely codec-level
operations (set fll/mclk/sysclk), codec->set_bias_level does its
obviously purely codec-level operations, and then
card->set_bias_level_post finishes up by doing some, again,
codec-level operations.
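
For reference, a trimmed-down sketch of the shape I mean - roughly the
imx-wm8962 pattern from memory, with error handling and the
powering-down transition dropped, and clk_frequency standing in for
the fixed MCLK rate the card saves at probe:

	struct board_data {
		unsigned long clk_frequency;	/* fixed MCLK, read at probe */
	};

	static int board_set_bias_level(struct snd_soc_card *card,
					struct snd_soc_dapm_context *dapm,
					enum snd_soc_bias_level level)
	{
		struct snd_soc_dai *codec_dai = card->rtd[0].codec_dai;
		struct board_data *data = snd_soc_card_get_drvdata(card);

		if (dapm->dev != codec_dai->dev)
			return 0;

		if (level == SND_SOC_BIAS_PREPARE &&
		    dapm->bias_level == SND_SOC_BIAS_STANDBY) {
			/* purely codec-level: start the FLL from MCLK,
			 * then switch the codec SYSCLK over to the FLL */
			snd_soc_dai_set_pll(codec_dai, WM8962_FLL,
					    WM8962_FLL_MCLK,
					    data->clk_frequency,
					    44100 * 256);
			snd_soc_dai_set_sysclk(codec_dai,
					       WM8962_SYSCLK_FLL,
					       44100 * 256,
					       SND_SOC_CLOCK_IN);
		}

		return 0;
	}

Nothing in there touches card-level state.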

If the current WM8962 bias level ops were rolled into the codec driver
where all the magic happens anyway, card->set_bias_level would do
nothing, codec->set_bias_level would do the right thing,
card->set_bias_level_post would do nothing, and additionally the codec
driver could pick up its own clock (ironically, the i.MX6 device tree
passes it a clock, but the codec driver ignores it) - the "imx-wm8962"
collector becomes *completely* generic apart from some hardcoded DAPM
widgets, and is more suitable for replacement by simple-card.c.

The same goes for imx-sgtl5000, and both suffer from the "inline,
hardcoded audmux settings" problem, which can be moved out of the
"marshalling" driver into support code, in the sense that both of them
do *identical* operations based on identical properties.
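
In other words, both could call one shared helper. A sketch using the
current property names, mirroring what the two card drivers open-code
today (helper name invented):

	static int parse_and_config_audmux(struct device_node *np)
	{
		u32 int_port, ext_port;

		if (of_property_read_u32(np, "mux-int-port", &int_port) ||
		    of_property_read_u32(np, "mux-ext-port", &ext_port))
			return -EINVAL;

		/* the manual counts ports from 1, the API from 0 */
		int_port--;
		ext_port--;

		imx_audmux_v2_configure_port(int_port,
			IMX_AUDMUX_V2_PTCR_SYN |
			IMX_AUDMUX_V2_PTCR_TFSDIR |
			IMX_AUDMUX_V2_PTCR_TFSEL(ext_port) |
			IMX_AUDMUX_V2_PTCR_TCLKDIR |
			IMX_AUDMUX_V2_PTCR_TCSEL(ext_port),
			IMX_AUDMUX_V2_PDCR_RXDSEL(ext_port));
		return imx_audmux_v2_configure_port(ext_port,
			IMX_AUDMUX_V2_PTCR_SYN,
			IMX_AUDMUX_V2_PDCR_RXDSEL(int_port));
	}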

DRM has this same problem: it's been designed around probing a single,
hotpluggable entity which collects a bunch of components somewhere,
initialized internally in a batch at probe time. An nVidia card may
have an i2c bus on some GPU logic somewhere which talks to an external
transmitter, and it works because the GPU driver will *create* an i2c
bus for that GPU, register it with the i2c subsystem, add a hardcoded
set of devices, and then go ahead and use them later by reference.

This isn't how SoCs or embedded systems are designed, it isn't how
device trees are parsed, and it isn't how drivers are probed against
them in Linux.

What's needed, really, is a way of marshalling the information for
Linux's card abstraction. Luckily, device trees from 20 years ago had
this already - device_type = "sound" (or device_type = "display" for
the above case). They were defined and used at a time when they
represented an API to make function calls against (i.e. sound had ops
to play a sound, display had ops to.. well, do nothing), and they
ended up being single-chip devices with some bare minimum support
logic. The reason they're dirt-simple device tree descriptions in
every implementation is that sound devices were headphone and
microphone and maybe an internal speaker with no power management
worth a damn, and display devices were identically dumb (we're talking
about the SoundBlaster and Cirrus Logic 2D kind of era, the same kind
of HW you can bring up on any ancient version of QEMU).

With a bit more complexity (understatement), the description has to
get more complicated. SoC clocks and pad mux definitions are a good
example (big love to the TI guys for making a dent in this) where the
bindings are well-formed, but you do end up with sprawling, monster
trees. This is the only way to actually describe the hardware without
hardcoding it inside the kernel, though.

We can't just say "device type is sound" anymore, but we can use it as
a marker for collecting the complexity of the audio components. In
this case, inputs and outputs are codec-based and the rest is fabric.
This is where the device_type = "sound" lives. Linux would need to
parse the tree to find a "sound" device_type and then figure out where
all the codecs, buses, controllers, DMA channels and interrupts go;
right now this is thankfully frightfully simple, since the card
abstraction is quite well defined.
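
A hypothetical sketch of that discovery pass - the property names are
invented for illustration, "controllers" being the consolidated name
floated earlier in this thread:

	struct device_node *np;

	for_each_node_by_type(np, "sound") {
		struct device_node *codec =
			of_parse_phandle(np, "audio-codec", 0);
		struct device_node *ctrl =
			of_parse_phandle(np, "controllers", 0);

		/* ...then resolve DAIs, DMA channels and interrupts from
		 * those nodes and build the card links from them... */
	}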

For now there aren't any codec->codec links on any board I can find,
so I can't even see how they're linked, let alone think of a spec for
defining them. I will start with the simplest case (where it works on
i.MX and Samsung boards, since they are the closest to doing it
correctly and taking advantage of generic cards, and because I have
enough i.MX boards and access to a couple of Samsungs that I can test
it on) and work outwards. You can always add new properties to a
binding, but changing the existing ones and node layouts is a pain -
this is why it needs discussing, laying down and fixing in stone for
the future, and why I don't want it to keep layering and layering.

The Eukrea guys can just as well put this in the tree (how could I
even stop them?), since it suffers from the same solvable problems and
they can (will, if I put my mind to it) be solved later; but there is a
much better way to do this globally, and I'd like to hash it out as a
design, not as an agile codebase which leaves everyone wondering when
it'll be done or how it affects the board support they're writing for
their board.

Matt Sealey <neko@bakuhatsu.net>
Mark Brown Oct. 31, 2013, 5:45 p.m. UTC | #11
On Thu, Oct 31, 2013 at 10:32:00AM -0500, Matt Sealey wrote:
> On Wed, Oct 30, 2013 at 8:28 PM, Mark Brown <broonie@kernel.org> wrote:

> > Please consider prioritising this over writing e-mails; talking about
> > device tree does often seem more popular than solving the harder
> > problems.

> If it doesn't get talked about, what happens is people copy paste crap

...

> like writing any because it mostly already exists. Device trees aren't
> something that appeared a year and a half ago; they aren't new
> technology.)

These empty discussions aren't anything new either.  If we just keep on
endlessly going through the basics over and over again without any
associated code, this just reinforces the perception that people are
more focused on talking than on actually accomplishing anything.
Lengthy reiterations of design discussions won't move things forwards.

This is why I keep asking you to review and engage with the prior
discussion or to do concrete things to improve the situation.

> Why some of those internal names don't match the manuals despite the
> binding saying they do. It's because if you go through the history of

Do you see any specific issues here?  It sounds like you're perfectly
aware of what the bindings are supposed to be doing with routing signals
to and from CODECs and have found some errors but you haven't told us
what those are.

Like I keep saying, do concrete things to move things forwards.

> All of which exist here. There are levels of indirection for a good
> reason, I understand that, but in this implementation
> card->set_bias_level calls functions which do purely codec-level
> operations (set fll/mclk/sysclk), codec->set_bias_level does its
> obviously purely codec-level operations, and then
> card->set_bias_level_post finishes up by doing some, again,
> codec-level operations.

To repeat myself yet again: what is done to the CODEC here is system
dependent.  This includes interaction with other components in the
system.
Matt Sealey Oct. 31, 2013, 9:19 p.m. UTC | #12
On Thu, Oct 31, 2013 at 12:45 PM, Mark Brown <broonie@kernel.org> wrote:
>
>> Why some of those internal names don't match the manuals despite the
>> binding saying they do. It's because if you go through the history of
>
> Do you see any specific issues here?  It sounds like you're perfectly
> aware of what the bindings are supposed to be doing with routing signals
> to and from CODECs and have found some errors but you haven't told us
> what those are.

Yes, I can make a big list and even send a patch, but I think it's
putting a band-aid on a non-surgical limb amputation.

In summary, concisely:

* The internal component names don't match the manuals they say they
follow - and that is if they even say they follow them at all.
* The external component names in the routing paths are arbitrary,
potentially non-real-life names, and are hardcoded into the driver at
certain DAPM widgets in order for the DAPM paths to work.

What I've got going right now is, respectively:

* a 90%-done rewrite of these bindings with no functional changes
(i.e. documentation clarification ONLY)
* a fully complete addition to the bindings, plus a patch to the
drivers, that rationalizes the internal path name strings to the
manuals (i.e. documentation AND binding change, with code to match)

That's basically the band-aid, but the second one is like rubbing salt
on the wound first because of the binding change. Given that "make
arbitrary strings less arbitrary" constitutes a binding change, this
is what both annoys and worries the piss out of me with the current
bindings.

What I am trying to figure out is a way to have real component names
for the external ones, and somehow codify the internal path naming, so
that if a binding change comes in, it's worth it, and drivers don't
need to include string names of parts of the chips in question, when
they're stuffed way, way down into some abstracted sub-layer anyway.

Close.. not there yet.

> Like I keep saying, do concrete things to move things forwards.

I don't want to be the source of an incremental, agile binding change
going on for every merge window, just because I didn't fully grasp one
issue right at the start.

>> All of which exist here. There are levels of indirection for a good
>> reason, I understand that, but in this implementation
>> card->set_bias_level calls functions which do purely codec-level
>> operations (set fll/mclk/sysclk), codec->set_bias_level does its
>> obviously purely codec-level operations, and then
>> card->set_bias_level_post finishes up by doing some, again,
>> codec-level operations.
>
> To repeat myself yet again: what is done to the CODEC here is system
> dependent.  This includes interaction with other components in the
> system.

In this particular use case (init setting sysclk, and bias_level
callbacks setting sysclk and pll for WM8962), there is no system
dependency except a fixed, unchanging parent clock (it's fixed at card
init and NEVER updated) which the DT binding can solve really easily
by using the existing clock bindings. The only other example I can
find is the "Cragganmore 6410" board support (which mentions wm8962 in
passing, it seems) vs. the specific speyside/wm8996/wm9081/wm1250/wm0010
audio support (which mostly all has your name at the top), and it
doesn't do anything "system dependent" either, except defining it in
the card layer and doing things which only affect register changes at
the codec level. It doesn't even use or set data at the card layer.

If every implementation eventually gets down to a call inside the
actual codec driver, the extra abstraction just fuzzes the issue, and
reproducing the same layering in separate drivers doing almost the same
thing - different only by some clock id and direction - is a lot of
code with the potential for consolidation under one roof, with a DT
binding that takes it into account.

For the SGTL5000 case, setting the sysclk on init is overridden by the
DT-provided clock anyway inside the codec driver (it gets the clock in
both places and shoves the value in), so this is a compatibility stub
for non-DT boards (none of which should exist by now). It shouldn't do
anything if it's on device tree (or at least, in every existing
implementation, it's basically pointless), which is a patch in itself I
should add to the list.
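
The codec-side pattern is tiny - roughly what sgtl5000.c already does,
sketched from memory with abbreviated names:

	/* in the codec's probe */
	priv->mclk = devm_clk_get(&client->dev, NULL);
	if (IS_ERR(priv->mclk))
		return PTR_ERR(priv->mclk);
	clk_prepare_enable(priv->mclk);

	/* the sysclk rate then simply falls out of the clock tree */
	sysclk = clk_get_rate(priv->mclk);

WM8962 could do the same instead of ignoring the clock it's handed.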

BTW, speyside is a good demonstration of the pre-DT "hardcoded
strings" issue. The DAPM widgets get hardcoded, and that's where those
route strings in the bindings come from (as above, they don't exist in
the drivers anymore). It's entirely board-specific because that card
driver is board-specific, but for a single SoC, or even where SoCs use
codecs in exactly the same way, we shouldn't need to entertain a whole
new card driver where that information can be ascertained from the
device tree - that's what device trees are for! If it ever goes DT,
porting this to the DT bindings just means moving the audio routing
table from the middle of the driver into the tree, except this bit:

        { "MICB2", NULL, "Headset Mic", speyside_get_micbias },

.. which can't be represented, and would mean parsing the routing
property and then hacking that callback in afterwards.
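
The least-bad workaround I can see looks something like this - a
sketch only, reusing the existing speyside names:

	static const struct snd_soc_dapm_route micbias_route = {
		"MICB2", NULL, "Headset Mic", speyside_get_micbias
	};

	/* parse everything DT *can* express... */
	ret = snd_soc_of_parse_audio_routing(&speyside, "audio-routing");
	if (ret)
		return ret;

	/* ...then bolt the one route it can't back on, with its
	 * callback attached */
	ret = snd_soc_dapm_add_routes(&speyside.dapm, &micbias_route, 1);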

After that, it's ditching all the very board-specific i2c addressing,
and it shows the same "system dependent" clocking and bias level
setting, which essentially all calls down to the codec level. Would you
consider it a good example of the kind of linking of codecs and audio
interfaces you're talking about? Is there a block diagram? :)

Thanks,
Matt Sealey <neko@bakuhatsu.net>
Mark Brown Oct. 31, 2013, 10:23 p.m. UTC | #13
On Thu, Oct 31, 2013 at 04:19:05PM -0500, Matt Sealey wrote:

> Yes, I can make a big list and even send a patch, but I think it's
> putting a band-aid on a non-surgical limb amputation.

It would be moving the situation forwards, which is something that
lengthy e-mails full of architecting with zero code won't do.

> > To repeat myself yet again: what is done to the CODEC here is system
> > dependent.  This includes interaction with other components in the
> > system.

> In this particular use case (init setting sysclk, and bias_level
> callbacks setting sysclk and pll for WM8962), there is no system
> dependency except a fixed, unchanging parent clock (it's fixed at card
> init and NEVER updated) which the DT binding can solve really easily

That is the chosen configuration for the systems you are looking at;
other systems do exist.  This is the nature of board specifics.
diff mbox

Patch

diff --git a/Documentation/devicetree/bindings/sound/eukrea-tlv320.txt b/Documentation/devicetree/bindings/sound/eukrea-tlv320.txt
new file mode 100644
index 0000000..8791037
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/eukrea-tlv320.txt
@@ -0,0 +1,23 @@ 
+Audio complex for Eukrea boards with tlv320aic23 codec.
+
+Required properties:
+- compatible : "eukrea,eukrea-tlv320"
+- model : The user-visible name of this sound complex.
+- ssi-controller : The phandle of the SSI controller.
+- audio-codec : The phandle of the tlv320aic23 audio codec.
+- mux-int-port : The internal port of the i.MX audio muxer (AUDMUX).
+- mux-ext-port : The external port of the i.MX audio muxer.
+
+Note: The AUDMUX port numbering should start at 1, which is consistent with
+the hardware manual.
+
+Example:
+
+	sound {
+		compatible = "eukrea,eukrea-tlv320";
+		model = "imx51-eukrea-tlv320aic23";
+		ssi-controller = <&ssi2>;
+		audio-codec = <&tlv320aic23>;
+		mux-int-port = <2>;
+		mux-ext-port = <3>;
+	};
diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
index b7ab71f..9c3cd64 100644
--- a/sound/soc/fsl/Kconfig
+++ b/sound/soc/fsl/Kconfig
@@ -161,12 +161,15 @@  config SND_SOC_EUKREA_TLV320
 	depends on MACH_EUKREA_MBIMX27_BASEBOARD \
 		|| MACH_EUKREA_MBIMXSD25_BASEBOARD \
 		|| MACH_EUKREA_MBIMXSD35_BASEBOARD \
-		|| MACH_EUKREA_MBIMXSD51_BASEBOARD
+		|| MACH_EUKREA_MBIMXSD51_BASEBOARD \
+		|| OF
 	depends on I2C
 	select SND_SOC_TLV320AIC23
 	select SND_SOC_IMX_PCM_FIQ
 	select SND_SOC_IMX_AUDMUX
 	select SND_SOC_IMX_SSI
+	select SND_SOC_FSL_SSI
+	select SND_SOC_IMX_PCM_DMA
 	help
 	  Enable I2S based access to the TLV320AIC23B codec attached
 	  to the SSI interface
diff --git a/sound/soc/fsl/eukrea-tlv320.c b/sound/soc/fsl/eukrea-tlv320.c
index 5983740..ad1aac6 100644
--- a/sound/soc/fsl/eukrea-tlv320.c
+++ b/sound/soc/fsl/eukrea-tlv320.c
@@ -15,8 +15,11 @@ 
  *
  */
 
+#include <linux/errno.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
 #include <linux/device.h>
 #include <linux/i2c.h>
 #include <sound/core.h>
@@ -26,6 +29,7 @@ 
 
 #include "../codecs/tlv320aic23.h"
 #include "imx-ssi.h"
+#include "fsl_ssi.h"
 #include "imx-audmux.h"
 
 #define CODEC_CLOCK 12000000
@@ -41,7 +45,8 @@  static int eukrea_tlv320_hw_params(struct snd_pcm_substream *substream,
 	ret = snd_soc_dai_set_fmt(cpu_dai, SND_SOC_DAIFMT_I2S |
 				  SND_SOC_DAIFMT_NB_NF |
 				  SND_SOC_DAIFMT_CBM_CFM);
-	if (ret) {
+	/* fsl_ssi lacks the set_fmt ops. */
+	if (ret && ret != -ENOTSUPP) {
 		dev_err(cpu_dai->dev,
 			"Failed to set the cpu dai format.\n");
 		return ret;
@@ -63,11 +68,13 @@  static int eukrea_tlv320_hw_params(struct snd_pcm_substream *substream,
 			"Failed to set the codec sysclk.\n");
 		return ret;
 	}
+
 	snd_soc_dai_set_tdm_slot(cpu_dai, 0xffffffc, 0xffffffc, 2, 0);
 
 	ret = snd_soc_dai_set_sysclk(cpu_dai, IMX_SSP_SYS_CLK, 0,
 				SND_SOC_CLOCK_IN);
-	if (ret) {
+	/* fsl_ssi lacks the set_sysclk ops */
+	if (ret && ret != -EINVAL) {
 		dev_err(cpu_dai->dev,
 			"Can't set the IMX_SSP_SYS_CLK CPU system clock.\n");
 		return ret;
@@ -84,7 +91,6 @@  static struct snd_soc_dai_link eukrea_tlv320_dai = {
 	.name		= "tlv320aic23",
 	.stream_name	= "TLV320AIC23",
 	.codec_dai_name	= "tlv320aic23-hifi",
-	.platform_name	= "imx-ssi.0",
 	.codec_name	= "tlv320aic23-codec.0-001a",
 	.cpu_dai_name	= "imx-ssi.0",
 	.ops		= &eukrea_tlv320_snd_ops,
@@ -101,8 +107,49 @@  static int eukrea_tlv320_probe(struct platform_device *pdev)
 {
 	int ret;
 	int int_port = 0, ext_port;
+	struct platform_device *ssi_pdev;
+	struct device_node *np = pdev->dev.of_node;
+	struct device_node *ssi_np;
+
+	if (np) {
+		ssi_np = of_parse_phandle(pdev->dev.of_node,
+					  "ssi-controller", 0);
+		ssi_pdev = of_find_device_by_node(ssi_np);
+		if (!ssi_pdev) {
+			dev_err(&pdev->dev,
+				"ssi-controller missing or invalid.\n");
+			ret = -ENODEV;
+			goto err;
+		}
+
+		ret = of_property_read_u32(np, "mux-int-port", &int_port);
+		if (ret) {
+			dev_err(&pdev->dev,
+				"mux-int-port missing or invalid\n");
+			return ret;
+		}
+		ret = of_property_read_u32(np, "mux-ext-port", &ext_port);
+		if (ret) {
+			dev_err(&pdev->dev,
+				"mux-ext-port missing or invalid\n");
+			return ret;
+		}
+
+		/*
+		 * The port numbering in the hardware manual starts at 1, while
+		 * the audmux API expects it to start at 0.
+		 */
+		int_port--;
+		ext_port--;
+
+		eukrea_tlv320_dai.cpu_dai_name = dev_name(&ssi_pdev->dev);
+		eukrea_tlv320_dai.platform_of_node = ssi_np;
+	} else {
+		eukrea_tlv320_dai.platform_name = "imx-ssi.0";
+	}
 
-	if (machine_is_eukrea_cpuimx27()) {
+	if (machine_is_eukrea_cpuimx27() ||
+	    of_find_compatible_node(NULL, NULL, "fsl,imx21-audmux")) {
 		imx_audmux_v1_configure_port(MX27_AUDMUX_HPCR1_SSI0,
 			IMX_AUDMUX_V1_PCR_SYN |
 			IMX_AUDMUX_V1_PCR_TFSDIR |
@@ -119,8 +166,12 @@  static int eukrea_tlv320_probe(struct platform_device *pdev)
 		);
 	} else if (machine_is_eukrea_cpuimx25sd() ||
 		   machine_is_eukrea_cpuimx35sd() ||
-		   machine_is_eukrea_cpuimx51sd()) {
-		ext_port = machine_is_eukrea_cpuimx25sd() ? 4 : 3;
+		   machine_is_eukrea_cpuimx51sd() ||
+		   of_find_compatible_node(NULL, NULL, "fsl,imx31-audmux")) {
+		if (!np)
+			ext_port = machine_is_eukrea_cpuimx25sd() ?
+				4 : 3;
+
 		imx_audmux_v2_configure_port(int_port,
 			IMX_AUDMUX_V2_PTCR_SYN |
 			IMX_AUDMUX_V2_PTCR_TFSDIR |
@@ -134,14 +185,28 @@  static int eukrea_tlv320_probe(struct platform_device *pdev)
 			IMX_AUDMUX_V2_PDCR_RXDSEL(int_port)
 		);
 	} else {
-		/* return happy. We might run on a totally different machine */
-		return 0;
+		if (np) {
+			/* The eukrea,eukrea-tlv320 driver was explicitly
+			 * requested (through the device tree).
+			 */
+			dev_err(&pdev->dev,
+				"Missing audmux DT node.\n");
+			return -ENODEV;
+		} else {
+			/* Return happy.
+			 * We might run on a totally different machine.
+			 */
+			return 0;
+		}
 	}
 
 	eukrea_tlv320.dev = &pdev->dev;
 	ret = snd_soc_register_card(&eukrea_tlv320);
+err:
 	if (ret)
 		dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", ret);
+	if (np)
+		of_node_put(ssi_np);
 
 	return ret;
 }
@@ -153,10 +218,17 @@  static int eukrea_tlv320_remove(struct platform_device *pdev)
 	return 0;
 }
 
+static const struct of_device_id imx_tlv320_dt_ids[] = {
+	{ .compatible = "eukrea,eukrea-tlv320"},
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, imx_tlv320_dt_ids);
+
 static struct platform_driver eukrea_tlv320_driver = {
 	.driver = {
 		.name = "eukrea_tlv320",
 		.owner = THIS_MODULE,
+		.of_match_table = imx_tlv320_dt_ids,
 	},
 	.probe = eukrea_tlv320_probe,
 	.remove = eukrea_tlv320_remove,