Message ID: 20210323114327.3969072-1-codrin.ciubotariu@microchip.com (mailing list archive)
Series: Separate BE DAI HW constraints from FE ones
On 23. 03. 21 at 12:43, Codrin Ciubotariu wrote:

> To achieve this, the first thing needed is to detect whether a HW
> constraint rule is enforced by a FE or a BE DAI. This means that
> snd_pcm_hw_rule_add() needs to be able to differentiate between the two
> types of DAIs. For this, the runtime pointer to struct snd_pcm_runtime is
> replaced with a pointer to struct snd_pcm_substream, to be able to reach
> substream->pcm->internal to differentiate between FE and BE DAIs.

Think about another, less invasive solution. What about using
'runtime->private_data' (struct snd_soc_pcm_runtime *) to determine FE / BE?

Jaroslav
On 23.03.2021 14:15, Jaroslav Kysela wrote:
> On 23. 03. 21 at 12:43, Codrin Ciubotariu wrote:
>
>> To achieve this, the first thing needed is to detect whether a HW
>> constraint rule is enforced by a FE or a BE DAI. This means that
>> snd_pcm_hw_rule_add() needs to be able to differentiate between the two
>> types of DAIs. For this, the runtime pointer to struct snd_pcm_runtime is
>> replaced with a pointer to struct snd_pcm_substream, to be able to reach
>> substream->pcm->internal to differentiate between FE and BE DAIs.
>
> Think about another, less invasive solution. What about using
> 'runtime->private_data' (struct snd_soc_pcm_runtime *) to determine FE / BE?

I think struct snd_soc_pcm_runtime * is placed in
substream->private_data, while runtime->private_data is used more by the
platform drivers. runtime->trigger_master could be an idea, but it looks
like it is initialized just before the trigger callback is called, long
after the constraint rules are added, and I am not sure it can be
initialized earlier...

Best regards,
Codrin
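[Editor's note: a minimal sketch of how the FE/BE distinction can be derived
from a substream, under the assumptions discussed above. The helper name is
made up for this example; the rtd-based check only works for ASoC substreams
(substream->private_data is driver-specific elsewhere), which is why the
posted series looks at substream->pcm->internal from the PCM core instead.]

#include <sound/pcm.h>
#include <sound/soc.h>

static bool example_substream_is_dpcm_be(struct snd_pcm_substream *substream)
{
	struct snd_soc_pcm_runtime *rtd;

	/* DPCM BE substreams hang off an "internal" PCM device */
	if (substream->pcm->internal)
		return true;

	/*
	 * Within ASoC, substream->private_data is the snd_soc_pcm_runtime,
	 * so the DAI link's no_pcm flag identifies a back-end as well.
	 */
	rtd = asoc_substream_to_rtd(substream);
	return rtd && rtd->dai_link && rtd->dai_link->no_pcm;
}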
On 3/23/21 6:43 AM, Codrin Ciubotariu wrote:

> HW constraints are needed to set limitations for HW parameters used to
> configure the DAIs. All DAIs on the same link must agree upon the HW
> parameters, so the parameters are affected by the DAIs' features and
> their limitations. In case of DPCM, the FE DAIs can be used to perform
> different kinds of conversions, such as format or rate changes, bringing
> the audio stream to a configuration supported by all the DAIs of the BE's
> link. For this reason, the limitations of the BE DAIs are not always
> important for the HW parameters between user space and FE, only for the
> parameters between FE and BE DAI links. This brings us to this patch set,
> which aims to separate the FE HW constraints from the BE HW constraints.
> This way, the FE can be used to perform more efficient HW conversions, on
> the initial audio stream parameters, to parameters supported by the BE
> DAIs.
>
> To achieve this, the first thing needed is to detect whether a HW
> constraint rule is enforced by a FE or a BE DAI. This means that
> snd_pcm_hw_rule_add() needs to be able to differentiate between the two
> types of DAIs. For this, the runtime pointer to struct snd_pcm_runtime is
> replaced with a pointer to struct snd_pcm_substream, to be able to reach
> substream->pcm->internal to differentiate between FE and BE DAIs.
> This change affects many sound drivers (and one gpu drm driver).
> All these changes are included in the first patch, to have a better
> overview of the implications created by this change.
>
> The second patch adds a new struct snd_pcm_hw_constraints under struct
> snd_soc_dpcm_runtime, which is used to store the HW constraint rules
> added by the BE DAIs. This structure is initialized with a subset of the
> runtime constraint rules which does not include the rules that affect
> the buffer or period size. snd_pcm_hw_rule_add() will add the BE rules
> to the new struct snd_pcm_hw_constraints.
>
> The third and last patch will apply the BE rule constraints, after the
> fixup callback. If the fixed-up HW parameters do not respect the BE
> constraint rules, the rules will exit with an error. The FE mask and
> interval constraints are copied to the BE ones, to satisfy the
> dai_link->dpcm_merged_* flags. The dai_link->dpcm_merged_* flags are
> used to know if the FE does format or sampling rate conversion.
>
> I tested with ad1934 and wm8731 codecs as BEs, with a not-yet-mainlined
> ASRC as FE, that can also do format conversion. I realize that the
> change to snd_pcm_hw_rule_add() has a big impact, even though all the
> drivers use snd_pcm_hw_rule_add() with substream->runtime, so passing
> substream instead of runtime is not that risky.

Can you use the BE hw_params_fixup instead?

That's what we use for SOF.

The FE hw_params are propagated to the BE, and then the BE can update
the hw_params based on its own limitations and pass the result
downstream, e.g. to a codec.

I'll copy below my understanding of the flow, which we discussed
recently in the SOF team: we start with the front-end hw_params as the
basis for the back-end hw_params.
static int dpcm_fe_dai_hw_params(struct snd_pcm_substream *substream,
				 struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *fe = asoc_substream_to_rtd(substream);
	int ret, stream = substream->stream;

	mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
	dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);

	memcpy(&fe->dpcm[stream].hw_params, params,
	       sizeof(struct snd_pcm_hw_params));
	ret = dpcm_be_dai_hw_params(fe, stream);   <<< the BE is handled first.
	if (ret < 0) {
		dev_err(fe->dev,"ASoC: hw_params BE failed %d\n", ret);
		goto out;
	}

	dev_dbg(fe->dev, "ASoC: hw_params FE %s rate %d chan %x fmt %d\n",
		fe->dai_link->name, params_rate(params),
		params_channels(params), params_format(params));

	/* call hw_params on the frontend */
	ret = soc_pcm_hw_params(substream, params);

then each BE will be configured:

int dpcm_be_dai_hw_params(struct snd_soc_pcm_runtime *fe, int stream)
{
	struct snd_soc_dpcm *dpcm;
	int ret;

	for_each_dpcm_be(fe, stream, dpcm) {
		struct snd_soc_pcm_runtime *be = dpcm->be;
		struct snd_pcm_substream *be_substream =
			snd_soc_dpcm_get_substream(be, stream);

		/* is this op for this BE ? */
		if (!snd_soc_dpcm_be_can_update(fe, be, stream))
			continue;

		/* copy params for each dpcm */
		memcpy(&dpcm->hw_params, &fe->dpcm[stream].hw_params,
		       sizeof(struct snd_pcm_hw_params));

		/* perform any hw_params fixups */
		ret = snd_soc_link_be_hw_params_fixup(be, &dpcm->hw_params);

The fixup is the key; in SOF this is where we are going to look for
information from the topology.

/* fixup the BE DAI link to match any values from topology */
int sof_pcm_dai_link_fixup(struct snd_soc_pcm_runtime *rtd,
			   struct snd_pcm_hw_params *params)
{
	struct snd_interval *rate = hw_param_interval(params,
			SNDRV_PCM_HW_PARAM_RATE);
	struct snd_interval *channels = hw_param_interval(params,
			SNDRV_PCM_HW_PARAM_CHANNELS);
	struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
	struct snd_soc_component *component =
		snd_soc_rtdcom_lookup(rtd, SOF_AUDIO_PCM_DRV_NAME);
	struct snd_sof_dai *dai =
		snd_sof_find_dai(component, (char *)rtd->dai_link->name);
	struct snd_soc_dpcm *dpcm;

	/* no topology exists for this BE, try a common configuration */
	if (!dai) {
		dev_warn(component->dev,
			 "warning: no topology found for BE DAI %s config\n",
			 rtd->dai_link->name);

		/* set 48k, stereo, 16bits by default */
		rate->min = 48000;
		rate->max = 48000;

		channels->min = 2;
		channels->max = 2;

		snd_mask_none(fmt);
		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);

		return 0;
	}

	/* read format from topology */
	snd_mask_none(fmt);

	switch (dai->comp_dai.config.frame_fmt) {
	case SOF_IPC_FRAME_S16_LE:
		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);
		break;
	case SOF_IPC_FRAME_S24_4LE:
		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S24_LE);
		break;
	case SOF_IPC_FRAME_S32_LE:
		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S32_LE);
		break;
	default:
		dev_err(component->dev, "error: No available DAI format!\n");
		return -EINVAL;
	}

	/* read rate and channels from topology */
	switch (dai->dai_config->type) {
	case SOF_DAI_INTEL_SSP:
		rate->min = dai->dai_config->ssp.fsync_rate;
		rate->max = dai->dai_config->ssp.fsync_rate;
		channels->min = dai->dai_config->ssp.tdm_slots;
		channels->max = dai->dai_config->ssp.tdm_slots;
On 23.03.2021 21:25, Pierre-Louis Bossart wrote:
> On 3/23/21 6:43 AM, Codrin Ciubotariu wrote:
>
> [full quote of the cover letter trimmed]
>
> Can you use the BE hw_params_fixup instead?
>
> That's what we use for SOF.
>
> The FE hw_params are propagated to the BE, and then the BE can update
> the hw_params based on its own limitations and pass the result
> downstream, e.g. to a codec.
>
> [code walk-through of dpcm_fe_dai_hw_params(), dpcm_be_dai_hw_params()
> and sof_pcm_dai_link_fixup() trimmed; see the previous message]

I am using hw_params_fixup, but it's not enough.
The first thing I do is to not add the BE HW constraint rules to
runtime->hw_constraints, because this will potentially affect the FE HW
params. If the FE does sampling rate conversion, for example, applying
the sampling rate constraint rules from a BE codec on the FE is useless.
The second thing I do is to apply these BE HW constraint rules to the BE
HW params.

It's true that the BE HW params can be fine-tuned via hw_params_fixup
(topology, device tree or even static parameters) and set in such a way
that they avoid the BE HW constraints, so we could ignore the constraint
rules added by their drivers. It's not very elegant, and running the BE
HW constraint rules against the fixed-up BE HW params can be a sanity
check. Also, I am thinking that if the FE does no conversion
(be_hw_params_fixup missing) and more than one BE is available, applying
the HW constraint rules on the same set of BE HW params could rule out
the incompatible BE DAI links and start the compatible ones only. I am
not sure this is a real use case.

Thank you for your insights!

Best regards,
Codrin
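[Editor's note: a rough sketch of the "sanity check" idea above, written as
a hypothetical machine-driver be_hw_params_fixup. The rate, format and
checks are invented for the example; the posted series performs this kind of
check in the DPCM core after the fixup callback, not in the fixup itself.]

#include <sound/pcm_params.h>
#include <sound/soc.h>

static int example_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
				      struct snd_pcm_hw_params *params)
{
	struct snd_interval *rate =
		hw_param_interval(params, SNDRV_PCM_HW_PARAM_RATE);
	struct snd_mask *fmt =
		hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);

	/* pin the BE link to what the hardware is wired for */
	rate->min = 48000;
	rate->max = 48000;
	snd_mask_none(fmt);
	snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S32_LE);

	/* sanity check: the codec on this link only accepts 32-bit slots */
	if (snd_pcm_format_physical_width(params_format(params)) != 32) {
		dev_err(rtd->dev, "BE %s: fixed-up format is not 32-bit\n",
			rtd->dai_link->name);
		return -EINVAL;
	}

	return 0;
}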
> I am using hw_params_fixup, but it's not enough. The first thing I do is
> to not add the BE HW constraint rules to runtime->hw_constraints,
> because this will potentially affect the FE HW params. If the FE does
> sampling rate conversion, for example, applying the sampling rate
> constraint rules from a BE codec on the FE is useless. The second thing
> I do is to apply these BE HW constraint rules to the BE HW params. It's
> true that the BE HW params can be fine-tuned via hw_params_fixup
> (topology, device tree or even static parameters) and set in such a way
> that they avoid the BE HW constraints, so we could ignore the constraint
> rules added by their drivers. It's not very elegant, and running the BE
> HW constraint rules against the fixed-up BE HW params can be a sanity
> check. Also, I am thinking that if the FE does no conversion
> (be_hw_params_fixup missing) and more than one BE is available, applying
> the HW constraint rules on the same set of BE HW params could rule out
> the incompatible BE DAI links and start the compatible ones only. I am
> not sure this is a real use case.

Even after a second cup of coffee I was not able to follow why
hw_params_fixup was not enough; that paragraph is rather dense.

And to be frank, I don't fully understand the problem statement above:
"separate the FE HW constraints from the BE HW constraints". We have
existing solutions with a DSP-based SRC adjusting FE rates to what is
required by the BE dailink.

Maybe it would help to show examples of what you can do today and what
you cannot, so that we are on the same wavelength on what the
limitations are and how to remove them?
On 24.03.2021 17:28, Pierre-Louis Bossart wrote:
>> I am using hw_params_fixup, but it's not enough. [...]
>
> Even after a second cup of coffee I was not able to follow why
> hw_params_fixup was not enough; that paragraph is rather dense.

Right, sorry about that. :)

> And to be frank, I don't fully understand the problem statement above:
> "separate the FE HW constraints from the BE HW constraints". We have
> existing solutions with a DSP-based SRC adjusting FE rates to what is
> required by the BE dailink.
>
> Maybe it would help to show examples of what you can do today and what
> you cannot, so that we are on the same wavelength on what the
> limitations are and how to remove them?

For example, the ad1934 driver sets a constraint to always have 32 sample
bits [1], and this rule is added to struct snd_pcm_runtime ->
hw_constraints [2]. As you can see, this happens early and
unconditionally, so the rule affects the HW parameters negotiated with
user space [3]; in our example, only audio streams with 4-byte sample
containers can be used (S24_LE, S32_LE, etc.). For playback, if the audio
stream does not have such a format, it will have to be converted in
software, for example with the ALSA 'plug' plugin. This is OK for a
normal PCM:

            HW params
   user <--------------> CPU DAI <---> AD1934
   space        ^
                |
                +---------------------- sample bits constraint rule (32b)

For DPCM, we (unfortunately) have the same behavior. ad1934, as a BE
codec, will add its rule in the same place, which means that it will
again affect the HW parameters set by user space. So user space will
have to convert all audio streams to a 32-bit sample size. If the FE is
capable of format conversion, this software conversion is useless:

           FE HW params                 BE HW params
   user <--------------> FE DAI <--------------------> BE CPU DAI <----> BE AD1934
   space        ^
                |
                +------------------------------------------------------- sample bits constraint rule (32b)

The format conversion ends up being done in software instead of in
hardware (by the FE).

I hope I was able to be clearer now. Thanks for taking the time to
understand this. I owe you a coffee. :)

Best regards,
Codrin

[1] https://elixir.bootlin.com/linux/v5.12-rc4/source/sound/soc/codecs/ad193x.c#L366
[2] https://elixir.bootlin.com/linux/v5.12-rc4/source/sound/core/pcm_lib.c#L1141
[3] https://elixir.bootlin.com/linux/v5.12-rc4/source/sound/core/pcm_native.c#L692
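[Editor's note: the kind of rule referenced in [1] looks roughly like the
sketch below, in a codec driver's DAI startup callback. This is the general
pattern only, not the verbatim ad193x code.]

#include <sound/pcm.h>
#include <sound/soc.h>

static int example_codec_dai_startup(struct snd_pcm_substream *substream,
				     struct snd_soc_dai *dai)
{
	int ret;

	/*
	 * Restrict SAMPLE_BITS to 32. The rule lands in
	 * substream->runtime->hw_constraints, so with DPCM it also narrows
	 * the FE/user-space parameters - the behavior the series wants to
	 * confine to the BE side.
	 */
	ret = snd_pcm_hw_constraint_single(substream->runtime,
					   SNDRV_PCM_HW_PARAM_SAMPLE_BITS, 32);
	return ret < 0 ? ret : 0;
}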
On 23.03.2021 16:18, Codrin.Ciubotariu@microchip.com wrote:
> On 23.03.2021 14:15, Jaroslav Kysela wrote:
>> Think about another, less invasive solution. What about using
>> 'runtime->private_data' (struct snd_soc_pcm_runtime *) to determine FE / BE?
>
> I think struct snd_soc_pcm_runtime * is placed in
> substream->private_data, while runtime->private_data is used more by the
> platform drivers. [...]

How about using a different API for ASoC only, since that is where DPCM
lives? Only drivers that do not involve DSPs would have to be changed to
call the new snd_pcm_hw_rule_add() variant.

Another solution would be to have a different snd_soc_pcm_runtime for BE
interfaces (with a new hw_constraints member, of course). What do you
think?

Thanks!
Codrin
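[Editor's note: to make the first suggestion a bit more concrete, a
hypothetical ASoC-only wrapper might dispatch roughly as below. The function
name and the dispatch are invented for illustration, the variadic dependency
list is simplified to a single dependency, and the BE branch is deliberately
left unimplemented.]

#include <sound/pcm.h>
#include <sound/soc.h>

static int example_soc_pcm_hw_rule_add(struct snd_pcm_substream *substream,
					unsigned int cond, int var,
					snd_pcm_hw_rule_func_t func,
					void *private, int dep)
{
	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);

	if (rtd->dai_link->no_pcm) {
		/*
		 * DPCM BE: the rule would be recorded in a BE-specific
		 * constraint container here (not implemented in this
		 * sketch), so it never narrows runtime->hw_constraints,
		 * which is what user space negotiates against via the FE.
		 */
		return 0;
	}

	/* FE and plain PCM devices keep the existing behavior. */
	return snd_pcm_hw_rule_add(substream->runtime, cond, var,
				   func, private, dep, -1);
}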
On Wed, Apr 14, 2021 at 02:58:10PM +0000, Codrin.Ciubotariu@microchip.com wrote:

> How about using a different API for ASoC only, since that is where DPCM
> lives? Only drivers that do not involve DSPs would have to be changed to
> call the new snd_pcm_hw_rule_add() variant.
> Another solution would be to have a different snd_soc_pcm_runtime for BE
> interfaces (with a new hw_constraints member, of course). What do you think?

I'm really not convinced we want to continue to pile stuff on top of
DPCM; it is just fundamentally not up to modelling what modern systems
are able to do - it's already making things more fragile than they
should be, and more special cases seem like they're going to end up
making that worse. That said, I've not seen the code but...
On 15.04.2021 19:17, Mark Brown wrote:
> On Wed, Apr 14, 2021 at 02:58:10PM +0000, Codrin.Ciubotariu@microchip.com wrote:
>
>> How about using a different API for ASoC only, since that is where DPCM
>> lives? Only drivers that do not involve DSPs would have to be changed to
>> call the new snd_pcm_hw_rule_add() variant.
>> Another solution would be to have a different snd_soc_pcm_runtime for BE
>> interfaces (with a new hw_constraints member, of course). What do you think?
>
> I'm really not convinced we want to continue to pile stuff on top of
> DPCM; it is just fundamentally not up to modelling what modern systems
> are able to do - it's already making things more fragile than they
> should be, and more special cases seem like they're going to end up
> making that worse. That said, I've not seen the code but...

Are there any plans for refactoring DPCM? Any ideas in progress? I also
have some changes for PCM dmaengine, in the same 'style', similar to
what I sent some time ago...

I can adjust to different ideas, if there are any, but, for a start, can
anyone confirm that the problem I am trying to fix is real?

Best regards,
Codrin
On Thu, Apr 15, 2021 at 04:56:00PM +0000, Codrin.Ciubotariu@microchip.com wrote:

> Are there any plans for refactoring DPCM? Any ideas in progress? I also
> have some changes for PCM dmaengine, in the same 'style', similar to
> what I sent some time ago...
> I can adjust to different ideas, if there are any, but, for a start, can
> anyone confirm that the problem I am trying to fix is real?

Lars-Peter's presentation from ELC in 2016 (!) is probably the clearest
summary of the ideas:

https://elinux.org/images/e/e7/Audio_on_Linux.pdf
https://youtu.be/6oQF2TzCYtQ

Essentially the idea is to represent everything, including the DSPs, as
ASoC components and be able to represent the digital links between
components in the DAPM graph in the same way we currently represent
analogue links. This means we need a way to propagate digital
configuration along the DAPM graph (or a parallel, cross-linked digital
one). Sadly I'm not really aware of anyone actively working on the
actual conversion at the minute; Morimoto-san has done a lot of great
preparatory work to make everything a component, which makes the whole
thing more tractable.
On 15.04.2021 20:25, Mark Brown wrote:
> On Thu, Apr 15, 2021 at 04:56:00PM +0000, Codrin.Ciubotariu@microchip.com wrote:
>
>> Are there any plans for refactoring DPCM? Any ideas in progress? I also
>> have some changes for PCM dmaengine, in the same 'style', similar to
>> what I sent some time ago...
>> I can adjust to different ideas, if there are any, but, for a start, can
>> anyone confirm that the problem I am trying to fix is real?
>
> Lars-Peter's presentation from ELC in 2016 (!) is probably the clearest
> summary of the ideas:
>
> https://elinux.org/images/e/e7/Audio_on_Linux.pdf
> https://youtu.be/6oQF2TzCYtQ
>
> Essentially the idea is to represent everything, including the DSPs, as
> ASoC components and be able to represent the digital links between
> components in the DAPM graph in the same way we currently represent
> analogue links. This means we need a way to propagate digital
> configuration along the DAPM graph (or a parallel, cross-linked digital
> one). Sadly I'm not really aware of anyone actively working on the
> actual conversion at the minute; Morimoto-san has done a lot of great
> preparatory work to make everything a component, which makes the whole
> thing more tractable.

Thank you for the links! So basically the machine driver disappears and
all the components will be visible in user space. Is there a list with
the 'steps' or tasks to achieve this? I can try to pitch in.

Best regards,
Codrin
On Fri, Apr 16, 2021 at 04:03:05PM +0000, Codrin.Ciubotariu@microchip.com wrote:

> Thank you for the links! So basically the machine driver disappears and
> all the components will be visible in user space.

Not entirely - you still need something to say how they're wired
together, but it'll be a *lot* simpler for anything that currently used
DPCM.

> Is there a list with the 'steps' or tasks to achieve this? I can try
> to pitch in.

Not really written down that I can think of. I think the next steps
that I can think of right now are unfortunately bigger and harder ones,
mainly working out a way to represent digital configuration as a graph
that can be attached to/run in parallel with DAPM; other people might
have some better ideas though. Sorry, I appreciate that this isn't
super helpful :/
On 4/16/21 11:31 AM, Mark Brown wrote:
> On Fri, Apr 16, 2021 at 04:03:05PM +0000, Codrin.Ciubotariu@microchip.com wrote:
>
>> Thank you for the links! So basically the machine driver disappears and
>> all the components will be visible in user space.
>
> Not entirely - you still need something to say how they're wired
> together, but it'll be a *lot* simpler for anything that currently used
> DPCM.
>
>> Is there a list with the 'steps' or tasks to achieve this? I can try
>> to pitch in.
>
> Not really written down that I can think of. I think the next steps
> that I can think of right now are unfortunately bigger and harder ones,
> mainly working out a way to represent digital configuration as a graph
> that can be attached to/run in parallel with DAPM; other people might
> have some better ideas though. Sorry, I appreciate that this isn't
> super helpful :/

I see a need for this in our future SoundWire/SDCA work. So far I was
planning to model the entities as 'widgets' and let DAPM propagate
activation information for power management; however, there are also
bits of information in 'Clusters' (number of channels and spatial
relationships) that could change dynamically and would be interesting to
propagate across entities, so that when we get a stream of data on the
bus we know what it is.

When we discussed the multi-configuration support for BT offload, it
also became apparent that we don't fully control the sample rate changes
between FE and BE; we only control the start and end points.

I fully agree that the division between front- and back-ends is becoming
limiting, and DPCM is not only complicated but difficult to stretch
further.
On 16.04.2021 19:31, Mark Brown wrote:
> On Fri, Apr 16, 2021 at 04:03:05PM +0000, Codrin.Ciubotariu@microchip.com wrote:
>
>> Thank you for the links! So basically the machine driver disappears and
>> all the components will be visible in user space.
>
> Not entirely - you still need something to say how they're wired
> together, but it'll be a *lot* simpler for anything that currently used
> DPCM.

Right, device-tree/ACPI wiring is not enough...

>> Is there a list with the 'steps' or tasks to achieve this? I can try
>> to pitch in.
>
> Not really written down that I can think of. I think the next steps
> that I can think of right now are unfortunately bigger and harder ones,
> mainly working out a way to represent digital configuration as a graph
> that can be attached to/run in parallel with DAPM; other people might
> have some better ideas though. Sorry, I appreciate that this isn't
> super helpful :/

I think I have a good picture: a more robust card->dai_link, not just
with FEs and BEs... I will try to monitor the alsa list more and see
what other people are working on.

Thank you for your time!

Best regards,
Codrin
On Fri, Apr 16, 2021 at 11:47:01AM -0500, Pierre-Louis Bossart wrote:
> On 4/16/21 11:31 AM, Mark Brown wrote:
>
>> Not really written down that I can think of. I think the next steps
>> that I can think of right now are unfortunately bigger and harder ones,
>> mainly working out a way to represent digital configuration as a graph
>> that can be attached to/run in parallel with DAPM; other people might
>> have some better ideas though. Sorry, I appreciate that this isn't
>> super helpful :/
>
> I see a need for this in our future SoundWire/SDCA work. So far I was
> planning to model the entities as 'widgets' and let DAPM propagate
> activation information for power management; however, there are also
> bits of information in 'Clusters' (number of channels and spatial
> relationships) that could change dynamically and would be interesting
> to propagate across entities, so that when we get a stream of data on
> the bus we know what it is.

Yes, I was thinking along similar lines last time I looked at it - I was
using the term digital domains. You'd need some impedance matching
objects for things like SRCs, and the ability to flag which
configuration matters within a domain (e.g., a lot of things will
convert to the maximum supported bit width for internal operation, so
bit width only matters on external interfaces), but I think for a first
pass we can get away with forcing everything other than what DPCM has as
front ends into static configurations.
On 4/16/21 1:55 PM, Mark Brown wrote:
> On Fri, Apr 16, 2021 at 11:47:01AM -0500, Pierre-Louis Bossart wrote:
>> On 4/16/21 11:31 AM, Mark Brown wrote:
>>
>>> Not really written down that I can think of. I think the next steps
>>> that I can think of right now are unfortunately bigger and harder ones,
>>> mainly working out a way to represent digital configuration as a graph
>>> that can be attached to/run in parallel with DAPM; other people might
>>> have some better ideas though. Sorry, I appreciate that this isn't
>>> super helpful :/
>
>> I see a need for this in our future SoundWire/SDCA work. So far I was
>> planning to model the entities as 'widgets' and let DAPM propagate
>> activation information for power management; however, there are also
>> bits of information in 'Clusters' (number of channels and spatial
>> relationships) that could change dynamically and would be interesting
>> to propagate across entities, so that when we get a stream of data on
>> the bus we know what it is.
>
> Yes, I was thinking along similar lines last time I looked at it - I was
> using the term digital domains. You'd need some impedance matching
> objects for things like SRCs, and the ability to flag which
> configuration matters within a domain (e.g., a lot of things will
> convert to the maximum supported bit width for internal operation, so
> bit width only matters on external interfaces), but I think for a first
> pass we can get away with forcing everything other than what DPCM has as
> front ends into static configurations.

You lost me on the last sentence. Did you mean "forcing everything into
static configurations except for what DPCM has as front-ends"?

It may already be too late for static configurations; Intel, NXP and
others have started to enable cases where the dailink configuration
varies.

FWIW, both the USB and SDCA class documents are very careful with the
notion of constraints and whether an entity is implemented in the analog
or digital domain. There are 'clock sources' that may be used in
specific terminals, but no notion of an explicit SRC in the graph, to
leave implementers a lot of freedom. There is a notion of 'Usage' that
describes e.g. FullBand or WideBand without defining what the
representation is. The bit width is also not described except where
necessary (history buffers and external bus-facing interfaces). Like you
said, it's mostly the boundaries of the domains that matter.
On Fri, Apr 16, 2021 at 02:39:25PM -0500, Pierre-Louis Bossart wrote:
> On 4/16/21 1:55 PM, Mark Brown wrote:
>
>> to the maximum supported bit width for internal operation, so bit width
>> only matters on external interfaces), but I think for a first pass we
>> can get away with forcing everything other than what DPCM has as front
>> ends into static configurations.
>
> You lost me on the last sentence. Did you mean "forcing everything into
> static configurations except for what DPCM has as front-ends"?

Yes...

> It may already be too late for static configurations; Intel, NXP and
> others have started to enable cases where the dailink configuration
> varies.

Well, they won't be able to use any new stuff until someone implements
support for dynamic configurations in the new stuff.