| Message ID | 20200707163641.17113-11-srinivas.kandagatla@linaro.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | ASoC: qdsp6: add gapless compressed audio support |
On 7/7/20 11:36 AM, Srinivas Kandagatla wrote:
> Add support to gapless playback by implementing metadata,
> next_track, drain and partial drain support.
>
> Gapless on Q6ASM is implemented by opening 2 streams in a single asm
> stream

What does 'in a single asm stream' mean?

> and toggling them on next track.

It really seems to me that you have two streams at the lowest level, along
with the knowledge of how many samples to remove/insert and hence could do
a much better job - including gapless support between unrelated profiles
and cross-fading - without the partial drain and next_track mechanism that
was defined assuming a single stream/profile.
Thanks Pierre for the review,

On 07/07/2020 18:07, Pierre-Louis Bossart wrote:
> On 7/7/20 11:36 AM, Srinivas Kandagatla wrote:
>> Add support to gapless playback by implementing metadata,
>> next_track, drain and partial drain support.
>>
>> Gapless on Q6ASM is implemented by opening 2 streams in a single asm
>> stream
>
> What does 'in a single asm stream' mean?

So in QDSP6 ASM (Audio Stream Manager) terminology we have something
called an "asm session" for each ASoC FE DAI. Each asm session can be
connected with multiple streams (up to 8, I think), however there will be
only one active stream at any time. Also, there is only a single data
buffer associated with each asm session.

For the gapless use case, we can keep two streams open for one asm
session, allowing us to fill in data on the second stream while the first
stream is playing.

>> and toggling them on next track.
>
> It really seems to me that you have two streams at the lowest level,
> along with the knowledge of how many samples to remove/insert and hence
> could do a much better job - including gapless support between unrelated
> profiles and cross-fading - without the partial drain and next_track
> mechanism that was defined assuming a single stream/profile.

At the end of the day it is a single session with one data buffer but with
multiple streams.

Achieving cross fade should be easy with this design.

We need those hooks for partial drain and next track to allow us to switch
between streams and pass silence information to the respective stream ids.

--srini
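For reference, the userspace side of the sequence being discussed here - gapless metadata for the next track, a next_track trigger, then a partial drain - looks roughly like the sketch below. It follows the ordering described in the kernel's compress-offload documentation; the device path and the delay/padding values are placeholders, and error handling is omitted.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sound/compress_offload.h>

/* Placeholder values: real delay/padding come from the next track's
 * encoder metadata (e.g. LAME/iTunSMPB tags for MP3/AAC). */
static void set_gapless_metadata(int fd, uint32_t delay, uint32_t padding)
{
	struct snd_compr_metadata md = { 0 };

	md.key = SNDRV_COMPRESS_ENCODER_DELAY;
	md.value[0] = delay;
	ioctl(fd, SNDRV_COMPRESS_SET_METADATA, &md);

	md.key = SNDRV_COMPRESS_ENCODER_PADDING;
	md.value[0] = padding;
	ioctl(fd, SNDRV_COMPRESS_SET_METADATA, &md);
}

int main(void)
{
	/* device path is just an example node */
	int fd = open("/dev/snd/comprC0D0", O_WRONLY);

	/* SET_PARAMS, START and the write()s of track 1 happen here ... */

	/* all of track 1 has been queued: describe track 2, announce the
	 * switch, then drain only what is left of track 1 */
	set_gapless_metadata(fd, 576, 1024);
	ioctl(fd, SNDRV_COMPRESS_NEXT_TRACK);
	ioctl(fd, SNDRV_COMPRESS_PARTIAL_DRAIN);

	/* ... keep write()ing track 2 on the same fd, DRAIN at the very end */
	return 0;
}
```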
>>> Add support to gapless playback by implementing metadata,
>>> next_track, drain and partial drain support.
>>>
>>> Gapless on Q6ASM is implemented by opening 2 streams in a single asm
>>> stream
>>
>> What does 'in a single asm stream' mean?
>
> So in QDSP6 ASM (Audio Stream Manager) terminology we have something
> called an "asm session" for each ASoC FE DAI. Each asm session can be
> connected with multiple streams (up to 8, I think), however there will
> be only one active stream at any time. Also, there is only a single data
> buffer associated with each asm session.
>
> For the gapless use case, we can keep two streams open for one asm
> session, allowing us to fill in data on the second stream while the
> first stream is playing.

Ah, that's interesting, thanks for the details. So you have one DMA
transfer, and the data from the previous and next track are provided in
consecutive bytes in a ring buffer, but at the DSP level you have a switch
that will feed data for the previous and next tracks into different
decoders, yes?

If that is the case, indeed the extension you suggested earlier to change
the profile is valid. You could even change the format, I guess.

To avoid confusion, I believe the capabilities would need to be extended
so that applications know that gapless playback is supported across
unrelated profiles/formats. The point is that you don't want a traditional
implementation to use a capability that isn't supported in hardware or
will lead to audio issues.

>>> and toggling them on next track.
>>
>> It really seems to me that you have two streams at the lowest level,
>> along with the knowledge of how many samples to remove/insert and
>> hence could do a much better job - including gapless support between
>> unrelated profiles and cross-fading - without the partial drain and
>> next_track mechanism that was defined assuming a single stream/profile.
>
> At the end of the day it is a single session with one data buffer but
> with multiple streams.
>
> Achieving cross fade should be easy with this design.

Looks like it indeed.

> We need those hooks for partial drain and next track to allow us to
> switch between streams and pass silence information to the respective
> stream ids.

Right, but the key point is 'switch between streams'. That means a more
complex/capable implementation that should be advertised as such to
applications. This is not the default behavior assumed initially: to allow
for minimal implementations in memory-constrained devices, we assumed
gapless was supported with a single decoder.

Maybe the right way to do this is to extend the snd_compr_caps structure:

/**
 * struct snd_compr_caps - caps descriptor
 * @codecs: pointer to array of codecs
 * @direction: direction supported. Of type snd_compr_direction
 * @min_fragment_size: minimum fragment supported by DSP
 * @max_fragment_size: maximum fragment supported by DSP
 * @min_fragments: min fragments supported by DSP
 * @max_fragments: max fragments supported by DSP
 * @num_codecs: number of codecs supported
 * @reserved: reserved field
 */
struct snd_compr_caps {
	__u32 num_codecs;
	__u32 direction;
	__u32 min_fragment_size;
	__u32 max_fragment_size;
	__u32 min_fragments;
	__u32 max_fragments;
	__u32 codecs[MAX_NUM_CODECS];
	__u32 reserved[11];
} __attribute__((packed, aligned(4)));

and use a reserved field to provide info on capabilities, and filter the
set_codec_params() addition based on this capability - i.e. return
-ENOTSUP in 'traditional' implementations based on a single
'stream'/decoder instance.
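As a sketch of the proposal, one reserved word of snd_compr_caps could carry feature flags filled in by the driver's get_caps callback. The flag names and the callback below are purely illustrative - nothing like this exists in the current uapi:

```c
#include <sound/compress_driver.h>

/* Hypothetical feature flags carved out of snd_compr_caps.reserved[0];
 * these names are made up for this sketch and are not part of the uapi. */
#define SND_COMPR_CAP_GAPLESS_CODEC_SWITCH	(1 << 0) /* codec/profile may change on next_track */
#define SND_COMPR_CAP_CROSSFADE			(1 << 1)

static int example_compr_get_caps(struct snd_compr_stream *stream,
				  struct snd_compr_caps *caps)
{
	caps->direction = SND_COMPRESS_PLAYBACK;
	caps->num_codecs = 1;
	caps->codecs[0] = SND_AUDIOCODEC_FLAC;
	/* ... fragment sizes etc. filled in as today ... */

	/* advertise the multi-stream/multi-decoder capability */
	caps->reserved[0] = SND_COMPR_CAP_GAPLESS_CODEC_SWITCH;

	return 0;
}
```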
On Wed, Jul 08, 2020 at 08:32:02AM -0500, Pierre-Louis Bossart wrote:
> To avoid confusion I believe the capabilities would need to be extended
> so that applications know that gapless playback is supported across
> unrelated profiles/formats. The point is that you don't want a
> traditional implementation to use a capability that isn't supported in
> hardware or will lead to audio issues.

We'd also need error handling in case someone ignores the capability
checks.
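A minimal sketch of that error handling, building on the hypothetical flag above: if the next track's codec or profile differs and the capability was never advertised, the request should fail cleanly rather than produce broken audio. The helper name and the flag are made up for illustration:

```c
#include <linux/errno.h>
#include <linux/types.h>
#include <sound/compress_driver.h>

/* Same made-up flag as in the previous sketch, not an existing definition. */
#define SND_COMPR_CAP_GAPLESS_CODEC_SWITCH	(1 << 0)

/* Hypothetical check before accepting codec parameters for the next track:
 * reject a codec/profile change when the device only advertised the
 * traditional single-decoder gapless mode. */
static int check_next_track_codec(const struct snd_codec *cur,
				  const struct snd_codec *next,
				  u32 caps_flags)
{
	if ((next->id != cur->id || next->profile != cur->profile) &&
	    !(caps_flags & SND_COMPR_CAP_GAPLESS_CODEC_SWITCH))
		return -EOPNOTSUPP;	/* userspace ignored the caps */

	return 0;
}
```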
On 08/07/2020 14:32, Pierre-Louis Bossart wrote:
>>>> Add support to gapless playback by implementing metadata,
>>>> next_track, drain and partial drain support.
>>>>
>>>> Gapless on Q6ASM is implemented by opening 2 streams in a single asm
>>>> stream
>>>
>>> What does 'in a single asm stream' mean?
>>
>> So in QDSP6 ASM (Audio Stream Manager) terminology we have something
>> called an "asm session" for each ASoC FE DAI. Each asm session can be
>> connected with multiple streams (up to 8, I think), however there will
>> be only one active stream at any time. Also, there is only a single
>> data buffer associated with each asm session.
>>
>> For the gapless use case, we can keep two streams open for one asm
>> session, allowing us to fill in data on the second stream while the
>> first stream is playing.
>
> Ah, that's interesting, thanks for the details. So you have one DMA
> transfer, and the data from the previous and next track are provided in
> consecutive bytes in a ring buffer, but at the DSP level you have a
> switch that will feed data for the previous and next tracks into
> different decoders, yes?

Yes, that's true; we can drain and stop the first stream and start the
next stream, which will do the switch!

> If that is the case, indeed the extension you suggested earlier to
> change the profile is valid. You could even change the format, I guess.

Exactly, we did test this patchset along with the suggested extension!

> To avoid confusion, I believe the capabilities would need to be extended
> so that applications know that gapless playback is supported across
> unrelated profiles/formats. The point is that you don't want a
> traditional implementation to use a capability that isn't supported in
> hardware or will lead to audio issues.
>
>>>> and toggling them on next track.
>>>
>>> It really seems to me that you have two streams at the lowest level,
>>> along with the knowledge of how many samples to remove/insert and
>>> hence could do a much better job - including gapless support between
>>> unrelated profiles and cross-fading - without the partial drain and
>>> next_track mechanism that was defined assuming a single
>>> stream/profile.
>>
>> At the end of the day it is a single session with one data buffer but
>> with multiple streams.
>>
>> Achieving cross fade should be easy with this design.
>
> Looks like it indeed.
>
>> We need those hooks for partial drain and next track to allow us to
>> switch between streams and pass silence information to the respective
>> stream ids.
>
> Right, but the key point is 'switch between streams'. That means a more
> complex/capable implementation that should be advertised as such to
> applications. This is not the default behavior assumed initially: to
> allow for minimal implementations in memory-constrained devices, we
> assumed gapless was supported with a single decoder.
>
> Maybe the right way to do this is to extend the snd_compr_caps structure:
>
> /**
>  * struct snd_compr_caps - caps descriptor
>  * @codecs: pointer to array of codecs
>  * @direction: direction supported. Of type snd_compr_direction
>  * @min_fragment_size: minimum fragment supported by DSP
>  * @max_fragment_size: maximum fragment supported by DSP
>  * @min_fragments: min fragments supported by DSP
>  * @max_fragments: max fragments supported by DSP
>  * @num_codecs: number of codecs supported
>  * @reserved: reserved field
>  */
> struct snd_compr_caps {
> 	__u32 num_codecs;
> 	__u32 direction;
> 	__u32 min_fragment_size;
> 	__u32 max_fragment_size;
> 	__u32 min_fragments;
> 	__u32 max_fragments;
> 	__u32 codecs[MAX_NUM_CODECS];
> 	__u32 reserved[11];
> } __attribute__((packed, aligned(4)));
>
> and use a reserved field to provide info on capabilities, and filter the
> set_codec_params() addition based on this capability - i.e. return
> -ENOTSUP in 'traditional' implementations based on a single
> 'stream'/decoder instance.

Sounds good!
I will give it a go and see how it ends up!

--srini
>> Right, but the key point is 'switch between streams'. That means a
>> more complex/capable implementation that should be advertised as such
>> to applications. This is not the default behavior assumed initially:
>> to allow for minimal implementations in memory-constrained devices, we
>> assumed gapless was supported with a single decoder.
>>
>> Maybe the right way to do this is to extend the snd_compr_caps
>> structure:
>>
>> /**
>>  * struct snd_compr_caps - caps descriptor
>>  * @codecs: pointer to array of codecs
>>  * @direction: direction supported. Of type snd_compr_direction
>>  * @min_fragment_size: minimum fragment supported by DSP
>>  * @max_fragment_size: maximum fragment supported by DSP
>>  * @min_fragments: min fragments supported by DSP
>>  * @max_fragments: max fragments supported by DSP
>>  * @num_codecs: number of codecs supported
>>  * @reserved: reserved field
>>  */
>> struct snd_compr_caps {
>> 	__u32 num_codecs;
>> 	__u32 direction;
>> 	__u32 min_fragment_size;
>> 	__u32 max_fragment_size;
>> 	__u32 min_fragments;
>> 	__u32 max_fragments;
>> 	__u32 codecs[MAX_NUM_CODECS];
>> 	__u32 reserved[11];
>> } __attribute__((packed, aligned(4)));
>>
>> and use a reserved field to provide info on capabilities, and filter
>> the set_codec_params() addition based on this capability - i.e. return
>> -ENOTSUP in 'traditional' implementations based on a single
>> 'stream'/decoder instance.

I think this is also what Mark was referring to earlier.

> Sounds good!
> I will give it a go and see how it ends up!

Glad to see this discussion progressing. We may also want to document the
3 possible ways of supporting gapless playback while we are at it:

a) with the existing single-decoder assumption

b) with your suggested solution with a switch at the DSP level

c) with 2 streams at the userspace level and a switch/x-fade at the DSP
   level - which may simplify userspace quite a bit and was the initial
   design in a non-Linux OS.
diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
index c4b4684b7824..f48eca227fb5 100644
--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
+++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
@@ -67,12 +67,15 @@ struct q6asm_dai_rtd {
 	uint16_t bits_per_sample;
 	uint16_t source; /* Encoding source bit mask */
 	struct audio_client *audio_client;
+	uint32_t next_track_stream_id;
+	bool next_track;
 	/* Active */
 	uint32_t stream_id;
 	uint16_t session_id;
 	enum stream_state state;
 	uint32_t initial_samples_drop;
 	uint32_t trailing_samples_drop;
+	bool notify_on_drain;
 };
 
 struct q6asm_dai_data {
@@ -510,9 +513,11 @@ static void compress_event_handler(uint32_t opcode, uint32_t token,
 {
 	struct q6asm_dai_rtd *prtd = priv;
 	struct snd_compr_stream *substream = prtd->cstream;
-	unsigned long flags;
+	unsigned long flags = 0;
+	u32 wflags = 0;
 	uint64_t avail;
-	uint32_t bytes_written;
+	uint32_t bytes_written, bytes_to_write;
+	bool is_last_buffer = false;
 
 	switch (opcode) {
 	case ASM_CLIENT_EVENT_CMD_RUN_DONE:
@@ -527,7 +532,25 @@ static void compress_event_handler(uint32_t opcode, uint32_t token,
 		break;
 
 	case ASM_CLIENT_EVENT_CMD_EOS_DONE:
-		prtd->state = Q6ASM_STREAM_STOPPED;
+		if (prtd->notify_on_drain) {
+			if (substream->partial_drain && prtd->next_track_stream_id) {
+				/**
+				 * Close old stream and make it stale, switch
+				 * the active stream now!
+				 */
+				q6asm_cmd_nowait(prtd->audio_client,
+						 prtd->stream_id,
+						 CMD_CLOSE);
+				prtd->stream_id = prtd->next_track_stream_id;
+				prtd->next_track_stream_id = 0;
+			}
+
+			snd_compr_drain_notify(prtd->cstream);
+			prtd->notify_on_drain = false;
+
+		} else {
+			prtd->state = Q6ASM_STREAM_STOPPED;
+		}
 		break;
 
 	case ASM_CLIENT_EVENT_DATA_WRITE_DONE:
@@ -543,13 +566,32 @@ static void compress_event_handler(uint32_t opcode, uint32_t token,
 		}
 
 		avail = prtd->bytes_received - prtd->bytes_sent;
+		if (avail > prtd->pcm_count) {
+			bytes_to_write = prtd->pcm_count;
+		} else {
+			if (substream->partial_drain || prtd->notify_on_drain)
+				is_last_buffer = true;
+			bytes_to_write = avail;
+		}
+
+		if (bytes_to_write) {
+			if (substream->partial_drain && is_last_buffer) {
+				wflags |= ASM_LAST_BUFFER_FLAG;
+				q6asm_stream_remove_trailing_silence(prtd->audio_client,
+							prtd->stream_id,
+							prtd->trailing_samples_drop);
+			}
 
-		if (avail >= prtd->pcm_count) {
 			q6asm_write_async(prtd->audio_client, prtd->stream_id,
-					  prtd->pcm_count, 0, 0, 0);
-			prtd->bytes_sent += prtd->pcm_count;
+					  bytes_to_write, 0, 0, wflags);
+
+			prtd->bytes_sent += bytes_to_write;
 		}
 
+		if (prtd->notify_on_drain && is_last_buffer)
+			q6asm_cmd_nowait(prtd->audio_client,
+					 prtd->stream_id, CMD_EOS);
+
 		spin_unlock_irqrestore(&prtd->lock, flags);
 		break;
 
@@ -629,9 +671,15 @@ static int q6asm_dai_compr_free(struct snd_soc_component *component,
 	struct snd_soc_pcm_runtime *rtd = stream->private_data;
 
 	if (prtd->audio_client) {
-		if (prtd->state)
+		if (prtd->state) {
 			q6asm_cmd(prtd->audio_client, prtd->stream_id,
 				  CMD_CLOSE);
+			if (prtd->next_track_stream_id) {
+				q6asm_cmd(prtd->audio_client,
+					  prtd->next_track_stream_id,
+					  CMD_CLOSE);
+			}
+		}
 
 		snd_dma_free_pages(&prtd->dma_buffer);
 		q6asm_unmap_memory_regions(stream->direction,
@@ -645,15 +693,13 @@ static int q6asm_dai_compr_free(struct snd_soc_component *component,
 	return 0;
 }
 
-static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
-				      struct snd_compr_stream *stream,
-				      struct snd_compr_params *params)
+static int __q6asm_dai_compr_set_codec_params(struct snd_soc_component *component,
+					      struct snd_compr_stream *stream,
+					      struct snd_compr_params *params,
+					      int stream_id)
 {
 	struct snd_compr_runtime *runtime = stream->runtime;
 	struct q6asm_dai_rtd *prtd = runtime->private_data;
-	struct snd_soc_pcm_runtime *rtd = stream->private_data;
-	int dir = stream->direction;
-	struct q6asm_dai_data *pdata;
 	struct q6asm_flac_cfg flac_cfg;
 	struct q6asm_wma_cfg wma_cfg;
 	struct q6asm_alac_cfg alac_cfg;
@@ -669,43 +715,8 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
 
 	codec_options = &(prtd->codec_param.codec.options);
 
-	memcpy(&prtd->codec_param, params, sizeof(*params));
-	pdata = snd_soc_component_get_drvdata(component);
-	if (!pdata)
-		return -EINVAL;
-
-	if (!prtd || !prtd->audio_client) {
-		dev_err(dev, "private data null or audio client freed\n");
-		return -EINVAL;
-	}
-
-	prtd->periods = runtime->fragments;
-	prtd->pcm_count = runtime->fragment_size;
-	prtd->pcm_size = runtime->fragments * runtime->fragment_size;
-	prtd->bits_per_sample = 16;
-	if (dir == SND_COMPRESS_PLAYBACK) {
-		ret = q6asm_open_write(prtd->audio_client, prtd->stream_id,
-				       params->codec.id, params->codec.profile,
-				       prtd->bits_per_sample, true);
-
-		if (ret < 0) {
-			dev_err(dev, "q6asm_open_write failed\n");
-			q6asm_audio_client_free(prtd->audio_client);
-			prtd->audio_client = NULL;
-			return ret;
-		}
-	}
-
-	prtd->session_id = q6asm_get_session_id(prtd->audio_client);
-	ret = q6routing_stream_open(rtd->dai_link->id, LEGACY_PCM_MODE,
-				    prtd->session_id, dir);
-	if (ret) {
-		dev_err(dev, "Stream reg failed ret:%d\n", ret);
-		return ret;
-	}
-
 	switch (params->codec.id) {
 	case SND_AUDIOCODEC_FLAC:
@@ -722,7 +733,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
 		flac_cfg.min_frame_size = flac->min_frame_size;
 
 		ret = q6asm_stream_media_format_block_flac(prtd->audio_client,
-							   prtd->stream_id,
+							   stream_id,
 							   &flac_cfg);
 		if (ret < 0) {
 			dev_err(dev, "FLAC CMD Format block failed:%d\n", ret);
@@ -782,11 +793,11 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
 		if (wma_v9)
 			ret = q6asm_stream_media_format_block_wma_v9(
-					prtd->audio_client, prtd->stream_id,
+					prtd->audio_client, stream_id,
 					&wma_cfg);
 		else
 			ret = q6asm_stream_media_format_block_wma_v10(
-					prtd->audio_client, prtd->stream_id,
+					prtd->audio_client, stream_id,
 					&wma_cfg);
 
 		if (ret < 0) {
 			dev_err(dev, "WMA9 CMD failed:%d\n", ret);
@@ -820,7 +831,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
 			break;
 		}
 		ret = q6asm_stream_media_format_block_alac(prtd->audio_client,
-							   prtd->stream_id,
+							   stream_id,
 							   &alac_cfg);
 		if (ret < 0) {
 			dev_err(dev, "ALAC CMD Format block failed:%d\n", ret);
@@ -845,7 +856,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
 		ape_cfg.seek_table_present = ape->seek_table_present;
 
 		ret = q6asm_stream_media_format_block_ape(prtd->audio_client,
-							  prtd->stream_id,
+							  stream_id,
 							  &ape_cfg);
 		if (ret < 0) {
 			dev_err(dev, "APE CMD Format block failed:%d\n", ret);
@@ -857,6 +868,63 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
 		break;
 	}
 
+	return 0;
+}
+
+static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+				      struct snd_compr_stream *stream,
+				      struct snd_compr_params *params)
+{
+	struct snd_compr_runtime *runtime = stream->runtime;
+	struct q6asm_dai_rtd *prtd = runtime->private_data;
+	struct snd_soc_pcm_runtime *rtd = stream->private_data;
+	int dir = stream->direction;
+	struct q6asm_dai_data *pdata;
+	struct device *dev = component->dev;
+	int ret;
+
+	pdata = snd_soc_component_get_drvdata(component);
+	if (!pdata)
+		return -EINVAL;
+
+	if (!prtd || !prtd->audio_client) {
+		dev_err(dev, "private data null or audio client freed\n");
+		return -EINVAL;
+	}
+
+	prtd->periods = runtime->fragments;
+	prtd->pcm_count = runtime->fragment_size;
+	prtd->pcm_size = runtime->fragments * runtime->fragment_size;
+	prtd->bits_per_sample = 16;
+
+	if (dir == SND_COMPRESS_PLAYBACK) {
+		ret = q6asm_open_write(prtd->audio_client, prtd->stream_id, params->codec.id,
+				       params->codec.profile, prtd->bits_per_sample,
+				       true);
+
+		if (ret < 0) {
+			dev_err(dev, "q6asm_open_write failed\n");
+			q6asm_audio_client_free(prtd->audio_client);
+			prtd->audio_client = NULL;
+			return ret;
+		}
+	}
+
+	prtd->session_id = q6asm_get_session_id(prtd->audio_client);
+	ret = q6routing_stream_open(rtd->dai_link->id, LEGACY_PCM_MODE,
+				    prtd->session_id, dir);
+	if (ret) {
+		dev_err(dev, "Stream reg failed ret:%d\n", ret);
+		return ret;
+	}
+
+	ret = __q6asm_dai_compr_set_codec_params(component, stream, params,
+						 prtd->stream_id);
+	if (ret) {
+		dev_err(dev, "codec param setup failed ret:%d\n", ret);
+		return ret;
+	}
+
 	ret = q6asm_map_memory_regions(dir, prtd->audio_client, prtd->phys,
 				       (prtd->pcm_size / prtd->periods),
 				       prtd->periods);
@@ -871,12 +939,13 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
 	return 0;
 }
 
-static int q6asm_dai_compr_set_metadata(struct snd_compr_stream *stream,
+static int q6asm_dai_compr_set_metadata(struct snd_soc_component *component,
+					struct snd_compr_stream *stream,
 					struct snd_compr_metadata *metadata)
 {
 	struct snd_compr_runtime *runtime = stream->runtime;
 	struct q6asm_dai_rtd *prtd = runtime->private_data;
-	int ret = 0;
+	int stream_id = prtd->stream_id, ret = 0;
 
 	switch (metadata->key) {
 	case SNDRV_COMPRESS_ENCODER_PADDING:
@@ -884,6 +953,31 @@ static int q6asm_dai_compr_set_metadata(struct snd_compr_stream *stream,
 		break;
 	case SNDRV_COMPRESS_ENCODER_DELAY:
 		prtd->initial_samples_drop = metadata->value[0];
+		if (prtd->next_track_stream_id) {
+			stream_id = prtd->next_track_stream_id;
+			ret = q6asm_open_write(prtd->audio_client,
+					       prtd->next_track_stream_id,
+					       prtd->codec_param.codec.id,
+					       prtd->codec_param.codec.profile,
+					       prtd->bits_per_sample,
+					       true);
+			if (ret < 0) {
+				dev_err(component->dev, "q6asm_open_write failed\n");
+				return ret;
+			}
+			ret = __q6asm_dai_compr_set_codec_params(component, stream,
+								 &prtd->codec_param,
+								 prtd->next_track_stream_id);
+			if (ret < 0) {
+				dev_err(component->dev, "q6asm_open_write failed\n");
+				return ret;
+			}
+
+		}
+
+		q6asm_stream_remove_initial_silence(prtd->audio_client,
+						    stream_id,
+						    prtd->initial_samples_drop);
 		break;
 	default:
 		ret = -EINVAL;
@@ -917,6 +1011,14 @@ static int q6asm_dai_compr_trigger(struct snd_soc_component *component,
 		ret = q6asm_cmd_nowait(prtd->audio_client, prtd->stream_id,
 				       CMD_PAUSE);
 		break;
+	case SND_COMPR_TRIGGER_NEXT_TRACK:
+		prtd->next_track = true;
+		prtd->next_track_stream_id = (prtd->stream_id == 1 ? 2 : 1);
+		break;
+	case SND_COMPR_TRIGGER_DRAIN:
+	case SND_COMPR_TRIGGER_PARTIAL_DRAIN:
+		prtd->notify_on_drain = true;
+		break;
 	default:
 		ret = -EINVAL;
 		break;
diff --git a/sound/soc/qcom/qdsp6/q6asm.h b/sound/soc/qcom/qdsp6/q6asm.h
index 69513ac1fa55..a8dddfeb28da 100644
--- a/sound/soc/qcom/qdsp6/q6asm.h
+++ b/sound/soc/qcom/qdsp6/q6asm.h
@@ -34,6 +34,7 @@ enum {
 #define MAX_SESSIONS	8
 #define NO_TIMESTAMP	0xFF00
 #define FORMAT_LINEAR_PCM   0x0000
+#define ASM_LAST_BUFFER_FLAG BIT(30)
 
 struct q6asm_flac_cfg {
 	u32 sample_rate;
Add support to gapless playback by implementing metadata, next_track,
drain and partial drain support.

Gapless on Q6ASM is implemented by opening 2 streams in a single asm
stream and toggling them on next track.

Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
---
 sound/soc/qcom/qdsp6/q6asm-dai.c | 212 +++++++++++++++++++++++--------
 sound/soc/qcom/qdsp6/q6asm.h     |   1 +
 2 files changed, 158 insertions(+), 55 deletions(-)