https://developer.apple.com/documentation/audiounit
Apple has a number of audio frameworks we could use. This patch uses
the Audio Unit framework, as it gives us the most control over the
rendering of the audio frames (such as being able to quickly pause or
discard buffers).
From some reading, we could also implement niceties such as fading
playback in and out over a short (10 ms) period while seeking. This
patch does not implement such features, though.
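As a rough illustration of how the Audio Unit framework hands us
control of rendering, the sketch below sets up a default output unit
with a render callback that pulls PCM frames from our own queue. The
`pull_samples_into()` helper is hypothetical; the Audio Unit calls
themselves are the framework's real C API, but error handling is
omitted.

```cpp
#include <AudioUnit/AudioUnit.h>

// Hypothetical helper: fills the output buffers with up to frame_count
// frames from our sample queue, zero-filling if we run short.
static void pull_samples_into(AudioBufferList*, UInt32 frame_count);

static OSStatus render_callback(void*, AudioUnitRenderActionFlags*,
    AudioTimeStamp const*, UInt32, UInt32 frame_count, AudioBufferList* buffers)
{
    pull_samples_into(buffers, frame_count);
    return noErr;
}

static AudioUnit create_output_unit()
{
    AudioComponentDescription description {};
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_DefaultOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent component = AudioComponentFindNext(nullptr, &description);
    AudioUnit unit = nullptr;
    AudioComponentInstanceNew(component, &unit);

    // Have the output unit pull audio from our callback on its own thread.
    AURenderCallbackStruct callback { &render_callback, nullptr };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
        kAudioUnitScope_Input, 0, &callback, sizeof(callback));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit); // Pausing is simply AudioOutputUnitStop(unit).
    return unit;
}
```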
Co-Authored-By: Andrew Kaster <akaster@serenityos.org>
This implementation is very naive compared to the PulseAudio one.
Instead of using a callback implemented by the audio server connection
to push audio into the buffer, we have to poll on a timer to check
when we need to push the next audio buffers. Adding cross-process
condition variables to the audio queue class could allow us to avoid
polling, which would likely reduce CPU usage.
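A minimal sketch of that polling loop, using a plain thread in place
of the event-loop timer the real code would use; `AudioQueue` and its
methods are hypothetical stand-ins for the shared-memory queue
interface, and the 10 ms interval is only illustrative:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical stand-in for the cross-process audio queue.
struct AudioQueue {
    bool can_write_frames();         // Has the consumer made room for a chunk?
    void write_next_decoded_chunk(); // Push one chunk of decoded samples.
};

void poll_and_push_audio(AudioQueue& queue, std::atomic<bool>& keep_running)
{
    using namespace std::chrono_literals;
    while (keep_running) {
        // Poll the shared queue and push samples whenever there is room.
        // A cross-process condition variable in the queue would let the
        // consumer wake us instead of us polling like this.
        while (queue.can_write_frames())
            queue.write_next_decoded_chunk();
        std::this_thread::sleep_for(10ms);
    }
}
```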
Audio timestamps will be accurate to the number of samples available,
but will advance in increments of about 100 ms and run ahead of the
audio actually being pushed to the device by the server.
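In other words, the reported position is derived from the samples we
have handed to the server rather than from what the device has played,
roughly as in this sketch (names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// Because samples are handed over in large batches, this position moves
// in batch-sized (roughly 100 ms) steps and runs slightly ahead of the
// audible output.
double playback_position_in_seconds(size_t total_samples_enqueued, uint32_t sample_rate)
{
    return static_cast<double>(total_samples_enqueued) / sample_rate;
}
```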
Buffer underruns are completely ignored for now as well, since the
`AudioServer` has no way to know how many samples were actually
written to a single audio buffer.
This will ensure that we don't leak any memory while playing back
audio.
For the moment, there is an expectation value in the test that is only
set to true when PulseAudio is present. When any new implementation is
added for other libraries/platforms, we should hopefully get a CI
failure due to unexpected success in creating the `PlaybackStream`.
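The expectation could look roughly like the sketch below; the macro,
include paths, and test names are illustrative rather than the actual
test code:

```cpp
#include <LibAudio/PlaybackStream.h>
#include <LibTest/TestCase.h>

// Illustrative: only expect stream creation to succeed where PulseAudio
// (currently the only implemented backend) is available.
#if defined(HAVE_PULSEAUDIO)
static constexpr bool expect_creation_to_succeed = true;
#else
static constexpr bool expect_creation_to_succeed = false;
#endif

TEST_CASE(create_playback_stream)
{
    auto stream_or_error = Audio::PlaybackStream::create(/* parameters elided */);
    // Once a backend exists for another platform, creation will start
    // succeeding unexpectedly here and CI should flag this expectation.
    EXPECT_EQ(!stream_or_error.is_error(), expect_creation_to_succeed);
}
```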
To ensure that we clean up our PulseAudio connection whenever audio
output is not needed, add `PulseAudioContext::weak_instance()` to allow
us to check whether an instance exists without creating one.
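One way a `weak_instance()` accessor like this can be structured is
sketched below with standard smart pointers; the real code uses AK's
reference-counting types, so treat every name here as illustrative:

```cpp
#include <memory>
#include <mutex>

class PulseAudioContext {
public:
    // Creates the context on first use; it stays alive only as long as
    // some caller still holds the returned shared pointer.
    static std::shared_ptr<PulseAudioContext> instance()
    {
        std::lock_guard lock(s_mutex);
        if (auto existing = s_weak_instance.lock())
            return existing;
        auto created = std::shared_ptr<PulseAudioContext>(new PulseAudioContext);
        s_weak_instance = created;
        return created;
    }

    // Returns the existing context, if any, without creating one, so that
    // cleanup paths can check for a live connection cheaply.
    static std::shared_ptr<PulseAudioContext> weak_instance()
    {
        std::lock_guard lock(s_mutex);
        return s_weak_instance.lock();
    }

private:
    PulseAudioContext() = default;

    static inline std::mutex s_mutex;
    static inline std::weak_ptr<PulseAudioContext> s_weak_instance;
};
```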