This change has been a long time coming, ever since the system gained
sample rate awareness. Now, each client has its own sample rate,
accessible via new IPC APIs, and the device sample rate is only
accessible via the management interface. AudioServer takes care of
resampling client streams into the device sample rate. Therefore, the
main improvement introduced with this commit is full responsiveness to
sample rate changes; all open audio programs will continue to play at
correct speed with the audio resampled to the new device rate.
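As a rough sketch of the client's view (IPC call names are
approximate, not the literal interface):

    // Each client declares its own stream rate; AudioServer resamples
    // the stream into the device rate on its end.
    auto client = Audio::ClientConnection::construct();
    client->set_self_sample_rate(44100);

    // The device rate is only reachable through the management
    // interface, so regular clients cannot touch it:
    //   manager->set_device_sample_rate(96000);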
The immediate benefits are manifold:
- Gets rid of the legacy hardware sample rate IPC message in the
non-managing client
- Removes duplicate resampling and sample index rescaling code
everywhere
- Avoids potential sample index scaling bugs in SoundPlayer (which have
happened many times before) and fixes a sample index scaling bug in
aplay
- Removes several FIXMEs
- Reduces amount of sample copying in all applications (especially
Piano, where this is critical), improving performance
- Reduces number of resampling users, making future API changes (which
will need to happen for correct resampling to be implemented) easier
I also threw in a simple race condition fix for Piano's audio player
loop.
The audio player loop uses custom IPC plumbing to safely bypass any
event loop shenanigans. There is still work to be done, but this already
improves the realtime capabilities of Piano.
Buffer.h is now renamed to Queue.h, and the Resampler APIs that used
LegacyBuffer are also removed. These changes look large because nobody
actually needs Buffer.h (or Queue.h) itself: most users only had
transitive dependencies on the massive list of includes in that
header, which are now almost all gone. Instead, we include common
things like Sample.h directly, which should give faster compile times
as very few files actually need Queue.h.
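For example, a file that only needs the sample type now includes it
directly:

    // Before: #include <LibAudio/Buffer.h> pulled in a large include list.
    #include <LibAudio/Sample.h> // just the sample type and nothing else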
Previously, we were sending Buffers to the server whenever we had new
audio data for it. This meant that for every audio enqueue action, we
needed to create a new anonymous shared memory buffer, send that
buffer's file descriptor over IPC (plus recvfd on the other side), and
then map the buffer into the audio server's memory to be able to play
it.
This was fine for sending large chunks of audio data, like when playing
existing audio files. However, in the future we want to move to
real-time audio in some applications like Piano. This means that the
buffers being sent need to be very small, as the size of a buffer
itself directly contributes to audio latency. If we were to try
real-time audio with the existing system, we would run into problems
really quickly. Dealing with a continuous stream of new anonymous
files, as the current audio system does, is rather expensive, as we
need Kernel help in multiple places. Additionally, every enqueue
incurs an IPC call, and IPC is not optimized for more than 1000 calls
per second (which real-time audio with buffer sizes of ~40 samples
would require). So a fundamental
change in how we handle audio sending in userspace is necessary.
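To make the cost concrete, every single enqueue used to involve
roughly this (sketched with approximate names):

    // Old scheme, repeated for every enqueued chunk:
    auto buffer = MUST(Core::AnonymousBuffer::create_with_size(byte_count)); // Kernel: new anonymous file
    memcpy(buffer.data<u8>(), samples.data(), byte_count);                   // copy the samples in
    client->async_enqueue(buffer); // IPC: sendfd here, recvfd + mmap on the server side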
This commit moves the audio sending system onto a shared
single-producer circular queue (SSPCQ), introduced in one of the
previous commits.
This queue is intended to live in shared memory and be accessed by
multiple processes at the same time. It was specifically written to
support the audio sending case, so e.g. it only supports a single
producer (the audio client). Now, audio sending follows these general
steps:
- The audio client connects to the audio server.
- The audio client creates a SSPCQ in shared memory.
- The audio client sends the SSPCQ's file descriptor to the audio server
with the set_buffer() IPC call.
- The audio server receives the SSPCQ and maps it.
- The audio client signals start of playback with start_playback().
- At the same time:
- The audio client writes its audio data into the shared-memory queue.
- The audio server reads audio data from the shared-memory queue(s).
Both sides have additional before-queue/after-queue buffers, depending
on the exact application.
- Pausing playback is just an IPC call, nothing happens to the buffer
except that the server stops reading from it until playback is
resumed.
- Muting has nothing to do with whether audio data is read or not.
- When the connection closes, the queues are unmapped on both sides.
This should already improve audio playback performance in a bunch of
places.
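Roughly, the client side now looks like this (type and method names
approximate, not the literal interface; render_next_chunk() stands in
for application-specific code):

    // Once per connection:
    auto queue = MUST(AudioQueue::create()); // the SSPCQ, backed by shared memory
    client->set_buffer(queue);               // hands the queue's fd to the server, which maps it
    client->start_playback();

    // Steady state: no IPC calls and no new Kernel objects, just
    // writes into shared memory that the server reads concurrently.
    while (has_more_audio()) {
        for (auto sample : render_next_chunk()) {
            while (queue.try_enqueue(sample).is_error())
                usleep(100); // queue full: the server is lagging, wait briefly
        }
    }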
Implementation & commit notes:
- Audio loaders don't create LegacyBuffers anymore. LegacyBuffer is
kept for WavLoader; see the previous commit message.
- Most intra-process audio data passing is done with FixedArray<Sample>
or Vector<Sample>.
- Improvements to most audio-enqueuing applications. (If necessary I can
try to extract some of the aplay improvements.)
- New APIs on LibAudio/ClientConnection which allow non-realtime
applications to enqueue audio in big chunks like before (see the
sketch after this list).
- Removal of status APIs from the audio server connection for
information that can be directly obtained from the shared queue.
- Split the pause playback API into two APIs with more intuitive names.
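The chunked-enqueue path mentioned above is used along these lines
(hypothetical call name):

    // Non-realtime application, e.g. a music player decoding a file:
    Vector<Audio::Sample> samples = decode_more_audio(); // one large chunk
    // The connection splits the chunk into queue-sized pieces and
    // feeds the shared queue as space becomes available:
    client->async_enqueue(move(samples));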
I know this is a large commit, and you can kinda tell from the commit
message. It's basically impossible to break this up without hacks, so
please forgive me. These are some of the best changes to the audio
subsystem and I hope that that makes up for this :yaktangle: commit.
:yakring:
Derivatives of Core::Object should be constructed through
ClassName::construct(), to avoid handling ref-counted objects with
refcount zero. Fixing the constructors' visibility makes misuses like
this more difficult.
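The pattern, sketched with LibCore's C_OBJECT macro:

    class Player final : public Core::Object {
        C_OBJECT(Player) // generates Player::construct() returning NonnullRefPtr<Player>
    private:
        Player() = default; // private: direct construction no longer compiles
    };

    auto player = Player::construct(); // refcount is already 1 here
    // Player oops;                    // error: constructor is inaccessible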
All audio applications (aplay, Piano, Sound Player) respect the fact
that the system can theoretically run at any sample rate. Therefore,
they resample their own audio into the system sample rate.
LibAudio previously had its loaders resample their own audio, even
though they expose their sample rate. This is now changed: the loaders
output audio data at their file's native sample rate, which the user
has to query so they can resample appropriately. Resampling code from
Buffer, WavLoader and FlacLoader is removed.
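Callers now resample themselves, along these lines (helper names
approximate):

    auto loader = MUST(Audio::Loader::create(path));
    // The loader hands out samples at the file's native rate...
    auto file_samples = MUST(loader->get_more_samples());
    // ...and the caller converts them to the rate it actually needs:
    Audio::ResampleHelper<Audio::Sample> resampler(loader->sample_rate(), device_sample_rate);
    auto output = resampler.resample(move(file_samples));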
Note that these applications only check the sample rate at startup,
which is reasonable (the user has to restart applications when changing
the sample rate). Fully dynamic adaptation could both introduce errors
and would require another IPC interface. This seems to be enough for
now.