
Base: Update the audio subsystem documentation

We need to mention the new queue system as well as the userland sample
format.
kleines Filmröllchen 2022-05-14 11:42:47 +02:00 committed by Linus Groh
parent 74226a6cb3
commit 900349dbe1


@@ -8,13 +8,19 @@ Overview of the SerenityOS audio subsystem, including a brief description of [`/
SerenityOS structures audio into three groups of responsibilities: Audio drivers that talk to hardware and expose the `/dev/audio` device; the AudioServer that is responsible for talking to userland audio clients, mixing and processing audio, and controlling the hardware via the driver; the audio libraries LibAudio and LibDSP that facilitate easier handling of audio data for userland applications.
+### Sample formats
+There are two primary sample formats used in SerenityOS. The `Sample` class in LibAudio provides the userland sample format. It contains 32-bit floating-point samples in multiple channels (currently two: Stereo), which accurately represent mathematical audio signals between -1 and 1. The kernel audio interfaces use another sample format, described in [audio(4)](help://man/4/audio), which userland applications need not worry about.
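For illustration, the userland format boils down to a pair of floats per sample frame. The following is a minimal sketch of that idea, not the actual LibAudio `Sample` class, which carries more functionality:

```cpp
#include <cstdint>

// Minimal stand-in for a stereo 32-bit floating-point sample,
// illustrating the userland format described above.
struct Sample {
    float left { 0 };
    float right { 0 };

    // Clamp the signal into the mathematically valid [-1, 1] range.
    void clip()
    {
        if (left > 1.0f)
            left = 1.0f;
        if (left < -1.0f)
            left = -1.0f;
        if (right > 1.0f)
            right = 1.0f;
        if (right < -1.0f)
            right = -1.0f;
    }

    // Example: converting signed 16-bit PCM into the [-1, 1] range.
    static Sample from_s16(int16_t l, int16_t r)
    {
        return { l / 32768.0f, r / 32768.0f };
    }
};
```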
### AudioServer
AudioServer is responsible for handling userland audio clients and talking to the hardware. For this reason, no userland application should ever need to write to a device in `/dev/audio` directly, except for special cases in which AudioServer is not present.
-As with all system servers, AudioServer provides an IPC interface on `/tmp/portal/audio`. For specifics on how to talk to AudioServer, the IPC interface specifications are the best source of information. Audio clients send audio buffers with the standard audio format (see [audio](help://man/4/audio)) to the server. They can then query the state of these buffers, pause buffer playback or clear the playing buffers. For controlling mixer functionality, clients have the ability to obtain and change their own volume, or the main volume and mute state.
+As with all system servers, AudioServer provides an IPC interface on `/tmp/portal/audio`. For specifics on how to talk to AudioServer, the IPC interface specifications are the best source of information. For controlling mixer functionality, clients have the ability to obtain and change their own volume, or the main volume and mute state.
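As a rough sketch of how a client might use that mixer functionality: the names below (`AudioClient` and its methods) are placeholders, not the real LibAudio IPC connection API, and the server state is simulated with plain members so the example runs standalone.

```cpp
#include <cstdio>

// Hypothetical stand-in for the client side of the AudioServer IPC
// interface on /tmp/portal/audio; the real connection object differs.
// Server state is faked with members so this sketch is self-contained.
class AudioClient {
public:
    // Per-client volume: only affects this client's own streams.
    void set_self_volume(double volume) { m_self_volume = volume; }
    double get_self_volume() const { return m_self_volume; }

    // Main mixer controls shared by all clients.
    void set_main_volume(double volume) { m_main_volume = volume; }
    void set_main_mute(bool muted) { m_main_muted = muted; }

private:
    double m_self_volume { 1.0 };
    double m_main_volume { 1.0 };
    bool m_main_muted { false };
};

int main()
{
    AudioClient client;
    // Halve this client's own volume without touching the main mix.
    client.set_self_volume(client.get_self_volume() * 0.5);
    std::printf("self volume is now %.2f\n", client.get_self_volume());
}
```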
-In reverse, AudioServer has "event" calls that the client receives. These are: A client buffer finished playing (useful for queuing the next buffer), various mixer states changed (main volume, main mute, client volume).
+Userland audio transmission happens via the AudioQueue. This is a shared-memory circular queue which supports concurrent lock-free writing and reading. The queue is created by the audio client, and its shared memory file descriptor is sent to the audio server. In order to use this queue, an audio application needs to split its audio data into atomic chunks that can then be provided to the queue. The application might need to wait until there is room in the queue before writing to it. For these reasons, there's a utility API in LibAudio's audio server IPC connection which allows audio applications to send off a large chunk of samples, which is then progressively sent in the background.
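One way to picture such a queue is a single-producer, single-consumer ring buffer whose read and write indices are atomics. The sketch below shows that core idea under those assumptions (fixed capacity, one writer, one reader); the actual AudioQueue additionally lives in a shared memory region and stores fixed-size sample chunks:

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>
#include <utility>

// Single-producer/single-consumer lock-free ring buffer, sketching the
// idea behind the shared-memory AudioQueue; not the real implementation.
template<typename Chunk, size_t Capacity>
class SpscQueue {
public:
    // Returns false when the queue is full; the writer then has to wait
    // until the reader has made room.
    bool try_enqueue(Chunk chunk)
    {
        size_t head = m_head.load(std::memory_order_relaxed);
        size_t next = (head + 1) % Capacity;
        if (next == m_tail.load(std::memory_order_acquire))
            return false; // Full.
        m_slots[head] = std::move(chunk);
        m_head.store(next, std::memory_order_release);
        return true;
    }

    // Returns an empty optional when there is nothing to read.
    std::optional<Chunk> try_dequeue()
    {
        size_t tail = m_tail.load(std::memory_order_relaxed);
        if (tail == m_head.load(std::memory_order_acquire))
            return {}; // Empty.
        Chunk chunk = std::move(m_slots[tail]);
        m_tail.store((tail + 1) % Capacity, std::memory_order_release);
        return chunk;
    }

private:
    std::array<Chunk, Capacity> m_slots {};
    std::atomic<size_t> m_head { 0 };
    std::atomic<size_t> m_tail { 0 };
};

int main()
{
    SpscQueue<int, 4> queue;
    queue.try_enqueue(42); // Producer side (the audio client).
    if (auto chunk = queue.try_dequeue()) // Consumer side (AudioServer).
        return *chunk == 42 ? 0 : 1;
    return 1;
}
```

Because each side only ever advances its own index, no locks are needed; this is also why applications hand over audio in atomic chunks rather than arbitrary byte runs.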
+On the server -> client side, AudioServer has "event" calls that the client receives. These are various mixer state changes (main volume, main mute, client volume).
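A client could surface those notifications as simple callbacks; the hook names here are invented for illustration and do not match the real LibAudio connection object:

```cpp
#include <cstdio>
#include <functional>

// Hypothetical client-side hooks for the server -> client event calls
// listed above; the member names are placeholders only.
struct AudioClientEvents {
    std::function<void(double)> on_main_volume_change;
    std::function<void(bool)> on_main_mute_change;
    std::function<void(double)> on_client_volume_change;
};

int main()
{
    AudioClientEvents events;
    events.on_main_mute_change = [](bool muted) {
        std::printf("main mix is now %s\n", muted ? "muted" : "unmuted");
    };
    // Simulate the server notifying this client of a mute state change.
    events.on_main_mute_change(true);
}
```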
### Libraries