# Engines
To run your audio graph, you need to place it inside an audio engine. The engine is responsible for managing audio I/O, scheduling, and processing, and determines how and where your audio graph runs.
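For example, a minimal setup builds a graph and then hands it to a realtime engine to start processing. The sketch below is illustrative only: the headers, class names (`AudioGraph`, `RealtimeAudioEngine`, `SineGeneratorNode`, `GainNode`), and method signatures are assumptions rather than the exact Switchboard SDK API, so check the List of Objects page for the actual names in your SDK version.

```cpp
// Illustrative sketch: all names below are assumptions, not the exact
// Switchboard SDK API. See the List of Objects page for the real classes.
#include <memory>

#include "switchboard/AudioGraph.hpp"              // hypothetical header
#include "switchboard/RealtimeAudioEngine.hpp"     // hypothetical header
#include "switchboard/nodes/SineGeneratorNode.hpp" // hypothetical header
#include "switchboard/nodes/GainNode.hpp"          // hypothetical header

int main() {
    // Build the audio graph: a sine generator feeding a gain stage.
    switchboard::AudioGraph graph;
    auto sine = std::make_shared<switchboard::SineGeneratorNode>(440.0f); // 440 Hz tone
    auto gain = std::make_shared<switchboard::GainNode>(0.5f);            // roughly -6 dB

    graph.addNode(sine);
    graph.addNode(gain);
    graph.connect(sine, gain);
    graph.connectToOutput(gain); // route the gain node to the graph's output

    // Place the graph inside an engine. The realtime engine opens the system
    // audio device and drives the graph on the audio thread.
    switchboard::RealtimeAudioEngine engine;
    engine.start(graph);

    // ... run your app; the engine keeps processing until you stop it ...
    engine.stop();
    return 0;
}
```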
Switchboard SDK provides several types of engines, each suited for different use cases:
| Engine Type | Description |
|---|---|
| Realtime | Runs your audio graph in real time, connecting to the system's audio input/output (e.g., microphone, speakers). Ideal for interactive apps, live audio, and music production. |
| Offline | Processes audio graphs using audio files as input and output. Useful for rendering, exporting, or batch processing audio. |
| Manual | Gives you full control to process audio buffers step by step. Ideal for custom workflows, testing, or integration with other engines. |
| WebSocket | Processes audio buffers received over a network connection, enabling remote audio processing and streaming scenarios. |
You can find the full list of available engines and their documentation on the List of Objects page.
## Choosing the Right Engine
- **Realtime Engine:** Use this when you want to process live audio from the microphone, play audio through the speakers, or build interactive audio apps. The engine handles low-latency, real-time audio I/O for you (as sketched in the introduction above).
- **Offline Engine:** Use this for non-realtime tasks such as rendering audio files, exporting mixes, or running batch audio processing jobs. The engine processes audio as fast as possible, independent of real-time constraints (see the offline sketch after this list).
- **Manual Engine:** Use this if you want to process audio buffers manually, for example when integrating with another audio engine or for advanced testing and debugging (see the manual-processing sketch after this list).
- **WebSocket Engine:** Use this to process audio data sent and received over a network, such as in collaborative or distributed audio applications (see the WebSocket sketch after this list).
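For the offline case referenced in the list above, a render job might look like the following sketch. The `OfflineAudioEngine` class, its `process` signature, the `ReverbNode`, and the file names are assumptions used for illustration; the actual offline engine and its file-I/O methods are documented on the List of Objects page.

```cpp
// Illustrative sketch: names are assumptions, not the exact Switchboard SDK API.
#include <memory>

#include "switchboard/AudioGraph.hpp"         // hypothetical header
#include "switchboard/OfflineAudioEngine.hpp" // hypothetical header
#include "switchboard/nodes/ReverbNode.hpp"   // hypothetical header

int main() {
    // A graph that applies reverb to whatever the engine feeds through it.
    switchboard::AudioGraph graph;
    auto reverb = std::make_shared<switchboard::ReverbNode>();
    graph.addNode(reverb);
    graph.connectInputToNode(reverb); // graph input -> reverb
    graph.connectToOutput(reverb);    // reverb -> graph output

    // The offline engine reads the input file, pushes it through the graph
    // as fast as possible (no real-time constraint), and writes the result.
    switchboard::OfflineAudioEngine engine;
    engine.process(graph, "dry_vocals.wav", "wet_vocals.wav");
    return 0;
}
```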
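The manual engine referenced above leaves the processing loop entirely to you. In the sketch below, `ManualAudioEngine`, its constructor arguments, and a `process` call that consumes and produces interleaved float buffers are assumed names; the real buffer types and signatures may differ.

```cpp
// Illustrative sketch: names and signatures are assumptions.
#include <vector>

#include "switchboard/AudioGraph.hpp"        // hypothetical header
#include "switchboard/ManualAudioEngine.hpp" // hypothetical header

int main() {
    switchboard::AudioGraph graph;
    // ... add and connect nodes as usual ...

    // No audio device or clock is involved: you decide when each buffer is
    // processed, which suits unit tests or embedding the graph inside
    // another engine's audio callback.
    switchboard::ManualAudioEngine engine(graph, /*sampleRate=*/48000, /*numChannels=*/2);

    const int numFrames = 512;
    std::vector<float> input(numFrames * 2, 0.0f);  // interleaved stereo in
    std::vector<float> output(numFrames * 2, 0.0f); // interleaved stereo out

    for (int block = 0; block < 100; ++block) {
        // Fill `input` from your own source, then process one block.
        engine.process(input.data(), output.data(), numFrames);
        // Consume `output` however your workflow requires.
    }
    return 0;
}
```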
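Finally, the WebSocket engine referenced above is driven by a network connection rather than an audio device or a loop. The sketch assumes a `WebSocketAudioEngine` that listens on a port, runs incoming buffers through the graph, and streams the processed audio back; the actual class name, connection handling, and wire format are not shown here and should be taken from the engine's documentation.

```cpp
// Illustrative sketch: names are assumptions, not the exact Switchboard SDK API.
#include "switchboard/AudioGraph.hpp"           // hypothetical header
#include "switchboard/WebSocketAudioEngine.hpp" // hypothetical header

int main() {
    switchboard::AudioGraph graph;
    // ... add and connect nodes as usual ...

    // The engine accepts audio buffers over a WebSocket connection, processes
    // them through the graph, and sends the processed audio back to the peer.
    switchboard::WebSocketAudioEngine engine(graph, /*port=*/8080);
    engine.start();

    // ... keep the process alive while clients are connected ...
    engine.stop();
    return 0;
}
```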