Adding a New Node

Subclassing AudioNode

The first step in creating a custom node is choosing and subclassing the appropriate base class. The Switchboard SDK provides three core AudioNode types, each designed for a specific role in the audio graph (source, processor, and sink), plus a simplified single-bus variant of each:

Available Base Classes

Switchboard SDK provides several base classes to choose from, depending on your node's requirements:

| Base Class | Inputs | Outputs | Processing Method | Best For |
|------------|--------|---------|-------------------|----------|
| AudioSourceNode | ❌ No | ✅ Yes | produce(AudioBusList&) | Complex audio generators with multiple output buses (e.g., multi-track players) |
| AudioProcessorNode | ✅ Yes | ✅ Yes | process(AudioBusList&, AudioBusList&) | Complex processors with multiple input/output buses (e.g., multi-band processors, advanced mixers) |
| AudioSinkNode | ✅ Yes | ❌ No | consume(AudioBusList&) | Complex audio consumers with multiple input buses (e.g., multi-channel recorders) |
| SingleBusAudioSourceNode | ❌ No | ✅ Yes | produce(AudioBus&) | Most common for sources: simple audio generators (e.g., oscillators, file players) |
| SingleBusAudioProcessorNode | ✅ Yes | ✅ Yes | process(AudioBus&, AudioBus&) | Most common for processors: standard audio effects and filters (e.g., gain, EQ, reverb) |
| SingleBusAudioSinkNode | ✅ Yes | ❌ No | consume(AudioBus&) | Most common for sinks: simple audio consumers (e.g., audio output, analyzers) |
Tip

Start with the Single Bus variants: most custom nodes only need a single input and/or output bus. The SingleBus* classes provide a simplified API that's easier to implement and maintain. Only use the multi-bus variants if you specifically need to handle multiple buses.

Common Use Cases by Type

Source Nodes (Generate Audio)

  • Audio file players
  • Signal generators (oscillators, noise generators)
  • Microphone input nodes
  • Network audio receivers
  • Text-to-speech engines

Processor Nodes (Transform Audio)

  • Audio effects (reverb, delay, distortion, EQ)
  • Filters (low-pass, high-pass, band-pass)
  • Dynamics processors (compressors, limiters, gates)
  • Pitch shifters and time stretchers
  • Mixing and routing logic

Sink Nodes (Consume Audio)

  • Audio output to speakers/headphones
  • Audio file recorders
  • Network audio transmitters
  • Audio analyzers (FFT, level meters)
  • Audio-to-text transcription

Example Implementation

Here's a complete example of a custom processor node that implements a simple low-pass filter using the recommended SingleBusAudioProcessorNode base class:

#include <switchboard_core/SingleBusAudioProcessorNode.hpp>

#include <array>

class LowPassFilterNode : public SingleBusAudioProcessorNode {
public:
    LowPassFilterNode() {
        type = "LowPassFilterNode";
        previousSamples.fill(0.0f);
    }

    // Set the bus format (covered in the next section)
    bool setBusFormat(AudioBusFormat& inputBusFormat, AudioBusFormat& outputBusFormat) override {
        // Typically, processors match input and output formats
        return AudioBusFormat::matchBusFormats(inputBusFormat, outputBusFormat);
    }

    // Process audio data (covered in detail later)
    bool process(AudioBus& inBus, AudioBus& outBus) override {
        AudioBuffer<float>& inBuffer = *inBus.getBuffer();
        AudioBuffer<float>& outBuffer = *outBus.getBuffer();

        const uint numFrames = inBuffer.getNumberOfFrames();
        const uint numChannels = inBuffer.getNumberOfChannels();

        // Guard against more channels than we have state for
        if (numChannels > previousSamples.size()) {
            return false;
        }

        // Iterate channel-first and keep separate filter state per channel,
        // so one channel's history never leaks into another's
        for (uint channel = 0; channel < numChannels; ++channel) {
            float previousSample = previousSamples[channel];
            for (uint frame = 0; frame < numFrames; ++frame) {
                float input = inBuffer.getSample(channel, frame);
                // First-order low-pass: average the current and previous sample
                float output = (input + previousSample) * 0.5f;
                outBuffer.setSample(channel, frame, output);
                previousSample = input;
            }
            previousSamples[channel] = previousSample;
        }
        return true;
    }

private:
    // Per-channel filter state; sized for up to stereo in this example
    std::array<float, 2> previousSamples;
};

Key Considerations

When subclassing AudioNode, keep these important points in mind:

Real-Time Safety

Your node's processing code will run in a dedicated audio thread with strict timing requirements. You must ensure your code is real-time safe, which means:

  • No memory allocation in the audio thread (no new, malloc, std::vector::push_back, etc.)
  • No blocking operations (no file I/O, network calls, mutexes, or locks)
  • No unbounded loops (no operations with unpredictable execution time)
  • Use pre-allocated buffers and lock-free data structures
  • Keep processing deterministic and fast

Violating real-time safety can cause audio glitches, dropouts, or complete audio system failure.

Thread Safety

If your node exposes properties or handles actions that can be called from the UI thread, you need to ensure thread-safe communication between the UI thread and the audio thread. Use atomic operations or lock-free data structures for parameters that change during processing.
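One common approach is a `std::atomic` parameter that the UI thread writes and the audio thread reads once per buffer, so every frame in a block sees a consistent value. The following standalone sketch is illustrative only; the class and method names are assumptions, not SDK API:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Hypothetical gain processor illustrating UI-thread -> audio-thread
// parameter hand-off with std::atomic (names are illustrative, not SDK API).
class GainParameter {
public:
    // Called from the UI thread: lock-free store of the new value
    void setGain(float value) { gain.store(value, std::memory_order_relaxed); }

    // Called from the audio thread: load the parameter once per buffer
    // so all frames in the block use the same gain
    void process(const float* in, float* out, std::size_t numFrames) {
        const float g = gain.load(std::memory_order_relaxed);
        for (std::size_t i = 0; i < numFrames; ++i) {
            out[i] = in[i] * g;
        }
    }

private:
    std::atomic<float> gain { 1.0f };
};
```

Reading the atomic once per buffer (rather than once per sample) keeps the inner loop cheap while still picking up UI changes at block boundaries.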

State Management

Initialize all audio processing state (buffers, filter coefficients, internal variables) in your constructor or in a dedicated initialization method. The process() function should only update state, not allocate or initialize it.
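This lifecycle can be sketched with a hypothetical delay line: allocation happens once in a prepare step called from setup code, while the real-time path only reads and writes pre-allocated memory. The names here are illustrative, not SDK API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the recommended lifecycle: allocate during setup,
// touch only pre-allocated state in the hot path.
class DelayLine {
public:
    // Call during setup (e.g. from the constructor or an init method),
    // never from the audio thread: this allocates
    void prepare(std::size_t delayFrames) {
        buffer.assign(delayFrames, 0.0f);
        writeIndex = 0;
    }

    // Real-time safe: no allocation, just index arithmetic
    float processOne(float input) {
        const float output = buffer[writeIndex];
        buffer[writeIndex] = input;
        writeIndex = (writeIndex + 1) % buffer.size();
        return output;
    }

private:
    std::vector<float> buffer;
    std::size_t writeIndex = 0;
};
```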

Implementing Bus Formats

The bus format method is called by the SDK during graph initialization to negotiate audio formats between connected nodes. Single-bus nodes implement setBusFormat() with individual formats, while multi-bus nodes implement setBusFormats() with format lists.

Method signatures:

setBusFormat(AudioBusFormat&, AudioBusFormat&)  // for processors
setBusFormat(AudioBusFormat&)                   // for sources/sinks
setBusFormats(AudioBusFormatList&, ...)         // for multi-bus variants

AudioBusFormat contains:

  • sampleRate - Sample rate in Hz (e.g., 48000)
  • numberOfChannels - Channel count (1=mono, 2=stereo)
  • numberOfFrames - Buffer size in frames

Common Patterns

Pattern 1: Matching Formats (Passthrough)

When your processor doesn't change the audio format, use matchBusFormats():

bool setBusFormat(AudioBusFormat& inputBusFormat, AudioBusFormat& outputBusFormat) override {
    // Input and output have identical formats
    return AudioBusFormat::matchBusFormats(inputBusFormat, outputBusFormat);
}

This is common for effects like gain, filters, and most audio processors.

Pattern 2: Accepting Any Format (Source/Sink)

For source or sink nodes that adapt to any format:

bool setBusFormat(AudioBusFormat& busFormat) override {
    // Accept any format the graph provides
    return true;
}

Pattern 3: Validating Constraints

When your node has specific requirements, validate the format:

bool setBusFormat(AudioBusFormat& busFormat) override {
    // Only accept mono input
    if (busFormat.numberOfChannels != constants::MONO) {
        Logger::error("[MyNode] Only mono input is supported.");
        return false;
    }
    return true;
}

Pattern 4: Format Conversion

When your node changes the format (e.g., channel count or sample rate):

bool setBusFormat(AudioBusFormat& inputBusFormat, AudioBusFormat& outputBusFormat) override {
    if (outputBusFormat.isSet()) {
        // Output format is fixed, determine input format
        inputBusFormat.sampleRate = outputBusFormat.sampleRate;
        inputBusFormat.numberOfChannels = constants::MONO;
        inputBusFormat.numberOfFrames = outputBusFormat.numberOfFrames;
        return true;
    } else if (inputBusFormat.isSet()) {
        // Input format is fixed, determine output format
        outputBusFormat.sampleRate = inputBusFormat.sampleRate;
        outputBusFormat.numberOfChannels = constants::STEREO;
        outputBusFormat.numberOfFrames = inputBusFormat.numberOfFrames;
        return true;
    }
    return false;
}

Bidirectional Negotiation

The SDK may call setBusFormat() with either the input or output format already set (indicated by isSet()). Your implementation should handle both cases:

  • If input is set: Determine the output format based on your processing
  • If output is set: Determine the input format your node needs
  • If both are set: Validate they're compatible
  • If neither is set: Return false (the SDK will provide at least one)

Important Notes

  • Return true if the formats are acceptable, false otherwise
  • This method is called during graph setup, not in real-time
  • You can allocate resources here based on the negotiated format
  • Store format information (like sample rate) in member variables if needed for processing
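The last two notes can be sketched together: store the sample rate and pre-allocate a scratch buffer at negotiation time, so the real-time path never allocates. A stand-in struct mirrors the AudioBusFormat fields listed earlier; the struct and class names here are assumptions, not SDK API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for the SDK's AudioBusFormat, using the field names listed
// above; the struct itself is an assumption for this self-contained sketch
struct BusFormatInfo {
    uint32_t sampleRate = 0;
    uint32_t numberOfChannels = 0;
    uint32_t numberOfFrames = 0;
};

// Setup work at negotiation time: remember the sample rate and size a
// scratch buffer to the negotiated block, so process() never allocates
class NegotiatedState {
public:
    bool setBusFormat(const BusFormatInfo& format) {
        sampleRate = format.sampleRate;
        scratch.assign(format.numberOfFrames * format.numberOfChannels, 0.0f);
        return true;
    }

    uint32_t getSampleRate() const { return sampleRate; }
    std::size_t getScratchSize() const { return scratch.size(); }

private:
    uint32_t sampleRate = 0;
    std::vector<float> scratch;
};
```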

Implementing the Process Function

The process function is where your audio processing happens. This method is called repeatedly in the audio thread for every buffer of audio.

Method signatures:

process(AudioBus&, AudioBus&)          // for processors
produce(AudioBus&)                     // for sources
consume(AudioBus&)                     // for sinks
process(AudioBusList&, AudioBusList&)  // for multi-bus processors
produce(AudioBusList&)                 // for multi-bus sources
consume(AudioBusList&)                 // for multi-bus sinks

Accessing Audio Data

Get the audio buffer and its properties:

AudioBuffer<float>& buffer = *bus.getBuffer();
uint numFrames = buffer.getNumberOfFrames();
uint numChannels = buffer.getNumberOfChannels();
uint sampleRate = buffer.getSampleRate();

Reading and Writing Samples

Access individual samples using channel and frame indices:

// Read a sample
float sample = buffer.getSample(channelIndex, frameIndex);

// Write a sample
buffer.setSample(channelIndex, frameIndex, newValue);

// Get direct pointer to channel data (for performance)
const float* readPtr = buffer.getReadPointer(channelIndex);
float* writePtr = buffer.getWritePointer(channelIndex);

Common Processing Patterns

Pattern 1: Simple Copy (Passthrough)

bool process(AudioBus& inBus, AudioBus& outBus) override {
    outBus.copyFrom(inBus);
    return true;
}

Pattern 2: Sample-by-Sample Processing

bool process(AudioBus& inBus, AudioBus& outBus) override {
    AudioBuffer<float>& inBuffer = *inBus.getBuffer();
    AudioBuffer<float>& outBuffer = *outBus.getBuffer();

    const uint numFrames = inBuffer.getNumberOfFrames();
    const uint numChannels = inBuffer.getNumberOfChannels();

    for (uint frame = 0; frame < numFrames; ++frame) {
        for (uint channel = 0; channel < numChannels; ++channel) {
            float input = inBuffer.getSample(channel, frame);
            float output = processOneSample(input); // Your DSP here
            outBuffer.setSample(channel, frame, output);
        }
    }
    return true;
}

Pattern 3: Using Direct Pointers (Better Performance)

bool process(AudioBus& inBus, AudioBus& outBus) override {
    AudioBuffer<float>& inBuffer = *inBus.getBuffer();
    AudioBuffer<float>& outBuffer = *outBus.getBuffer();

    const uint numFrames = inBuffer.getNumberOfFrames();
    const uint numChannels = inBuffer.getNumberOfChannels();

    for (uint channel = 0; channel < numChannels; ++channel) {
        const float* input = inBuffer.getReadPointer(channel);
        float* output = outBuffer.getWritePointer(channel);

        for (uint frame = 0; frame < numFrames; ++frame) {
            output[frame] = processOneSample(input[frame]);
        }
    }
    return true;
}

Pattern 4: Audio Generation (Source Nodes)

bool produce(AudioBus& bus) override {
    AudioBuffer<float>& buffer = *bus.getBuffer();
    const uint numFrames = buffer.getNumberOfFrames();
    const uint numChannels = buffer.getNumberOfChannels();

    for (uint channel = 0; channel < numChannels; ++channel) {
        float* output = buffer.getWritePointer(channel);
        for (uint frame = 0; frame < numFrames; ++frame) {
            output[frame] = generateSample(); // Your generation logic
        }
    }
    return true;
}

Pattern 5: Audio Analysis (Sink Nodes)

bool consume(AudioBus& bus) override {
    AudioBuffer<float>& buffer = *bus.getBuffer();
    const uint numFrames = buffer.getNumberOfFrames();
    const float* input = buffer.getReadPointer(0);

    // Analyze the audio
    float level = calculateRMS(input, numFrames);
    currentLevel.store(level); // Use atomics for thread safety

    return true;
}

Return Value

  • Return true if processing succeeded
  • Return false if an error occurred (this will be logged by the SDK)
  • Returning false typically stops the audio graph

Performance Tips

  • Use direct pointers instead of getSample()/setSample() for better performance
  • Process in blocks when possible rather than sample-by-sample
  • Avoid branching inside the inner loop if possible
  • Use SIMD operations for computationally intensive processing
  • Profile your code to identify bottlenecks
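As an example of keeping the inner loop branch-free, clamping with std::clamp typically compiles to min/max instructions rather than conditional jumps, which also helps auto-vectorization. This standalone sketch uses only the standard library; the function name is illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Hard-clip a block of samples to [-1, 1]. std::clamp usually lowers to
// min/max instructions, so the loop stays branch-free and vectorizable.
void clampBlock(const float* in, float* out, std::size_t numFrames) {
    for (std::size_t i = 0; i < numFrames; ++i) {
        out[i] = std::clamp(in[i], -1.0f, 1.0f);
    }
}
```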

Critical Reminders

Remember the real-time safety rules mentioned earlier:

  • ❌ No memory allocation
  • ❌ No locks or blocking operations
  • ❌ No file I/O or system calls
  • ✅ Pre-allocated buffers only
  • ✅ Lock-free data structures for parameter updates
  • ✅ Keep processing deterministic and fast

Next Steps

At this point, you have a functional custom node that compiles and can process audio! You have two options for what to do next:

Option 1: Expose Features

Make your node more powerful and interactive by exposing features:

  • Configurations - Add initial setup parameters that are defined at node creation time
  • Properties - Add real-time parameters that can be changed during audio processing
  • Actions - Add methods that can be triggered to perform specific operations
  • Events - Add notifications your node can emit to communicate state changes

Option 2: Package as Extension

If your node is ready as-is, you can package it as an extension to make it distributable and reusable across projects. See Packaging as an Extension to learn how to create a Switchboard SDK extension.