Android

Microphone Access

If your app requires microphone access and you want to start receiving the microphone audio stream, set the enableInput flag to true in the AudioEngine constructor, or call the enableMicrophone(enable: Boolean) method. Either option reconfigures the audio engine and activates the microphone input.

We recommend disabling the microphoneEnabled flag when your app does not need to receive the microphone audio stream.
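As a minimal sketch of this pattern (the stub class below stands in for the real AudioEngine, whose actual constructor and internals differ), toggling the microphone stream could look like this:

```kotlin
// Stub standing in for the SDK's AudioEngine; the real class does far more
// than store a flag, but the documented API shape is the same.
class AudioEngine(var enableInput: Boolean = false) {
    fun enableMicrophone(enable: Boolean) {
        // In the real engine this reconfigures the audio graph and
        // activates or deactivates the microphone input.
        enableInput = enable
    }
}

fun micToggleDemo(): Boolean {
    val engine = AudioEngine(enableInput = true) // mic on from construction
    engine.enableMicrophone(false)               // release the mic when idle
    return engine.enableInput
}
```

Releasing the microphone when idle also avoids keeping the OS-level microphone indicator active while your app is not recording.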

Microphone Presets

The audio engine uses the microphone preset information to optimize the behavior of the input stream. This can affect microphone selection and the kind of pre-processing applied (e.g. noise suppression, automatic gain control, acoustic echo cancellation). For more information, check the Android developer site.

Note about noise and echo cancellation: the built-in noise and echo cancellation algorithms, which are enabled when the VOICE_COMMUNICATION flag is used, perform poorly on some devices. If you are looking for high-performance, easily integrable algorithms, check out our extensions page.

Bluetooth Modes

Bluetooth audio devices work differently during a phone call and while listening to music. Most Bluetooth audio devices expose two endpoints, A2DP and SCO (HFP). A2DP is uni-directional and transfers a high-quality audio stream with up to 2 channels. HFP supports two streams of audio (microphone + audio output), but at a lower sample rate (typically below 24 kHz) and in mono only.

When microphone access is not requested, Bluetooth is enabled by default and A2DP is used. In this case no configuration is needed, and using HFP is not possible.

When microphone access is requested, either A2DP or HFP can be used. The default is HFP, but A2DP can be enabled by setting the allowBluetoothA2DP flag. Since A2DP cannot handle two streams, the device's built-in microphone is used and there is no way to read the headset's microphone.
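The routing rules above can be condensed into a small decision function. Note that BluetoothRoute and the parameter names are illustrative, not part of the SDK:

```kotlin
enum class BluetoothRoute { A2DP, HFP }

// Mirrors the rules described above: without microphone access, A2DP is the
// only option; with microphone access, HFP is the default unless the
// allowBluetoothA2DP flag opts into A2DP (the built-in mic is then used).
fun selectBluetoothRoute(micRequested: Boolean, allowBluetoothA2DP: Boolean): BluetoothRoute =
    when {
        !micRequested -> BluetoothRoute.A2DP
        allowBluetoothA2DP -> BluetoothRoute.A2DP
        else -> BluetoothRoute.HFP
    }
```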

Achieving Low Latency

Some apps require low latency for a seamless experience (karaoke apps, recording studio apps). If that is your use case, enable the low-latency flag; otherwise let the audio engine automatically handle latency for a glitch-free experience. Please note that with the low-latency flag enabled, power consumption may be higher and glitches may appear more frequently.

Device Selection

By default, our audio engine automatically handles input and output device selection. If you want to use a specific input or output device, pass the device ID, obtained from the AudioDeviceInfo class, as a parameter.
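A sketch of picking a specific device follows. Device here is a stand-in for AudioDeviceInfo (on a real Android device you would enumerate with audioManager.getDevices(...) and match on AudioDeviceInfo.type); the two constants mirror the platform's values:

```kotlin
// Stand-in for android.media.AudioDeviceInfo: each device exposes an id
// and a type constant (e.g. TYPE_WIRED_HEADSET).
data class Device(val id: Int, val type: Int)

const val TYPE_BUILTIN_SPEAKER = 2 // matches AudioDeviceInfo.TYPE_BUILTIN_SPEAKER
const val TYPE_WIRED_HEADSET = 3   // matches AudioDeviceInfo.TYPE_WIRED_HEADSET

// Return the ID of the first device of the wanted type, or null to keep
// the engine's automatic selection.
fun findDeviceId(devices: List<Device>, wantedType: Int): Int? =
    devices.firstOrNull { it.type == wantedType }?.id
```

Returning null when no matching device exists lets the caller fall back to automatic selection instead of failing.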

Managing Audio Focus

By default, two or more Android apps can play audio to the same output stream simultaneously and the system mixes everything together. This is not always the desired user experience. To avoid music apps playing at the same time, Android introduced the idea of audio focus.

When your app needs to output audio, it should request audio focus. When it has focus, it can play sound. However, after you acquire audio focus you may not be able to keep it until you’re done playing. Another app can request focus, which pre-empts your hold on it. If that happens, your app should pause playing or lower its volume to let users hear the new audio source better.

It is important to note that these guidelines are not enforced by the Android system; they are only advisory. An app that follows them is called a "well-behaved audio app". Most music apps in the Play Store are "well-behaved".

Since audio focus should be handled at a higher level, it is out of scope for our SDK, but the code snippets below show how to do it.

Requesting audio focus differs slightly between API levels, but the logic is similar.

On all API levels, you first create a listener that reacts to audio focus changes:

```kotlin
private val audioFocusChangeListener =
    OnAudioFocusChangeListener { focusChange ->
        when (focusChange) {
            AudioManager.AUDIOFOCUS_GAIN,
            AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE -> {
                playMusic()
            }
            AudioManager.AUDIOFOCUS_LOSS,
            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> {
                pauseMusic()
            }
        }
    }
```

Pass this listener as a parameter to audioManager.requestAudioFocus(...), whose signature differs between API levels.

Requesting Audio Focus on API level < 26:

```kotlin
val requestAudioFocusForResult = audioManager.requestAudioFocus(
    audioFocusChangeListener,
    AudioManager.STREAM_MUSIC,
    AudioManager.AUDIOFOCUS_GAIN
)
```

Requesting Audio Focus on API level ≥ 26:

```kotlin
val playbackAttributes = AudioAttributes.Builder()
    .setUsage(AudioAttributes.USAGE_MEDIA)
    .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
    .build()

val audioFocusRequest = AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN)
    .setAudioAttributes(playbackAttributes)
    .setAcceptsDelayedFocusGain(true)
    .setOnAudioFocusChangeListener(audioFocusChangeListener)
    .build()

val requestAudioFocusForResult = audioManager.requestAudioFocus(audioFocusRequest)
```

Check the result of the audio focus request and act accordingly:

```kotlin
if (requestAudioFocusForResult == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    playMusic()
} else {
    // Audio focus was not granted by the system
}
```
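When playback ends for good, a well-behaved app should also give focus back. On API level ≥ 26 this is done with abandonAudioFocusRequest; on older levels, with the now-deprecated abandonAudioFocus. A sketch, assuming the audioFocusRequest and listener from the snippets above are still in scope:

```kotlin
// API level >= 26: release the focus acquired with the AudioFocusRequest above.
audioManager.abandonAudioFocusRequest(audioFocusRequest)

// API level < 26 (deprecated since 26): release the focus tied to the listener.
audioManager.abandonAudioFocus(audioFocusChangeListener)
```

Abandoning focus lets a previously interrupted app resume its playback promptly.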

For more detailed information please check the official documentation.