public interface IAudioFrameObserver
Modifier and Type | Method and Description |
---|---|
AudioParams | getEarMonitoringAudioParams() Sets the audio ear monitoring format for the onEarMonitoringAudioFrame callback. |
AudioParams | getMixedAudioParams() Sets the audio mixing format for the onMixedFrame callback. |
int | getObservedAudioFramePosition() Sets the audio observation positions. |
AudioParams | getPlaybackAudioParams() Sets the audio playback format for the onPlaybackFrame callback. |
AudioParams | getRecordAudioParams() Sets the audio recording format for the onRecordFrame callback. |
boolean | onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type) Occurs when the ear monitoring audio frame is received. |
boolean | onMixedAudioFrame(java.lang.String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type) Occurs when the mixed playback audio frame is received. |
boolean | onPlaybackAudioFrame(java.lang.String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type) Occurs when the playback audio frame is received. |
boolean | onPlaybackAudioFrameBeforeMixing(java.lang.String channelId, int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type) Occurs when the playback audio frame before mixing is received. |
boolean | onRecordAudioFrame(java.lang.String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type) Occurs when the recorded audio frame is received. |
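As a rough sketch of how these callbacks are typically implemented, the snippet below wires up a frame handler and checks the payload size implied by the frame parameters. The interface declared here is a simplified, hypothetical stand-in for illustration only; the real IAudioFrameObserver ships with the SDK.

```java
import java.nio.ByteBuffer;

// Simplified stand-in for the onRecordAudioFrame callback shape
// (illustration only; not the SDK's own interface).
interface RecordFrameHandler {
    boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel,
                               int bytesPerSample, int channels, int samplesPerSec,
                               ByteBuffer buffer, long renderTimeMs, int avsyncType);
}

public class AudioFrameSketch {
    // Payload size implied by the format parameters:
    // samples per channel * bytes per sample * channel count.
    static int expectedBytes(int samplesPerChannel, int bytesPerSample, int channels) {
        return samplesPerChannel * bytesPerSample * channels;
    }

    public static void main(String[] args) {
        RecordFrameHandler observer = (channelId, type, samplesPerChannel, bytesPerSample,
                                       channels, samplesPerSec, buffer, renderTimeMs, avsyncType) -> {
            System.out.println("bytes per frame: "
                    + expectedBytes(samplesPerChannel, bytesPerSample, channels));
            return true; // true tells the SDK to keep using this frame
        };
        // Simulate one 10 ms mono frame at 48 kHz, 16-bit PCM: 480 samples * 2 bytes.
        ByteBuffer frame = ByteBuffer.allocateDirect(480 * 2);
        observer.onRecordAudioFrame("demo", 0, 480, 2, 1, 48000, frame, 0L, 0);
    }
}
```

The same size arithmetic applies to every frame callback in the table above: for interleaved stereo, the buffer holds samplesPerChannel * bytesPerSample * 2 bytes.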
boolean onRecordAudioFrame(java.lang.String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the recorded audio frame is received.

Parameters:
channelId - The channel name.
type - The audio frame type.
samplesPerChannel - The samples per channel.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved.
- 1: Mono.
- 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

boolean onPlaybackAudioFrame(java.lang.String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type)
Occurs when the playback audio frame is received.

Parameters:
channelId - The channel name.
type - The audio frame type.
samplesPerChannel - The samples per channel.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved.
- 1: Mono.
- 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

boolean onMixedAudioFrame(java.lang.String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type)
Occurs when the mixed playback audio frame is received.

Parameters:
channelId - The channel name.
type - The audio frame type.
samplesPerChannel - The samples per channel.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved.
- 1: Mono.
- 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type)
Occurs when the ear monitoring audio frame is received.

Parameters:
type - The audio frame type.
samplesPerChannel - The samples per channel.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved.
- 1: Mono.
- 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

boolean onPlaybackAudioFrameBeforeMixing(java.lang.String channelId, int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, java.nio.ByteBuffer buffer, long renderTimeMs, int avsync_type)
Occurs when the playback audio frame before mixing is received.

Parameters:
channelId - The channel name.
userId - The user ID.
type - The audio frame type.
samplesPerChannel - The samples per channel.
bytesPerSample - The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels - The number of audio channels. If the channel uses stereo, the data is interleaved.
- 1: Mono.
- 2: Stereo.
samplesPerSec - The number of samples per channel per second in the audio frame.
buffer - The audio frame payload.
renderTimeMs - The render timestamp in ms.
avsync_type - The audio/video sync type.

int getObservedAudioFramePosition()
Sets the audio observation positions.

The SDK uses the getObservedAudioFramePosition callback to determine at each specific audio-frame processing node whether to trigger the following callbacks:
- onRecordFrame
- onPlaybackFrame
- onPlaybackFrameBeforeMixing or onPlaybackFrameBeforeMixingEx
- onMixedFrame

You can set the positions that you want to observe by modifying the return value of getObservedAudioFramePosition according to your scenario:
- `POSITION_PLAYBACK (0x01 << 0)`: The position for observing the playback audio of all remote users, which enables the SDK to trigger the onPlaybackFrame callback.
- `POSITION_RECORD (0x01 << 1)`: The position for observing the recorded audio of the local user, which enables the SDK to trigger the onRecordFrame callback.
- `POSITION_MIXED (0x01 << 2)`: The position for observing the mixed audio of the local user and all remote users, which enables the SDK to trigger the onMixedFrame callback.
- `POSITION_BEFORE_MIXING (0x01 << 3)`: The position for observing the audio of a single remote user before mixing, which enables the SDK to trigger the onPlaybackFrameBeforeMixing or onPlaybackFrameBeforeMixingEx callback.

AudioParams getRecordAudioParams()
Sets the audio recording format for the onRecordFrame callback.

Register the getRecordAudioParams callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio recording format in the return value of this callback.

Returns:
The audio recording format. See AudioParams.

AudioParams getPlaybackAudioParams()
Sets the audio playback format for the onPlaybackFrame callback.

Register the getPlaybackAudioParams callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio playback format in the return value of this callback.

Returns:
The audio playback format. See AudioParams.

AudioParams getMixedAudioParams()
Sets the audio mixing format for the onMixedFrame callback.

Register the getMixedAudioParams callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio mixing format in the return value of this callback.

Returns:
The audio mixing format. See AudioParams.

AudioParams getEarMonitoringAudioParams()
Sets the audio ear monitoring format for the onEarMonitoringAudioFrame callback.

Register the getEarMonitoringAudioParams callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio ear monitoring format in the return value of this callback.

Returns:
The audio ear monitoring format. See AudioParams.
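To illustrate the bit mask returned by getObservedAudioFramePosition (described earlier on this page), here is a minimal, self-contained sketch. The constant names and shift values below mirror the positions listed in that description but are local stand-ins; check the SDK's own constants before relying on the exact values.

```java
public class ObservedPositionSketch {
    // Local stand-ins mirroring the shifts in the description above (assumed values).
    static final int POSITION_PLAYBACK      = 0x01;      // 0x01 << 0
    static final int POSITION_RECORD        = 0x01 << 1;
    static final int POSITION_MIXED         = 0x01 << 2;
    static final int POSITION_BEFORE_MIXING = 0x01 << 3;

    // What an implementation of getObservedAudioFramePosition might return
    // to observe the recorded and mixed audio but skip the other positions.
    static int getObservedAudioFramePosition() {
        return POSITION_RECORD | POSITION_MIXED;
    }

    public static void main(String[] args) {
        int positions = getObservedAudioFramePosition();
        // The SDK conceptually performs bit tests like these at each
        // audio-frame processing node before triggering the callback.
        System.out.println("record observed: " + ((positions & POSITION_RECORD) != 0));
        System.out.println("playback observed: " + ((positions & POSITION_PLAYBACK) != 0));
    }
}
```

Combining flags with bitwise OR lets one return value select any subset of the four observation positions independently.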