Agora Java API Reference for Android
io.agora.rtc2.IAudioFrameObserver Interface Reference

Public Member Functions

abstract boolean onRecordAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onPlaybackAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onMixedAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onEarMonitoringAudioFrame (int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onPlaybackAudioFrameBeforeMixing (String channelId, int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract int getObservedAudioFramePosition ()
 
abstract AudioParams getRecordAudioParams ()
 
abstract AudioParams getPlaybackAudioParams ()
 
abstract AudioParams getMixedAudioParams ()
 
abstract AudioParams getEarMonitoringAudioParams ()
 

Detailed Description

The IAudioFrameObserver interface. Implement this interface to receive raw audio frames at the positions that you select through getObservedAudioFramePosition, and to set the audio format for each position through the get*AudioParams callbacks.
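
For orientation, here is a minimal sketch of implementing and registering the observer. It assumes an already-initialized RtcEngine named engine, that AudioParams lives in the io.agora.rtc2.audio package, and that returning null from the get*AudioParams callbacks keeps the SDK's default audio format; verify these details against your SDK version.

    import java.nio.ByteBuffer;

    import io.agora.rtc2.IAudioFrameObserver;
    import io.agora.rtc2.RtcEngine;
    import io.agora.rtc2.audio.AudioParams; // assumed package; check your SDK version

    public class AudioObserverSetup {
        // Observation-position bits, using the values documented on this page.
        static final int POSITION_PLAYBACK = 0x01;
        static final int POSITION_RECORD = 0x01 << 1;

        public static void register(RtcEngine engine) {
            engine.registerAudioFrameObserver(new IAudioFrameObserver() {
                @Override
                public boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel,
                        int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
                        long renderTimeMs, int avsync_type) {
                    return true; // frame is valid: encode and send it
                }

                @Override
                public boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel,
                        int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
                        long renderTimeMs, int avsync_type) {
                    return true;
                }

                @Override
                public boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel,
                        int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
                        long renderTimeMs, int avsync_type) {
                    return true;
                }

                @Override
                public boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel,
                        int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
                        long renderTimeMs, int avsync_type) {
                    return true;
                }

                @Override
                public boolean onPlaybackAudioFrameBeforeMixing(String channelId, int userId, int type,
                        int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec,
                        ByteBuffer buffer, long renderTimeMs, int avsync_type) {
                    return true;
                }

                @Override
                public int getObservedAudioFramePosition() {
                    // Observe only what you need; fewer positions cost fewer resources.
                    return POSITION_RECORD | POSITION_PLAYBACK;
                }

                // Returning null is assumed to keep the SDK's default audio format.
                @Override public AudioParams getRecordAudioParams() { return null; }
                @Override public AudioParams getPlaybackAudioParams() { return null; }
                @Override public AudioParams getMixedAudioParams() { return null; }
                @Override public AudioParams getEarMonitoringAudioParams() { return null; }
            });
        }
    }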

Member Function Documentation

◆ onRecordAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onRecordAudioFrame (
        String channelId,
        int type,
        int samplesPerChannel,
        int bytesPerSample,
        int channels,
        int samplesPerSec,
        ByteBuffer buffer,
        long renderTimeMs,
        int avsync_type )

Occurs when the recorded audio frame is received.

Parameters
  channelId           The channel name.
  type                The audio frame type.
  samplesPerChannel   The number of samples per channel.
  bytesPerSample      The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
  channels            The number of audio channels. If stereo, the data is interleaved.
                        • 1: Mono.
                        • 2: Stereo.
  samplesPerSec       The number of samples per channel per second in the audio frame.
  buffer              The audio frame payload.
  renderTimeMs        The render timestamp in ms.
  avsync_type         The audio/video sync type.
Returns
  • true: The recorded audio frame is valid and is encoded and sent.
  • false: The recorded audio frame is invalid and is not encoded or sent.
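
As an illustrative sketch (not part of this reference), an implementation of this callback inside an IAudioFrameObserver that measures the peak level of each recorded frame; it assumes 16-bit little-endian PCM, consistent with the bytesPerSample example above:

    @Override
    public boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
            long renderTimeMs, int avsync_type) {
        // Assumes bytesPerSample == 2 (16-bit PCM) and little-endian byte order.
        buffer.order(java.nio.ByteOrder.LITTLE_ENDIAN);
        int peak = 0;
        for (int pos = buffer.position(); pos + 2 <= buffer.limit(); pos += 2) {
            // Absolute getShort leaves the buffer position untouched for the SDK.
            peak = Math.max(peak, Math.abs((int) buffer.getShort(pos)));
        }
        android.util.Log.d("AudioObserver", "record peak = " + peak);
        return true; // the frame is valid: encode and send it
    }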

◆ onPlaybackAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onPlaybackAudioFrame (
        String channelId,
        int type,
        int samplesPerChannel,
        int bytesPerSample,
        int channels,
        int samplesPerSec,
        ByteBuffer buffer,
        long renderTimeMs,
        int avsync_type )

Occurs when the playback audio frame is received.

Parameters
  channelId           The channel name.
  type                The audio frame type.
  samplesPerChannel   The number of samples per channel.
  bytesPerSample      The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
  channels            The number of audio channels. If stereo, the data is interleaved.
                        • 1: Mono.
                        • 2: Stereo.
  samplesPerSec       The number of samples per channel per second in the audio frame.
  buffer              The audio frame payload.
  renderTimeMs        The render timestamp in ms.
  avsync_type         The audio/video sync type.
Returns
  • true: The playback audio frame is valid and is encoded and sent.
  • false: The playback audio frame is invalid and is not encoded or sent.
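
A sketch of in-place processing, under the assumption, worth verifying for your SDK version, that the SDK plays the possibly modified contents of buffer when you return true. It halves every 16-bit sample, attenuating playback by roughly 6 dB:

    @Override
    public boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
            long renderTimeMs, int avsync_type) {
        // Assumes 16-bit little-endian PCM; a uniform gain works regardless of interleaving.
        buffer.order(java.nio.ByteOrder.LITTLE_ENDIAN);
        for (int pos = buffer.position(); pos + 2 <= buffer.limit(); pos += 2) {
            buffer.putShort(pos, (short) (buffer.getShort(pos) / 2)); // about -6 dB
        }
        return true; // play the (modified) frame
    }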

◆ onMixedAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onMixedAudioFrame (
        String channelId,
        int type,
        int samplesPerChannel,
        int bytesPerSample,
        int channels,
        int samplesPerSec,
        ByteBuffer buffer,
        long renderTimeMs,
        int avsync_type )

Occurs when the mixed playback audio frame is received.

Parameters
  channelId           The channel name.
  type                The audio frame type.
  samplesPerChannel   The number of samples per channel.
  bytesPerSample      The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
  channels            The number of audio channels. If stereo, the data is interleaved.
                        • 1: Mono.
                        • 2: Stereo.
  samplesPerSec       The number of samples per channel per second in the audio frame.
  buffer              The audio frame payload.
  renderTimeMs        The render timestamp in ms.
  avsync_type         The audio/video sync type.
Returns
  • true: The mixed audio data is valid and is encoded and sent.
  • false: The mixed audio data is invalid and is not encoded or sent.
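
A common use of the mixed frame is dumping it to a raw PCM file for debugging; the file can later be imported into an audio editor at the reported samplesPerSec and channels. The field pcmOut below is a hypothetical FileOutputStream that you open and close elsewhere:

    // Hypothetical output stream; open it before joining and close it on leave.
    private java.io.FileOutputStream pcmOut;

    @Override
    public boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
            long renderTimeMs, int avsync_type) {
        byte[] pcm = new byte[buffer.remaining()];
        buffer.get(pcm);
        buffer.rewind(); // restore the position for the SDK
        try {
            pcmOut.write(pcm);
        } catch (java.io.IOException e) {
            android.util.Log.w("AudioObserver", "PCM dump failed", e);
        }
        return true;
    }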

◆ onEarMonitoringAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onEarMonitoringAudioFrame (
        int type,
        int samplesPerChannel,
        int bytesPerSample,
        int channels,
        int samplesPerSec,
        ByteBuffer buffer,
        long renderTimeMs,
        int avsync_type )

Occurs when the ear monitoring audio frame is received.

Parameters
  type                The audio frame type.
  samplesPerChannel   The number of samples per channel.
  bytesPerSample      The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
  channels            The number of audio channels. If stereo, the data is interleaved.
                        • 1: Mono.
                        • 2: Stereo.
  samplesPerSec       The number of samples per channel per second in the audio frame.
  buffer              The audio frame payload.
  renderTimeMs        The render timestamp in ms.
  avsync_type         The audio/video sync type.
Returns
  • true: The ear monitoring audio frame is valid and is encoded and sent.
  • false: The ear monitoring audio frame is invalid and is not encoded or sent.

◆ onPlaybackAudioFrameBeforeMixing()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onPlaybackAudioFrameBeforeMixing (
        String channelId,
        int userId,
        int type,
        int samplesPerChannel,
        int bytesPerSample,
        int channels,
        int samplesPerSec,
        ByteBuffer buffer,
        long renderTimeMs,
        int avsync_type )

Occurs when the playback audio frame before mixing is received.

Parameters
  channelId           The channel name.
  userId              The user ID of the remote user.
  type                The audio frame type.
  samplesPerChannel   The number of samples per channel.
  bytesPerSample      The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
  channels            The number of audio channels. If stereo, the data is interleaved.
                        • 1: Mono.
                        • 2: Stereo.
  samplesPerSec       The number of samples per channel per second in the audio frame.
  buffer              The audio frame payload.
  renderTimeMs        The render timestamp in ms.
  avsync_type         The audio/video sync type.
Returns
  • true: The playback audio frame before mixing is valid and is encoded and sent.
  • false: The playback audio frame before mixing is invalid and is not encoded or sent.
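
Because this callback fires separately for each remote user, userId is the natural key for per-user processing. A sketch that tallies the audio bytes received from each user (the map name is illustrative):

    // Illustrative per-user tally, keyed by userId.
    private final java.util.concurrent.ConcurrentHashMap<Integer, Long> bytesPerUser =
            new java.util.concurrent.ConcurrentHashMap<>();

    @Override
    public boolean onPlaybackAudioFrameBeforeMixing(String channelId, int userId, int type,
            int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec,
            ByteBuffer buffer, long renderTimeMs, int avsync_type) {
        bytesPerUser.merge(userId, (long) buffer.remaining(), Long::sum);
        return true;
    }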

◆ getObservedAudioFramePosition()

abstract int io.agora.rtc2.IAudioFrameObserver.getObservedAudioFramePosition ( )

Sets the audio observation positions.

After you successfully register the audio observer, the SDK uses the return value of getObservedAudioFramePosition to determine, at each audio-frame processing node, whether to trigger the following callbacks:

  • onRecordAudioFrame
  • onPlaybackAudioFrame
  • onPlaybackAudioFrameBeforeMixing
  • onMixedAudioFrame

Set the positions that you want to observe through the return value of getObservedAudioFramePosition according to your scenario. To observe multiple positions, combine them with | (the bitwise OR operator); see the sketch after the list below. The default return value is POSITION_PLAYBACK (0x01) | POSITION_RECORD (0x01 << 1). To conserve system resources, reduce the number of positions that you observe.

Returns
The bit mask that controls the audio observation positions:
  • POSITION_PLAYBACK (0x01): Observe the playback audio of all remote users after mixing, which enables the SDK to trigger the onPlaybackAudioFrame callback.
  • POSITION_RECORD (0x01 << 1): Observe the recorded audio of the local user, which enables the SDK to trigger the onRecordAudioFrame callback.
  • POSITION_MIXED (0x01 << 2): Observe the mixed audio of the local user and all remote users, which enables the SDK to trigger the onMixedAudioFrame callback.
  • POSITION_BEFORE_MIXING (0x01 << 3): Observe the audio of a single remote user before mixing, which enables the SDK to trigger the onPlaybackAudioFrameBeforeMixing callback.
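
For example, to observe only the recording and before-mixing positions, return the two bits combined with |. The numeric values below are the ones documented above; if your SDK version exposes named constants for them, prefer those:

    @Override
    public int getObservedAudioFramePosition() {
        final int POSITION_RECORD = 0x01 << 1;        // value as documented above
        final int POSITION_BEFORE_MIXING = 0x01 << 3; // value as documented above
        return POSITION_RECORD | POSITION_BEFORE_MIXING;
    }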

◆ getRecordAudioParams()

abstract AudioParams io.agora.rtc2.IAudioFrameObserver.getRecordAudioParams ( )

Sets the audio recording format for the onRecordAudioFrame callback. The SDK uses the returned value in the same way as described under getPlaybackAudioParams.

Returns
The audio recording format to use. See AudioParams.

◆ getPlaybackAudioParams()

abstract AudioParams io.agora.rtc2.IAudioFrameObserver.getPlaybackAudioParams ( )

Sets the audio playback format for the onPlaybackAudioFrame callback.

Implement this callback in the observer that you pass to the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame, and you set the audio playback format through the return value.

Note
The SDK calculates the sample interval from the AudioParams you return and triggers the onPlaybackAudioFrame callback at that interval.

Sample interval (seconds) = samplesPerCall / (sampleRate × channels). Ensure that the sample interval is no less than 0.01.

Returns
The audio playback format to use. See AudioParams.
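
As a sketch, returning 48 kHz stereo with 960 samples per call gives a sample interval of 960 / (48000 × 2) = 0.01 seconds, exactly the documented minimum. The AudioParams constructor ordering and the mode constant below are assumptions to verify against your SDK version (Constants here refers to io.agora.rtc2.Constants):

    @Override
    public AudioParams getPlaybackAudioParams() {
        // Assumed constructor: AudioParams(sampleRate, channels, mode, samplesPerCall).
        return new AudioParams(48000, 2,
                Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, 960);
    }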

◆ getMixedAudioParams()

abstract AudioParams io.agora.rtc2.IAudioFrameObserver.getMixedAudioParams ( )

Sets the audio format for the onMixedAudioFrame callback. The SDK uses the returned value in the same way as described under getPlaybackAudioParams.

Returns
The audio format to use. See AudioParams.

◆ getEarMonitoringAudioParams()

abstract AudioParams io.agora.rtc2.IAudioFrameObserver.getEarMonitoringAudioParams ( )

Sets the audio format for the onEarMonitoringAudioFrame callback. The SDK uses the returned value in the same way as described under getPlaybackAudioParams.

Returns
The audio format to use. See AudioParams.