Agora Java API Reference for Android
io.agora.rtc2.IAudioFrameObserver Interface Reference

Public Member Functions

abstract boolean onRecordAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onPlaybackAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onMixedAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onEarMonitoringAudioFrame (int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 
abstract boolean onPlaybackAudioFrameBeforeMixing (String channelId, int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
 

Detailed Description

The IAudioFrameObserver interface.
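
A minimal sketch of an observer implementation, assuming the five callbacks listed above are the interface's only abstract members; every callback simply returns true so the SDK treats each frame as valid. The registration call in the trailing comment (RtcEngine.registerAudioFrameObserver) is an assumption about the wider SDK surface and is not documented on this page.

import java.nio.ByteBuffer;
import io.agora.rtc2.IAudioFrameObserver;

// Pass-through observer covering the five callbacks documented on this page.
class PassThroughAudioObserver implements IAudioFrameObserver {
    @Override
    public boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
            long renderTimeMs, int avsync_type) {
        return true; // recorded frame is valid; encode and send it
    }

    @Override
    public boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
            long renderTimeMs, int avsync_type) {
        return true; // keep the playback frame
    }

    @Override
    public boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
            long renderTimeMs, int avsync_type) {
        return true; // keep the mixed frame
    }

    @Override
    public boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel,
            int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
            long renderTimeMs, int avsync_type) {
        return true; // keep the ear monitoring frame
    }

    @Override
    public boolean onPlaybackAudioFrameBeforeMixing(String channelId, int userId, int type,
            int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec,
            ByteBuffer buffer, long renderTimeMs, int avsync_type) {
        return true; // keep the per-user playback frame
    }
}

// Assumed registration with an initialized engine (not documented on this page):
// engine.registerAudioFrameObserver(new PassThroughAudioObserver());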

Member Function Documentation

◆ onRecordAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onRecordAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the recorded audio frame is received.

Parameters
channelId: The channel name.
type: The audio frame type.
samplesPerChannel: The number of samples per channel.
bytesPerSample: The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels: The number of audio channels. If the channel uses stereo, the data is interleaved.
  • 1: Mono.
  • 2: Stereo.
samplesPerSec: The number of samples per channel per second in the audio frame.
buffer: The audio frame payload.
renderTimeMs: The render timestamp in ms.
avsync_type: The audio/video sync type.
Returns
  • true: The recorded audio frame is valid and is encoded and sent.
  • false: The recorded audio frame is invalid and is not encoded or sent.
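
One possible body for this callback, inside an observer such as the sketch in the Detailed Description: assuming 16-bit little-endian PCM (bytesPerSample == 2), it scans the interleaved samples for the peak amplitude and logs it, then returns true so the frame is still encoded and sent. The log tag is illustrative.

@Override
public boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel,
        int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
        long renderTimeMs, int avsync_type) {
    if (bytesPerSample == 2) { // assume 16-bit PCM
        // Read through a duplicate so the SDK's buffer position is not disturbed.
        java.nio.ShortBuffer pcm =
                buffer.duplicate().order(java.nio.ByteOrder.LITTLE_ENDIAN).asShortBuffer();
        int peak = 0;
        while (pcm.hasRemaining()) {
            peak = Math.max(peak, Math.abs((int) pcm.get()));
        }
        android.util.Log.d("AudioObserver", "recorded peak=" + peak
                + " samples=" + samplesPerChannel + " ch=" + channels);
    }
    return true; // frame is valid; let the SDK encode and send it
}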

◆ onPlaybackAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onPlaybackAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the playback audio frame is received.

Parameters
channelId: The channel name.
type: The audio frame type.
samplesPerChannel: The number of samples per channel.
bytesPerSample: The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels: The number of audio channels. If the channel uses stereo, the data is interleaved.
  • 1: Mono.
  • 2: Stereo.
samplesPerSec: The number of samples per channel per second in the audio frame.
buffer: The audio frame payload.
renderTimeMs: The render timestamp in ms.
avsync_type: The audio/video sync type.
Returns
  • true: The playback audio frame is valid and is encoded and sent.
  • false: The playback audio frame is invalid and is not encoded or sent.
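
The payload size and duration follow directly from the parameters: samplesPerChannel * channels * bytesPerSample bytes covering samplesPerChannel / samplesPerSec seconds of audio. The sketch below, an illustrative sanity check assuming the buffer's remaining bytes hold exactly one frame, logs a warning when the two disagree and otherwise leaves the frame alone.

@Override
public boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel,
        int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
        long renderTimeMs, int avsync_type) {
    int expectedBytes = samplesPerChannel * channels * bytesPerSample;
    double frameMs = 1000.0 * samplesPerChannel / samplesPerSec;
    if (buffer.remaining() != expectedBytes) {
        android.util.Log.w("AudioObserver", "unexpected playback frame size: "
                + buffer.remaining() + " bytes, expected " + expectedBytes
                + " (" + frameMs + " ms of audio)");
    }
    return true; // keep the playback frame
}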

◆ onMixedAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onMixedAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the mixed playback audio frame is received.

Parameters
channelId: The channel name.
type: The audio frame type.
samplesPerChannel: The number of samples per channel.
bytesPerSample: The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels: The number of audio channels. If the channel uses stereo, the data is interleaved.
  • 1: Mono.
  • 2: Stereo.
samplesPerSec: The number of samples per channel per second in the audio frame.
buffer: The audio frame payload.
renderTimeMs: The render timestamp in ms.
avsync_type: The audio/video sync type.
Returns
  • true: The mixed audio data is valid and is encoded and sent.
  • false: The mixed audio data is invalid and is not encoded or sent.
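
A sketch of one common use of the mixed frame, raw PCM capture: each payload is appended to an output stream. The mixedPcmOut field, its path, and the error handling are illustrative assumptions; a production implementation would also move file I/O off the callback thread.

// Assumed field on the observer, opened elsewhere,
// e.g. new java.io.FileOutputStream("/sdcard/mixed.pcm", true):
private java.io.FileOutputStream mixedPcmOut;

@Override
public boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel,
        int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
        long renderTimeMs, int avsync_type) {
    try {
        byte[] payload = new byte[buffer.remaining()];
        buffer.duplicate().get(payload); // copy without moving the SDK's position
        mixedPcmOut.write(payload);      // raw PCM; convert to WAV offline if needed
    } catch (java.io.IOException e) {
        android.util.Log.e("AudioObserver", "failed to write mixed frame", e);
    }
    return true; // keep the mixed frame
}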

◆ onEarMonitoringAudioFrame()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onEarMonitoringAudioFrame (int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the ear monitoring audio frame is received.

Parameters
type: The audio frame type.
samplesPerChannel: The number of samples per channel.
bytesPerSample: The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels: The number of audio channels. If the channel uses stereo, the data is interleaved.
  • 1: Mono.
  • 2: Stereo.
samplesPerSec: The number of samples per channel per second in the audio frame.
buffer: The audio frame payload.
renderTimeMs: The render timestamp in ms.
avsync_type: The audio/video sync type.
Returns
  • true: The ear monitoring audio frame is valid and is encoded and sent.
  • false: The ear monitoring audio frame is invalid and is not encoded or sent.
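
Ear monitoring frames carry the same PCM layout as the other callbacks, so any per-frame processing should guard on the reported format rather than hard-coding it. A minimal sketch of that guard, with the actual processing left as a placeholder:

@Override
public boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample,
        int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) {
    boolean is16BitPcm = (bytesPerSample == 2) && (channels == 1 || channels == 2);
    if (is16BitPcm) {
        // Inspect or analyze the interleaved 16-bit samples here.
    }
    return true; // keep the ear monitoring frame
}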

◆ onPlaybackAudioFrameBeforeMixing()

abstract boolean io.agora.rtc2.IAudioFrameObserver.onPlaybackAudioFrameBeforeMixing (String channelId, int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the playback audio frame before mixing is received.

Parameters
channelId: The channel name.
userId: The user ID.
type: The audio frame type.
samplesPerChannel: The number of samples per channel.
bytesPerSample: The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
channels: The number of audio channels. If the channel uses stereo, the data is interleaved.
  • 1: Mono.
  • 2: Stereo.
samplesPerSec: The number of samples per channel per second in the audio frame.
buffer: The audio frame payload.
renderTimeMs: The render timestamp in ms.
avsync_type: The audio/video sync type.
Returns
  • true: The playback audio frame before mixing is valid and is encoded and sent.
  • false: The playback audio frame before mixing is invalid and is not encoded or sent.
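
Because this callback delivers one remote user's audio before mixing, it is the natural hook for per-user handling. A sketch that tallies payload bytes per userId in an assumed map field (the field name and approach are illustrative):

// Assumed field on the observer: total payload bytes received from each remote user.
private final java.util.Map<Integer, Long> bytesByUser =
        new java.util.concurrent.ConcurrentHashMap<>();

@Override
public boolean onPlaybackAudioFrameBeforeMixing(String channelId, int userId, int type,
        int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec,
        ByteBuffer buffer, long renderTimeMs, int avsync_type) {
    bytesByUser.merge(userId, (long) buffer.remaining(), Long::sum);
    return true; // keep the per-user playback frame
}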