Agora Java API Reference for Android
Public Member Functions
abstract boolean | onRecordAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) |
abstract boolean | onPlaybackAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) |
abstract boolean | onMixedAudioFrame (String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) |
abstract boolean | onEarMonitoringAudioFrame (int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) |
abstract boolean | onPlaybackAudioFrameBeforeMixing (String channelId, int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) |
abstract int | getObservedAudioFramePosition () |
abstract AudioParams | getRecordAudioParams () |
abstract AudioParams | getPlaybackAudioParams () |
abstract AudioParams | getMixedAudioParams () |
abstract AudioParams | getEarMonitoringAudioParams () |
The IAudioFrameObserver interface.
abstract boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the recorded audio frame is received.

channelId | The channel name. |
type | The audio frame type. |
samplesPerChannel | The number of samples per channel. |
bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
channels | The number of audio channels. If the channel uses stereo, the data is interleaved. |
samplesPerSec | The number of samples per channel per second in the audio frame. |
buffer | The audio frame payload. |
renderTimeMs | The render timestamp in ms. |
avsync_type | The audio/video sync type. |
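As a minimal sketch of consuming the payload described above: assuming 16-bit little-endian PCM (bytesPerSample == 2) and a buffer positioned at the start of the frame, the samples can be read as shorts. `PcmPeak` is an illustrative helper, not part of the SDK.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Illustrative helper: scan an interleaved PCM16 frame for its peak amplitude.
// Assumes little-endian 16-bit samples and buffer position at the frame start.
public class PcmPeak {
    // Returns the peak absolute amplitude across all channels.
    public static int peak(ByteBuffer buffer, int samplesPerChannel, int channels) {
        ShortBuffer pcm = buffer.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
        int peak = 0;
        for (int i = 0; i < samplesPerChannel * channels; i++) {
            peak = Math.max(peak, Math.abs((int) pcm.get(i)));
        }
        return peak;
    }
}
```

A level meter built this way inside onRecordAudioFrame should copy or fully consume the buffer before returning, since the SDK may reuse it for the next frame.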
abstract boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the playback audio frame is received.

channelId | The channel name. |
type | The audio frame type. |
samplesPerChannel | The number of samples per channel. |
bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
channels | The number of audio channels. If the channel uses stereo, the data is interleaved. |
samplesPerSec | The number of samples per channel per second in the audio frame. |
buffer | The audio frame payload. |
renderTimeMs | The render timestamp in ms. |
avsync_type | The audio/video sync type. |
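The `channels` note above says stereo data is interleaved (L R L R ...). A sketch of splitting such a frame into per-channel arrays, again assuming little-endian PCM16; `Deinterleave` is an illustrative helper, not SDK API.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Illustrative helper: split an interleaved PCM16 frame into one array per channel.
// Assumes little-endian samples and buffer position at the frame start.
public class Deinterleave {
    public static short[][] split(ByteBuffer buffer, int samplesPerChannel, int channels) {
        ShortBuffer pcm = buffer.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
        short[][] out = new short[channels][samplesPerChannel];
        for (int i = 0; i < samplesPerChannel; i++) {
            for (int ch = 0; ch < channels; ch++) {
                // Sample i of channel ch sits at interleaved index i * channels + ch.
                out[ch][i] = pcm.get(i * channels + ch);
            }
        }
        return out;
    }
}
```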
abstract boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the mixed playback audio frame is received.

channelId | The channel name. |
type | The audio frame type. |
samplesPerChannel | The number of samples per channel. |
bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
channels | The number of audio channels. If the channel uses stereo, the data is interleaved. |
samplesPerSec | The number of samples per channel per second in the audio frame. |
buffer | The audio frame payload. |
renderTimeMs | The render timestamp in ms. |
avsync_type | The audio/video sync type. |
abstract boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the ear monitoring audio frame is received.

type | The audio frame type. |
samplesPerChannel | The number of samples per channel. |
bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
channels | The number of audio channels. If the channel uses stereo, the data is interleaved. |
samplesPerSec | The number of samples per channel per second in the audio frame. |
buffer | The audio frame payload. |
renderTimeMs | The render timestamp in ms. |
avsync_type | The audio/video sync type. |
abstract boolean onPlaybackAudioFrameBeforeMixing(String channelId, int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

Occurs when the playback audio frame of a remote user is received, before mixing.

channelId | The channel name. |
userId | The user ID. |
type | The audio frame type. |
samplesPerChannel | The number of samples per channel. |
bytesPerSample | The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes). |
channels | The number of audio channels. If the channel uses stereo, the data is interleaved. |
samplesPerSec | The number of samples per channel per second in the audio frame. |
buffer | The audio frame payload. |
renderTimeMs | The render timestamp in ms. |
avsync_type | The audio/video sync type. |
abstract int getObservedAudioFramePosition()
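The return value of getObservedAudioFramePosition selects which of the frame callbacks above the SDK fires, as a bit mask. A sketch under that assumption; the constant names and values below are illustrative only, so prefer the position constants shipped with the SDK in real code.

```java
// Illustrative sketch of composing an observation-position bit mask.
// The constant values here are assumptions, not the SDK's definitions.
public class ObservedPositions {
    public static final int POSITION_RECORD   = 1 << 0; // onRecordAudioFrame
    public static final int POSITION_PLAYBACK = 1 << 1; // onPlaybackAudioFrame
    public static final int POSITION_MIXED    = 1 << 2; // onMixedAudioFrame

    // Example: observe recorded and mixed frames only.
    public static int observedAudioFramePosition() {
        return POSITION_RECORD | POSITION_MIXED;
    }
}
```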
abstract AudioParams getRecordAudioParams()

Sets the audio recording format for the onRecordAudioFrame callback.

Register this callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio recording format in the return value of this callback. The SDK calculates the sample interval according to the AudioParams you set in the return value, and triggers the onRecordAudioFrame callback at that interval:

Sample interval (seconds) = samplePerCall / (sampleRate × channelCnt)

Ensure that the sample interval is equal to or greater than 0.01.

Returns: The audio format. See io.agora.rtc.audio.AudioParams.
|
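The sample-interval rule above can be checked with simple arithmetic. A sketch, using the identifier names from the formula; `SampleInterval` is an illustrative helper, not SDK API.

```java
// Illustrative check of: interval (s) = samplePerCall / (sampleRate × channelCnt),
// which must come out >= 0.01 for the AudioParams you return.
public class SampleInterval {
    public static double interval(int samplePerCall, int sampleRate, int channelCnt) {
        return (double) samplePerCall / (sampleRate * channelCnt);
    }
}
```

For example, mono 44100 Hz audio with 1024 samples per call gives roughly 0.023 s, which satisfies the 0.01 s minimum; at 441 samples per call the interval is exactly the 0.01 s floor.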
abstract AudioParams getPlaybackAudioParams()

Sets the audio playback format for the onPlaybackAudioFrame callback.

Register this callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio playback format in the return value of this callback. The SDK calculates the sample interval according to the AudioParams you set in the return value, and triggers the onPlaybackAudioFrame callback at that interval:

Sample interval (seconds) = samplePerCall / (sampleRate × channelCnt)

Ensure that the sample interval is equal to or greater than 0.01.

Returns: The audio format. See io.agora.rtc.audio.AudioParams.
abstract AudioParams getMixedAudioParams()

Sets the audio mixing format for the onMixedAudioFrame callback.

Register this callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio mixing format in the return value of this callback. The SDK calculates the sample interval according to the AudioParams you set in the return value, and triggers the onMixedAudioFrame callback at that interval:

Sample interval (seconds) = samplePerCall / (sampleRate × channelCnt)

Ensure that the sample interval is equal to or greater than 0.01.

Returns: The audio format. See io.agora.rtc.audio.AudioParams.
abstract AudioParams getEarMonitoringAudioParams()

Sets the audio ear monitoring format for the onEarMonitoringAudioFrame callback.

Register this callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback each time it receives an audio frame. You can set the audio ear monitoring format in the return value of this callback. The SDK calculates the sample interval according to the AudioParams you set in the return value, and triggers the onEarMonitoringAudioFrame callback at that interval:

Sample interval (seconds) = samplePerCall / (sampleRate × channelCnt)

Ensure that the sample interval is equal to or greater than 0.01.

Returns: The audio format. See io.agora.rtc.audio.AudioParams.