In WebRTC 58, the audio module in use is VoiceEngine;
the typical creation flow is:
VoiceEngine* m_voe;
VoEBase* base1;
m_voe = VoiceEngine::Create();
base1 = VoEBase::GetInterface(m_voe);
...
res = base1->Init(); // Init() is what creates the audio device.
The function's documentation:
// Initializes all common parts of the VoiceEngine; e.g. all
// encoders/decoders, the sound card and core receiving components.
// This method also makes it possible to install some user-defined external
// modules:
// - The Audio Device Module (ADM) which implements all the audio layer
// functionality in a separate (reference counted) module.
// - The AudioProcessing module handles capture-side processing. VoiceEngine
// takes ownership of this object.
// - An AudioDecoderFactory - used to create audio decoders.
// If NULL is passed for any of these, VoiceEngine will create its own.
// Returns -1 in case of an error, 0 otherwise.
// TODO(ajm): Remove default NULLs.
virtual int Init(AudioDeviceModule* external_adm = NULL,
                 AudioProcessing* audioproc = NULL,
                 const rtc::scoped_refptr&lt;AudioDecoderFactory&gt;&
                     decoder_factory = nullptr) = 0;
// In other words, the default devices are usually fine; to supply your own device, you need an AudioDeviceModule.
How to implement an AudioDeviceModule:
WebRTC already provides AudioDeviceModuleImpl, so you only need to use it;
see webrtc58\src\webrtc\modules\audio_device\test\audio_device_test_api.cc for an example.
To get at the audio data, you can call audio_device_->RegisterAudioCallback(audio_transport_);
and implement audio_transport_ yourself.
WebRTC 58 already defines an interface for receiving captured audio, OnData, but the WebRTC project has not finished implementing it yet; later versions presumably will. See peerconnection_client for details.
One more suggestion while we are here:
the usual WebRTC examples, like the one above, use the VoEBase class,
but WebRTC also has another class, VoEBaseImpl:
class VoEBaseImpl : public VoEBase,
public AudioTransport,
public AudioDeviceObserver
This class is more complete: it also covers data retrieval (it implements AudioTransport). As noted above, these interfaces will likely change in later versions.
In WebRTC 58, the creation of the internal audio-device modules is illustrated by the voe_cmd_test.cc example.
In later versions, once the audio interfaces are finished, audio can also be created, and its data retrieved, through a peerconnection.
In WebRTC 58, the audio class that deserves the most attention is VoiceEngineImpl,
because it is the class that actually manages WebRTC's audio features:
class VoiceEngineImpl : public voe::SharedData, // Must be the first base class
public VoiceEngine,
public VoEAudioProcessingImpl,
public VoECodecImpl,
public VoEFileImpl,
public VoEHardwareImpl,
public VoENetEqStatsImpl,
public VoENetworkImpl,
public VoERTP_RTCPImpl,
public VoEVolumeControlImpl,
public VoEBaseImpl {
public:
VoiceEngineImpl()
: SharedData(),
VoEAudioProcessingImpl(this),
VoECodecImpl(this),
VoEFileImpl(this),
VoEHardwareImpl(this),
VoENetEqStatsImpl(this),
VoENetworkImpl(this),
VoERTP_RTCPImpl(this),
VoEVolumeControlImpl(this),
VoEBaseImpl(this),
_ref_count(0) {}
~VoiceEngineImpl() override { assert(_ref_count.Value() == 0); }
int AddRef();
// This implements the Release() method for all the inherited interfaces.
int Release() override;
// Backdoor to access a voe::Channel object without a channel ID. This is only
// to be used while refactoring the VoE API!
virtual std::unique_ptr&lt;voe::ChannelProxy&gt; GetChannelProxy(int channel_id);
// This is *protected* so that FakeVoiceEngine can inherit from the class and
// manipulate the reference count. See: fake_voice_engine.h.
protected:
Atomic32 _ref_count;
};
AudioDeviceModule is an interface class consisting of pure virtual functions;
the concrete implementation is:
class AudioDeviceModuleImpl : public AudioDeviceModule
which has two important members:
AudioDeviceBuffer audio_device_buffer_; // mainly responsible for the data
std::unique_ptr&lt;AudioDeviceGeneric&gt; audio_device_; // actually operates the device
WebRtcVoiceEngine::Init()
{
...
if (!adm_) {
adm_ = webrtc::AudioDeviceModule::Create(
webrtc::AudioDeviceModule::kPlatformDefaultAudio);
}
//the code that follows is the usual audio-related setup;
...
}
This code shows that if no audio device module has been supplied, a default one is created; if one already exists, no new one is created.
WebRTC also has another class:
class ADMWrapper : public AudioDeviceModule, public AudioTransport
created via:
rtc::scoped_refptr&lt;AudioDeviceModule&gt; CreateAudioDeviceWithDataObserver(
    const AudioDeviceModule::AudioLayer audio_layer,
    AudioDeviceDataObserver* observer);
This class can be used on its own as an audio capture device, but WebRTC itself uses VoEBaseImpl.