Working with WebRTC recently has been a real headache.
There is little material online, and even when a download succeeds it crawls along, because my connection here is China Unicom: slow, slow, slow. What I wanted to understand is how WebRTC captures audio and how it writes that audio to a file.
That left no choice but to read the WebRTC source, which requires a good strategy for navigating it. By chance I noticed that VoEFile is the class concerned with reading and writing audio files.
So I opened the related header, voe_file.h, and found this example inside:
// This sub-API supports the following functionalities:
//
// - File playback.
// - File recording.
// - File conversion.
//
// Usage example, omitting error checking:
//
// using namespace webrtc;
// VoiceEngine* voe = VoiceEngine::Create();
// VoEBase* base = VoEBase::GetInterface(voe);
// VoEFile* file = VoEFile::GetInterface(voe);
// base->Init();
// int ch = base->CreateChannel();
// ...
// base->StartPlayout(ch);
// file->StartPlayingFileAsMicrophone(ch, "data_file_16kHz.pcm", true);
// ...
// file->StopPlayingFileAsMicrophone(ch);
// base->StopPlayout(ch);
// ...
// base->DeleteChannel(ch);
// base->Terminate();
// base->Release();
// file->Release();
// VoiceEngine::Delete(voe);
This example cannot be used as-is, however. After a fair amount of digging I modified it into the following working program:
#include "webrtc/base/ssladapter.h"
#include "webrtc/base/win32socketinit.h"
#include "webrtc/base/win32socketserver.h"
#include "webrtc\voice_engine\voe_file_impl.h"
#include "webrtc\voice_engine\include\voe_base.h"
#include "webrtc/modules/audio_device/include/audio_device.h"
#include <windows.h>
#include <stdio.h>
#include <conio.h>
#include "webrtc/common_audio/resampler/include/resampler.h"
#include "webrtc/modules/audio_processing/aec/include/echo_cancellation.h"
#include "webrtc/common_audio/vad/include/webrtc_vad.h"
#include "dbgtool.h"
#include "string_useful.h"
using namespace webrtc;
VoiceEngine* g_voe = NULL;
VoEBase* g_base = NULL;
VoEFile* g_file = NULL;
int g_ch = -1;
HANDLE g_hEvQuit = NULL;
void Begin_RecordMicrophone();
void End_RecordMicrophone();
/////////////////////////////////////////////////////////////////////////
// Record the sound coming in from the microphone
void Begin_RecordMicrophone()
{
int iRet = -1;
g_voe = VoiceEngine::Create();
g_base = VoEBase::GetInterface(g_voe);
g_file = VoEFile::GetInterface(g_voe);
g_base->Init();
//g_ch = g_base->CreateChannel();
g_hEvQuit = CreateEvent(NULL, FALSE, FALSE, NULL);
// ...
//base->StartPlayout(ch);
// Play the input file audio_long16.pcm and record it into audio_long16_out.pcm
//iRet = file->StartPlayingFileLocally(ch, "E:\\webrtc_compile\\webrtc_windows\\src\\talk\\examples\\hh_sample\\audio_long16.pcm", true);
//iRet = file->StartRecordingPlayout(ch, "E:\\webrtc_compile\\webrtc_windows\\src\\talk\\examples\\hh_sample\\audio_long16_out.pcm");
// Record the incoming microphone audio to a file
iRet = g_file->StartRecordingMicrophone("E:\\webrtc_compile\\webrtc_windows\\src\\talk\\examples\\hh_sample\\audio_long16_from_microphone.wav");
while (TRUE) {
DWORD dwRet = ::WaitForSingleObject(g_hEvQuit, 500);
if (dwRet == WAIT_OBJECT_0) {
End_RecordMicrophone();
break;
}
}
}
void End_RecordMicrophone()
{
g_file->StopRecordingMicrophone();
g_base->Terminate();
g_base->Release();
g_file->Release();
VoiceEngine::Delete(g_voe);
}
DWORD WINAPI ThreadFunc(LPVOID lpParameter) {
Begin_RecordMicrophone();
return 0;
}
int main()
{
// Initialize SSL
rtc::InitializeSSL();
DWORD IDThread;
HANDLE hThread;
DWORD ExitCode;
hThread = CreateThread(NULL,
0,
(LPTHREAD_START_ROUTINE)ThreadFunc,
NULL,
0,
&IDThread);
if (hThread == NULL) {
return -1;
}
printf("Input 'Q' to stop recording!!!");
char ch;
while (ch = getch()) {
if (ch == 'Q') {
if (g_hEvQuit) {
::SetEvent(g_hEvQuit);
if (hThread) {
::WaitForSingleObject(hThread, INFINITE);
CloseHandle(hThread);
hThread = NULL;
}
CloseHandle(g_hEvQuit);
g_hEvQuit = NULL;
}
break;
}
}
rtc::CleanupSSL();
return 0;
}
The program above captures microphone audio and writes it to a file.
Starting from this example, I then traced how an audio buffer is captured and written to a file. My study notes follow.
/////////////////////////////////////////////////////
// Part A -- Initializing the audio input side and output side
1. When the user calls VoEFileImpl::StartRecordingMicrophone to record the microphone to a file, the call is internally forwarded to StartRecordingMicrophone on the TransmitMixer held by the member SharedData. The implementation looks like this:
int VoEFileImpl::StartRecordingMicrophone(const char* fileNameUTF8,
CodecInst* compression,
int maxSizeBytes) {
// ...
if (_shared->transmit_mixer()->StartRecordingMicrophone(fileNameUTF8,
compression)) {
WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
"StartRecordingMicrophone() failed to start recording");
return -1;
}
// ...
}
2. Inside TransmitMixer::StartRecordingMicrophone, a FileRecorder is created, and the job of recording the microphone to the audio file is delegated to it:
int TransmitMixer::StartRecordingMicrophone(const char* fileName,
const CodecInst* codecInst)
{
// ...
// Create a FileRecorder
_fileRecorderPtr =
FileRecorder::CreateFileRecorder(_fileRecorderId,
(const FileFormats) format);
// ...
// Delegate the work to FileRecorder::StartRecordingAudioFile
if (_fileRecorderPtr->StartRecordingAudioFile(
fileName,
(const CodecInst&) *codecInst,
notificationTime) != 0)
{
// ...
}
// ...
}
3. FileRecorder is only an interface; its implementation class is FileRecorderImpl. In FileRecorderImpl::StartRecordingAudioFile, the actual recording work is delegated in turn to MediaFile:
int32_t FileRecorderImpl::StartRecordingAudioFile(
const char* fileName,
const CodecInst& codecInst,
uint32_t notificationTimeMs,
ACMAMRPackingFormat amrFormat)
{
// ...
_moduleFile->StartRecordingAudioFile(fileName, _fileFormat,
codecInst,
notificationTimeMs);
// ...
}
4. Likewise, MediaFile is only an interface; its implementation class is MediaFileImpl. Inside its StartRecordingAudioFile function, a FileWrapper representing the output file stream is created, after which StartRecordingAudioStream does the related setup work and saves the stream pointer into the member _ptrOutStream (of type OutStream*).
The code:
int32_t MediaFileImpl::StartRecordingAudioFile(
    const char* fileName,
    const FileFormats format,
    const CodecInst& codecInst,
    const uint32_t notificationTimeMs)
{
// ...
FileWrapper* outputStream = FileWrapper::Create();
// ...
if(StartRecordingAudioStream(*outputStream, format, codecInst,
notificationTimeMs) == -1)
{
outputStream->CloseFile();
delete outputStream;
return -1;
}
// ...
}
int32_t MediaFileImpl::StartRecordingAudioStream(
OutStream& stream,
const FileFormats format,
const CodecInst& codecInst,
const uint32_t notificationTimeMs)
{
// ...
_ptrOutStream = &stream;
// ...
}
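To see the pattern MediaFileImpl is using here in isolation, I sketched a minimal, self-contained analogue below: an abstract byte-sink interface plus a file-backed implementation whose pointer is stashed for later writes. ByteSink, FileSink, and Recorder are my own illustrative names, not WebRTC classes.

#include <cstddef>
#include <cstdio>

// Hypothetical stand-in for webrtc::OutStream: anything that accepts bytes.
class ByteSink {
 public:
  virtual ~ByteSink() {}
  virtual bool Write(const void* buf, size_t len) = 0;
};

// Hypothetical stand-in for FileWrapper: a ByteSink backed by a FILE*.
class FileSink : public ByteSink {
 public:
  explicit FileSink(const char* path) : fp_(std::fopen(path, "wb")) {}
  ~FileSink() override { if (fp_) std::fclose(fp_); }
  bool ok() const { return fp_ != nullptr; }
  bool Write(const void* buf, size_t len) override {
    return fp_ && std::fwrite(buf, 1, len, fp_) == len;
  }
 private:
  std::FILE* fp_;
};

// Mirrors what MediaFileImpl does: it keeps only an abstract pointer
// (like _ptrOutStream), so later writes need not know what the sink is.
class Recorder {
 public:
  void SetSink(ByteSink* sink) { sink_ = sink; }         // like _ptrOutStream = &stream;
  bool IncomingAudioData(const void* buf, size_t len) {  // like IncomingAudioData()
    return sink_ && sink_->Write(buf, len);
  }
 private:
  ByteSink* sink_ = nullptr;
};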
// At this point, every parameter on the audio output side is ready, but the audio input side is not yet.
Control now returns to VoEFileImpl::StartRecordingMicrophone, where the parameters of the audio input side must be initialized. During this initialization, a different system SDK is set up according to the platform.
On Windows, for example, the class used is AudioDeviceWindowsCore; the other platforms have corresponding classes (a hypothetical sketch of this dispatch follows the code below).
Inside the actual StartRecording function, a recording thread is created that continuously pulls audio buffers from the sound card; what it obtains is raw PCM data.
int VoEFileImpl::StartRecordingMicrophone(const char* fileNameUTF8,
CodecInst* compression,
int maxSizeBytes) {
// ...
// Initialize the audio output side parameters
if (_shared->transmit_mixer()->StartRecordingMicrophone(fileNameUTF8,
compression)) {
WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
"StartRecordingMicrophone() failed to start recording");
return -1;
}
// Initialize the audio input side parameters and start recording
if (!_shared->ext_recording()) {
if (_shared->audio_device()->InitRecording() != 0) {
WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
"StartRecordingMicrophone() failed to initialize recording");
return -1;
}
if (_shared->audio_device()->StartRecording() != 0) {
WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
"StartRecordingMicrophone() failed to start recording");
return -1;
}
}
return 0;
}
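The per-platform dispatch mentioned above can be pictured as a compile-time factory. The sketch below is purely illustrative: AudioDevice, WinCoreAudioDevice, and CreateAudioDevice are hypothetical names, not WebRTC's (the real selection happens inside the AudioDeviceModule implementation).

// Purely illustrative per-platform audio device selection.
class AudioDevice {
 public:
  virtual ~AudioDevice() {}
  virtual int InitRecording() = 0;
  virtual int StartRecording() = 0;  // typically spawns the capture thread
};

#if defined(_WIN32)
class WinCoreAudioDevice : public AudioDevice {
 public:
  int InitRecording() override { return 0; }   // would set up WASAPI capture
  int StartRecording() override { return 0; }  // would start a DoCaptureThread-style loop
};
#endif

AudioDevice* CreateAudioDevice() {
#if defined(_WIN32)
  return new WinCoreAudioDevice();  // AudioDeviceWindowsCore plays this role in WebRTC
#else
  return nullptr;  // Linux, Mac, Android, etc. have their own implementations
#endif
}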
///////////////////////////////////////////////////////////////////////////////
// Part B -- How the input-side buffer reaches the output side
The input side and the output side of recording are both covered above, but what is the interface between them? How do the audio buffers captured by the newly spawned recording thread actually reach the output side? Read on.
1. We already know that starting a recording spawns a new recording thread; its entry point is AudioDeviceWindowsCore::DoCaptureThread() (the implementation that uses the Windows Core Audio API).
Inside that function, the IAudioCaptureClient interface is used to fetch the audio buffer, and DeliverRecordedData is then called to push it up to the next layer.
DWORD AudioDeviceWindowsCore::DoCaptureThread()
{
// ...
// Fetch the audio buffer
// Find out how much capture data is available
//
hr = _ptrCaptureClient->GetBuffer(&pData, // packet which is ready to be read by used
&framesAvailable, // #frames in the captured packet (can be zero)
&flags, // support flags (check)
&recPos, // device position of first audio frame in data packet
&recTime); // value of performance counter at the time of recording the first audio frame
// ...
// Push it up to the next layer for processing
_ptrAudioBuffer->DeliverRecordedData();
// ...
}
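Stripped of the COM details, the capture thread boils down to a classic poll-and-deliver loop. Here is a compressed, self-contained sketch of that shape; CaptureClient, DeliverFn, and the 10 ms buffer size are my simplifications, not the actual IAudioCaptureClient usage.

#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical stand-in for the device-side source (IAudioCaptureClient in
// the real code). Here it just produces silence so the sketch is runnable.
struct CaptureClient {
  size_t GetBuffer(int16_t* dst, size_t maxFrames) {
    for (size_t i = 0; i < maxFrames; ++i) dst[i] = 0;
    return maxFrames;
  }
};

// Callback type standing in for AudioDeviceBuffer::DeliverRecordedData().
using DeliverFn = void (*)(const int16_t* samples, size_t frames);

// Simplified shape of AudioDeviceWindowsCore::DoCaptureThread():
// loop until told to stop, pull whatever the device has, push it upward.
void CaptureLoop(CaptureClient* client, DeliverFn deliver,
                 std::atomic<bool>* stop) {
  std::vector<int16_t> buf(160);  // 10 ms of mono 16 kHz audio
  while (!stop->load()) {
    size_t frames = client->GetBuffer(buf.data(), buf.size());
    if (frames > 0) {
      deliver(buf.data(), frames);  // hand the PCM to the layer above
    } else {
      std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
  }
}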
2. AudioDeviceBuffer::DeliverRecordedData() checks the relevant data parameters and then delegates to its internal AudioTransport member to carry the recorded data further up. In this class, _ptrCbAudioTransport is concretely bound to VoEBaseImpl. The code:
int32_t AudioDeviceBuffer::DeliverRecordedData()
{
// ...
// Pass the recorded data on to the next layer up
res = _ptrCbAudioTransport->RecordedDataIsAvailable(&_recBuffer[0],
_recSamples,
_recBytesPerSample,
_recChannels,
_recSampleRate,
totalDelayMS,
_clockDrift,
_currentMicLevel,
_typingStatus,
newMicLevel);
// ...
}
3. _ptrCbAudioTransport in fact points at VoEBaseImpl. VoEBaseImpl::RecordedDataIsAvailable simply hands the data to ProcessRecordedDataWithAPM of the same class for further processing:
int32_t VoEBaseImpl::RecordedDataIsAvailable(
const void* audioSamples, uint32_t nSamples, uint8_t nBytesPerSample,
uint8_t nChannels, uint32_t samplesPerSec, uint32_t totalDelayMS,
int32_t clockDrift, uint32_t micLevel, bool keyPressed,
uint32_t& newMicLevel) {
  newMicLevel = static_cast<uint32_t>(ProcessRecordedDataWithAPM(
      nullptr, 0, audioSamples, samplesPerSec, nChannels, nSamples,
      totalDelayMS, clockDrift, micLevel, keyPressed));
return 0;
}
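Since every captured PCM buffer funnels through a callback with this exact signature, the whole downstream pipeline can be imagined as a single callback writing to a file. The sketch below reuses the signature shown above, but PcmFileSink is my own illustrative class; it is not something you can register with this VoiceEngine as-is (in the real code the registered AudioTransport is VoEBaseImpl itself).

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Illustrative sink with the same callback signature as shown above; it
// only shows what the downstream path ultimately amounts to.
class PcmFileSink {
 public:
  explicit PcmFileSink(const char* path) : fp_(std::fopen(path, "wb")) {}
  ~PcmFileSink() { if (fp_) std::fclose(fp_); }

  int32_t RecordedDataIsAvailable(
      const void* audioSamples, uint32_t nSamples, uint8_t nBytesPerSample,
      uint8_t nChannels, uint32_t samplesPerSec, uint32_t totalDelayMS,
      int32_t clockDrift, uint32_t micLevel, bool keyPressed,
      uint32_t& newMicLevel) {
    (void)samplesPerSec; (void)totalDelayMS; (void)clockDrift;
    (void)micLevel; (void)keyPressed;
    newMicLevel = 0;  // no mic-level (AGC) adjustment in this sketch
    if (fp_) {
      // Raw PCM: nSamples frames * channels * bytes per sample.
      std::fwrite(audioSamples, 1,
                  static_cast<size_t>(nSamples) * nChannels * nBytesPerSample,
                  fp_);
    }
    return 0;
  }

 private:
  std::FILE* fp_;
};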
4. In VoEBaseImpl::ProcessRecordedDataWithAPM the data is passed on to the transmit_mixer mentioned earlier, which is where the audio buffer gets written to the file:
int VoEBaseImpl::ProcessRecordedDataWithAPM(
const int voe_channels[], int number_of_voe_channels,
const void* audio_data, uint32_t sample_rate, uint8_t number_of_channels,
uint32_t number_of_frames, uint32_t audio_delay_milliseconds,
int32_t clock_drift, uint32_t volume, bool key_pressed)
{
// ...
// Write the data to the file
// Perform channel-independent operations
// (APM, mix with file, record to file, mute, etc.)
shared_->transmit_mixer()->PrepareDemux(
    audio_data, number_of_frames, number_of_channels, sample_rate,
    static_cast<uint16_t>(audio_delay_milliseconds), clock_drift,
    voe_mic_level, key_pressed);
// ...
}
5. The shared_->transmit_mixer()->PrepareDemux call above is implemented by TransmitMixer::PrepareDemux, which in turn calls RecordAudioToFile to write the audio buffer to the file:
int32_t
TransmitMixer::PrepareDemux(const void* audioSamples,
uint32_t nSamples,
uint8_t nChannels,
uint32_t samplesPerSec,
uint16_t totalDelayMS,
int32_t clockDrift,
uint16_t currentMicLevel,
bool keyPressed)
{
// ...
// Write the audio buffer to the file
if (file_recording)
{
RecordAudioToFile(_audioFrame.sample_rate_hz_);
}
// ...
}
6. TransmitMixer::RecordAudioToFile takes a lock around the write and then delegates writing the audio buffer to its internal member _fileRecorderPtr (of type FileRecorder). Notice anything? This _fileRecorderPtr is exactly the FileRecorder from Part A, step 2. Its implementation, FileRecorderImpl::RecordAudioToFile, encodes the frame and writes it out:
int32_t FileRecorderImpl::RecordAudioToFile(
const AudioFrame& incomingAudioFrame,
const TickTime* playoutTS)
{
// ...
if (WriteEncodedAudioData(_audioBuffer, encodedLenInBytes) == -1)
{
return -1;
}
// ...
}
7. FileRecorderImpl::WriteEncodedAudioData is simple; it does hardly anything itself and just delegates the work to MediaFile* _moduleFile:
int32_t FileRecorderImpl::WriteEncodedAudioData(const int8_t* audioBuffer,
size_t bufferLength)
{
return _moduleFile->IncomingAudioData(audioBuffer, bufferLength);
}
8. MediaFile is implemented by MediaFileImpl. In MediaFileImpl::IncomingAudioData the data is written into _ptrOutStream, and notice that this _ptrOutStream is exactly the one saved in Part A, step 4. Abridged code:
int32_t MediaFileImpl::IncomingAudioData(
const int8_t* buffer,
const size_t bufferLengthInBytes)
{
// ...
bytesWritten = _ptrFileUtilityObj->WritePCMData(
*_ptrOutStream,
buffer,
bufferLengthInBytes);
// ...
}
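The final write is nothing more than dumping the byte buffer into the stream and reporting the count. A minimal version of what a WritePCMData-style helper must do (WritePcmBytes is my own name, not the _ptrFileUtilityObj implementation) could look like:

#include <cstddef>
#include <cstdint>
#include <ostream>

// Minimal stand-in for the WritePCMData step: write the raw bytes into the
// stream and report how many were written, or -1 on failure.
int32_t WritePcmBytes(std::ostream& out, const int8_t* buffer,
                      size_t bufferLengthInBytes) {
  out.write(reinterpret_cast<const char*>(buffer),
            static_cast<std::streamsize>(bufferLengthInBytes));
  return out.good() ? static_cast<int32_t>(bufferLengthInBytes) : -1;
}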
// That completes the entire path of how audio is captured and saved to a file. A lot of intermediate audio processing has been left out here, because covering it now would cloud my understanding of the big picture; I will digest it step by step as I keep studying.
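For reference, the complete chain traced above:

Part A (setup):
VoEFileImpl::StartRecordingMicrophone
  -> TransmitMixer::StartRecordingMicrophone (creates the FileRecorder)
  -> FileRecorderImpl::StartRecordingAudioFile
  -> MediaFileImpl::StartRecordingAudioFile / StartRecordingAudioStream (saves _ptrOutStream)
  then AudioDeviceModule InitRecording() / StartRecording() (spawns the capture thread)

Part B (data path, per captured packet):
AudioDeviceWindowsCore::DoCaptureThread
  -> AudioDeviceBuffer::DeliverRecordedData
  -> VoEBaseImpl::RecordedDataIsAvailable
  -> VoEBaseImpl::ProcessRecordedDataWithAPM
  -> TransmitMixer::PrepareDemux
  -> TransmitMixer::RecordAudioToFile
  -> FileRecorderImpl::RecordAudioToFile / WriteEncodedAudioData
  -> MediaFileImpl::IncomingAudioData
  -> WritePCMData into _ptrOutStream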