Note: because of CSDN Markdown syntax issues, the flowchart sections do not render compatibly here; please copy the flowchart sections into Youdao Cloud Notes to generate and view the diagrams.
iOS video recording:
As with taking photos, video recording can be implemented in two ways:
- UIImagePickerController
- AVFoundation
This article only discusses the AVFoundation framework. It is the low-level multimedia framework provided by Apple for audio/video capture, audio/video decoding, video editing, and so on; essentially all multimedia work depends on AVFoundation.
Recording video takes roughly the same work as taking photos, in five main steps:
- Create a session (AVCaptureSession), which controls the flow of data from input to output.
- Obtain the devices (AVCaptureDevice): the camera for video capture, the microphone for audio capture.
- Create the device inputs (AVCaptureDeviceInput), bind the devices to the inputs, and add them to the session.
- Create the outputs (AVCaptureOutput), which can write to a file or to the screen: AVCaptureMovieFileOutput writes a movie file; AVCaptureVideoDataOutput delivers video frames for processing, e.g. to display the video being recorded; AVCaptureAudioDataOutput delivers audio data.
- Mux the audio and video into a single file.
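A minimal Objective-C sketch of the five steps above, assuming camera and microphone permissions have already been granted (error handling is elided; AVCaptureMovieFileOutput takes care of the final muxing step):

```objectivec
#import <AVFoundation/AVFoundation.h>

// 1. Create the session that routes data from inputs to outputs.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// 2. Obtain the capture devices: camera for video, microphone for audio.
AVCaptureDevice *camera =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *mic =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

// 3. Wrap the devices in inputs and add them to the session.
NSError *error = nil;
AVCaptureDeviceInput *videoInput =
    [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
AVCaptureDeviceInput *audioInput =
    [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
if ([session canAddInput:videoInput]) [session addInput:videoInput];
if ([session canAddInput:audioInput]) [session addInput:audioInput];

// 4./5. A movie file output muxes audio and video into one file for us.
AVCaptureMovieFileOutput *fileOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:fileOutput]) [session addOutput:fileOutput];

[session startRunning];
```

This is a sketch, not production code: real apps must request permissions, check `error`, and call `startRunning` off the main thread.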
Real-time video processing on iOS:
To process video in real time (which is necessary, or you could not see what is being recorded), you have to work directly on the video stream in the camera buffer:
- Define a video data output (AVCaptureVideoDataOutput) and add it to the session.
- Set the receiving controller as the delegate for the output's sample buffers.
- Implement the delegate method:
-(void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
AVFoundation calls this method as soon as the data buffer has data. Inside this delegate method we can fetch, process, and display video frames; this is where real-time filters are applied, and where the video data in the buffer (i.e. the frame images) is pushed to the layer that displays it.
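A sketch of what such a delegate implementation typically looks like: it pulls the pixel buffer out of the sample buffer and locks it for reading; the actual filter/display work would go where the comment sits (the body here is illustrative, not any particular framework's exact code):

```objectivec
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
  // Get the raw frame out of the sample buffer.
  CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
  CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
  size_t width = CVPixelBufferGetWidth(pixelBuffer);
  size_t height = CVPixelBufferGetHeight(pixelBuffer);
  // ... apply a real-time filter / hand the frame to the display layer ...
  (void)width; (void)height;
  CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
```

Note that this method is called on the queue set via `setSampleBufferDelegate:queue:`, so heavy processing here will drop frames.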
Function:
int ViECaptureImpl::NumberOfCaptureDevices()
graph TD
A[ViECaptureImpl::NumberOfCaptureDevices] --> B(ViEInputManager::NumberOfCaptureDevices)
B --> C{capture_device_info_ == NULL?}
C --> |Y| D(VideoCaptureFactory::CreateDeviceInfo)
C --> |N| E(capture_device_info_->NumberOfDevices)
D --> F(VideoCaptureImpl::CreateDeviceInfo)
F --> G{platform}
G --> |IOS| H(new DeviceInfoIos)
G --> |Android| I(new DeviceInfoAndroid)
H --> J(new DeviceInfoImpl)
E --> K(DeviceInfoIosObjC captureDeviceCount)
K --> L(device_info_ios_objc.mm: AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo count)
Function:
int ViECaptureImpl::GetCaptureDevice(unsigned int list_number,
char* device_nameUTF8,
unsigned int device_nameUTF8Length,
char* unique_idUTF8,
unsigned int unique_idUTF8Length)
graph TD
A[ViECaptureImpl::GetCaptureDevice] --> B(ViEInputManager::GetDeviceName)
B --> C(capture_device_info_->GetDeviceName)
C --> D(DeviceInfoIos::GetDeviceName)
D --> E(DeviceInfoIosObjC deviceNameForIndex:deviceNumber)
D --> F(DeviceInfoIosObjC deviceUniqueIdForIndex:deviceNumber)
Create the VCPM from the CameraId and UniqueId
Function:
VideoCaptureModule* VideoCaptureFactory::Create(const int32_t id,
const char* deviceUniqueIdUTF8)
graph TD
A[VideoCaptureFactory::Create] --> B{VideoCaptureImpl::Create}
B --> |IOS video_capture_ios.mm| C(VideoCaptureIos::Create)
B --> |Android video_capture_android.cc| D(VideoCaptureAndroid)
C --> E(VideoCaptureIos::capture_device_)
E -->|rtc_video_capture_ios_objc.mm| F(RTCVideoCaptureIosObjC initWithOwner)
F --> |allocate the CaptureSession| G(_captureSession = AVCaptureSession alloc init)
G --> |create the captureOutput| H(captureOutput = AVCaptureVideoDataOutput alloc init)
H --> |add captureOutput to the session| I(_captureSession addOutput:captureOutput)
F --> |set the UniqueId| J(capture_device_ setCaptureDeviceByUniqueId)
J --> |inputs ?| K(RTCVideoCaptureIosObjC::changeCaptureInputByUniqueId)
K --> |find the AVCaptureDevice matching the uniqueId| L(DeviceInfoIosObjC captureDeviceForUniqueId:uniqueId)
L --> |get the AVCaptureDeviceInput for that AVCaptureDevice| M(AVCaptureDeviceInput deviceInputWithDevice:captureDevice)
M --> |add the captureInput| N(_captureSession addInput:newCaptureInput)
N --> |create the connection from the session's output| O(_connection=currentOutput connectionWithMediaType:AVMediaTypeVideo)
O --> |set _connection's video output orientation| P(setRelativeVideoOrientation)
So on the iOS platform, the VCPM object is of type VideoCaptureIos.
Function:
int AllocateCaptureDevice(VideoCaptureModule& capture_module,
int& capture_id)
Here capture_module is the VCPM; the call returns the bound CaptureId.
graph TD
A[ViECaptureImpl::AllocateCaptureDevice] --> B(ViEInputManager::CreateCaptureDevice)
B --> |allocate a free CaptureId| C(newcapture_id = GetFreeCaptureId)
C --> |create the ViECapturer| D(ViECapturer::CreateViECapture)
D --> |vie_capturer.cc| E(capture = new ViECapturer)
D --> |store the vie_capture object in the map, keyed by newcapture_id| L(vie_frame_provider_map_ = vie_capture)
E --> |create and start the thread ViECaptureThread| F(capture_thread_: ViECaptureThreadFunction)
E --> |create and register the overuse_detector_ module| G(overuse_detector_ = new OveruseFrameDetector)
G --> K(module_process_thread_.RegisterModule overuse_detector_)
D --> |initialize the capture object| H(capture->Init)
H --> |register the capture object with the VCPM| I(capture_module_->RegisterCaptureDataCallback:this)
I --> |capture_module_ is a VideoCaptureIos, which inherits VideoCaptureImpl| J(VideoCaptureImpl::RegisterCaptureDataCallback)
J --> N(VideoCaptureImpl::_dataCallBack = &dataCallBack)
H --> |register the VCPM as a module| M(module_process_thread_.RegisterModule capture_module_)
At this point the video session, input, and output have been created, vie_capturer has been bound to the VCPM module, and the thread running ViECaptureThreadFunction() (ViECaptureThread) has been started.
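The capture thread started above relies on a swap-buffer handoff that the delivery flowchart later spells out (SwapFrame, then SwapCapturedAndDeliverFrameIfAvailable). A minimal, self-contained C++ sketch of that pattern, with simplified stand-ins for ViECapturer's captured_frame_/deliver_frame_ pair and its event (strings stand in for video frames):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>
#include <utility>
#include <vector>

// The camera callback swaps the new frame in and signals the capture thread,
// which swaps it out again and delivers it downstream, so the camera thread
// never blocks on encoding or rendering.
class MiniCapturer {
 public:
  void OnIncomingCapturedFrame(std::string frame) {   // camera-thread side
    {
      std::lock_guard<std::mutex> lock(mutex_);
      captured_frame_ = std::move(frame);             // like SwapFrame()
      has_frame_ = true;
    }
    event_.notify_one();                              // wake the capture thread
  }

  void CaptureProcess() {                             // capture-thread side
    std::unique_lock<std::mutex> lock(mutex_);
    event_.wait(lock, [this] { return has_frame_ || stopped_; });
    if (!has_frame_) return;
    deliver_frame_ = std::move(captured_frame_);      // captured -> deliver
    has_frame_ = false;
    lock.unlock();
    delivered_.push_back(deliver_frame_);             // stands in for DeliverI420Frame
  }

  void Stop() {
    { std::lock_guard<std::mutex> lock(mutex_); stopped_ = true; }
    event_.notify_one();
  }

  std::vector<std::string> delivered_;                // what reached downstream

 private:
  std::mutex mutex_;
  std::condition_variable event_;
  std::string captured_frame_, deliver_frame_;
  bool has_frame_ = false, stopped_ = false;
};
```

The real code loops CaptureProcess on a dedicated thread; here a single pass is enough to show the handoff.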
The following flow registers the ViEEncoder object in the frame_callbacks_ of ViEFrameProviderBase.
Function:
int ViECaptureImpl::ConnectCaptureDevice(const int capture_id,
const int video_channel)
graph TD
A[ViECaptureImpl::ConnectCaptureDevice] --> |get the ViECapturer object created earlier by captureId| B(ViECapturer* vie_capture = is.Capture:capture_id)
B --> |get the ViEEncoder object by channelId| C(ViEEncoder* vie_encoder = cs.Encoder:video_channel)
C --> |register the frame callback object with the ViECapturer| D(vie_capture->RegisterFrameCallback:video_channel, vie_encoder)
D --> |vie_frame_provider_base.cc| E(ViEFrameProviderBase::RegisterFrameCallback)
E --> |save the FrameCallback| F(frame_callbacks_.push_back:callback_object)
ViECapturer inherits from ViEFrameProviderBase, so calling RegisterFrameCallback on a ViECapturer object invokes the parent class's implementation.
Function:
int ViECaptureImpl::StartCapture(const int capture_id,
const CaptureCapability& capture_capability)
graph TD
A[ViECaptureImpl::StartCapture] --> B(ViECapturer::Start)
B --> |video_capture_ios.mm| C(VideoCaptureIos::StartCapture)
C --> |rtc_video_capture_ios_objc.mm| D(capture_device_ startCaptureWithCapability:capability)
D --> |validate resolution and frame rate; currentOutput is the session's AVCaptureVideoDataOutput object| E(self startCaptureInBackgroundWithOutput:currentOutput)
E --> |set the capture quality| F(_captureSession setSessionPreset:captureQuality)
F --> |configure the session's AVCaptureDeviceInput* deviceInput parameters, e.g. focus| G(AVCaptureDevice* inputDevice = deviceInput.device)
G --> |start capturing| H(_captureSession startRunning)
This flow creates the local rendering thread, creates and initializes the render window, and registers the ViERenderer object in ViEFrameProviderBase's frame_callbacks_. As a result, frame_callbacks_ holds both a ViEEncoder and a ViERenderer object, i.e. the local image is delivered both to the video encoder and to the local renderer.
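That provider/callback relationship can be sketched as follows, with simplified names standing in for ViEFrameProviderBase, ViEEncoder, and ViERenderer (the real WebRTC code passes I420 video frames, not strings):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in for ViEFrameCallback: anything that wants frames implements this.
struct FrameCallback {
  virtual ~FrameCallback() = default;
  virtual void DeliverFrame(const std::string& frame) = 0;
};

// Stand-in for ViEFrameProviderBase: owns frame_callbacks_ and fans each
// incoming frame out to every registered observer.
class FrameProviderBase {
 public:
  void RegisterFrameCallback(FrameCallback* cb) { frame_callbacks_.push_back(cb); }
  void DeliverFrame(const std::string& frame) {
    for (FrameCallback* cb : frame_callbacks_) cb->DeliverFrame(frame);
  }
 private:
  std::vector<FrameCallback*> frame_callbacks_;
};

// Stand-in for ViECapturer: it inherits the provider, so calling
// RegisterFrameCallback on a capturer resolves to the base-class method.
class Capturer : public FrameProviderBase {};

struct Encoder : FrameCallback {   // video-encoding path
  std::vector<std::string> frames;
  void DeliverFrame(const std::string& f) override { frames.push_back(f); }
};

struct Renderer : FrameCallback {  // local-rendering path
  std::vector<std::string> frames;
  void DeliverFrame(const std::string& f) override { frames.push_back(f); }
};
```

With an Encoder and a Renderer both registered, a single DeliverFrame call reaches both, which is exactly why the local image goes to the encoder and the local preview at the same time.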
int ViERenderImpl::AddRenderer(const int render_id, void* window,
const unsigned int z_order, const float left,
const float top, const float right,
const float bottom)
graph TD
A[ViERenderImpl::AddRenderer] --> |vie_render_manager.cc: create the ViERenderer object| B(renderer = ViERenderManager::AddRenderStream)
B --> |create the local Render from the window| C(VideoRender::CreateVideoRender)
C --> |video_render_internal_impl.cc: create the Render for the current platform| D{new ModuleVideoRenderImpl}
D --> |IOS| E(ptrRenderer = new VideoRenderIosImpl)
D --> |Android| F(ptrRenderer = new AndroidSurfaceViewRenderer)
E --> |video_render_ios_impl.mm | G(VideoRenderIosImpl::VideoRenderIosImpl)
G --> H(VideoRenderIosImpl::Init)
H --> I(ptr_ios_render_ = new VideoRenderIosGles20)
I --> |video_render_ios_gles20.mm| J(VideoRenderIosGles20::VideoRenderIosGles20)
J --> |create the local preview rendering thread VideoRenderIosGles20:ScreenUpdateThreadProc| K(screen_update_thread_ = ThreadWrapper::CreateThread)
K --> |initialize the Gles20 object| L(ptr_ios_render_->Init)
L --> |VideoRenderIosView createContext sets up and binds the OpenGL ES buffers| O(view_ createContext)
O --> |start the preview thread VideoRenderIosGles20 and its timer| M(screen_update_thread_->Start)
M --> |vie_render_manager.cc: append the created VideoRender object to the management list| N(render_list_.push_back: render_module)
B --> |save the ViERenderer object| BCA(stream_to_vie_renderer_:render_id = vie_renderer)
B --> BA(ViERenderer* vie_renderer = ViERenderer::CreateViERenderer)
BA --> |vie_renderer.cc: create and initialize the ViERenderer object| BB(self = new ViERenderer)
BB --> BC(ViERenderer::Init)
BC --> |render_module_ is the VideoRender object; this render_callback_ points to an IncomingVideoStream object| BD(render_callback_ = render_module_.AddIncomingRenderStream)
BD --> |video_render_internal_impl.cc| BE(ModuleVideoRenderImpl::AddIncomingRenderStream)
BE --> |video_render_internal_impl.cc: ptrRenderCallback is a VideoRenderIosChannel object| BF(ptrRenderCallback = _ptrRenderer->AddIncomingRenderStream)
BF --> |video_render_ios_impl.mm | BG(VideoRenderIosImpl::AddIncomingRenderStream)
BG --> BH(ptr_ios_render_->CreateEaglChannel)
BH --> |video_render_ios_gles20.mm| BI(VideoRenderIosGles20::CreateEaglChannel)
BI --> BJ(new_eagl_channel = new VideoRenderIosChannel)
BJ --> BK(VideoRenderIosChannel::VideoRenderIosChannel)
BK --> BL(new_eagl_channel->SetStreamSettings)
BL --> BM(agl_channels_:channel = new_eagl_channel)
BM --> BN(z_order_to_channel_.insert)
BE --> BBA(ptrIncomingStream = new IncomingVideoStream)
BBA --> |incoming_video_stream.cc| BBB(IncomingVideoStream::IncomingVideoStream)
BBB --> |video_render_internal_impl.cc| BBC(ptrIncomingStream->SetRenderCallback:ptrRenderCallback)
BBC --> |set the VideoRenderIosChannel object on the IncomingVideoStream object| BBD(IncomingVideoStream::SetRenderCallback)
BBD --> |save the IncomingVideoStream object into the map| BBE(_streamRenderMap:streamId = ptrIncomingStream)
A --> ABA(frame_provider->RegisterFrameCallback:renderer)
ABA --> ABC(ViEFrameProviderBase::RegisterFrameCallback)
ABC --> ABD(frame_callbacks_.push_back:callback_object)
int ViERenderImpl::StartRender(const int render_id)
graph TD
A[ViERenderImpl::StartRender] --> |find the ViERenderer object in stream_to_vie_renderer_| B(ViERenderer::StartRender)
B --> C(render_module_.StartRender)
C --> D(ModuleVideoRenderImpl::StartRender)
D --> DA(IncomingVideoStream::Start)
D --> |start the hardware-layer rendering| DB(_ptrRenderer->StartRender)
DA --> |create and start the thread IncomingVideoStreamThread: IncomingVideoStream::IncomingVideoStreamProcess| DAB(incoming_render_thread_ = ThreadWrapper::CreateThread)
DAB --> |start the timer| DAC(deliver_buffer_event_.StartTimer)
DB --> |video_render_ios_impl.mm| DBA(VideoRenderIosImpl::StartRender)
DBA --> DBB(ptr_ios_render_->StartRender)
DBB --> DBC(VideoRenderIosGles20::StartRender)
DBC --> |the VideoRenderIosGles20 thread inside Gles20 starts running| DBD(is_rendering_ = true)
rtc_video_capture_ios_objc.mm implements the delegate method:
-(void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
As described earlier, AVFoundation calls this method as soon as the data buffer has data; this is where frames are fetched, processed, and handed to the layer that displays them.
graph TD
A[RTCVideoCaptureIosObjC captureOutput] --> B(_owner->IncomingFrame)
B --> |video_capture_impl.cc: _owner is a VideoCaptureIos, which inherits VideoCaptureImpl| C(VideoCaptureImpl::IncomingFrame)
C --> |rotate the video data and convert it to I420 format| D(VideoCaptureImpl::DeliverCapturedFrame)
D --> E(_dataCallBack->OnIncomingCapturedFrame)
E --> |vie_capturer.cc: _dataCallBack is the ViECapturer object| F(ViECapturer::OnIncomingCapturedFrame)
F --> |store video_frame into captured_frame_ and signal the event so the ViECaptureThread thread processes it| G(captured_frame_->SwapFrame:&video_frame)
G --> |ViECaptureThread thread handler| H(ViECapturer::ViECaptureProcess)
H --> |move the data in captured_frame_ into deliver_frame_| I(ViECapturer::SwapCapturedAndDeliverFrameIfAvailable)
I --> J(ViECapturer::DeliverI420Frame)
J --> |Deliver the captured frame to all observers:channels, renderer or file| K(ViEFrameProviderBase::DeliverFrame)
K --> |vie_frame_provider_base.cc: the frame_callbacks_ list contains the ViEEncoder and ViERenderer objects| L(frame_callbacks_->DeliverFrame)
L --> |video-encoding path| LA(ViEEncoder::DeliverFrame)
L --> |local-rendering path| LBA(ViERenderer::DeliverFrame)
LBA --> |render_callback_ is an IncomingVideoStream object| LBC(render_callback_->RenderFrame)
LBC --> |incoming_video_stream.cc| LBD(IncomingVideoStream::RenderFrame)
LBD --> |insert the video frame into render_buffers_ and signal the event| LBE(render_buffers_.AddFrame:&video_frame)
LBE --> |on the signal, the thread handler runs| LBF(IncomingVideoStreamThread: IncomingVideoStream::IncomingVideoStreamProcess)
LBF --> |here render_callback_ is a VideoRenderIosChannel object| LBG(render_callback_->RenderFrame)
LBG --> |video_render_ios_channel.mm| LBH(VideoRenderIosChannel::RenderFrame, buffer_is_updated_ = true)
LBH --> |video_render_ios_gles20.mm: every 16.7 ms the VideoRenderIosGles20 thread checks VideoRenderIosChannel's buffer_is_updated_ flag to decide whether the screen needs redrawing| LBI(VideoRenderIosGles20::ScreenUpdateProcess)
LBI --> |video_render_ios_channel.mm| LBJ(VideoRenderIosChannel::RenderOffScreenBuffer)
LBJ --> |push the video frame data into the OpenGles20 buffer| LBK(view_ renderFrame:current_frame_, buffer_is_updated_ = false)
LBK --> |video_render_ios_view.mm| LBL(VideoRenderIosView::renderFrame)
LBL --> LBM(_gles_renderer20->Render)
LBM --> |open_gles20.mm| LBN(OpenGles20::Render)
LBN --> LBO(OpenGles20::UpdateTextures)
LBI --> |present the buffered OpenGles20 data on screen| LBIA(view_ presentFramebuffer)
LBIA --> LBIB(VideoRenderIosView::presentFramebuffer)
LBIB --> LBIC(_context presentRenderbuffer:GL_RENDERBUFFER)
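The 16.7 ms polling loop at the end of this chain boils down to a dirty-flag check: RenderFrame only stores the frame and raises buffer_is_updated_; the render thread redraws only when the flag is set. A minimal sketch of that pattern, with simplified stand-ins for VideoRenderIosChannel and one tick of ScreenUpdateProcess (no actual GL calls):

```cpp
#include <cassert>
#include <string>

// Stand-in for VideoRenderIosChannel: RenderFrame only stores the frame and
// raises the dirty flag; drawing happens later on the render thread.
class RenderChannel {
 public:
  void RenderFrame(const std::string& frame) {
    current_frame_ = frame;
    buffer_is_updated_ = true;
  }
  // Stand-in for RenderOffScreenBuffer: draw only if dirty, then clear the flag.
  bool RenderOffScreenBufferIfUpdated(std::string* screen) {
    if (!buffer_is_updated_) return false;  // nothing new: skip the redraw
    *screen = current_frame_;               // stands in for the GL texture upload
    buffer_is_updated_ = false;
    return true;
  }
 private:
  std::string current_frame_;
  bool buffer_is_updated_ = false;
};

// Stand-in for one tick of VideoRenderIosGles20::ScreenUpdateProcess, which
// the real code runs roughly every 16.7 ms (about 60 fps).
bool ScreenUpdateTick(RenderChannel& channel, std::string* screen) {
  return channel.RenderOffScreenBufferIfUpdated(screen);
}
```

Decoupling frame arrival from presentation this way lets the incoming stream run at the capture rate while the screen is refreshed at most once per display tick.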