iOS AVFoundation (Part 1): Video Recording


Apple's official documentation describes AVFoundation fairly clearly; the framework overview looks roughly like the figure below.

(Figure: AVFoundation framework overview, from Apple's documentation)

Capturing Video with AVFoundation
======

AVCaptureSession: manages the flow of audio and video data from inputs to outputs

AVCaptureDevice: a video or audio device (camera, microphone)

AVCaptureDeviceInput: an audio/video input; it must be bound to an AVCaptureDevice

AVCaptureVideoPreviewLayer: the layer that displays what the AVCaptureSession captures

The overall pipeline looks like this: AVCaptureDevice --> AVCaptureDeviceInput --> AVCaptureSession --> AVCaptureVideoDataOutput / AVCaptureAudioDataOutput, with an AVCaptureVideoPreviewLayer attached to the session for display.

The code follows below.

1. Define the required objects###


//Serial queue for capture callbacks (a dispatch queue should be strong, not copy)
@property (nonatomic, strong) dispatch_queue_t captureQueue;
//Capture session
@property (strong, nonatomic) AVCaptureSession *session;
//Preview layer that displays what the session captures
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
//Back camera input
@property (strong, nonatomic) AVCaptureDeviceInput *backCameraInput;
//Front camera input
@property (strong, nonatomic) AVCaptureDeviceInput *frontCameraInput;
//Microphone input
@property (strong, nonatomic) AVCaptureDeviceInput *audioMicInput;
//Audio recording connection
@property (strong, nonatomic) AVCaptureConnection *audioConnection;
//Video recording connection
@property (strong, nonatomic) AVCaptureConnection *videoConnection;
//Video data output
@property (strong, nonatomic) AVCaptureVideoDataOutput *videoOutput;
//Audio data output
@property (strong, nonatomic) AVCaptureAudioDataOutput *audioOutput;


2. Instantiate the session###


- (void)initSession {
    _firstRun = YES;
    _paused = YES;   //paused by default (not yet recording)
    _isFront = YES;  //front camera by default
    //Serial queue for the capture callbacks
    _captureQueue = dispatch_queue_create("com.capture", DISPATCH_QUEUE_SERIAL);
    NSError *error = nil;
    //Front camera input (the default)
    AVCaptureDevice *frontDevice = [self cameraWithPosition:AVCaptureDevicePositionFront];
    _frontCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:frontDevice error:&error];
    if (error) {
        NSLog(@"Failed to get the front camera");
    }
    //Back camera input
    error = nil;
    AVCaptureDevice *backDevice = [self cameraWithPosition:AVCaptureDevicePositionBack];
    _backCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:backDevice error:&error];
    if (error) {
        NSLog(@"Failed to get the back camera");
    }

    //Microphone input
    NSError *micError = nil;
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    _audioMicInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:&micError];
    if (micError) {
        NSLog(@"Failed to get the microphone");
    }

    //Video data output
    _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [_videoOutput setSampleBufferDelegate:self queue:self.captureQueue];
    //Ask for a bi-planar YUV pixel format for the captured frames
    NSDictionary *setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], (__bridge id)kCVPixelBufferPixelFormatTypeKey,
                                    nil];
    _videoOutput.videoSettings = setcapSettings;
    //Audio data output
    _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [_audioOutput setSampleBufferDelegate:self queue:self.captureQueue];

    _session = [[AVCaptureSession alloc] init];
    _session.sessionPreset = AVCaptureSessionPreset1280x720;
    //Add the inputs
    if ([_session canAddInput:self.frontCameraInput]) {
        [_session addInput:self.frontCameraInput];
    }
    if ([_session canAddInput:self.audioMicInput]) {
        [_session addInput:self.audioMicInput];
    }

    //Add the outputs
    if ([_session canAddOutput:self.audioOutput]) {
        [_session addOutput:self.audioOutput];
    }
    if ([_session canAddOutput:self.videoOutput]) {
        [_session addOutput:self.videoOutput];
    }

    //Preview layer (WIDTH and HEIGHT are screen-size macros defined elsewhere in the project)
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [_previewLayer setFrame:CGRectMake(0, 0, WIDTH, HEIGHT)];
    [self.showView.layer insertSublayer:_previewLayer atIndex:0];

    //Audio and video connections; fetch the video connection before setting its orientation
    _audioConnection = [self.audioOutput connectionWithMediaType:AVMediaTypeAudio];
    _videoConnection = [self.videoOutput connectionWithMediaType:AVMediaTypeVideo];
    _videoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
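
The -cameraWithPosition: helper called above is not shown in the post. A minimal sketch, using the classic devicesWithMediaType: API (deprecated in iOS 10 in favor of AVCaptureDeviceDiscoverySession), could look like this:

//Find the capture device at the given position; a sketch of the helper used above
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}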

3. Start capturing###


[self.session startRunning];

Once the steps above are done, the AVCaptureSession is capturing. The class must also conform to the delegate protocols

AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate

For every captured frame, the following delegate method is called:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
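
A minimal sketch of how this callback is typically implemented, routing each buffer to the matching writer input from step 4 below (the paused flag is the one set in -initSession; error handling and the writer-session start are omitted):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (self.paused) {
        return; //not recording, drop the frame
    }
    if (connection == self.videoConnection) {
        //video frame
        if (self.videoInput.isReadyForMoreMediaData) {
            [self.videoInput appendSampleBuffer:sampleBuffer];
        }
    } else if (connection == self.audioConnection) {
        //audio frame
        if (self.audioInput.isReadyForMoreMediaData) {
            [self.audioInput appendSampleBuffer:sampleBuffer];
        }
    }
}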

4. Write the video###


//Media writer
@property (nonatomic, strong) AVAssetWriter *writer;
//Video writer input
@property (nonatomic, strong) AVAssetWriterInput *videoInput;
//Audio writer input
@property (nonatomic, strong) AVAssetWriterInput *audioInput;

Saving the raw captured frames directly would produce a very large file, so the writer input must be configured to encode the video:

//H.264 output settings for the video writer input.
//AVVideoWidthKey and AVVideoHeightKey are required alongside the codec key;
//the values here match the 1280x720 preset with the portrait connection above.
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          AVVideoCodecH264, AVVideoCodecKey,
                          [NSNumber numberWithInt:720], AVVideoWidthKey,
                          [NSNumber numberWithInt:1280], AVVideoHeightKey,
                          nil];
//Create the video writer input
_videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
//Required for real-time sources such as the camera
_videoInput.expectsMediaDataInRealTime = YES;
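
The audio writer input needs output settings as well, though the post does not show them. A plausible AAC configuration (the sample rate, channel count, and bitrate here are assumptions, not from the original):

//Assumed AAC settings for the audio writer input: 44.1 kHz, stereo, 128 kbps
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *audioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                               [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                               [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                               [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                               [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                               nil];
_audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
//Required for real-time sources such as the microphone
_audioInput.expectsMediaDataInRealTime = YES;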

Start writing:

 [_writer startWriting];
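
Before -startWriting can do anything useful, the writer itself has to be created and the inputs attached, and after starting, the writer needs a session start time. A sketch, where outputURL is a hypothetical NSURL pointing at the destination movie file:

//Create the writer and attach the inputs (outputURL is hypothetical)
NSError *writerError = nil;
_writer = [AVAssetWriter assetWriterWithURL:outputURL fileType:AVFileTypeMPEG4 error:&writerError];
if ([_writer canAddInput:_videoInput]) {
    [_writer addInput:_videoInput];
}
if ([_writer canAddInput:_audioInput]) {
    [_writer addInput:_audioInput];
}
[_writer startWriting];
//Once the first sample buffer arrives in the delegate callback:
//[_writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];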

Finish writing with a completion handler:

[_writer finishWritingWithCompletionHandler:^{
    //the output file is complete at this point
}];

Summary###

Building the AVCaptureSession object: it needs AVCaptureDeviceInput device inputs, and each AVCaptureDeviceInput wraps an AVCaptureDevice (camera, microphone, and so on).
Understanding the AVCaptureVideoDataOutputSampleBufferDelegate protocol method.
AVAssetWriter: using the writer flexibly to handle the video output.


Source code link

Further thoughts###

1. After several recordings, why does the video occasionally start with a black first frame?

-- Usually audio capture begins delivering samples before video capture does, so the first frame written can be an audio frame, which shows up as a black frame at the start. The fix: in -captureOutput:didOutputSampleBuffer:fromConnection:, check which connection the sample came from, and discard leading audio frames until the first video frame arrives.
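
A minimal sketch of that guard, assuming a hypothetical startedVideo flag on the class:

//Inside -captureOutput:didOutputSampleBuffer:fromConnection:
//startedVideo is a hypothetical BOOL property tracking whether video has started
if (!self.startedVideo) {
    if (connection == self.audioConnection) {
        return; //drop leading audio frames so the file starts on a video frame
    }
    self.startedVideo = YES;
    [self.writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
}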

2. [_session stopRunning] is called when recording finishes; but products often need multi-segment recording: start recording --> pause --> resume --> .... How can that be implemented?

--- 1) Call [_session stopRunning] when pausing, and re-instantiate the session when resuming. But stopping the session also freezes the camera preview, which is a poor experience, and re-instantiating the session has a performance cost, so this approach is not recommended.

--- 2) Handle it at the file-writing stage. After [self.session startRunning] begins capturing, tapping pause simply stops appending video frames to the writer, and tapping resume continues appending until recording completes.
For example, suppose the recording is: segment A -- 2 s pause -- segment B.
The finished video then freezes for 2 s at the end of segment A before segment B plays, because the resumed frames keep their original capture timestamps. The fix is to re-stamp every sample buffer appended after a pause, subtracting the accumulated paused duration, so the two segments become contiguous.
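
A sketch of that re-stamping helper, using CMSampleBufferCreateCopyWithNewTiming; the offset parameter is the total paused duration, which the caller is assumed to track:

//Return a copy of the sample buffer with all timing info shifted back by offset;
//the caller appends the copy to the writer input and must CFRelease it afterwards
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset {
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, NULL, &count);
    CMSampleTimingInfo *info = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, info, &count);
    for (CMItemCount i = 0; i < count; i++) {
        info[i].decodeTimeStamp = CMTimeSubtract(info[i].decodeTimeStamp, offset);
        info[i].presentationTimeStamp = CMTimeSubtract(info[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef adjusted = NULL;
    CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample, count, info, &adjusted);
    free(info);
    return adjusted;
}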

