iOS Video Recording

This article summarizes a camera module I built previously, covering custom camera video capture, video processing, and saving. I hope it serves as a useful reference.

Framework Overview: AVFoundation


AVFoundation is commonly used for media capture, editing, and playback; audio recording and playback; and audio/video encoding and decoding.

Commonly used classes: AVCaptureDevice, AVCaptureDeviceInput, AVCapturePhotoOutput, AVCaptureVideoPreviewLayer,
AVAsset, AVAssetReader, AVAssetWriter, CMSampleBuffer, AVPlayer, CMTime, AVCaptureMovieFileOutput, AVCaptureMetadataOutput, etc.

  • AVAsset is an abstract class that defines the interface for an asset file. AVURLAsset is created from a URL, which can point to either a local or a remote resource.

  • AVAssetReader reads media data from an AVAsset and can decode raw media data into usable samples.

  • AVAssetWriter writes media data (CMSampleBuffer objects) to a specified file.

  • CMSampleBuffer is a Core Foundation object holding compressed or uncompressed audio or video sample data.

  • CMTime is a struct that represents time as a rational number (a value over a timescale); see the short example after this list.

  • AVCaptureMovieFileOutput writes audio and video data to a file.

  • AVCaptureMetadataOutput is the metadata capture output. This output is remarkably versatile: it can detect barcodes, faces, QR codes, UPC-E product codes, and more.
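For example, CMTimeMake(1, 30) represents 1/30 of a second (one frame at 30 fps). A minimal sketch of working with CMTime:

    #import <CoreMedia/CoreMedia.h>

    CMTime frameDuration = CMTimeMake(1, 30);             // 1/30 second: value 1 over timescale 30
    CMTime twoSeconds = CMTimeMakeWithSeconds(2.0, 600);  // 2 seconds at a timescale of 600
    CMTime total = CMTimeAdd(frameDuration, twoSeconds);  // exact rational arithmetic, no floating-point drift
    NSLog(@"total = %f seconds", CMTimeGetSeconds(total)); // total = 2.033333 seconds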

Preparation

1. Check whether the app already has camera permission

AVAuthorizationStatus authStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];

If permission has not been requested yet, request it:

[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    // The handler may be called on an arbitrary queue; hop back to the main
    // queue asynchronously (a dispatch_sync here risks a deadlock).
    dispatch_async(dispatch_get_main_queue(), ^{
        if (granted) {
            // Permission granted
        } else {
            // Permission denied
        }
    });
}];
  2. If the app defaults to landscape, the video orientation needs to be rotated to match the screen orientation (a minimal sketch of the mapping follows below)
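A minimal sketch of such a mapping; the helper name videoOrientationForInterfaceOrientation: is my own and not part of the original code:

    // Hypothetical helper: map the current interface orientation to a capture video orientation.
    - (AVCaptureVideoOrientation)videoOrientationForInterfaceOrientation:(UIInterfaceOrientation)orientation {
        switch (orientation) {
            case UIInterfaceOrientationPortraitUpsideDown: return AVCaptureVideoOrientationPortraitUpsideDown;
            case UIInterfaceOrientationLandscapeLeft:      return AVCaptureVideoOrientationLandscapeLeft;
            case UIInterfaceOrientationLandscapeRight:     return AVCaptureVideoOrientationLandscapeRight;
            default:                                       return AVCaptureVideoOrientationPortrait;
        }
    }

The result can be assigned to the videoOrientation of the relevant AVCaptureConnection or of the preview layer's connection.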

Custom Camera Configuration


The main parts of the capture architecture are sessions, inputs, and outputs.

A capture session connects one or more inputs to one or more outputs. Inputs are sources of media and include capture devices such as the camera and microphone. Outputs take media data from the inputs; for example, an output can write data to a disk file to produce a movie file.

  1. Declare the following properties
@property (nonatomic, strong) AVCaptureSession *session; // Session: ties the inputs and outputs together and drives the capture device (camera)
@property (nonatomic, strong) AVCaptureDevice *device; // Video input device
@property (nonatomic, strong) AVCaptureDevice *audioDevice; // Audio input device
@property (nonatomic, strong) AVCaptureDeviceInput *deviceInput; // Video input
@property (nonatomic, strong) AVCaptureDeviceInput *audioInput; // Audio input
@property (nonatomic, strong) AVCaptureAudioDataOutput *audioPutData; // Audio data output
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoPutData; // Video data output
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer; // Preview layer
@property (nonatomic, strong) AVCaptureConnection *connection;
@property (nonatomic, strong) AVAssetWriter *writer; // Asset writer
@property (nonatomic, strong) AVAssetWriterInput *writerAudioInput; // Audio writer input
@property (nonatomic, strong) AVAssetWriterInput *writerVideoInput; // Video writer input
  2. Initialize the session. AVCaptureSession is the capture session that manages and coordinates the input and output devices.
    self.session = [[AVCaptureSession alloc] init];
    if ([self.session canSetSessionPreset:AVCaptureSessionPresetHigh]){
        self.session.sessionPreset = AVCaptureSessionPresetHigh;
    }else if ([self.session canSetSessionPreset:AVCaptureSessionPresetiFrame960x540]) {
        self.session.sessionPreset = AVCaptureSessionPresetiFrame960x540;
    }
  3. Get the video input device (camera)
    self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Enable continuous autofocus if supported (the original line only queried support; setting focusMode requires lockForConfiguration:)
    if ([self.device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus] && [self.device lockForConfiguration:nil]) {
        self.device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
        [self.device unlockForConfiguration];
    }
  4. Create the video input and add it to the session
    NSError *error = nil;
    self.deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.device error:&error];
    if (!error) {
        if ([self.session canAddInput:self.deviceInput]) {
            [self.session addInput:self.deviceInput];
        }
    }
  5. Create the video data output and add it to the session
    NSDictionary *videoSetting = @{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)};
    self.videoPutData = [[AVCaptureVideoDataOutput alloc] init];
    self.videoPutData.videoSettings = videoSetting;
    self.videoPutData.alwaysDiscardsLateVideoFrames = YES; // Discard late frames immediately to save memory (the default is YES)
    dispatch_queue_t videoQueue = dispatch_queue_create("video", DISPATCH_QUEUE_SERIAL); // the sample buffer delegate queue must be serial so frames are delivered in order
    [self.videoPutData setSampleBufferDelegate:self queue:videoQueue];
    if ([self.session canAddOutput:self.videoPutData]) {
        [self.session addOutput:self.videoPutData];
    }
    // Use this connection (imageConnection) to control the orientation of the captured video
    AVCaptureConnection *imageConnection = [self.videoPutData connectionWithMediaType:AVMediaTypeVideo];
    if (imageConnection.supportsVideoOrientation) {
        imageConnection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
    }
  6. Get the audio input device (microphone)
    self.audioDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio] firstObject];
  7. Create the audio input and add it to the session
    NSError *audioError = nil;
    self.audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.audioDevice error:&audioError];
    if (!audioError) {
        if ([self.session canAddInput:self.audioInput]) {
            [self.session addInput:self.audioInput];
        }
    }
  8. Create the audio data output and add it to the session
    self.audioPutData = [[AVCaptureAudioDataOutput alloc] init];
    if ([self.session canAddOutput:self.audioPutData]) {
        [self.session addOutput:self.audioPutData];
    }
    dispatch_queue_t audioQueue = dispatch_queue_create("audio", DISPATCH_QUEUE_SERIAL); // serial, for the same ordering reason as the video queue
    [self.audioPutData setSampleBufferDelegate:self queue:audioQueue]; // Set the sample buffer delegate
  9. Initialize the preview layer. The session drives the inputs to capture data, while the preview layer renders the captured frames on screen.
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:self.session];
    self.previewLayer.frame = CGRectMake(0, 0, width,height);
    self.previewLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight; // Orientation in which the layer displays the video
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.previewLayer];
  10. Start capturing
    [self.session startRunning];
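Note that startRunning is a blocking call that can take noticeable time, so it is best kept off the main thread. A minimal sketch:

    // startRunning blocks until the session has actually started; keep it off the main queue.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        [self.session startRunning];
    });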

Capture Settings (Optional)

  1. Switching cameras (one possible implementation of the getCameraDeviceWithPosition: helper is sketched after this step)
[self.session stopRunning];
    // 1. Get the current camera position
    AVCaptureDevicePosition position = self.deviceInput.device.position;
    
    // 2. Determine the position to switch to
    if (position == AVCaptureDevicePositionBack) {
        position = AVCaptureDevicePositionFront;
    } else {
        position = AVCaptureDevicePositionBack;
    }
    
    // 3. Create a new device for that position
    AVCaptureDevice *device = [self getCameraDeviceWithPosition:position];
    
    // 4. Create a new input from the new device
    AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    
    // 5. Swap the inputs on the session
    [self.session beginConfiguration];
    [self.session removeInput:self.deviceInput];
    [self.session addInput:newInput];
    [self.session commitConfiguration];
    self.deviceInput = newInput;
    
    [self.session startRunning];
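getCameraDeviceWithPosition: is not shown above; a minimal sketch of one possible implementation, using the same devicesWithMediaType: API the audio setup already uses:

    // Hypothetical helper: return the camera at the given position, or nil if there is none.
    - (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
        for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
            if (device.position == position) {
                return device;
            }
        }
        return nil;
    }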
  2. Flash
if ([self.device lockForConfiguration:nil]) {

        if ([self.device hasFlash]) {

            if (self.device.flashMode == AVCaptureFlashModeAuto) {
                self.device.flashMode = AVCaptureFlashModeOn;
                [self.flashBtn setImage:[UIImage imageNamed:@"shanguangdeng_kai"] forState:UIControlStateNormal];

            }else if (self.device.flashMode == AVCaptureFlashModeOn){
                self.device.flashMode = AVCaptureFlashModeOff;
                [self.flashBtn setImage:[UIImage imageNamed:@"shanguangdeng_guan"] forState:UIControlStateNormal];

            }else{

                self.device.flashMode = AVCaptureFlashModeAuto;
                [self.flashBtn setImage:[UIImage imageNamed:@"shanguangdeng_zidong"] forState:UIControlStateNormal];
            }
        }
        [self.device unlockForConfiguration];
    }
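One caveat: flashMode only affects still photo capture. For continuous light while recording video, the torch is what matters; a minimal sketch of toggling it on the same device:

    // Torch (continuous light) for video recording; flashMode applies only to photos.
    if ([self.device hasTorch] && [self.device lockForConfiguration:nil]) {
        self.device.torchMode = (self.device.torchMode == AVCaptureTorchModeOn) ? AVCaptureTorchModeOff : AVCaptureTorchModeOn;
        [self.device unlockForConfiguration];
    }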
  3. Tap to focus
// Add a tap-to-focus gesture
- (void)addTap {
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(focusGesture:)];
    [self.view addGestureRecognizer:tap];
}
- (void)focusGesture:(UITapGestureRecognizer*)gesture{
    CGPoint point = [gesture locationInView:gesture.view];
    CGSize size = self.view.bounds.size;
    // The focus point ranges from (0,0) at the viewfinder's top-left to (1,1) at its bottom-right; in practice the mapping can be off, so adjust to match your actual layout
    CGPoint focusPoint = CGPointMake(point.x / size.width, point.y / size.height);
    if ([self.device lockForConfiguration:nil]) {
        [self.session beginConfiguration];
        /***** The focus point must be set before the focus mode *****/
        // Focus point
        if ([self.device isFocusPointOfInterestSupported]) {
            [self.device setFocusPointOfInterest:focusPoint];
        }
        // Focus mode
        if ([self.device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            [self.device setFocusMode:AVCaptureFocusModeAutoFocus];
        }else{
            NSLog(@"聚焦模式修改失败");
        }
        // Exposure point
        if ([self.device isExposurePointOfInterestSupported]) {
            [self.device setExposurePointOfInterest:focusPoint];
        }
        // Exposure mode
        if ([self.device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
            [self.device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
        } else {
            NSLog(@"曝光模式修改失败");
        }
        [self.device unlockForConfiguration];
        [self.session commitConfiguration];
    }
}

Recording Method 1: Writing with AVAssetWriter

Recording first needs a path in the sandbox where the file data is written while recording; once all of the data has been written, the complete video is available at that path.

  1. Create the file path
- (NSURL *)createVideoFilePathUrl
{
    NSString *documentPath = [NSHomeDirectory() stringByAppendingString:@"/Documents/shortVideo"];

    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"yyyyMMddHHmmss"];

    NSString *destDateString = [dateFormatter stringFromDate:[NSDate date]];
    NSString *videoName = [destDateString stringByAppendingString:@".mp4"];

    NSString *filePath = [documentPath stringByAppendingFormat:@"/%@",videoName];

    NSFileManager *manager = [NSFileManager defaultManager];
    BOOL isDir;
    if (![manager fileExistsAtPath:documentPath isDirectory:&isDir]) {
        [manager createDirectoryAtPath:documentPath withIntermediateDirectories:YES attributes:nil error:nil];

    }
    
    return [NSURL fileURLWithPath:filePath];
}
  2. Start recording: complete the writer configuration

2.1 Get the storage path. It lives in the sandbox and must be unique.

self.preVideoURL = [self createVideoFilePathUrl];

2.2 Set up the writer on a background queue

dispatch_queue_t writeQueueCreate = dispatch_queue_create("writeQueueCreate", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(writeQueueCreate, ^{

});

2.3 Create the asset writer

NSError *error = nil;
self.writer = [AVAssetWriter assetWriterWithURL:self.preVideoURL fileType:AVFileTypeMPEG4 error:&error];

2.4 Create the video writer input and add it to the writer. Both the video and the audio writer inputs can be configured: format, dimensions, bitrate, frame rate, channels, and so on.

NSInteger numPixels = width * height;
// Bits per pixel
CGFloat bitsPerPixel = 12.0;
NSInteger bitsPerSecond = numPixels * bitsPerPixel;
// Bitrate and frame-rate settings
NSDictionary *compressionProperties = @{ AVVideoAverageBitRateKey : @(bitsPerSecond),
                                         AVVideoExpectedSourceFrameRateKey : @(30),
                                         AVVideoMaxKeyFrameIntervalKey : @(30),
                                         AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel };
// Video settings
NSDictionary *videoSetting = @{ AVVideoCodecKey : AVVideoCodecTypeH264,
                                AVVideoWidthKey : @(width),
                                AVVideoHeightKey : @(height),
                                AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
                                AVVideoCompressionPropertiesKey : compressionProperties };
self.writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSetting];
self.writerVideoInput.expectsMediaDataInRealTime = YES; // Must be YES: the data comes from the capture session in real time

if ([self.writer canAddInput:self.writerVideoInput]) {
     [self.writer addInput:self.writerVideoInput];
}

2.5 Create the audio writer input and add it to the writer

NSDictionary *audioSetting = @{ AVEncoderBitRatePerChannelKey : @(28000),
                                AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                AVNumberOfChannelsKey : @(1),
                                AVSampleRateKey : @(22050) };
self.writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSetting];

self.writerAudioInput.expectsMediaDataInRealTime = YES; // Must be YES: the data comes from the capture session in real time

if ([self.writer canAddInput:self.writerAudioInput]) {
    [self.writer addInput:self.writerAudioInput];
}

The code below only starts writing once the first video frame arrives, which avoids writing audio first and ending up with a clip that opens with sound but no picture. (In practice the problem is not very noticeable; add this guard as you see fit.)
startSessionAtSourceTime: sets the source time at which the written session begins.

  3. Write the data. Starting the session at the first frame's timestamp (startSessionAtSourceTime:) avoids a blank stretch at the start of the video.
    In the delegate callback captureOutput:didOutputSampleBuffer:fromConnection: (a sketch of the full method follows after this code), start the file writing when the first video sample arrives, then append every sample to the file:
    CMFormatDescriptionRef desMedia = CMSampleBufferGetFormatDescription(sampleBuffer);
    CMMediaType mediaType = CMFormatDescriptionGetMediaType(desMedia);
    if (mediaType == kCMMediaType_Video) {
        if (!self.canWritting) {
            [self.writer startWriting];
            CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            self.canWritting = YES;
            [self.writer startSessionAtSourceTime:timestamp];
        }
    }
    
    if (self.canWritting) {
        if (mediaType == kCMMediaType_Video) {
            if (self.writerVideoInput.readyForMoreMediaData) {
                BOOL success = [self.writerVideoInput appendSampleBuffer:sampleBuffer];
                if (!success) {
                    NSLog(@"video write failed");
                }
            }
        }else if (mediaType == kCMMediaType_Audio){
            if (self.writerAudioInput.readyForMoreMediaData) {
                BOOL success = [self.writerAudioInput appendSampleBuffer:sampleBuffer];
                if (!success) {
                    NSLog(@"audio write failed");
                }
            }
        }
    }
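For reference, a minimal sketch of the enclosing delegate method; both the video and the audio data outputs deliver samples through this same callback, and the writing code above goes inside it:

    // Shared callback of AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate.
    - (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        // ... the writing code shown above ...
    }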
  4. Finish recording
    Create a background queue and finish the writing there:
dispatch_queue_t writeQueue = dispatch_queue_create("writeQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(writeQueue, ^{
   if (weakSelf.writer.status == AVAssetWriterStatusWriting) {
         [weakSelf.writer finishWritingWithCompletionHandler:^{
               /// Completion handling
         }];
    }
});

Recording Method 2: Writing with AVCaptureMovieFileOutput

  • 1. Create the movie file output. Only a single video output object is needed; no separate audio output is required.
  @property (nonatomic, strong) AVCaptureMovieFileOutput *movieFileOutPut; // Movie file output
  …………
  …………
    // Create the movie file output and add it to the session
    self.movieFileOutPut = [[AVCaptureMovieFileOutput alloc] init];
    // Configure the output's connection
    AVCaptureConnection *captureConnection = [self.movieFileOutPut connectionWithMediaType:AVMediaTypeVideo];
    // Video stabilization was introduced with iOS 6 and the iPhone 4S; the iPhone 6 added a stronger, smoother mode known as cinematic video stabilization. Stabilization is configured on the AVCaptureConnection rather than on the capture device, and since not every device format supports every mode, check support before enabling it:
    if ([captureConnection isVideoStabilizationSupported]) {
        captureConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
    }
    // Keep the video orientation consistent with the preview layer
    captureConnection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
    // Add the output to the session
    if ([_session canAddOutput:self.movieFileOutPut]) {
        [_session addOutput:self.movieFileOutPut];
    }
  • 2. Create the storage path (same as in Method 1)
  • 3. Call the recording method with the path; the file is written automatically. No file-writer object needs to be configured.
  [self.movieFileOutPut startRecordingToOutputFileURL:self.preVideoURL recordingDelegate:self];  
  • 4. Stop recording
  [self.movieFileOutPut stopRecording];
  • 5. Monitor completion in the delegate method and retrieve the file (a sketch of the error handling follows below)
  -(void)captureOutput:(AVCaptureFileOutput *)output didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error {
    …………
}
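One subtlety worth handling there: when recording stops because a limit was reached (maximum duration, file size, or free disk space), error is non-nil even though a usable file may still have been written. A minimal sketch of checking AVErrorRecordingSuccessfullyFinishedKey before treating it as a failure:

    - (void)captureOutput:(AVCaptureFileOutput *)output didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error {
        BOOL recordedSuccessfully = YES;
        if (error) {
            // A non-nil error does not always mean the recording failed.
            NSNumber *finished = error.userInfo[AVErrorRecordingSuccessfullyFinishedKey];
            recordedSuccessfully = finished.boolValue;
        }
        if (recordedSuccessfully) {
            // The file at outputFileURL is usable: play it, save it, etc.
        }
    }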

AVCaptureMovieFileOutput provides methods to pause and resume recording, but they are only available on macOS.

AVAssetWriter does not support pausing. I tried pausing the file writing; the result was a blank segment with the audio out of order. The writer's status enum has no paused state, so pausing is simply not supported.

Comparing the Two Recording Methods

Similarities: both capture data through an AVCaptureSession, with the same video and audio inputs and the same preview.
Differences:

  • 1. AVCaptureMovieFileOutput is simpler, needing only a single output;
    AVAssetWriter needs two separate outputs, AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, and you process each output's data yourself
  • 2. AVAssetWriter exposes more configuration options and is more flexible
  • 3. File handling differs: AVAssetWriter gives you the live data stream.
    With AVCaptureMovieFileOutput, the system has already written the data to a file, so to trim a video you must read the complete movie back from the file and then process it;
    with AVAssetWriter you receive the data stream before it is assembled into a movie and can process the stream directly

Video Processing

After recording completes, the video file at the path created earlier can be played back, saved, and so on.
Saving

    PHPhotoLibrary *photoLibrary = [PHPhotoLibrary sharedPhotoLibrary];
    [photoLibrary performChanges:^{
        [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:self.preVideoURL];
    } completionHandler:^(BOOL success, NSError * _Nullable error) {
        if (success) {
            NSLog(@"已将视频保存至相册");
        } else {
            NSLog(@"未能保存视频到相册");
        }
    }];
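Saving requires photo library permission; a minimal sketch of requesting it before running the save block above (the NSPhotoLibraryUsageDescription key must also be present in Info.plist):

    #import <Photos/Photos.h>

    [PHPhotoLibrary requestAuthorization:^(PHAuthorizationStatus status) {
        if (status == PHAuthorizationStatusAuthorized) {
            // Safe to run the performChanges: save code above.
        } else {
            NSLog(@"Photo library access denied");
        }
    }];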

Photo Capture Settings (Optional)

For camera photo settings, see https://www.jianshu.com/p/e2de8a85b8aa
