VideoToolBox 1

Some brief notes on my own understanding after reading 熊皮皮's article.

Preface: AVCaptureSession acts as the manager, coordinating inputs and outputs. Since my understanding of audio and video is about as beginner as it gets, I won't go into audio/video theory here; see the original article if you want the details.

1. The input object: AVCaptureDeviceInput, which is bound to an AVCaptureDevice.

AVCaptureDevice covers hardware such as the camera and the microphone. How to get one:

AVCaptureDevice *avCaptureDevice;
// The media type decides which hardware source is returned:
// AVMediaTypeVideo is the camera, AVMediaTypeAudio is the microphone.
NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in cameras) {
  // Pick the back camera here
  if (device.position == AVCaptureDevicePositionBack) {
      avCaptureDevice = device;
  }
}

I'll come back to switching cameras in more detail later; a rough sketch follows below.
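A minimal sketch of what that might look like, assuming the session and the current input are kept around as hypothetical `session` / `currentInput` properties:

// Minimal sketch of switching between the front and back cameras.
// `session` and `currentInput` are hypothetical properties holding the
// running AVCaptureSession and its current AVCaptureDeviceInput.
- (void)switchCamera {
    AVCaptureDevicePosition newPosition =
        (self.currentInput.device.position == AVCaptureDevicePositionBack)
            ? AVCaptureDevicePositionFront
            : AVCaptureDevicePositionBack;

    AVCaptureDevice *newDevice = nil;
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == newPosition) { newDevice = device; break; }
    }
    if (!newDevice) return;

    NSError *error = nil;
    AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:newDevice error:&error];
    if (!newInput) return;

    // Wrap the swap in begin/commitConfiguration so it is applied atomically.
    [self.session beginConfiguration];
    [self.session removeInput:self.currentInput];
    if ([self.session canAddInput:newInput]) {
        [self.session addInput:newInput];
        self.currentInput = newInput;
    } else {
        [self.session addInput:self.currentInput]; // roll back on failure
    }
    [self.session commitConfiguration];
}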

2. Create the managing session and add the input

NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:avCaptureDevice error:&error];
if (!videoInput)
{
    return;
}

AVCaptureSession *avCaptureSession = [[AVCaptureSession alloc] init];
avCaptureSession.sessionPreset = AVCaptureSessionPresetHigh; // AVCaptureSessionPresetHigh is the default, so this line can be omitted
[avCaptureSession addInput:videoInput];

3. The output object: AVCaptureVideoDataOutput. Its output parameters need configuring (the pixel format here; the resolution follows from the session preset), plus a delegate protocol for processing the output data

AVCaptureVideoDataOutput *avCaptureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// YUV 4:2:0 is typically used for SD video, YUV 4:2:2 for HD video
NSDictionary *settings = @{(__bridge id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)};
avCaptureVideoDataOutput.videoSettings = settings;

dispatch_queue_t queue = dispatch_queue_create("com.github.michael-lfx.back_camera_io", NULL);
// Set the delegate here; the data is handled in its callbacks. Since they run on a background queue, ordering frames by timestamp will come up later.
[avCaptureVideoDataOutput setSampleBufferDelegate:self queue:queue];
[avCaptureSession addOutput:avCaptureVideoDataOutput];
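Because the delegate fires on that background queue, the usual handle for ordering frames later is each buffer's presentation timestamp. A minimal sketch of reading it (the helper name is mine; call it from the delegate callback shown further down):

// Minimal sketch: read the presentation timestamp off a sample buffer,
// which is what you would sort on when frames are processed off the main thread.
static void LogFramePTS(CMSampleBufferRef sampleBuffer) {
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    NSLog(@"frame PTS = %.3fs", CMTimeGetSeconds(pts));
}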

4. Add a preview layer to the view

AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:avCaptureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

5. Start the session.

[avCaptureSession startRunning];

At this point, you can see the image captured by the camera.
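One caveat: startRunning is a blocking call, so Apple's guidance is to invoke it off the main thread. A minimal sketch (the queue label is arbitrary):

// startRunning blocks until the session actually starts (or fails),
// so dispatch it to a serial background queue to keep the UI responsive.
dispatch_queue_t sessionQueue = dispatch_queue_create("com.example.session_queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(sessionQueue, ^{
    [avCaptureSession startRunning];
});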


Now a few words about handling the data in the delegate.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
  
    // Get the uncompressed data (the pixelBuffer) from the sampleBuffer. (I also tried the AudioBufferList route, but its pile of parameters baffled me, so I've set it aside for now.)
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        NSLog(@"kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange -> planar buffer");
    }
    CMVideoFormatDescriptionRef desc = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &desc);
    CFDictionaryRef extensions = CMFormatDescriptionGetExtensions(desc);
    NSLog(@"extensions = %@", extensions);
    if (desc) CFRelease(desc); // the Create call returns a +1 reference, so release it
}

The output:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange -> planar buffer
extensions = {
    CVBytesPerRow = 2904;
    CVImageBufferColorPrimaries = "ITU_R_709_2";
    CVImageBufferTransferFunction = "ITU_R_709_2";
    CVImageBufferYCbCrMatrix = "ITU_R_709_2";
    Version = 2;
}
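Since the buffer is bi-planar (NV12: a luma plane plus an interleaved CbCr plane), its raw bytes can be inspected plane by plane. A minimal sketch, reusing the pixelBuffer from the delegate above:

// Lock the buffer before touching its memory; read-only access is enough here.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

// Plane 0 is luma (Y); plane 1 is interleaved chroma (CbCr).
uint8_t *yPlane  = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t  yStride  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
size_t  yHeight  = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
uint8_t *uvPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t  uvStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
size_t  uvHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);

NSLog(@"Y %p: %zu bytes/row x %zu rows; CbCr %p: %zu bytes/row x %zu rows",
      yPlane, yStride, yHeight, uvPlane, uvStride, uvHeight);

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);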

Encoding

// Get the width and height of the image output by the camera
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

static VTCompressionSessionRef compressionSession;
if (compressionSession == NULL) {
    // This code runs once per frame, so guard against recreating the session every time.
    OSStatus status = VTCompressionSessionCreate(NULL,
                                                 (int32_t)width, (int32_t)height, // the API takes int32_t, so cast from size_t
                                                 kCMVideoCodecType_H264,
                                                 NULL,
                                                 NULL,
                                                 NULL, &compressionOutputCallback, NULL, &compressionSession);
    if (status != noErr) {
        NSLog(@"VTCompressionSessionCreate failed with status(%d)", (int)status);
        return;
    }
}

// frameCount is a running counter, incremented once per captured frame,
// used here as the presentation timestamp on a 1000 Hz timescale.
CMTime presentationTimeStamp = CMTimeMake(frameCount, 1000);
VTEncodeInfoFlags flags;
// Submit the frame to the hardware encoder.
VTCompressionSessionEncodeFrame(compressionSession, pixelBuffer, presentationTimeStamp, kCMTimeInvalid, NULL, NULL, &flags);
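Before the first frame is submitted, the session can also be tuned with VTSessionSetProperty. A minimal sketch of a few common knobs (the numeric values are illustrative placeholders, not recommendations):

// Real-time encoding, suited to live capture rather than offline transcoding.
VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
// Target profile/level: Main profile with automatic level selection.
VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_ProfileLevel,
                     kVTProfileLevel_H264_Main_AutoLevel);
// Rough average bitrate target (~2 Mbps here, purely illustrative).
VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_AverageBitRate,
                     (__bridge CFTypeRef)@(2 * 1024 * 1024));
// Force a keyframe at least every 30 frames.
VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_MaxKeyFrameInterval,
                     (__bridge CFTypeRef)@(30));
VTCompressionSessionPrepareToEncodeFrames(compressionSession);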

The encode callback is implemented as follows:

static void compressionOutputCallback(void * CM_NULLABLE outputCallbackRefCon,
                                      void * CM_NULLABLE sourceFrameRefCon,
                                      OSStatus status,
                                      VTEncodeInfoFlags infoFlags,
                                      CM_NULLABLE CMSampleBufferRef sampleBuffer ) {

    // Once encoding succeeds, the sampleBuffer's data (its CMBlockBuffer) is populated.
    if (status != noErr) {
        NSLog(@"%s with status(%d)", __FUNCTION__, (int)status);
        return;
    }
    // kVTEncodeInfo_FrameDropped is a bit flag, so test it with a bitwise AND.
    if (infoFlags & kVTEncodeInfo_FrameDropped) {
        NSLog(@"%s with frame dropped.", __FUNCTION__);
        return;
    }

    /* ------ debugging aid ------ */
    CMFormatDescriptionRef fmtDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
    CFDictionaryRef extensions = CMFormatDescriptionGetExtensions(fmtDesc);
    NSLog(@"extensions = %@", extensions);
    CMItemCount count = CMSampleBufferGetNumSamples(sampleBuffer);
    NSLog(@"samples count = %ld", (long)count); // CMItemCount is a long, so use %ld
    /* ====== debugging aid ====== */

    // Stream the encoded data or write it to a file here.
}

It prints the following:

extensions = {
    FormatName = "H.264";
    SampleDescriptionExtensionAtoms =     {
        avcC = <014d0028 ffe1000b 274d0028 ab603c01 13f2a001 000428ee 3c30>;
    };
}
samples count = 1
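That avcC atom packs the H.264 parameter sets (SPS/PPS). They can also be read straight off the format description inside the callback; a minimal sketch:

// Minimal sketch: extract the SPS (index 0) and PPS (index 1) from the
// encoded sample buffer's format description.
CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
const uint8_t *sps = NULL, *pps = NULL;
size_t spsSize = 0, ppsSize = 0, parameterSetCount = 0;
int nalHeaderLength = 0;
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(fmt, 0, &sps, &spsSize,
                                                   &parameterSetCount, &nalHeaderLength);
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(fmt, 1, &pps, &ppsSize, NULL, NULL);
NSLog(@"SPS %zu bytes, PPS %zu bytes, NAL length field = %d bytes",
      spsSize, ppsSize, nalHeaderLength);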

The detailed contents of the sampleBuffer:

CMSampleBuffer 0x126e9fd80 retainCount: 1 allocator: 0x1a227cb68
    invalid = NO
    dataReady = YES
    makeDataReadyCallback = 0x0
    makeDataReadyRefcon = 0x0
    formatDescription =  {
    mediaType:'vide' 
    mediaSubType:'avc1' 
    mediaSpecific: {
        codecType: 'avc1'        dimensions: 1920 x 1080 
    } 
    extensions: {{type = immutable dict, count = 2,
entries =>
    0 : {contents = "SampleDescriptionExtensionAtoms"} = {type = immutable dict, count = 1,
entries =>
    2 : {contents = "avcC"} = {length = 26, capacity = 26, bytes = 0x014d0028ffe1000b274d0028ab603c01 ... a001000428ee3c30}
}

    2 : {contents = "FormatName"} = H.264
}
}
}
    sbufToTrackReadiness = 0x0
    numSamples = 1
    sampleTimingArray[1] = {
        {PTS = {196709596065916/1000000000 = 196709.596}, DTS = {INVALID}, duration = {INVALID}},
    }
    sampleSizeArray[1] = {
        sampleSize = 5707,
    }
    sampleAttachmentsArray[1] = {
        sample 0:
            DependsOnOthers = false
    }
    dataBuffer = 0x126e9fc50
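The dataBuffer at the bottom of that dump holds the encoded NAL units in AVCC form, each prefixed by a big-endian length field (4 bytes, matching the nalHeaderLength reported above). A minimal sketch of walking them, which is the groundwork for the "stream or write to file" step:

// Minimal sketch: iterate the AVCC-format NAL units inside the encoded
// sample buffer, assuming a 4-byte length prefix as reported above.
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t totalLength = 0;
char *dataPointer = NULL;
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &totalLength, &dataPointer);

static const size_t kAVCCHeaderLength = 4;
size_t offset = 0;
while (offset + kAVCCHeaderLength < totalLength) {
    // Read the big-endian 4-byte NAL unit length.
    uint32_t nalLength = 0;
    memcpy(&nalLength, dataPointer + offset, kAVCCHeaderLength);
    nalLength = CFSwapInt32BigToHost(nalLength);

    // dataPointer + offset + kAVCCHeaderLength points at nalLength bytes of
    // NAL payload; replace the length prefix with a 00 00 00 01 start code
    // to produce Annex B output for streaming or a raw .h264 file.
    NSLog(@"NAL unit of %u bytes", (unsigned)nalLength);

    offset += kAVCCHeaderLength + nalLength;
}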
