Real-Time Hardware Encoding of Video Frames

Hardware video encoding has been open to third-party apps since iOS 8.0. To use it, import the framework:

@import VideoToolbox;

The following describes how to use a VTCompressionSession to compress incoming video data, based on the Apple Developer documentation.

The steps are as follows:

1: Create a compression session
VTCompressionSessionCreate(CFAllocatorRef  _Nullable allocator,//allocator; pass NULL to use the default
                           int32_t width,//width of the video frames, in pixels
                           int32_t height,//height of the video frames, in pixels
                           CMVideoCodecType codecType,//codec type, e.g. kCMVideoCodecType_H264
                           CFDictionaryRef  _Nullable encoderSpecification,//set this to require a specific video encoder; pass NULL to let VideoToolbox choose one
                           CFDictionaryRef  _Nullable sourceImageBufferAttributes,//attributes for the source pixel buffer pool; pass NULL if you don't want VideoToolbox to create one. Using pixel buffers not allocated by VideoToolbox increases the chance the image data will have to be copied
                           CFAllocatorRef  _Nullable compressedDataAllocator,//allocator for the compressed data; pass NULL to use the default
                           VTCompressionOutputCallback  _Nullable outputCallback,//callback invoked asynchronously, on another thread, by VTCompressionSessionEncodeFrame; may be NULL only if you encode frames with VTCompressionSessionEncodeFrameWithOutputHandler
                           void * _Nullable outputCallbackRefCon,//reference value passed to the output callback (typically your instance); may be NULL if the callback needs no context
                           VTCompressionSessionRef  _Nullable * _Nonnull compressionSessionOut)//receives the new compression session

Example:

OSStatus status = VTCompressionSessionCreate(NULL, width, height, kCMVideoCodecType_H264, NULL, NULL, NULL, didCompressH264, (__bridge void *)(self), &EncodingSession);
if (status != noErr)
{
    NSLog(@"Error: VTCompressionSessionCreate failed (%d)", (int)status);
    return;
}
//continue...
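
Optionally, you can then ask the session to allocate its encoder resources up front rather than lazily on the first frame (a small addition beyond the article's steps; VTCompressionSessionPrepareToEncodeFrames is a standard VideoToolbox call):

// Optional: allocate encoder resources now instead of on the first encode call.
VTCompressionSessionPrepareToEncodeFrames(EncodingSession);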
2: (Optional) Configure the session's properties (see Compression Properties)

Use VTSessionSetProperty(_:_:_:) or VTSessionSetProperties(_:_:).

Example:

VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);//run the compression in real time
VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_Baseline_4_1);

SInt32 bitRate = width*height*50;  //average bit rate, in bits per second: higher values give better quality but larger frames
CFNumberRef ref = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bitRate);
VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_AverageBitRate, ref);
CFRelease(ref);

int frameInterval = 10; //maximum keyframe interval: a smaller interval means more keyframes, hence better error recovery but more data
CFNumberRef frameIntervalRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &frameInterval);
VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_MaxKeyFrameInterval, frameIntervalRef);
CFRelease(frameIntervalRef);

There are many other properties; see the Compression Properties documentation for the full list. A couple more examples follow below.
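
For instance (a minimal sketch; these are standard VideoToolbox property keys, but which ones you need depends on your use case):

// Hint the encoder at the expected frame rate, in frames per second.
int fps = 30;
CFNumberRef fpsRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &fps);
VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_ExpectedFrameRate, fpsRef);
CFRelease(fpsRef);

// Disallow frame reordering (no B-frames) to keep latency low for live streaming.
VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_AllowFrameReordering, kCFBooleanFalse);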

3: Encode a frame
VTCompressionSessionEncodeFrame(VTCompressionSessionRef  _Nonnull session,//the compression session created above
                                CVImageBufferRef  _Nonnull imageBuffer,//buffer containing the video frame to compress; must be non-NULL
                                CMTime presentationTimeStamp,//presentation timestamp for this frame, attached to the output sample buffer; every timestamp must be greater than the previous frame's
                                CMTime duration,//presentation duration for this frame, attached to the output sample buffer; pass kCMTimeInvalid if you have no duration information
                                CFDictionaryRef  _Nullable frameProperties,//per-frame properties, e.g. to force this frame to be a keyframe
                                void * _Nullable sourceFrameRefCon,//your reference value for this frame, passed through to the output callback
                                VTEncodeInfoFlags * _Nullable infoFlagsOut)//receives information about the encode operation; may be NULL

Example:

- (void)encode:(CMSampleBufferRef)sampleBuffer
{
    if (EncodingSession == NULL)
    {
        return;
    }
    dispatch_sync(aQueue, ^{
        frameCount++;
        // The raw pixel buffer to compress.
        CVImageBufferRef imageBuffer = (CVImageBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
        // Presentation timestamps must be strictly increasing.
        CMTime presentationTimeStamp = CMTimeMake(frameCount, 1000);
        VTEncodeInfoFlags flags;
        OSStatus statusCode = VTCompressionSessionEncodeFrame(EncodingSession,
                                                              imageBuffer,
                                                              presentationTimeStamp,
                                                              kCMTimeInvalid,
                                                              NULL, NULL, &flags);
        if (statusCode != noErr)
        {
            // Encoding failed; tear the session down.
            if (EncodingSession != NULL)
            {
                VTCompressionSessionInvalidate(EncodingSession);
                CFRelease(EncodingSession);
                EncodingSession = NULL;
                return;
            }
        }
    });
}
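
In practice the sample buffers fed to encode: usually come from the camera. A minimal sketch of the wiring, assuming this class adopts AVCaptureVideoDataOutputSampleBufferDelegate and is set as the delegate of an AVCaptureVideoDataOutput:

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Feed every captured frame into the compression session.
    [self encode:sampleBuffer];
}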
4: Force the completion of some or all pending video frames.

Call VTCompressionSessionCompleteFrames(_:_:):

VTCompressionSessionCompleteFrames(VTCompressionSessionRef  _Nonnull session,//the compression session
                                   CMTime completeUntilPresentationTimeStamp) //presentation timestamp up to which pending frames should be completed

If completeUntilPresentationTimeStamp is numeric, frames with that presentation timestamp and all earlier ones are emitted before the call returns.
If completeUntilPresentationTimeStamp is non-numeric (e.g. kCMTimeInvalid), all pending frames are emitted before the call returns.

Example:

VTCompressionSessionCompleteFrames(EncodingSession, kCMTimeInvalid); 
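
To flush only frames up to a specific point instead, pass a numeric timestamp. A sketch, reusing the frameCount-based timestamps from the encode: example above:

// Complete every pending frame with a presentation timestamp <= frameCount/1000.
VTCompressionSessionCompleteFrames(EncodingSession, CMTimeMake(frameCount, 1000));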
5: When you want to finish encoding.

Call VTCompressionSessionInvalidate(_:) to invalidate the session, then CFRelease to release its memory.

Example:

VTCompressionSessionInvalidate(EncodingSession);
CFRelease(EncodingSession);
EncodingSession = NULL;
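
Putting steps 4 and 5 together, a typical teardown might look like the following sketch (endEncode is a hypothetical method name; the point is to flush pending frames before invalidating):

- (void)endEncode
{
    // Flush all pending frames, then tear the session down.
    VTCompressionSessionCompleteFrames(EncodingSession, kCMTimeInvalid);
    VTCompressionSessionInvalidate(EncodingSession);
    CFRelease(EncodingSession);
    EncodingSession = NULL;
}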

=============================================

When a frame finishes encoding, the result is returned asynchronously through the output callback registered in step 1:

typedef void (*VTCompressionOutputCallback)(
        void * CM_NULLABLE outputCallbackRefCon,
        void * CM_NULLABLE sourceFrameRefCon, 
        OSStatus status, 
        VTEncodeInfoFlags infoFlags,
        CM_NULLABLE CMSampleBufferRef sampleBuffer );

An example implementation of this callback (didCompressH264, as registered in step 1):

void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags,
                     CMSampleBufferRef sampleBuffer)
{
    if (status != noErr) return;
    
    if (!CMSampleBufferDataIsReady(sampleBuffer))
    {
        NSLog(@"didCompressH264 data is not ready");
        return;
    }
    H264HwEncoderImpl *encoder = (__bridge H264HwEncoderImpl *)outputCallbackRefCon;
    
    // A frame is a keyframe if its attachments do NOT contain kCMSampleAttachmentKey_NotSync.
    CFArrayRef array = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true);
    CFDictionaryRef attachments = (CFDictionaryRef)CFArrayGetValueAtIndex(array, 0);
    bool keyframe = !CFDictionaryContainsKey(attachments, kCMSampleAttachmentKey_NotSync);
    
    if (keyframe)
    {
        CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
        
        size_t sparameterSetSize, sparameterSetCount;
        const uint8_t *sparameterSet;
        //extract the SPS (parameter set index 0)
        OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format,
                                                                                 0,
                                                                                 &sparameterSet,
                                                                                 &sparameterSetSize,
                                                                                 &sparameterSetCount,
                                                                                 NULL );
        if (statusCode == noErr)
        {
            size_t pparameterSetSize, pparameterSetCount;
            const uint8_t *pparameterSet;
            //extract the PPS (parameter set index 1)
            OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pparameterSet, &pparameterSetSize, &pparameterSetCount, NULL );
            if (statusCode == noErr)
            {
                encoder->sps = [NSData dataWithBytes:sparameterSet length:sparameterSetSize];
                encoder->pps = [NSData dataWithBytes:pparameterSet length:pparameterSetSize];
                if (encoder->_delegate)
                {
                    [encoder->_delegate gotSpsPps:encoder->sps pps:encoder->pps];
                }
            }
        }
    }
    
    //extract the encoded NAL units (e.g. the IDR slice of a keyframe)
    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length, totalLength;
    char *dataPointer;
    OSStatus statusCodeRet = CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
    if (statusCodeRet == noErr) {
        
        size_t bufferOffset = 0;
        static const int AVCCHeaderLength = 4; //each NAL unit is prefixed with a 4-byte big-endian length
        while (bufferOffset < totalLength - AVCCHeaderLength)
        {
            uint32_t NALUnitLength = 0;
            memcpy(&NALUnitLength, dataPointer + bufferOffset, AVCCHeaderLength);
            NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
            NSData *data = [[NSData alloc] initWithBytes:(dataPointer + bufferOffset + AVCCHeaderLength) length:NALUnitLength];
            [encoder->_delegate gotEncodedData:data isKeyFrame:keyframe];
            bufferOffset += AVCCHeaderLength + NALUnitLength;
        }
        
    }
}
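
The delegate methods gotSpsPps:pps: and gotEncodedData:isKeyFrame: are where these NAL units get packaged for your target format. For example, to write a raw Annex B H.264 stream, a delegate can prepend the standard 00 00 00 01 start code to each unit. A minimal sketch (fileHandle is a hypothetical NSFileHandle property, not part of the original code):

- (void)gotSpsPps:(NSData *)sps pps:(NSData *)pps
{
    // Write the SPS and PPS once, each prefixed with the Annex B start code.
    const char startCode[] = "\x00\x00\x00\x01";
    NSData *startCodeData = [NSData dataWithBytes:startCode length:4];
    [self.fileHandle writeData:startCodeData];
    [self.fileHandle writeData:sps];
    [self.fileHandle writeData:startCodeData];
    [self.fileHandle writeData:pps];
}

- (void)gotEncodedData:(NSData *)data isKeyFrame:(BOOL)isKeyFrame
{
    // Prefix every NAL unit with the start code before writing it out.
    const char startCode[] = "\x00\x00\x00\x01";
    [self.fileHandle writeData:[NSData dataWithBytes:startCode length:4]];
    [self.fileHandle writeData:data];
}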

Breakdown:
The SPS and PPS can be extracted with:

OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format,
                                                                         0,
                                                                         &sparameterSet,
                                                                         &sparameterSetSize,
                                                                         &sparameterSetCount,
                                                                         NULL );

The encoded NAL units (e.g. the IDR slice of a keyframe) can be extracted with:

    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length, totalLength;
    char *dataPointer;//the data comes out in AVCC format: the first 4 bytes of each unit are its length (in big-endian byte order, so it must be swapped), followed by the payload; the buffer may contain several such length-prefixed units back to back
    OSStatus statusCodeRet = CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
    if (statusCodeRet == noErr) {
        size_t bufferOffset = 0;
        static const int AVCCHeaderLength = 4;
        while (bufferOffset < totalLength - AVCCHeaderLength)
        {
            uint32_t NALUnitLength = 0;
            memcpy(&NALUnitLength, dataPointer + bufferOffset, AVCCHeaderLength);//copy the 4-byte length prefix into NALUnitLength
            NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);//swap from big-endian to host byte order to get the payload length
            NSData* data = [[NSData alloc] initWithBytes:(dataPointer + bufferOffset + AVCCHeaderLength) length:NALUnitLength];
            [encoder->_delegate gotEncodedData:data isKeyFrame:keyframe];
            bufferOffset += AVCCHeaderLength + NALUnitLength;
        }
    }
