Capturing Audio and Video Streams from the Camera and Microphone on iOS (Live Streaming)

I won't go into the theory here; it is fairly involved, and reading too much of it just gets confusing. Instead, this post walks through each step.
Step 1:
Create an AVCaptureSession. AVCaptureSession is the core class of AVFoundation for capturing video and audio; it coordinates the session's audio and video input and output streams.
Code:

- (AVCaptureSession *)session {
    if (!_session) {
        _session = [[AVCaptureSession alloc] init];
        _session.sessionPreset = AVCaptureSessionPresetHigh; // set the capture resolution
    }
    return _session;
}
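Not every device supports every preset, so if you want a specific resolution it is safer to check before assigning one. A minimal sketch, assuming you want 720p (the preset choice here is just an example):

// Prefer an explicit resolution preset, falling back to High if unsupported
if ([_session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    _session.sessionPreset = AVCaptureSessionPreset1280x720;
} else {
    _session.sessionPreset = AVCaptureSessionPresetHigh;
}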

Step 2:
Create an AVCaptureDevice. AVCaptureDevice is mainly used to access the iPhone's camera-related device properties, such as the following (a sketch showing how to actually set these properties comes after the device-lookup code below):

1. Front and back camera positions
enum {
    AVCaptureDevicePositionBack = 1,
    AVCaptureDevicePositionFront = 2
};
typedef NSInteger AVCaptureDevicePosition;

2. Flash mode
enum {
    AVCaptureFlashModeOff = 0,
    AVCaptureFlashModeOn = 1,
    AVCaptureFlashModeAuto = 2
};
typedef NSInteger AVCaptureFlashMode;

3. Torch mode
enum {
    AVCaptureTorchModeOff = 0,
    AVCaptureTorchModeOn = 1,
    AVCaptureTorchModeAuto = 2
};
typedef NSInteger AVCaptureTorchMode;

4. Focus mode
enum {
    AVCaptureFocusModeLocked = 0,
    AVCaptureFocusModeAutoFocus = 1,
    AVCaptureFocusModeContinuousAutoFocus = 2
};
typedef NSInteger AVCaptureFocusMode;

5. Exposure mode
enum {
    AVCaptureExposureModeLocked = 0,
    AVCaptureExposureModeAutoExpose = 1,
    AVCaptureExposureModeContinuousAutoExposure = 2
};
typedef NSInteger AVCaptureExposureMode;

6. White balance mode
enum {
    AVCaptureWhiteBalanceModeLocked = 0,
    AVCaptureWhiteBalanceModeAutoWhiteBalance = 1,
    AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance = 2
};
typedef NSInteger AVCaptureWhiteBalanceMode;

Code to get the back camera with AVCaptureDevice:

AVCaptureDevice *device;
NSArray *deviceArray = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *candidate in deviceArray) {
    if ([candidate position] == AVCaptureDevicePositionBack) {
        device = candidate;
        break;
    }
}
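As promised above, here is how those device properties are set. Any change requires locking the device for configuration first, and hardware support varies, so check before setting. A minimal sketch using the `device` found above (the chosen modes are only examples):

NSError *configError = nil;
if ([device lockForConfiguration:&configError]) {
    // Torch and focus support vary by device, so always check first
    if ([device isTorchModeSupported:AVCaptureTorchModeAuto]) {
        device.torchMode = AVCaptureTorchModeAuto;
    }
    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    [device unlockForConfiguration];
}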

Step 3:
Create an AVCaptureDeviceInput. AVCaptureDeviceInput is the input stream that feeds the data captured by the AVCaptureDevice into the session.
Code:

// Create an input from the capture device and add it to the session
NSError *error = nil;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:device error:&error];
if (input && [self.session canAddInput:input]) {
    [self.session addInput:input];
}
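Since the title also mentions the microphone: audio is added to the session the same way, as a second input. A minimal sketch (a matching AVCaptureAudioDataOutput would then be set up much like the video output in step 4):

// Grab the default microphone and add it as a second input
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *audioError = nil;
AVCaptureDeviceInput *audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:&audioError];
if (audioInput && [self.session canAddInput:audioInput]) {
    [self.session addInput:audioInput];
}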

Step 4:
Create an AVCaptureVideoDataOutput, the data output object. Both the data delegate and the output pixel format are configured in this step.
Code:

// Create a video data output stream
_videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// Deliver sample buffers to the delegate on a serial queue
[_videoDataOutput setSampleBufferDelegate:self queue:self.queue];
// Specify the pixel format; on iOS, kCVPixelBufferPixelFormatTypeKey is
// the only videoSettings key AVCaptureVideoDataOutput reliably supports
_videoDataOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
if ([self.session canAddOutput:_videoDataOutput]) {
    [self.session addOutput:_videoDataOutput];
}

// Implement the delegate method: every captured frame arrives here as a CMSampleBufferRef
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
}
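For live streaming, this callback is where you would pull the raw pixels out of each frame and hand them to your encoder. A minimal sketch of the buffer access (the encoding itself is out of scope here):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Get the pixel buffer holding this frame's BGRA data
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) return;
    // Lock before reading the pixel data, unlock when done
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    NSLog(@"captured frame: %zu x %zu", width, height);
    // ... hand the frame to your encoder here ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}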

Step 5:
Create the AVCaptureVideoPreviewLayer that displays the camera feed.
Code:

_preLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
// Alternatively: _preLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
_preLayer.frame = [UIScreen mainScreen].bounds;
// Fill the layer, cropping if the aspect ratios differ
_preLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[_streamView.layer addSublayer:_preLayer];

Step 6:
Start capturing.
Code:

[self.session startRunning];
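Note that -startRunning blocks until capture actually starts, so Apple recommends not calling it on the main thread. A minimal sketch of dispatching it to a background queue (the queue choice here is just an example):

// startRunning blocks, so dispatch it off the main thread to keep the UI responsive
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    [self.session startRunning];
});

Call [self.session stopRunning] (again off the main thread) when you want to stop capturing.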

That covers the basic steps; I'll write up the remaining details in a later post.
