iOS Camera Capture: Basic Usage of AVCaptureDevice (Swift)

1. Basics
The AVFoundation framework provides several classes for image capture. Through them you can access the raw data coming from the camera device and control its components:

  • AVCaptureDevice represents the camera hardware; it lets you configure hardware properties such as exposure, lens position, flash, and white balance
  • AVCaptureSession manages the input and output data streams
  • AVCaptureVideoPreviewLayer is a CALayer subclass that can automatically display the live image produced by the camera

2. Permissions (only the camera and microphone are used here)
The usage descriptions must be set up in advance: add the NSCameraUsageDescription and NSMicrophoneUsageDescription keys to Info.plist, otherwise the app crashes the first time it accesses the camera or microphone.
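
At runtime you can also check and request access before starting the session. A minimal sketch (the demo below assumes authorization has already been granted):

    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        break // already authorized
    case .notDetermined:
        // Ask for camera access; the same call with .audio covers the microphone
        AVCaptureDevice.requestAccess(for: .video) { granted in
            print("camera access granted: \(granted)")
        }
    default:
        print("camera access denied; direct the user to Settings")
    }
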
3. Demo (the permission checks are omitted)

First, initialize the video and audio inputs and outputs and add them to the session:

    // 1. Video input
    // 1.1 Get the front-facing camera by default
    //     (AVCaptureDevice.devices() is deprecated; AVCaptureDevice.DiscoverySession is the modern replacement)
    let devices = AVCaptureDevice.devices()
    guard let device = devices.filter({ $0.position == .front }).first else {
        print("get front video AVCaptureDevice failed!")
        return
    }
    // 1.2 Wrap the device in an input
    guard let input = try? AVCaptureDeviceInput(device: device) else {
        print("get front video AVCaptureDeviceInput failed!")
        return
    }
    self.videoInput = input

    // 2. Video output
    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(self, queue: DispatchQueue.global())
    self.videoOutput = output

    // 3. Add the video input and output to the session
    addInputOutputToSession(input, output)

The audio input and output are set up the same way:

    // 1. Audio input
    guard let device = AVCaptureDevice.default(for: .audio) else {
        print("get audio AVCaptureDevice failed!")
        return
    }
    guard let input = try? AVCaptureDeviceInput(device: device) else {
        print("get audio AVCaptureDeviceInput failed!")
        return
    }

    // 2. Audio output
    let output = AVCaptureAudioDataOutput()
    output.setSampleBufferDelegate(self, queue: DispatchQueue.global())

    // 3. Add the audio input and output to the session
    addInputOutputToSession(input, output)
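
addInputOutputToSession(_:_:) is the author's helper and its body is not shown above; a minimal sketch, assuming a stored `session` property, might look like this:

    func addInputOutputToSession(_ input: AVCaptureInput, _ output: AVCaptureOutput) {
        session.beginConfiguration()
        if session.canAddInput(input) {
            session.addInput(input)
        }
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
        session.commitConfiguration()
    }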

Create the preview layer

    // 1. Create the preview layer
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    // 2. Configure it
    previewLayer.frame = view.bounds
    // 3. Add the layer to the view
    view.layer.insertSublayer(previewLayer, at: 0)
    self.previewLayer = previewLayer

Add a movie file output and write the recording to the app sandbox

    // Remove any movie output that was added previously
    if self.movieOutput != nil {
        session.removeOutput(self.movieOutput!)
    }

    let fileOutput = AVCaptureMovieFileOutput()
    self.movieOutput = fileOutput

    if session.canAddOutput(fileOutput) {
        session.addOutput(fileOutput)
    }

    // The video connection only becomes available after the output has been added to the session
    let connection = fileOutput.connection(with: .video)
    connection?.automaticallyAdjustsVideoMirroring = true

    // Write to the Documents directory of the app sandbox
    let filePath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first! + "/test.mp4"
    print("filePath: \(filePath)")
    let fileURL = URL(fileURLWithPath: filePath)

    // Recording fails if a file already exists at the destination URL
    try? FileManager.default.removeItem(at: fileURL)

    fileOutput.startRecording(to: fileURL, recordingDelegate: self)
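
startRecording(to:recordingDelegate:) requires the delegate to conform to AVCaptureFileOutputRecordingDelegate. A minimal implementation might look like this (ViewController is a placeholder for whatever class owns the session):

    extension ViewController: AVCaptureFileOutputRecordingDelegate {
        func fileOutput(_ output: AVCaptureFileOutput,
                        didFinishRecordingTo outputFileURL: URL,
                        from connections: [AVCaptureConnection],
                        error: Error?) {
            if let error = error {
                print("recording finished with error: \(error)")
            } else {
                print("movie written to \(outputFileURL)")
            }
        }
    }
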
Start capturing

    session.startRunning()

    setupPreviewLayer()

    // Record video and write it to a file
    setupMovieFileOutput()
Stop capturing

    // Stop writing the movie file
    movieOutput?.stopRecording()

    session.stopRunning()

    previewLayer?.removeFromSuperlayer()

Focus
AVCaptureFocusMode
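The focus mode (AVCaptureFocusMode) is set on the AVCaptureDevice while holding its configuration lock. A minimal sketch, assuming `device` is the current camera:

    if device.isFocusModeSupported(.continuousAutoFocus) {
        do {
            try device.lockForConfiguration()
            if device.isFocusPointOfInterestSupported {
                // Normalized coordinates: (0,0) is top-left, (1,1) is bottom-right
                device.focusPointOfInterest = CGPoint(x: 0.5, y: 0.5)
            }
            device.focusMode = .continuousAutoFocus
            device.unlockForConfiguration()
        } catch {
            print("lockForConfiguration failed: \(error)")
        }
    }
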
Exposure
setExposureTargetBias
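setExposureTargetBias(_:completionHandler:) applies exposure compensation in EV units, again under the configuration lock. A sketch, with `device` the current camera and 0.5 EV as an arbitrary example value:

    do {
        try device.lockForConfiguration()
        // Clamp the requested bias to the device's supported range
        let bias = min(max(0.5, device.minExposureTargetBias), device.maxExposureTargetBias)
        device.setExposureTargetBias(bias) { syncTime in
            print("exposure bias applied at \(syncTime.seconds)")
        }
        device.unlockForConfiguration()
    } catch {
        print("lockForConfiguration failed: \(error)")
    }
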
White balance
deviceWhiteBalanceGainsForTemperatureAndTintValues
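In Swift the method is deviceWhiteBalanceGains(for:); it converts a temperature/tint pair into per-channel gains that can then be locked in. A sketch, with `device` the current camera and 5600 K / zero tint as example values:

    do {
        try device.lockForConfiguration()
        let temperatureAndTint = AVCaptureDevice.WhiteBalanceTemperatureAndTintValues(temperature: 5600, tint: 0)
        var gains = device.deviceWhiteBalanceGains(for: temperatureAndTint)
        // Each gain must stay within 1.0...maxWhiteBalanceGain
        gains.redGain = min(max(gains.redGain, 1.0), device.maxWhiteBalanceGain)
        gains.greenGain = min(max(gains.greenGain, 1.0), device.maxWhiteBalanceGain)
        gains.blueGain = min(max(gains.blueGain, 1.0), device.maxWhiteBalanceGain)
        device.setWhiteBalanceModeLocked(with: gains) { _ in
            print("white balance gains applied")
        }
        device.unlockForConfiguration()
    } catch {
        print("lockForConfiguration failed: \(error)")
    }
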
Real-time face detection
AVCaptureMetadataOutput
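Adding an AVCaptureMetadataOutput to the session and asking for .face metadata delivers per-frame face rectangles. A sketch (ViewController is again a placeholder for the class that owns the session):

    // Inside the view controller, after the video input has been added:
    let metadataOutput = AVCaptureMetadataOutput()
    if session.canAddOutput(metadataOutput) {
        session.addOutput(metadataOutput)
        // The available types are only known after the output joins the session
        if metadataOutput.availableMetadataObjectTypes.contains(.face) {
            metadataOutput.metadataObjectTypes = [.face]
        }
        metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
    }

    // The detected faces arrive through AVCaptureMetadataOutputObjectsDelegate:
    extension ViewController: AVCaptureMetadataOutputObjectsDelegate {
        func metadataOutput(_ output: AVCaptureMetadataOutput,
                            didOutput metadataObjects: [AVMetadataObject],
                            from connection: AVCaptureConnection) {
            let faces = metadataObjects.compactMap { $0 as? AVMetadataFaceObject }
            print("detected \(faces.count) face(s)")
        }
    }
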
Capturing still images
captureStillImageAsynchronously(from:completionHandler:)
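captureStillImageAsynchronously(from:completionHandler:) belongs to AVCaptureStillImageOutput, which has been deprecated since iOS 10. A sketch of the AVCapturePhotoOutput replacement (ViewController is a placeholder):

    // Inside the view controller: add the photo output, then trigger a capture
    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) {
        session.addOutput(photoOutput)
    }
    photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)

    // The result arrives through AVCapturePhotoCaptureDelegate:
    extension ViewController: AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto,
                         error: Error?) {
            guard error == nil, let data = photo.fileDataRepresentation() else { return }
            print("captured still image, \(data.count) bytes")
        }
    }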
