Use cases for AV Foundation:
Use AV Foundation when you need to build a custom capture solution, present custom UI elements, or directly access the camera and frame data during capture and playback.
UIImagePickerController, used for capturing images:
```objc
// Create the picker and set the data source
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
[picker setSourceType:UIImagePickerControllerSourceTypeCamera];
[picker setDelegate:self];

// Force the front camera if available, and hide the default camera controls
if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
    [picker setCameraDevice:UIImagePickerControllerCameraDeviceFront];
}
[picker setShowsCameraControls:NO];

// Create a custom overlay view
UIView *myOverlay = [[UIView alloc] initWithFrame:self.view.bounds];
UIImage *overlayImg = [UIImage imageNamed:@"overlay.png"];
UIImageView *overlayBg = [[UIImageView alloc] initWithImage:overlayImg];
[myOverlay addSubview:overlayBg];

// Add a custom shutter button to the overlay
UIButton *snap = [UIButton buttonWithType:UIButtonTypeCustom];
[snap setImage:[UIImage imageNamed:@"takePic"] forState:UIControlStateNormal];
[snap addTarget:self action:@selector(pickerCameraSnap:) forControlEvents:UIControlEventTouchUpInside];
snap.frame = CGRectMake(74, 370, 178, 37);
[myOverlay addSubview:snap];

// Install the custom overlay on the picker
[picker setCameraOverlayView:myOverlay];

// Present the picker
[self presentViewController:picker animated:YES completion:nil];
```

```objc
// Shutter button action: take the photo
- (void)pickerCameraSnap:(id)sender {
    [picker takePicture];
}

// Delegate callback: dismiss the picker and display the captured image
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [self dismissViewControllerAnimated:YES completion:nil];
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    [imageView setImage:image];
}
```
```objc
// Delegate callback: the user cancelled
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [self dismissViewControllerAnimated:YES completion:nil];
}
```
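Before configuring the picker for the camera, it is worth checking that a camera source is actually available (on the simulator, for example, it is not). A minimal sketch; the photo-library fallback is an assumption, not part of the original code:

```objc
// Check camera availability before configuring the picker;
// fall back to the photo library when no camera is present (e.g. on the simulator).
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
    [picker setSourceType:UIImagePickerControllerSourceTypeCamera];
} else {
    [picker setSourceType:UIImagePickerControllerSourceTypePhotoLibrary];
}
```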
For video playback, the Media Player framework provides two important classes: MPMoviePlayerController and MPMoviePlayerViewController.
```objc
moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:nil];
moviePlayer.controlStyle = MPMovieControlStyleEmbedded;
moviePlayer.view.clipsToBounds = YES;

CGFloat width = self.view.bounds.size.width;
moviePlayer.view.frame = CGRectMake(0, 0, width, 480);
moviePlayer.view.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                                    UIViewAutoresizingFlexibleBottomMargin;
[self.view addSubview:moviePlayer.view];

NSString *urlString = @"http://xxx/movie.mp4";
NSURL *contentURL = [NSURL URLWithString:urlString];
[moviePlayer setContentURL:contentURL];
[moviePlayer play];
```
```objc
MPMoviePlayerViewController *mpvc = [[MPMoviePlayerViewController alloc] initWithContentURL:contentURL];
[self presentMoviePlayerViewControllerAnimated:mpvc];
```
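MPMoviePlayerController has no delegate; it reports state changes through notifications. A sketch of observing the playback-finished notification, assuming a hypothetical handler method named `moviePlaybackDidFinish:`:

```objc
// Register for the playback-finished notification before starting playback
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(moviePlaybackDidFinish:)
                                             name:MPMoviePlayerPlaybackDidFinishNotification
                                           object:moviePlayer];
```

```objc
// Handler: unregister and tear down the player view when playback ends
- (void)moviePlaybackDidFinish:(NSNotification *)notification {
    [[NSNotificationCenter defaultCenter] removeObserver:self
                                                    name:MPMoviePlayerPlaybackDidFinishNotification
                                                  object:moviePlayer];
    [moviePlayer.view removeFromSuperview];
}
```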
AVCaptureSession manages the flow of audio and video data from an input device into an output buffer (an AVCaptureOutput). The typical setup is:
1. Create the AVCaptureSession
2. Set the session preset, which determines the capture quality
3. Add the required capture input devices, created with AVCaptureDevice — a camera, a microphone, and so on
4. Add the required data output buffers, such as AVCaptureStillImageOutput or AVCaptureVideoDataOutput
5. Start the AVCaptureSession
AVCaptureVideoPreviewLayer renders a live preview of what the camera is capturing.
Steps for a custom image-capture setup:
```objc
- (void)setupAVCapture {
    // Steps:
    // 1) Set up the capture session
    // 2) Set up the capture device
    // 3) Set up the capture device input
    // ---- Configure the capture session ----
    // 4) Add the device input to the capture session
    // 5) Create the still image output and add it to the capture session
    // 6) Create the video output and add it to the capture session
    // ---- Set up the video preview layer ----
    // 7) Create the video preview layer from the capture session
    // ---- Finish ----
    // 8) Commit the session configuration
    // 9) Start the capture session running

    // 1) Set up the capture session
    // ========================================
    capSession = [[AVCaptureSession alloc] init];
    [capSession setSessionPreset:AVCaptureSessionPresetMedium];

    // 2) Set up the capture device
    // ========================================
    AVCaptureDevice *capDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // 3) Set up the capture device input
    // ========================================
    NSError *error = nil;
    AVCaptureDeviceInput *capDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:capDevice error:&error];
    if (error != nil) {
        NSLog(@"There was an error setting up the capture input device:\n%@", [error localizedDescription]);
        [self destroyAVCapture];
    } else {
        [capSession beginConfiguration];
        self.isUsingFrontFacingCamera = NO;
        self.isCapturingVideo = NO;

        // 4) Add the device input to the capture session
        // ========================================
        if ([capSession canAddInput:capDeviceInput])
            [capSession addInput:capDeviceInput];
        else
            NSLog(@"Could not add input");

        // 5) Create the still image output and add it to the capture session
        // ========================================
        stillImageOutput = [AVCaptureStillImageOutput new];
        [stillImageOutput addObserver:self
                           forKeyPath:@"capturingStillImage"
                              options:NSKeyValueObservingOptionNew
                              context:(__bridge void *)@"AVCaptureStillImageIsCapturingStillImageContext"];
        if ([capSession canAddOutput:stillImageOutput])
            [capSession addOutput:stillImageOutput];

        // 6) Create the video output and add it to the capture session
        // ========================================
        videoDataOutput = [AVCaptureVideoDataOutput new];
        NSDictionary *rgbOutputSettings =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
                                        forKey:(id)kCVPixelBufferPixelFormatTypeKey];
        [videoDataOutput setVideoSettings:rgbOutputSettings];
        [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];

        // A serial dispatch queue must be used for the sample buffer delegate
        // (and for still image capture) to guarantee that video frames are
        // delivered in order; see the header doc for setSampleBufferDelegate:queue:
        videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
        [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
        if ([capSession canAddOutput:videoDataOutput])
            [capSession addOutput:videoDataOutput];
        else
            NSLog(@"Could not add output");

        // 7) Create the video preview layer from the capture session
        // ========================================
        videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:capSession];
        videoPreviewLayer.frame = videoPreview.bounds;
        videoPreviewLayer.backgroundColor = [UIColor blackColor].CGColor;
        videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
        videoPreview.layer.masksToBounds = YES;
        [videoPreview.layer addSublayer:videoPreviewLayer];

        // 8) Commit the session configuration
        // 9) Start the capture session running
        // ========================================
        [capSession commitConfiguration];
        [capSession startRunning];

        // Make sure video is not recording yet
        [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:NO];
    }
}
```
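The still image output configured in step 5 is never actually triggered in the listing above. A hedged sketch of how a photo could be captured from it; the JPEG conversion and the placeholder handling of the result are assumptions, not part of the original code:

```objc
// Capture a still image from the stillImageOutput configured in step 5
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer != NULL) {
            // Convert the sample buffer to JPEG data, then to a UIImage
            NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *image = [UIImage imageWithData:jpegData];
            // ... use the captured image, e.g. display or save it
        }
    }];
```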
厚吾(http://blog.csdn.net/mangosnow)
This article is published under the Creative Commons "Attribution-NonCommercial-ShareAlike" license.