The main job of this class is to grab an image of any frame in a video.
- Creating an AVAssetImageGenerator
AVAssetImageGenerator offers two initializers, a class method and an instance method; both take an AVAsset object as their parameter:
- (instancetype)initWithAsset:(AVAsset *)asset;
+ (instancetype)assetImageGeneratorWithAsset:(AVAsset *)asset;
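As a quick illustration, a generator for a local video can be set up like this (a minimal sketch; `appliesPreferredTrackTransform` is an existing AVAssetImageGenerator property, not mentioned elsewhere in this post, that is worth enabling so rotated footage comes out upright):

```objectivec
#import <AVFoundation/AVFoundation.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"hubblecast.m4v"
                                     withExtension:nil];
AVAsset *asset = [AVAsset assetWithURL:url];

// Both initializers are equivalent; each simply holds on to the asset.
AVAssetImageGenerator *generator =
    [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
// or: [[AVAssetImageGenerator alloc] initWithAsset:asset];

// Optional but useful: honor the track's rotation metadata.
generator.appliesPreferredTrackTransform = YES;
```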
2. Practical use
1. Fetching the video frame (image) at a specific time
The key method is:
- (nullable CGImageRef)copyCGImageAtTime:(CMTime)requestedTime actualTime:(nullable CMTime *)actualTime error:(NSError * _Nullable * _Nullable)outError;
where CMTime is a struct:
typedef struct
{
    CMTimeValue value;     /*! @field value     The value of the CMTime. value/timescale = seconds. */
    CMTimeScale timescale; /*! @field timescale The timescale of the CMTime. value/timescale = seconds. */
    CMTimeFlags flags;     /*! @field flags     The flags, e.g. kCMTimeFlags_Valid, kCMTimeFlags_PositiveInfinity, etc. */
    CMTimeEpoch epoch;     /*! @field epoch     Differentiates between equal timestamps that are actually different
                               because of looping, multi-item sequencing, etc. Used during comparison:
                               greater epochs happen after lesser ones. Addition/subtraction is only
                               possible within a single epoch, since epoch length may be unknown/variable. */
} CMTime;
This is a C API. The key relationship to remember is: seconds = value / timescale. For example, 1 s = CMTimeMake(1, 1) and 0.5 s = CMTimeMake(1, 2).
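To sanity-check the value/timescale arithmetic, here is a small sketch (CMTimeMake, CMTimeMakeWithSeconds, and CMTimeGetSeconds are all standard CoreMedia calls):

```objectivec
#import <CoreMedia/CoreMedia.h>

CMTime oneSecond  = CMTimeMake(1, 1);     // 1/1 = 1.0 s
CMTime halfSecond = CMTimeMake(1, 2);     // 1/2 = 0.5 s
CMTime sameHalf   = CMTimeMake(300, 600); // 300/600 = 0.5 s as well

// Going the other way: 2.5 s expressed at a 600-tick timescale.
CMTime t = CMTimeMakeWithSeconds(2.5, 600); // {1500/600 = 2.500}

// CMTimeGetSeconds collapses value/timescale back into a Float64.
Float64 seconds = CMTimeGetSeconds(halfSecond); // 0.5
```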
So, to fetch the frame at a particular moment:
- (void)getImageWithTime:(CMTime)time {
    NSURL *videoUrl = [[NSBundle mainBundle] URLForResource:@"hubblecast.m4v" withExtension:nil];
    AVAsset *videoAsset = [AVAsset assetWithURL:videoUrl];
    self.videoAsset = videoAsset;
    AVAssetImageGenerator *imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:self.videoAsset];
    self.imageGenerator = imageGenerator;
    // Scaled proportionally; if not set, the video's natural size is used.
    imageGenerator.maximumSize = CGSizeMake(200, 0);
    CMTime actualTime; // the exact time of the frame actually captured
    NSError *error = nil;
    CGImageRef CGImage = [imageGenerator copyCGImageAtTime:time actualTime:&actualTime error:&error];
    if (CGImage) {
        UIImage *image = [UIImage imageWithCGImage:CGImage];
        // The "copy" in copyCGImageAtTime means we own the image and must release it.
        CGImageRelease(CGImage);
        self.imageView.image = image;
        CMTimeShow(actualTime); // {111600/90000 = 1.240}
        CMTimeShow(time);       // {1/1 = 1.000}
    } else {
        NSLog(@"Error: %@", error.localizedDescription);
    }
}
// Call it to fetch the frame at the 1-second mark
[self getImageWithTime:CMTimeMake(1, 1)];
Notice above that the requested time (time) and the time of the frame actually returned (actualTime) differ. To prevent that drift, tighten the tolerances (zero tolerance forces exact seeking, which is slower than the default):
// Prevent the returned frame time from drifting
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero;
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero;
With the tolerances set, the two now match:
CMTimeShow(actualTime); //{90000/90000 = 1.000}
CMTimeShow(time); // {1/1 = 1.000}
Fetching frames at multiple times
The key method here takes an array of CMTime values wrapped as NSValue objects; the completion handler is then invoked once per NSValue, so the number of callbacks equals the number of times requested:
- (void)generateCGImagesAsynchronouslyForTimes:(NSArray<NSValue *> *)requestedTimes completionHandler:(AVAssetImageGeneratorCompletionHandler)handler;
self.imageGenerator =                                                   // 1
    [AVAssetImageGenerator assetImageGeneratorWithAsset:self.videoAsset];
// Generate the @2x equivalent
self.imageGenerator.maximumSize = CGSizeMake(200.0f, 0.0f);             // 2
CMTime duration = self.videoAsset.duration;

NSMutableArray *times = [NSMutableArray array];                         // 3
CMTimeValue increment = duration.value / 20;
CMTimeValue currentValue = 2.0 * duration.timescale;
while (currentValue <= duration.value) {
    CMTime time = CMTimeMake(currentValue, duration.timescale);
    [times addObject:[NSValue valueWithCMTime:time]];
    currentValue += increment;
}

__block NSUInteger imageCount = times.count;                            // 4
__block NSMutableArray *images = [NSMutableArray array];

AVAssetImageGeneratorCompletionHandler handler;                         // 5
handler = ^(CMTime requestedTime,
            CGImageRef imageRef,
            CMTime actualTime,
            AVAssetImageGeneratorResult result,
            NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {                     // 6
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        [images addObject:image];
        NSLog(@"%@", image);
    } else {
        NSLog(@"Error: %@", [error localizedDescription]);
    }
    // If the decremented image count is at 0, we're all done.
    if (--imageCount == 0) {                                            // 7
        dispatch_async(dispatch_get_main_queue(), ^{
            // All frames fetched; update the UI accordingly.
        });
    }
};

[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times       // 8
                                          completionHandler:handler];