Encoding and Decoding Video with AVAssetReader and AVAssetWriter

This article is a sequel to Working with Video Using AVFoundation.

The previous article covered the limitations of AVAssetExportSession; a better option is to re-encode the video with AVAssetWriter:

Compared with AVAssetExportSession, AVAssetWriter's advantage is the much finer-grained control it offers over compression settings when encoding the output. You can specify settings such as key frame interval, video bit rate, pixel aspect ratio, clean aperture, and H.264 profile.

Basics

  • AVAssetReader: reads an asset (can be viewed as the decoder)
  • AVAssetReaderOutput: configures how the asset is read
  • AVAssetReaderTrackOutput
  • AVAssetReaderVideoCompositionOutput
  • AVAssetReaderAudioMixOutput
  • AVAssetWriter: writes an asset (can be viewed as the encoder)
  • AVAssetWriterInput: configures the encoder's input
  • CMSampleBuffer: the buffered media data

AVAssetReader and AVAssetReaderOutput work together to determine how the asset is decoded into buffers;
AVAssetWriter and AVAssetWriterInput work together to determine how that data is encoded back into a video.
CMSampleBuffer is the data being encoded: running a video through AVAssetReader produces CMSampleBuffers, and AVAssetWriter re-encodes those CMSampleBuffers into a video.

AVAssetReader

AVAssetReader provides services for obtaining media data from an asset.

AVAssetReader reads media data from an asset. Each AVAssetReader is associated with a single AVAsset, and since an AVAsset may contain multiple tracks, one AVAssetReader can read data from multiple tracks.
If you need an AVAssetReader to read from multiple AVAssets, combine them into a single AVComposition and have the AVAssetReader read that composition.

To read data, an AVAssetReader must have outputs (AVAssetReaderOutput) added to configure how the media data is read; you can add a different output for each track:

    open var outputs: [AVAssetReaderOutput] { get }
    open func add(_ output: AVAssetReaderOutput)

After adding outputs, call startReading to begin reading data:

open func startReading() -> Bool

AVAssetReaderOutput

The configuration for reading an asset. This is a base class; use one of the following concrete subclasses:

  • AVAssetReaderTrackOutput
    Configures reading of a single track.
  • AVAssetReaderVideoCompositionOutput
    Configures an AVVideoComposition for video. It plays the same role as AVAssetExportSession's videoComposition and can adjust the video's size, background, and so on.
  • AVAssetReaderAudioMixOutput
    Configures an AVAudioMix for audio. It plays the same role as AVAssetExportSession's audioMix and can adjust the audio.

Call copyNextSampleBuffer to obtain the next chunk of decoded data:

open func copyNextSampleBuffer() -> CMSampleBuffer?
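Putting the reader pieces together, a minimal read loop might look like the sketch below. The function name and the choice of pixel format are illustrative, not prescribed by the API:

```swift
import AVFoundation

// Sketch: decode every video sample of a local asset.
func readAllVideoSamples(from url: URL) throws {
    let asset = AVURLAsset(url: url)
    let reader = try AVAssetReader(asset: asset)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    // nil outputSettings vends samples in their stored (compressed) format;
    // pass uncompressed settings like these if you intend to process or re-encode frames.
    let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    if reader.canAdd(output) {
        reader.add(output)
    }
    reader.startReading()

    // copyNextSampleBuffer returns nil once reading finishes or fails.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        print("decoded frame at \(CMTimeGetSeconds(pts))s")
    }
}
```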

AVAssetWriter

AVAssetWriter provides services for writing media data to a new file.

AVAssetWriter writes media data to a single new file with a specified container format and configuration.
Unlike AVAssetReader, an AVAssetWriter is not tied to a single AVAsset; it can write data coming from multiple sources.

To write data, an AVAssetWriter must have inputs (AVAssetWriterInput) added to configure how the media data is written:

open var inputs: [AVAssetWriterInput] { get }
open func add(_ input: AVAssetWriterInput)

After adding inputs, call startWriting to begin writing:

open func startWriting() -> Bool

Then start a write session:

open func startSession(atSourceTime startTime: CMTime)

When writing is complete, close the session:

// Marks writing as finished; this also ends the session
open func finishWriting() async
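The write-side lifecycle, in order, is: add inputs, startWriting, startSession, append buffers, markAsFinished, finishWriting. A condensed sketch (the function name and parameters are illustrative):

```swift
import AVFoundation

// Sketch of the AVAssetWriter lifecycle; `sampleBuffers` and `outputURL`
// are assumed to be supplied by the caller.
func write(_ sampleBuffers: [CMSampleBuffer], to outputURL: URL) throws {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    // nil outputSettings passes samples through without re-encoding.
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: nil)
    if writer.canAdd(input) {
        writer.add(input)
    }

    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // Simplified: real code drives this from requestMediaDataWhenReady
    // instead of assuming the input stays ready.
    for buffer in sampleBuffers where input.isReadyForMoreMediaData {
        input.append(buffer)
    }

    input.markAsFinished()
    writer.finishWriting {
        print("completed: \(writer.status == .completed)")
    }
}
```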

AVAssetWriterInput

The input configuration; you can configure a different input for each media type.

// Whether the input is currently ready to accept more media data
open var isReadyForMoreMediaData: Bool { get }

// Invokes the block on the given queue whenever the input is ready for more data
open func requestMediaDataWhenReady(on queue: DispatchQueue, using block: @escaping () -> Void)

// Appends a sample buffer to the input
open func append(_ sampleBuffer: CMSampleBuffer) -> Bool

// Marks this input as finished
open func markAsFinished()

Note that AVAssetWriter and AVAssetReader do not have to be used as a pair. AVAssetWriter only needs sample buffers, and those can come from many sources: from an AVAssetReader, from the live stream of a camera capture session, or converted from image data. Converting images into a video is implemented in detail below.


outputSettings

Both AVAssetReaderOutput and AVAssetWriterInput take a settings dictionary (outputSettings); these settings are the real core of controlling decoding and encoding.

AVVideoSettings

  • AVVideoCodecKey: codec
  • AVVideoWidthKey: pixel width
  • AVVideoHeightKey: pixel height
  • AVVideoCompressionPropertiesKey: compression settings:
    • AVVideoAverageBitRateKey: bits per second; about 3,000,000 suits 720×1280
    • AVVideoProfileLevelKey: quality profile; from low to high: BP, EP, MP, HP
    • AVVideoMaxKeyFrameIntervalKey: maximum key frame interval

AVAudioSettings

  • AVFormatIDKey: audio format
  • AVNumberOfChannelsKey: number of channels
  • AVSampleRateKey: sample rate
  • AVEncoderBitRateKey: encoder bit rate
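Putting the keys above together, the encoder settings used later in this article look roughly like this; the numbers are reasonable starting points rather than requirements, and AVVideoMaxKeyFrameIntervalKey is added here only for illustration:

```swift
import AVFoundation

// Video: H.264, 720x1280, ~1 Mbps, High profile
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1_000_000,
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40,
        AVVideoMaxKeyFrameIntervalKey: 30 // at most one key frame every 30 frames
    ]
]

// Audio: AAC, stereo, 44.1 kHz, 128 kbps
let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44_100,
    AVEncoderBitRateKey: 128_000
]
```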

Implementation


Merging multiple videos into one

// Create the composition and its editable tracks
let composition = AVMutableComposition()
// Video track
let videoCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
// Audio track
let audioCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
var insertTime = CMTime.zero
for url in urls {
    autoreleasepool {
        // Load the asset and separate its video and audio tracks
        let asset = AVURLAsset(url: url)
        let videoTrack = asset.tracks(withMediaType: .video).first
        let audioTrack = asset.tracks(withMediaType: .audio).first
        let videoTimeRange = videoTrack?.timeRange
        let audioTimeRange = audioTrack?.timeRange
        
        // Merge the video tracks into one AVMutableCompositionTrack
        if let insertVideoTrack = videoTrack, let insertVideoTime = videoTimeRange {
            do {
                // Insert the track's time range at the current insertion point
                try videoCompositionTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: insertVideoTime.duration), of: insertVideoTrack, at: insertTime)
            } catch let e {
                callback(false, e)
                return
            }
        }
        
        // Merge the audio tracks into one AVMutableCompositionTrack
        if let insertAudioTrack = audioTrack, let insertAudioTime = audioTimeRange {
            do {
                try audioCompositionTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: insertAudioTime.duration), of: insertAudioTrack, at: insertTime)
            } catch let e {
                callback(false, e)
                return
            }
        }
        
        insertTime = insertTime + asset.duration
    }
}
// ----- Reading -----
let videoTracks = composition.tracks(withMediaType: .video)
let audioTracks = composition.tracks(withMediaType: .audio)
guard let videoTrack = videoTracks.first, let audioTrack = audioTracks.first else {
    callback(false, nil)
    return
}
// AVAssetReader
do {
    reader = try AVAssetReader(asset: composition)
} catch let e {
    callback(false, e)
    return
}
reader.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
// Uncompressed audio/video settings (AVAssetReaderTrackOutput requires uncompressed settings here)
let audioOutputSetting = [
    AVFormatIDKey: kAudioFormatLinearPCM
]
let videoOutputSetting = [
    kCVPixelBufferPixelFormatTypeKey as String: UInt32(kCVPixelFormatType_422YpCbCr8)
]
videoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoOutputSetting)
videoOutput.alwaysCopiesSampleData = false
if reader.canAdd(videoOutput) {
    reader.add(videoOutput)
}
audioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioOutputSetting)
audioOutput.alwaysCopiesSampleData = false
if reader.canAdd(audioOutput) {
    reader.add(audioOutput)
}
reader.startReading()
// ----- Writing -----
// AVAssetWriter
do {
    writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
} catch let e {
    callback(false, e)
    return
}
writer.shouldOptimizeForNetworkUse = true
let videoInputSettings: [String : Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1000000,
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40
    ]
]
let audioInputSettings: [String : Any] = [
    AVFormatIDKey: NSNumber(value: kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: NSNumber(value: 2),
    AVSampleRateKey: NSNumber(value: 44100),
    AVEncoderBitRateKey: NSNumber(value: 128000)
]
// AVAssetWriterInput
videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
if writer.canAdd(videoInput) {
    writer.add(videoInput)
}
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
if writer.canAdd(audioInput) {
    writer.add(audioInput)
}
writer.startWriting()
writer.startSession(atSourceTime: .zero)
// Prepare to write data
writeGroup.enter()
videoInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if wself.encodeReadySamples(from: wself.videoOutput, to: wself.videoInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.enter()
audioInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if wself.encodeReadySamples(from: wself.audioOutput, to: wself.audioInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.notify(queue: inputQueue) {
    self.writer.finishWriting {
        callback(true, nil)
    }
}
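The example above calls an encodeReadySamples(from:to:) helper that isn't shown. The method name comes from the code above; the body below is a sketch of the usual pattern, assuming `reader` is the instance property used throughout: pump the reader's output into the writer's input while the input is ready, and return true once the stream is exhausted.

```swift
import AVFoundation

// Pump sample buffers from a reader output into a writer input.
// Returns true when this stream is finished; false when the input is
// temporarily full and the requestMediaDataWhenReady block will fire again.
func encodeReadySamples(from output: AVAssetReaderOutput, to input: AVAssetWriterInput) -> Bool {
    while input.isReadyForMoreMediaData {
        guard reader.status == .reading,
              let buffer = output.copyNextSampleBuffer() else {
            // Source exhausted or reader failed; finish this input.
            input.markAsFinished()
            return true
        }
        if !input.append(buffer) {
            // The writer rejected the buffer; treat this stream as done.
            input.markAsFinished()
            return true
        }
    }
    return false
}
```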

Merging multiple videos (with a videoComposition and audioMix)

let composition = AVMutableComposition()
guard let videoCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
    callback(false, nil)
    return
}
let audioCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
// A layerInstruction modifies the video layer
let vcLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
var layerInstructions = [vcLayerInstruction]
var audioParameters: [AVMutableAudioMixInputParameters] = []
var insertTime = CMTime.zero
for url in urls {
    autoreleasepool {
        let asset = AVURLAsset(url: url)
        let videoTrack = asset.tracks(withMediaType: .video).first
        let audioTrack = asset.tracks(withMediaType: .audio).first
        let videoTimeRange = videoTrack?.timeRange
        let audioTimeRange = audioTrack?.timeRange
        
        if let insertVideoTrack = videoTrack, let insertVideoTime = videoTimeRange {
            do {
                try videoCompositionTrack.insertTimeRange(CMTimeRange(start: .zero, duration: insertVideoTime.duration), of: insertVideoTrack, at: insertTime)
                
                // Adjust the transform to fix orientation and size
                var trans = insertVideoTrack.preferredTransform
                let size = insertVideoTrack.naturalSize
                let orientation = VideoEditHelper.orientationFromVideo(assetTrack: insertVideoTrack)
                switch orientation {
                    case .portrait:
                        let scale = MMAssetExporter.renderSize.height / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: size.height, y: 0)
                        trans = trans.rotated(by: .pi / 2.0)
                    case .landscapeLeft:
                        let scale = MMAssetExporter.renderSize.width / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: size.width, y: size.height + (MMAssetExporter.renderSize.height - size.height * scale) / scale / 2.0)
                        trans = trans.rotated(by: .pi)
                    case .portraitUpsideDown:
                        let scale = MMAssetExporter.renderSize.height / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: 0, y: size.width)
                        trans = trans.rotated(by: .pi / 2.0 * 3)
                    case .landscapeRight:
                        // Default orientation
                        let scale = MMAssetExporter.renderSize.width / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: 0, y: (MMAssetExporter.renderSize.height - size.height * scale) / scale / 2.0)
                }
                
                // The shared layer instruction was already added to layerInstructions
                // above; appending it again each iteration would create duplicates,
                // so only record this clip's transform at its start time.
                vcLayerInstruction.setTransform(trans, at: insertTime)
            } catch let e {
                callback(false, e)
                return
            }
        }
        if let insertAudioTrack = audioTrack, let insertAudioTime = audioTimeRange {
            do {
                try audioCompositionTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: insertAudioTime.duration), of: insertAudioTrack, at: insertTime)
                
                let adParameter = AVMutableAudioMixInputParameters(track: insertAudioTrack)
                adParameter.setVolume(1, at: .zero)
                audioParameters.append(adParameter)
            } catch let e {
                callback(false, e)
                return
            }
        }
        
        insertTime = insertTime + asset.duration
    }
}
let videoTracks = composition.tracks(withMediaType: .video)
let audioTracks = composition.tracks(withMediaType: .audio)
let videoComposition = AVMutableVideoComposition()
// A videoComposition must specify frameDuration (frame rate) and renderSize
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.renderSize = MMAssetExporter.renderSize
let vcInstruction = AVMutableVideoCompositionInstruction()
vcInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
vcInstruction.backgroundColor = UIColor.red.cgColor // The video's background color can be set here
vcInstruction.layerInstructions = layerInstructions
videoComposition.instructions = [vcInstruction]
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = audioParameters
// AVAssetReader
do {
    reader = try AVAssetReader(asset: composition)
} catch let e {
    callback(false, e)
    return
}
reader.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
// AVAssetReaderOutput
videoOutput = AVAssetReaderVideoCompositionOutput(videoTracks: videoTracks, videoSettings: nil)
videoOutput.alwaysCopiesSampleData = false
videoOutput.videoComposition = videoComposition
if reader.canAdd(videoOutput) {
    reader.add(videoOutput)
}
audioOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: nil)
audioOutput.alwaysCopiesSampleData = false
audioOutput.audioMix = audioMix
if reader.canAdd(audioOutput) {
    reader.add(audioOutput)
}
if !reader.startReading() {
    callback(false, reader.error)
    return
}
// ----- Writing -----
// AVAssetWriter
do {
    writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
} catch let e {
    callback(false, e)
    return
}
writer.shouldOptimizeForNetworkUse = true
let videoInputSettings: [String : Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1000000,
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40
    ]
]
let audioInputSettings: [String : Any] = [
    AVFormatIDKey: NSNumber(value: kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: NSNumber(value: 2),
    AVSampleRateKey: NSNumber(value: 44100),
    AVEncoderBitRateKey: NSNumber(value: 128000)
]
// AVAssetWriterInput
videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
if writer.canAdd(videoInput) {
    writer.add(videoInput)
}
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
if writer.canAdd(audioInput) {
    writer.add(audioInput)
}
writer.startWriting()
writer.startSession(atSourceTime: .zero)
// Prepare to write data
writeGroup.enter()
videoInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if wself.encodeReadySamples(from: wself.videoOutput, to: wself.videoInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.enter()
audioInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    if wself.encodeReadySamples(from: wself.audioOutput, to: wself.audioInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.notify(queue: inputQueue) {
    self.writer.finishWriting {
        callback(true, nil)
    }
}

When a videoComposition and audioMix are set, the AVAssetReader outputs must be AVAssetReaderVideoCompositionOutput and AVAssetReaderAudioMixOutput.
As with AVAssetExportSession, setting the videoComposition and audioMix is what ultimately adjusts the video's size, rotation, and background color, and the audio's volume; likewise, you can add watermarks and so on.

Compared with AVAssetExportSession, the separation and composition of assets and tracks works exactly the same with AVAssetReader and AVAssetWriter, and the pair can be wrapped up into something used much like AVAssetExportSession:

public var composition: AVComposition!
public var videoComposition: AVVideoComposition!
public var audioMix: AVAudioMix!
public var outputUrl: URL!
public var videoInputSettings: [String : Any]?
public var videoOutputSettings: [String : Any]?
public var audioInputSettings: [String : Any]?
public var audioOutputSettings: [String : Any]?

public func exportAsynchronously(completionHandler callback: @escaping VideoResult) {
    let videoTracks = composition.tracks(withMediaType: .video)
    let audioTracks = composition.tracks(withMediaType: .audio)
    
    do {
        reader = try AVAssetReader(asset: composition)
    } catch let e {
        callback(false, e)
        return
    }
    reader.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    
    videoOutput = AVAssetReaderVideoCompositionOutput(videoTracks: videoTracks, videoSettings: videoOutputSettings)
    videoOutput.alwaysCopiesSampleData = false
    videoOutput.videoComposition = videoComposition
    if reader.canAdd(videoOutput) {
        reader.add(videoOutput)
    }
    audioOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: audioOutputSettings)
    audioOutput.alwaysCopiesSampleData = false
    audioOutput.audioMix = audioMix
    if reader.canAdd(audioOutput) {
        reader.add(audioOutput)
    }
    
    if !reader.startReading() {
        callback(false, reader.error)
        return
    }
    
    // -----写数据----
    do {
        writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
    } catch let e {
        callback(false, e)
        return
    }
    writer.shouldOptimizeForNetworkUse = true
    
    // AVAssetWriterInput
    videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
    if writer.canAdd(videoInput) {
        writer.add(videoInput)
    }
    
    audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
    if writer.canAdd(audioInput) {
        writer.add(audioInput)
    }
    
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    
    // Prepare to write data
// videoInput.requestMediaDataWhenReady
// audioInput.requestMediaDataWhenReady
// encodeReadySamples
   ...
}
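Used this way, the wrapper mirrors AVAssetExportSession's API. The class name MMAssetExporter is taken from the code above; the configuration shown is hypothetical usage, not part of the original:

```swift
import AVFoundation

// Hypothetical usage of the exporter wrapper sketched above
let exporter = MMAssetExporter()
exporter.composition = composition          // the AVComposition built earlier
exporter.videoComposition = videoComposition
exporter.audioMix = audioMix
exporter.outputUrl = outputUrl
exporter.videoInputSettings = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280
]
exporter.exportAsynchronously { success, error in
    print(success ? "exported" : "failed: \(String(describing: error))")
}
```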

Composing images into a video

do {
    writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
} catch let e {
    callback(false, e)
    return
}
writer.shouldOptimizeForNetworkUse = true
videoInputSettings = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: MMAssetExporter.renderSize.width,
    AVVideoHeightKey: MMAssetExporter.renderSize.height
]
videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput,
sourcePixelBufferAttributes: nil)
if writer.canAdd(videoInput) {
    writer.add(videoInput)
}
writer.startWriting()
writer.startSession(atSourceTime: .zero)
let pixelBuffers = images.map { image in
    self.pixelBuffer(from: image)
}
let seconds = 2 // Display duration of each image, in seconds
let timescale = 30 // 30 frames per second
let frames = images.count * seconds * timescale // Total frame count
var frame = 0
videoInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if frame >= frames {
        // All frames have been written
        wself.videoInput.markAsFinished()
        wself.writer.finishWriting {
            callback(true, nil)
        }
        return
    }
    
    let imageIndex = frame / (seconds * timescale)
    let time = CMTime(value: CMTimeValue(frame), timescale: CMTimeScale(timescale))
    let pxData = pixelBuffers[imageIndex]
    if let cvbuffer = pxData {
        adaptor.append(cvbuffer, withPresentationTime: time)
    }
    
    frame += 1
}

This uses AVAssetWriterInputPixelBufferAdaptor. As noted earlier, AVAssetWriter's data does not have to come from an AVAssetReader; it can accept data from many sources. AVAssetWriterInputPixelBufferAdaptor acts as the adapter that lets AVAssetWriter write these other kinds of data.
Note that the AVAssetWriterInputPixelBufferAdaptor must be created before writer.startWriting() is called.
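The pixelBuffer(from:) helper called in the loop above isn't shown in the original. One common way to build a CVPixelBuffer from a UIImage looks like this (a sketch with reduced error handling; the 32ARGB format and compatibility attributes are conventional choices, not requirements):

```swift
import UIKit
import CoreVideo

// Sketch: render a UIImage into a newly created CVPixelBuffer.
func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    let attrs: [String: Any] = [
        kCVPixelBufferCGImageCompatibilityKey as String: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
    ]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32ARGB,
                                     attrs as CFDictionary, &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Draw the image into the pixel buffer's backing memory.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}
```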

Problems encountered

-[AVAssetWriterInput appendSampleBuffer:] Cannot append sample buffer: Input buffer must be in an uncompressed format when outputSettings is not nil

Cause: an AVAssetReaderTrackOutput was used with outputSettings that were not an uncompressed format (when outputSettings is nil, the source's own settings are used). Supplying uncompressed outputSettings fixes it.

[AVAssetReaderTrackOutput copyNextSampleBuffer] cannot copy next sample buffer before adding this output to an instance of AVAssetReader (using -addOutput:) and calling -startReading on that asset reader

Cause: the AVAssetReader was deallocated while data was still being read; keep a strong reference to the AVAssetReader object.
See: https://stackoverflow.com/questions/27608510/avfoundation-add-first-frame-to-video

reader.startReading() fails with:
Error Domain=AVFoundationErrorDomain Code=-11841

Cause: -11841 is AVErrorInvalidVideoComposition; the videoComposition is usually invalid (check renderSize, frameDuration, and the instructions' time ranges).
See: https://www.cnblogs.com/song-jw/p/9530249.html
