Another take on a wav player: buffering AVAudioPCMBuffer

This article continues the discussion of AudioToolBox and wav playback.

Playback is a three-step routine:

Step one: read the data, i.e. restore the sample data from the file.

For audio resource files, use Audio File Services and Audio File Stream Services.

This step was covered in detail in two earlier posts:

From a wav player: learning the AudioToolBox services

From a pcm player: more on the AudioToolBox services and uncompressed formats

Step two: gather the sample data into audio buffers.

Step three: hand the pcm buffers to AVAudioPlayerNode, and they play.

The third step is fairly simple; a minimal sketch is shown at the end of this introduction.

This article focuses on the second step: turning sample data into buffers.

Here, the first step is folded into the second.
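
A minimal sketch of step three, for reference. The AVAudioEngine wiring and the start(with:) helper below are illustrative assumptions, not code from the repo; the only fixed pieces are AVAudioPlayerNode.scheduleBuffer and play().

import AVFoundation

// Hypothetical wiring: attach a player node to an engine, schedule one pcm buffer, play.
let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

func start(with buffer: AVAudioPCMBuffer) throws {
    engine.attach(playerNode)
    engine.connect(playerNode, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()

    // Step three: hand the pcm buffer to the player node and start playback
    playerNode.scheduleBuffer(buffer, completionHandler: nil)
    playerNode.play()
}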


The usual way to turn sample data into audio buffers

Set up a timer that keeps calling the method below; each call fills an AVAudioPCMBuffer through the converter callback and produces a new buffer (a sketch of that timer loop follows the read method below),

until the data runs out and the method throws reachedEndOfFile.

At the end, let isEndOfData = packetIndex >= packets.count - 1 becomes true,

and the callback simply returns.



public func read(_ frames: AVAudioFrameCount) throws -> AVAudioPCMBuffer {
        let framesPerPacket = readFormat.streamDescription.pointee.mFramesPerPacket
        var packets = frames / framesPerPacket
        
        // Create a new output buffer
        guard let buffer = AVAudioPCMBuffer(pcmFormat: readFormat, frameCapacity: frames) else {
            throw ReaderError.failedToCreatePCMBuffer
        }
        buffer.frameLength = frames
        
        // Fill the buffer through the converter callback
        try queue.sync {
            let context = unsafeBitCast(self, to: UnsafeMutableRawPointer.self)
            let status = AudioConverterFillComplexBuffer(converter!, ReaderConverterCallback, context, &packets, buffer.mutableAudioBufferList, nil)
            guard status == noErr else {
                 // ...
                 throw ReaderError.reachedEndOfFile
            }
        }
        return buffer
    }
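
The timer-driven loop itself isn't listed in this post. A rough sketch of the idea, assuming a reader exposing the read(_:) method above and a playerNode that is already attached and playing; the 0.02 s interval and the 2048-frame buffer size are illustrative values, not taken from the repo.

// Sketch only: every tick, pull one more buffer from the reader and schedule it.
let readBufferSize: AVAudioFrameCount = 2048

let timer = Timer.scheduledTimer(withTimeInterval: 0.02, repeats: true) { timer in
    do {
        let buffer = try reader.read(readBufferSize)
        playerNode.scheduleBuffer(buffer, completionHandler: nil)
    } catch ReaderError.reachedEndOfFile {
        // All sample data has been converted and scheduled; stop pulling
        timer.invalidate()
    } catch {
        // Any other reader error also ends the loop
        timer.invalidate()
    }
}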
    

Filling the AVAudioPCMBuffer goes through the callback below.

The key line is reader.currentPacket = reader.currentPacket + 1,

where the data source is reader.parser.packetsX: each call takes one packet from it and advances the index by one.


func ReaderConverterCallback(_ converter: AudioConverterRef,
                            _ packetCount: UnsafeMutablePointer<UInt32>,
                            _ ioData: UnsafeMutablePointer<AudioBufferList>,
                            _ outPacketDescriptions: UnsafeMutablePointer<UnsafeMutablePointer<AudioStreamPacketDescription>?>?,
                            _ context: UnsafeMutableRawPointer?) -> OSStatus {
   let reader = Unmanaged<Reader>.fromOpaque(context!).takeUnretainedValue()
   
   // Make sure the source data format is known
   guard let _ = reader.parser.dataFormatD else {
       return ReaderMissingSourceFormatError
   }
   
   // If there are no more packets, tell the converter we are done
   let packetIndex = Int(reader.currentPacket)
   let packets = reader.parser.packetsX
   let isEndOfData = packetIndex >= packets.count - 1
   if isEndOfData {
       packetCount.pointee = 0
       return ReaderReachedEndOfDataError
   }
   
   // Copy the current packet into the buffer list the converter reads from
   var data = packets[packetIndex]
   let dataCount = data.count

   ioData.pointee.mNumberBuffers = 1
   ioData.pointee.mBuffers.mData = UnsafeMutableRawPointer.allocate(byteCount: dataCount, alignment: 0)
   _ = data.withUnsafeMutableBytes { (bytes: UnsafeMutablePointer<UInt8>) in
       memcpy((ioData.pointee.mBuffers.mData?.assumingMemoryBound(to: UInt8.self))!, bytes, dataCount)
   }
   ioData.pointee.mBuffers.mDataByteSize = UInt32(dataCount)
   
   // One packet consumed; advance the read position
   packetCount.pointee = 1
   reader.currentPacket = reader.currentPacket + 1
   
   return noErr
}

The key part is seek.

Dragging the progress bar jumps to a new playback time.

Given a target time, first use the frameOffset method to work out which frame it corresponds to;

for pcm, each packet holds exactly one frame,

so the packetOffset method does essentially nothing (a sketch of both helpers follows the code below).

Call the player node's stop method, playerNode.stop(), which clears the audio buffers scheduled on the player node,

then take the packet index and assign it to currentPacket in the code above.

So seeking boils down to: clear the player node's scheduled buffers, start reading the audio data from the target time, and schedule the corresponding buffers on the playerNode again.

In this article's github repo, the audio file lasts 63.158 seconds and contains 1010528 frames,

i.e. 16000 frames per second, matching the file's 16000 Hz sample rate.

With this approach, which caches the raw sample data, seek can land on an exact frame; the precision is very high, 1/16000, i.e. 0.0000625 seconds.


guard let frameOffset = parser.frameOffset(forTime: time),
            let packetOffset = parser.packetOffset(forFrame: frameOffset) else {
                return
        }

        // ...
        
        // Stop the player node and drain its scheduled buffers
        playerNode.stop()
        volume = 0
        
        // Perform the seek to the proper packet offset
        do {
            try reader.seek(packetOffset)
        } catch {
            os_log("Failed to seek: %@", log: Streamer.logger, type: .error, error.localizedDescription)
            return
        }
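
The frameOffset and packetOffset helpers aren't listed above. A minimal sketch of what they boil down to, assuming the parser exposes duration, totalFrameCount, and framesPerPacket, and that currentPacket and queue are the Reader properties used earlier; the bodies are my approximation, not the repo's exact code.

// For a time t, frame = totalFrameCount * t / duration; for pcm, packet = frame.
func frameOffset(forTime time: TimeInterval) -> AVAudioFramePosition? {
    guard duration > 0 else { return nil }
    // e.g. time = 10 s on the 63.158 s / 1010528-frame file gives roughly frame 160000
    return AVAudioFramePosition(Double(totalFrameCount) * time / duration)
}

func packetOffset(forFrame frame: AVAudioFramePosition) -> AVAudioPacketCount? {
    guard framesPerPacket > 0 else { return nil }
    // framesPerPacket is 1 for pcm, so this is effectively a pass-through
    return AVAudioPacketCount(frame) / AVAudioPacketCount(framesPerPacket)
}

// Reader.seek then just repositions the packet index that the converter callback reads from
func seek(_ packet: AVAudioPacketCount) throws {
    queue.sync {
        currentPacket = packet
    }
}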


A different approach: buffer the AVAudioPCMBuffers up front

At initialization, read the audio file and recover all of the sample data, just as before,

then convert all of that sample data into audio buffers.

The previous approach converts on demand: take the relevant sample data and convert it on the fly into the buffer you need.

The current approach just picks the corresponding prebuilt audio buffer on demand; no on-the-fly conversion is required.

First restore all of the sample data (this corresponds to the parser above),

then produce all of the buffers.

public required init(src path: URL, readFormat readF: AVAudioFormat, bufferSize size: AVAudioFrameCount) throws {
        readFormat = readF
        readBufferSize = size
        Utility.check(error: AudioFileOpenURL(path as CFURL, .readPermission, 0, &playbackFile),   // sets playbackFile to the AudioFileID on output
                      operation: "AudioFileOpenURL failed")
        
        guard let file = playbackFile else {
            return
        }
        
        var numPacketsToRead: UInt32 = 0
        
        
        GetPropertyValue(val: &numPacketsToRead, file: file, prop: kAudioFilePropertyAudioDataPacketCount)
        
        var asbdFormat = AudioStreamBasicDescription()
        GetPropertyValue(val: &asbdFormat, file: file, prop: kAudioFilePropertyDataFormat)
        
        dataFormatD = AVAudioFormat(streamDescription: &asbdFormat)
        /// At this point we should definitely have a data format
        var bytesRead: UInt32 = 0
        GetPropertyValue(val: &bytesRead, file: file, prop: kAudioFilePropertyAudioDataByteCount)
        
        
        
        
        guard let dataFormat = dataFormatD else {
            return
        }
        
    
        let format = dataFormat.streamDescription.pointee
        let bytesPerPacket = Int(format.mBytesPerPacket)
        
        // Read the file packet by packet and keep each packet's bytes in packetsX
        for i in 0 ..< Int(numPacketsToRead) {
            
            var packetSize = UInt32(bytesPerPacket)
                
            let packetStart = Int64(i * bytesPerPacket)
            let dataPt: UnsafeMutableRawPointer = malloc(MemoryLayout<UInt8>.size * bytesPerPacket)
            AudioFileReadBytes(file, false, packetStart, &packetSize, dataPt)
            let startPt = dataPt.bindMemory(to: UInt8.self, capacity: bytesPerPacket)
            let buffer = UnsafeBufferPointer(start: startPt, count: bytesPerPacket)
            let array = Array(buffer)
            packetsX.append(Data(array))
            free(dataPt)    // Data copies the bytes, so the temporary allocation can be released
        }
        
        
        
        
        print("packetsX.count = \(packetsX.count)")
        
        // 前面是合并的 parser
        
        // 后面是获取所有的 buffer
        
        
        let sourceFormat = dataFormat.streamDescription
        let commonFormat = readF.streamDescription
        let result = AudioConverterNew(sourceFormat, commonFormat, &converter)
        guard result == noErr else {
            throw ReaderError.unableToCreateConverter(result)
        }
        
        
        
        
        let framesPerPacket = readFormat.streamDescription.pointee.mFramesPerPacket
        var packets = readBufferSize / framesPerPacket
        
       // print("frames: \(frames)")
        
      //  print("framesPerPacket: \(framesPerPacket)")
        
        totalPacketCount = AVAudioPacketCount(packetsX.count)
        
        while true {
            /// Allocate a buffer to hold the target audio data in the Read format
            guard let buffer = AVAudioPCMBuffer(pcmFormat: readFormat, frameCapacity: readBufferSize) else {
                throw ReaderError.failedToCreatePCMBuffer
            }
            buffer.frameLength = readBufferSize
            
            // Try to read the frames from the parser
          
            let context = unsafeBitCast(self, to: UnsafeMutableRawPointer.self)
            let status = AudioConverterFillComplexBuffer(converter!, ReaderConverterCallback, context, &packets, buffer.mutableAudioBufferList, nil)
            guard status == noErr else {
                switch status {
                case ReaderMissingSourceFormatError:
                    print("parserMissingDataFormat")
                    throw ReaderError.parserMissingDataFormat
                case ReaderReachedEndOfDataError:
                    print("reachedEndOfFile: buffers.count = \(buffers.count)")
                    packetsX.removeAll()
                    return
                case ReaderNotEnoughDataError:
                    print("notEnoughData")
                    throw ReaderError.notEnoughData
                default:
                    print("converterFailed")
                    throw ReaderError.converterFailed(status)
                }
            }
            buffers.append(buffer)
        }
        
    }

After that, getting buffers for playback is simple (an illustrative call follows the method below),

no conversion is needed.

public func read() throws -> AVAudioPCMBuffer {
        
        guard currentBuffer < buffers.count else {
            throw ReaderError.reachedEndOfFile
        }
        let buff = buffers[currentBuffer]
        
        currentBuffer += 1
        return buff
        
        
    }
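
An illustrative use of read() (assumed, not the repo's exact code): the scheduling loop keeps the same shape as before, but each tick just hands over the next prebuilt buffer, with no conversion work.

do {
    // Pull the next prebuilt AVAudioPCMBuffer and hand it straight to the player node
    let buffer = try reader.read()
    playerNode.scheduleBuffer(buffer, completionHandler: nil)
} catch ReaderError.reachedEndOfFile {
    // Every prebuilt buffer has already been scheduled; nothing left to do
} catch {
    // Other reader errors would be handled here
}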

The key part, again, is seek.

Drag the progress bar to adjust the playback time.

Here the seek is no longer precise to a single frame

(for pcm data framesPerPacket = 1, so one packet equals one frame); it is only precise to a whole buffer.

In this article's github repo, the audio file lasts 63.158 seconds and yields 1359 audio buffers

(the buffer size used in this article is 2048, i.e. 2048 frames per buffer).

That is about 21.5 buffers per second,

so one buffer corresponds to roughly 0.0465 seconds.

With this approach, which caches the converted buffers, seek is only precise to a specific buffer, i.e. about 0.0465 seconds.

In the previous approach the precision was one packet (which for pcm is one frame), 0.0000625 seconds.

Whether that matters depends on the use case: probably unacceptable for high-fidelity listening, fine enough for casually listening to voice or songs.

public func seek(buffer ratio: TimeInterval) throws {

        currentBuffer = Int(TimeInterval(buffers.count) * ratio)
    }
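
How the ratio is produced isn't shown above; presumably the caller divides the target time by the total duration. An illustrative call, using the numbers from this repo (63.158 s, 1359 buffers); the variable names here are assumptions.

let duration: TimeInterval = 63.158
let targetTime: TimeInterval = 30.0

playerNode.stop()                                    // drop whatever is already scheduled
try? reader.seek(buffer: targetTime / duration)      // 30.0 / 63.158 ≈ 0.475
// currentBuffer becomes Int(1359 * 0.475) = 645, so playback resumes
// within about 0.0465 s of the requested time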

If this article instead used a buffer size of 4096,

there would be 679 buffers for the same 63.158 seconds of audio,

about 10.75 buffers per second,

so one buffer would correspond to roughly 0.093 seconds.

github repo
