Concepts
Reference: AudioStreamBasicDescription
Audio stream: a continuous stream of data representing sound. It can be raw PCM or an encoded stream such as AAC.
Channel: a sound channel is an independent audio signal captured or played back at a distinct spatial position during recording or playback, so the channel count equals the number of sound sources at recording time or the number of speakers at playback time. Mono has one channel; stereo has two.
Sample: the digital representation of one sampling point of a single channel in the audio stream. For example, a sample rate of 16000 means each channel has 16000 samples per second.
Frame: the set of samples from all channels at the same instant. For stereo PCM, one frame contains two samples: one from the left channel and one from the right.
Packet: a collection of one or more consecutive frames. In PCM a packet holds exactly one frame; in compressed formats a packet holds multiple frames. For AAC, a packet typically holds 1024 frames.
Sample rate: for PCM, the number of samples per second; common values are 16000, 32000, and 44100.
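Putting these definitions together, a quick back-of-the-envelope sketch (illustrative values only):

#include <stdio.h>

int main(void) {
    // One second of 44100 Hz, stereo, 16-bit PCM.
    unsigned sampleRate     = 44100; // samples per second, per channel
    unsigned channels       = 2;     // stereo
    unsigned bytesPerSample = 2;     // 16 bits = 2 bytes

    unsigned bytesPerFrame  = channels * bytesPerSample;  // 4 bytes per frame
    unsigned bytesPerSecond = sampleRate * bytesPerFrame; // 44100 frames per second

    printf("%u bytes per second\n", bytesPerSecond); // prints 176400
    return 0;
}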
Data Structure
struct AudioStreamBasicDescription
{
    Float64             mSampleRate;
    AudioFormatID       mFormatID;
    AudioFormatFlags    mFormatFlags;
    UInt32              mBytesPerPacket;
    UInt32              mFramesPerPacket;
    UInt32              mBytesPerFrame;
    UInt32              mChannelsPerFrame;
    UInt32              mBitsPerChannel;
    UInt32              mReserved;
};
Float64 mSampleRate
The sample rate: 16000, 32000, 44100, ...
AudioFormatID mFormatID
The audio data format, identified by a four-character code.
CF_ENUM(AudioFormatID)
{
    kAudioFormatLinearPCM   = 'lpcm',
    kAudioFormatAppleIMA4   = 'ima4',
    kAudioFormatMPEG4AAC    = 'aac ',
    kAudioFormatMPEGLayer3  = '.mp3',
    kAudioFormatOpus        = 'opus',
    ...
};
Audio captured with AudioUnit is normally raw PCM; encoded data, depending on the codec, can be AAC, MP3, Opus, and so on.
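Because the format ID is just four characters packed into a UInt32, a small sketch like the following can unpack it for logging (PrintFormatID is a hypothetical helper, not a Core Audio API):

#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>

// Print an AudioFormatID as its four-character code, e.g. 'lpcm'.
static void PrintFormatID(AudioFormatID formatID) {
    char code[5];
    code[0] = (char)((formatID >> 24) & 0xFF);
    code[1] = (char)((formatID >> 16) & 0xFF);
    code[2] = (char)((formatID >>  8) & 0xFF);
    code[3] = (char)( formatID        & 0xFF);
    code[4] = '\0';
    printf("mFormatID = '%s'\n", code);
}
// PrintFormatID(kAudioFormatLinearPCM) prints: mFormatID = 'lpcm'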
AudioFormatFlags mFormatFlags
Describes how each sample is represented (int or float, big-endian or little-endian) and the channel layout (planar or interleaved). Flags can be combined with bitwise OR.
FFmpeg describes sample formats as follows:
$ ffmpeg -formats | grep PCM
 DE alaw            PCM A-law
 DE f32be           PCM 32-bit floating-point big-endian
 DE f32le           PCM 32-bit floating-point little-endian
 DE f64be           PCM 64-bit floating-point big-endian
 DE f64le           PCM 64-bit floating-point little-endian
 DE mulaw           PCM mu-law
 DE s16be           PCM signed 16-bit big-endian
 DE s16le           PCM signed 16-bit little-endian
 DE s24be           PCM signed 24-bit big-endian
 DE s24le           PCM signed 24-bit little-endian
 DE s32be           PCM signed 32-bit big-endian
 DE s32le           PCM signed 32-bit little-endian
 DE s8              PCM signed 8-bit
 DE u16be           PCM unsigned 16-bit big-endian
 DE u16le           PCM unsigned 16-bit little-endian
 DE u24be           PCM unsigned 24-bit big-endian
 DE u24le           PCM unsigned 24-bit little-endian
 DE u32be           PCM unsigned 32-bit big-endian
 DE u32le           PCM unsigned 32-bit little-endian
 DE u8              PCM unsigned 8-bit
Flags identifying the sample data type:
CF_ENUM(AudioFormatFlags)
{
    kAudioFormatFlagIsFloat         = (1U << 0), // 0x1
    kAudioFormatFlagIsSignedInteger = (1U << 2), // 0x4
};
kAudioFormatFlagIsFloat and kAudioFormatFlagIsSignedInteger must not be used together.
If kAudioFormatFlagIsFloat is set, samples are floats; otherwise they are integers.
If kAudioFormatFlagIsSignedInteger is set, samples are signed integers; otherwise they are unsigned integers (the default).
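A minimal sketch of reading these flags back out of mFormatFlags:

#include <AudioToolbox/AudioToolbox.h>

// Classify the sample data type from mFormatFlags (assumes linear PCM).
static const char *SampleTypeDescription(AudioFormatFlags flags) {
    if (flags & kAudioFormatFlagIsFloat)         return "float";
    if (flags & kAudioFormatFlagIsSignedInteger) return "signed int";
    return "unsigned int"; // neither flag set: unsigned integer by default
}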
Sample endianness flag (set means big-endian, clear means little-endian; kAudioFormatFlagsNativeEndian resolves to the platform's native byte order):
CF_ENUM(AudioFormatFlags)
{
    kAudioFormatFlagIsBigEndian     = (1U << 1), // 0x2
    kLinearPCMFormatFlagIsBigEndian = kAudioFormatFlagIsBigEndian,
    // kAudioFormatFlagsNativeEndian is kAudioFormatFlagIsBigEndian on
    // big-endian hosts and 0 on little-endian hosts.
};
Channel layout, planar or interleaved:
CF_ENUM(AudioFormatFlags)
{
    kAudioFormatFlagIsNonInterleaved = (1U << 5), // 0x20
};
For stereo (two channels):
- Interleaved: there is a single buffer, inside which the left and right samples alternate:
  LRLRLRLRLRLR
- Non-interleaved (planar): the left and right channels are stored separately, one channel per plane (see the sketch after this list):
  - Plane 1: LLLLLLLL
  - Plane 2: RRRRRRRR
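To make the two layouts concrete, here is a small sketch (a hypothetical helper, not part of Core Audio) that splits an interleaved stereo buffer into two planes:

#include <stdint.h>
#include <stddef.h>

// Split interleaved stereo (LRLRLR...) into two planar buffers (LLL... / RRR...).
static void DeinterleaveStereoS16(const int16_t *interleaved, size_t frames,
                                  int16_t *left, int16_t *right) {
    for (size_t i = 0; i < frames; i++) {
        left[i]  = interleaved[2 * i];     // plane 1: left channel
        right[i] = interleaved[2 * i + 1]; // plane 2: right channel
    }
}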
Whether the samples fill the entire channel width:
CF_ENUM(AudioFormatFlags)
{
    kAudioFormatFlagIsPacked = (1U << 3), // 0x8
};
Set if the sample bits occupy the entire available bits for the channel,
clear if they are high or low aligned within the channel. Note that even if
this flag is clear, it is implied that this flag is set if the
AudioStreamBasicDescription is filled out such that the fields have the
following relationship:
((mBitsPerSample / 8) * mChannelsPerFrame) == mBytesPerFrame
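In code, that implied relationship can be checked like this (a sketch; note the struct's actual field is mBitsPerChannel, which the header comment above calls mBitsPerSample):

#include <AudioToolbox/AudioToolbox.h>
#include <stdbool.h>

// True if samples fill the whole channel, per the flag or the implied relationship.
static bool ASBDIsPacked(const AudioStreamBasicDescription *asbd) {
    if (asbd->mFormatFlags & kAudioFormatFlagIsPacked) return true;
    return ((asbd->mBitsPerChannel / 8) * asbd->mChannelsPerFrame) == asbd->mBytesPerFrame;
}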
Other Fields
mBitsPerChannel
The number of bits one sample of one channel occupies (16, 32, or 64 bits).
mChannelsPerFrame
The number of channels in a frame (1 for mono, 2 for stereo).
mBytesPerFrame
The number of bytes in a frame; a frame is the set of samples across all channels: mBytesPerFrame = mChannelsPerFrame * (mBitsPerChannel / 8)
mFramesPerPacket
The number of frames in a packet (1 for PCM, 1024 for AAC).
mBytesPerPacket
The number of bytes in a packet: mBytesPerPacket = mBytesPerFrame * mFramesPerPacket
mReserved (usually set to 0)
Pads the structure out to force an even 8 byte alignment.
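As a worked example with assumed values, here is how the fields fit together for interleaved, packed 44100 Hz stereo s16 PCM:

#include <AudioToolbox/AudioToolbox.h>

// Interleaved 44100 Hz stereo s16, with every derived field spelled out.
AudioStreamBasicDescription example = {
    .mSampleRate       = 44100,
    .mFormatID         = kAudioFormatLinearPCM,
    .mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    .mBitsPerChannel   = 16,
    .mChannelsPerFrame = 2,
    .mBytesPerFrame    = 2 * (16 / 8), // mChannelsPerFrame * (mBitsPerChannel / 8) = 4
    .mFramesPerPacket  = 1,            // PCM: one frame per packet
    .mBytesPerPacket   = 4 * 1,        // mBytesPerFrame * mFramesPerPacket
    .mReserved         = 0,
};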
Examples
pcm s16le
+ (AudioStreamBasicDescription)signed16FormatWithNumberOfChannels:(UInt32)channels
                                                       sampleRate:(float)sampleRate
                                                    isInterleaved:(BOOL)isInterleaved
{
    AudioStreamBasicDescription asbd;
    UInt32 bytesPerSample = sizeof(SInt16);
    asbd.mChannelsPerFrame = channels;
    asbd.mBitsPerChannel = 8 * bytesPerSample;
    asbd.mFramesPerPacket = 1;
    if (isInterleaved) {
        asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        // Interleaved: one buffer carries all channels, so a frame spans every channel.
        asbd.mBytesPerFrame = channels * bytesPerSample;
    } else {
        asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
        // Non-interleaved: the ASBD describes one mono buffer per channel,
        // so a frame within a buffer is a single sample.
        asbd.mBytesPerFrame = bytesPerSample;
    }
    asbd.mBytesPerPacket = asbd.mFramesPerPacket * asbd.mBytesPerFrame;
    asbd.mFormatID = kAudioFormatLinearPCM;
    asbd.mSampleRate = sampleRate;
    asbd.mReserved = 0;
    return asbd;
}
pcm f32le
+ (AudioStreamBasicDescription)float32FormatWithNumberOfChannels:(UInt32)channels
                                                      sampleRate:(float)sampleRate
                                                   isInterleaved:(BOOL)isInterleaved
{
    AudioStreamBasicDescription asbd;
    UInt32 bytesPerSample = sizeof(float);
    asbd.mChannelsPerFrame = channels;
    asbd.mBitsPerChannel = 8 * bytesPerSample;
    asbd.mFramesPerPacket = 1;
    if (isInterleaved) {
        asbd.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
        asbd.mBytesPerFrame = channels * bytesPerSample;
    } else {
        asbd.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
        asbd.mBytesPerFrame = bytesPerSample; // one mono buffer per channel
    }
    asbd.mBytesPerPacket = asbd.mFramesPerPacket * asbd.mBytesPerFrame;
    asbd.mFormatID = kAudioFormatLinearPCM;
    asbd.mSampleRate = sampleRate;
    asbd.mReserved = 0;
    return asbd;
}
Setting it up via AVAudioFormat
+ (AudioStreamBasicDescription)intFormatWithNumberOfChannels:(UInt32)channels
                                                  sampleRate:(float)sampleRate
                                               isInterleaved:(BOOL)isInterleaved
{
    AVAudioFormat *format = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                                             sampleRate:sampleRate
                                                               channels:channels
                                                            interleaved:isInterleaved];
    // Copy the struct out; streamDescription returns a pointer owned by the format object.
    AudioStreamBasicDescription desc = *(format.streamDescription);
    return desc;
}
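A quick usage sketch, assuming the three methods above live on a hypothetical class named ASBDFactory:

// Hypothetical call site; ASBDFactory is whatever class hosts the methods above.
AudioStreamBasicDescription manual = [ASBDFactory signed16FormatWithNumberOfChannels:2
                                                                          sampleRate:44100
                                                                       isInterleaved:YES];
AudioStreamBasicDescription viaAVF = [ASBDFactory intFormatWithNumberOfChannels:2
                                                                      sampleRate:44100
                                                                   isInterleaved:YES];
// Both should describe the same interleaved s16 stereo format;
// AVAudioFormat just fills in the struct for you.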
Note
kAudioFormatFlagIsNonInterleaved affects the layout of the AudioBufferList that AudioUnit delivers during capture.
Typically, when an ASBD is being used, the fields describe the complete layout
of the sample data in the buffers that are represented by this description -
where typically those buffers are represented by an AudioBuffer that is
contained in an AudioBufferList.
However, when an ASBD has the kAudioFormatFlagIsNonInterleaved flag, the
AudioBufferList has a different structure and semantic. In this case, the ASBD
fields will describe the format of ONE of the AudioBuffers that are contained in
the list, AND each AudioBuffer in the list is determined to have a single (mono)
channel of audio data. Then, the ASBD's mChannelsPerFrame will indicate the
total number of AudioBuffers that are contained within the AudioBufferList -
where each buffer contains one channel. This is used primarily with the
AudioUnit (and AudioConverter) representation of this list - and won't be found
in the AudioHardware usage of this structure.
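To illustrate, a small sketch (a hypothetical helper, not a Core Audio API) that checks whether an AudioBufferList matches a non-interleaved ASBD as described above:

#include <AudioToolbox/AudioToolbox.h>
#include <stdbool.h>

// Non-interleaved: the list must carry mChannelsPerFrame buffers, each mono.
static bool IsValidNonInterleavedList(const AudioBufferList *list,
                                      const AudioStreamBasicDescription *asbd) {
    if (!(asbd->mFormatFlags & kAudioFormatFlagIsNonInterleaved)) return false;
    if (list->mNumberBuffers != asbd->mChannelsPerFrame) return false;
    for (UInt32 i = 0; i < list->mNumberBuffers; i++) {
        if (list->mBuffers[i].mNumberChannels != 1) return false; // each buffer is mono
    }
    return true;
}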