AudioRecord is a class provided by the Android framework for capturing audio; it delivers the raw PCM audio data frame by frame.
AudioRecord is configured as follows:
Configuring AudioRecord
private var isRecording = false
private var audioRecord: AudioRecord? = null

val sampleRateInHz = 44100
val channelConfig = AudioFormat.CHANNEL_IN_MONO
val audioFormat = AudioFormat.ENCODING_PCM_16BIT
// Minimum buffer size for this sample rate / channel / encoding combination
val bufferSizeInBytes =
    AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat)
audioRecord = AudioRecord(
    MediaRecorder.AudioSource.MIC,
    sampleRateInHz,
    channelConfig,
    audioFormat,
    bufferSizeInBytes
)
Start recording and write the data to a file
audioRecord?.let {
    it.startRecording()
    isRecording = true
    // Write the captured PCM data to a file on a background thread
    lifecycleScope.launch(Dispatchers.IO) {
        val path =
            getExternalFilesDir(null)?.absolutePath + File.separator + System.currentTimeMillis() + ".pcm"
        val pcmFile = File(path)
        if (!pcmFile.exists()) {
            pcmFile.createNewFile()
        }
        val data = ByteArray(bufferSizeInBytes)
        try {
            FileOutputStream(path).use { fileOutputStream ->
                while (isRecording) {
                    // read() returns the number of bytes read, or a negative error code
                    val read = it.read(data, 0, bufferSizeInBytes)
                    if (read > 0) {
                        // Write only the bytes actually read, not the whole buffer
                        fileOutputStream.write(data, 0, read)
                    }
                }
            }
        } catch (exp: Exception) {
            Log.e(tag, "Exception", exp)
        }
    }
}
Stop recording
isRecording = false
audioRecord?.let {
it.stop()
it.release()
audioRecord = null
}
With that, the recording feature is done. One more thing: don't forget to request the RECORD_AUDIO permission!
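Since RECORD_AUDIO is a dangerous permission, declaring it in the manifest is not enough on Android 6.0+; it also has to be granted at runtime. A minimal sketch inside an Activity (the `REQUEST_RECORD_AUDIO` request code and the helper name are arbitrary choices for this example):

```kotlin
// In AndroidManifest.xml:
// <uses-permission android:name="android.permission.RECORD_AUDIO" />

private val REQUEST_RECORD_AUDIO = 1

// Returns true if the permission is already granted; otherwise asks the user
// and returns false — start recording only after the grant callback arrives.
private fun ensureRecordAudioPermission(): Boolean {
    return if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
        == PackageManager.PERMISSION_GRANTED
    ) {
        true
    } else {
        ActivityCompat.requestPermissions(
            this, arrayOf(Manifest.permission.RECORD_AUDIO), REQUEST_RECORD_AUDIO
        )
        false
    }
}
```

The grant result is delivered to `onRequestPermissionsResult` (or an `ActivityResultLauncher` on newer AndroidX versions), where recording can then be started.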
Constructing an AudioTrack
val sampleRateInHz = 44100
val channelConfig = AudioFormat.CHANNEL_OUT_MONO
val audioFormat = AudioFormat.ENCODING_PCM_16BIT
// For playback, query AudioTrack's minimum buffer size (not AudioRecord's)
val bufferSizeInBytes =
    AudioTrack.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat)
val attributes = AudioAttributes.Builder()
    .setUsage(AudioAttributes.USAGE_MEDIA)
    .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
    .build()
val format = AudioFormat.Builder()
    .setSampleRate(sampleRateInHz)
    .setEncoding(audioFormat)
    .setChannelMask(channelConfig)
    .build()
AudioTrack has two data-loading modes (MODE_STREAM and MODE_STATIC), which correspond to two completely different usage scenarios.
MODE_STREAM: audio data is written to the AudioTrack chunk by chunk via repeated write() calls. Each call copies data from the user-supplied buffer into AudioTrack's internal buffer, which introduces some latency.
val audioTrack = AudioTrack(
    attributes,
    format,
    bufferSizeInBytes,
    AudioTrack.MODE_STREAM,
    AudioManager.AUDIO_SESSION_ID_GENERATE
)
audioTrack.play()
lifecycleScope.launch(Dispatchers.IO) {
    val path = getExternalFilesDir(null)?.absolutePath + File.separator + fileName
    val file = File(path)
    try {
        val dataBuffer = ByteArray(bufferSizeInBytes)
        FileInputStream(file).use { fileInputStream ->
            while (true) {
                // FileInputStream.read returns the byte count, or -1 at end of file
                val readCount = fileInputStream.read(dataBuffer)
                if (readCount == -1) break
                if (readCount > 0) {
                    audioTrack.write(dataBuffer, 0, readCount)
                }
            }
        }
    } catch (e: IOException) {
        Log.e(tag, "IOException:${e.message}")
    }
}
MODE_STATIC: all the audio data is passed into AudioTrack's internal buffer with a single write() call before play(); no further writes are needed afterwards. This mode suits small, latency-sensitive clips such as ringtones. Its drawback is that a single write() cannot carry too much data, or the system will fail to allocate enough memory to hold it all. With MODE_STATIC, write() must be called first, then play().
lifecycleScope.launch(Dispatchers.IO) {
    val path = getExternalFilesDir(null)?.absolutePath + File.separator + fileName
    val file = File(path)
    try {
        // Read the whole PCM file into memory in one go
        val audioData = FileInputStream(file).use { it.readBytes() }
        val audioTrack = AudioTrack(
            attributes,
            format,
            audioData.size,
            AudioTrack.MODE_STATIC,
            AudioManager.AUDIO_SESSION_ID_GENERATE
        )
        // In STATIC mode, write all the data first, then call play()
        audioTrack.write(audioData, 0, audioData.size)
        audioTrack.play()
    } catch (e: IOException) {
        Log.e(tag, "IOException:${e.message}")
    }
}
To stop playback, call:
audioTrack.stop()
audioTrack.release()