In Android audio/video development, the material scattered around the web is fragmented and hard to learn from on your own. Fortunately, audio/video expert Jhuster published the task list "Android 音视频从入门到提高 - 任务列表" (Android audio/video from beginner to advanced). This is the second article in that series; it covers using AudioRecord and AudioTrack on the Android platform to capture and play PCM audio, and reading and writing WAV audio files.
Audio/video task list: see the linked list.
AudioRecord is the class the Android framework provides for recording audio; it delivers raw PCM data frame by frame.
For a full description of the class and its usage, consult the official documentation:
The AudioRecord class manages the audio resources for Java applications that record audio from the platform's audio-input hardware. This works by "pulling" (reading) the data from the AudioRecord object. While recording, the application only needs to call one of three methods in time to fetch the recorded data: read(byte[], int, int), read(short[], int, int), or read(ByteBuffer, int). Whichever method you choose, the storage format of the audio data must be configured in advance.
When recording starts, the AudioRecord needs an associated audio buffer to hold the incoming data. The size of this buffer is specified at construction time; it determines how much audio an AudioRecord object can hold before its data is read out (synchronously), i.e. how much can be captured in one pass. Audio data is read from the hardware in chunks no larger than the total recording buffer, so it can be read out over several calls, each fetching at most one buffer's worth of data.
Constructing an AudioRecord object:
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, frequency, channelConfiguration, EncodingBitRate, recordBufSize);
The parameters are configured as follows:
(1) audioSource: the audio capture source
The possible values are constants defined in the MediaRecorder.AudioSource class; MediaRecorder.AudioSource.MIC is the commonly used one.
(2) sampleRateInHz: the sample rate
Currently 44100 Hz is the only sample rate guaranteed to work on all Android devices.
(3) channelConfig: the channel configuration
The possible values are constants defined in the AudioFormat class; AudioFormat.CHANNEL_IN_MONO is the commonly used one.
(4) audioFormat: the sample format (bit depth)
The possible values are constants defined in the AudioFormat class; AudioFormat.ENCODING_PCM_16BIT is the commonly used one and is guaranteed to work on all Android devices.
(5) bufferSizeInBytes: the size of AudioRecord's internal audio buffer
The buffer must not be smaller than one audio frame:
int size = sample rate × sample size in bytes × capture duration × channel count
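As a sanity check, the formula can be computed directly. The sketch below is plain Java with no Android dependency (the class and method names are illustrative); in real code you would still call AudioRecord.getMinBufferSize() and use whichever value is larger.

```java
// A minimal sketch of the buffer-size formula above.
public class BufferSizeSketch {
    /** Bytes needed to hold `seconds` of PCM audio. */
    static int pcmBufferBytes(int sampleRateHz, int bitsPerSample, double seconds, int channels) {
        // bits -> bytes per sample, times samples per second, times duration, times channels
        return (int) (sampleRateHz * (bitsPerSample / 8) * seconds * channels);
    }

    public static void main(String[] args) {
        // 44100 Hz, 16-bit, mono, one second of audio:
        System.out.println(pcmBufferBytes(44100, 16, 1.0, 1)); // 88200
    }
}
```

So one second of 44100 Hz, 16-bit mono PCM needs 88200 bytes; stereo doubles that.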
First declare some member fields:
private AudioRecord audioRecord = null; // the AudioRecord object
private int recordBufSize = 0; // size of the record buffer
Query the minimum buffer size and create the AudioRecord:
public void createAudioRecord() {
recordBufSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, EncodingBitRate); // the smallest buffer size AudioRecord will accept
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, frequency, channelConfiguration, EncodingBitRate, recordBufSize);
}
byte data[] = new byte[recordBufSize];
audioRecord.startRecording();
isRecording = true;
FileOutputStream os = null;
try {
os = new FileOutputStream(filename);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
if (null != os) {
while (isRecording) {
int read = audioRecord.read(data, 0, recordBufSize);
// if the read returned no error code, write the data to the file
if (AudioRecord.ERROR_INVALID_OPERATION != read) {
try {
os.write(data, 0, read);
} catch (IOException e) {
e.printStackTrace();
}
}
}
try {
os.close();
} catch (IOException e) {
e.printStackTrace();
}
}
Setting the flag isRecording to false stops the while loop above; the data stops flowing and the stream is closed:
isRecording = false;
After stopping the recording, remember to release the resources:
if (null != audioRecord) {
audioRecord.stop();
audioRecord.release();
audioRecord = null;
recordingThread = null;
}
Note: the required permissions are WRITE_EXTERNAL_STORAGE and RECORD_AUDIO.
The Android SDK provides two audio capture APIs: MediaRecorder and AudioRecord. The former is a higher-level API that takes the microphone input, encodes and compresses it (e.g. AMR or MP3), and saves it to a file; the latter is closer to the hardware and offers freer, finer-grained control, delivering raw PCM frames. For a simple voice recorder that saves audio files, MediaRecorder is the better choice; if you need to run algorithms on the audio, compress it with a third-party encoder, or transmit it over the network, use AudioRecord. In fact, MediaRecorder itself uses AudioRecord underneath to talk to AudioFlinger in the Android framework. For real-time capture in live streaming, AudioRecord is naturally the one to use.
Construct an AudioTrack instance. Two constructors are introduced below.
The first constructor:
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode);
streamType: type of the audio stream
sampleRateInHz: sample rate
channelConfig: channel configuration
audioFormat: audio data format
bufferSizeInBytes: minimum required playback buffer size
mode: data loading mode
Example:
AudioTrack mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
sample,
channel,
bits,
minBufSize,
AudioTrack.MODE_STREAM);
The second constructor:
public AudioTrack(AudioAttributes attributes,
AudioFormat format,
int bufferSizeInBytes,
int mode,
int sessionId);
Example:
audioTrack = new AudioTrack(
new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build(),
new AudioFormat.Builder().setSampleRate(SAMPLE_RATE_INHZ)
.setEncoding(AUDIO_FORMAT)
.setChannelMask(channelConfig)
.build(),
minBufferSize,
AudioTrack.MODE_STREAM,
AudioManager.AUDIO_SESSION_ID_GENERATE);
AudioTrack handles the output of audio data on the Android platform. It has two data loading modes, MODE_STREAM and MODE_STATIC, which correspond to two quite different usage scenarios.
MODE_STREAM: in this mode, audio data is written into the AudioTrack with repeated write calls. This resembles writing data to a file with the write system call, but each call copies data from the caller's buffer into AudioTrack's internal buffer, which introduces some latency. To address this, AudioTrack provides a second mode.
MODE_STATIC: in this mode, all of the data is transferred into AudioTrack's internal buffer with a single write call before play; nothing more needs to be transferred afterwards. This mode suits small, latency-sensitive sounds such as ringtones. Its drawback is that a single write cannot carry too much data, or the system cannot allocate enough memory to hold it all. In STATIC mode you must call write first and then play.
(1) MODE_STATIC mode
Playing audio in MODE_STATIC mode looks like this (note: in STATIC mode, write the data first, then call play):
/**
 * Playback in STATIC mode.
 * In STATIC mode, write the data first, then call play.
 */
private void playInModeStatic() {
// STATIC mode: write all the audio data into AudioTrack's internal buffer in one go
new AsyncTask<Void, Void, Void>() {
@Override
protected Void doInBackground(Void... params) {
try {
// read pcm data
// File file = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.pcm");
// read wav data
File file = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.wav");
InputStream in = new FileInputStream(file);
try {
ByteArrayOutputStream out = new ByteArrayOutputStream();
for (int b; (b = in.read()) != -1; ) {
out.write(b);
}
audioData = out.toByteArray();
} finally {
in.close();
}
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onPostExecute(Void v) {
audioTrack = new AudioTrack(
new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build(),
new AudioFormat.Builder().setSampleRate(SAMPLE_RATE_INHZ)
.setEncoding(AUDIO_FORMAT)
.setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
.build(),
audioData.length,
AudioTrack.MODE_STATIC,
AudioManager.AUDIO_SESSION_ID_GENERATE);
audioTrack.write(audioData, 0, audioData.length);
audioTrack.play();
}
}.execute();
}
(2) MODE_STREAM mode
Playing audio in MODE_STREAM mode looks like this:
/**
 * Playback in STREAM mode
 */
private void playInModeStream() {
/*
 * SAMPLE_RATE_INHZ: sample rate of the pcm audio
 * channelConfig: channel configuration of the pcm audio
 * AUDIO_FORMAT: format of the pcm audio
 */
int channelConfig = AudioFormat.CHANNEL_OUT_MONO;
final int minBufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE_INHZ, channelConfig, AUDIO_FORMAT);
audioTrack = new AudioTrack(
new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build(),
new AudioFormat.Builder().setSampleRate(SAMPLE_RATE_INHZ)
.setEncoding(AUDIO_FORMAT)
.setChannelMask(channelConfig)
.build(),
minBufferSize,
AudioTrack.MODE_STREAM,
AudioManager.AUDIO_SESSION_ID_GENERATE);
audioTrack.play();
File file = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.pcm");
try {
fileInputStream = new FileInputStream(file);
new Thread(new Runnable() {
@Override
public void run() {
try {
byte[] tempBuffer = new byte[minBufferSize];
while (fileInputStream.available() > 0) {
int readCount = fileInputStream.read(tempBuffer);
if (readCount == AudioTrack.ERROR_INVALID_OPERATION ||
readCount == AudioTrack.ERROR_BAD_VALUE) {
continue;
}
if (readCount != 0 && readCount != -1) {
audioTrack.write(tempBuffer, 0, readCount);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}).start();
} catch (IOException e) {
e.printStackTrace();
}
}
The AudioTrack constructor takes the parameter AudioManager.STREAM_MUSIC. Its meaning relates to how the Android system manages and classifies audio streams.
Android divides system sounds into several stream types; common ones include:
· STREAM_ALARM: alarms
· STREAM_MUSIC: music, e.g. media playback
· STREAM_RING: ringtones
· STREAM_SYSTEM: system sounds, e.g. the low-battery alert or the lock-screen sound
· STREAM_VOICE_CALL: voice calls
Note: these types have nothing to do with the audio data itself. For example, both the MUSIC and RING types could carry the same MP3 song. There is also no fixed rule for choosing a stream type; a ringtone preview, for instance, may use the MUSIC type. The classification exists so the audio system can apply different management policies to different streams.
(1) Following the AudioRecord capture flow above, all of the audio data was written to a file; yet after stopping the recording and opening the file in a media player, it will not play. Why?
Answer: the pipeline ran and the data is indeed in the file, but the file contains nothing except the most primitive audio data, known as "raw" (i.e. unprocessed material). A player that opens it has no way of knowing what format the data is stored in or how to decode it, so of course it cannot play it.
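To see why, note that raw PCM really is just a bare stream of sample values. The following plain-Java sketch (illustrative only, not part of the demo code) produces one second of perfectly valid 16-bit little-endian mono PCM, yet nothing in the bytes records the sample rate, channel count, or bit depth:

```java
import java.io.ByteArrayOutputStream;

// Raw PCM carries no metadata: these bytes are a valid audio signal,
// but a player cannot know how to interpret them without a header.
public class RawPcmSketch {
    /** Generate n samples of a 440 Hz sine at `rate` Hz, 16-bit LE mono. */
    static byte[] sinePcm(int n, int rate) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < n; i++) {
            short s = (short) (Math.sin(2 * Math.PI * 440 * i / rate) * Short.MAX_VALUE);
            out.write(s & 0xff);        // low byte first (little-endian)
            out.write((s >> 8) & 0xff); // high byte
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] pcm = sinePcm(44100, 44100); // one second of audio
        System.out.println(pcm.length);     // 88200 bytes, and no metadata at all
    }
}
```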
(2) So how can the recording be played in a media player?
Answer: prepend WAVE HEAD data at the start of the file, i.e. a file header. Only with the header can a player correctly recognize what the content is and then parse and play it. The header layout is described in "Play a WAV file on an AudioTrack".
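The core of the header is little-endian packing: each multi-byte field (chunk sizes, sample rate, byte rate) is stored least-significant byte first. A minimal sketch of the idea (plain Java; `putLeInt` is a hypothetical helper, not part of the utility below):

```java
// Little-endian packing as used by the 44-byte WAV header.
public class LittleEndianSketch {
    /** Write a 32-bit value into dst at offset, least-significant byte first. */
    static void putLeInt(byte[] dst, int offset, long value) {
        dst[offset]     = (byte) (value & 0xff);
        dst[offset + 1] = (byte) ((value >> 8) & 0xff);
        dst[offset + 2] = (byte) ((value >> 16) & 0xff);
        dst[offset + 3] = (byte) ((value >> 24) & 0xff);
    }

    public static void main(String[] args) {
        byte[] buf = new byte[4];
        putLeInt(buf, 0, 44100); // the sample-rate field lives at header offset 24
        // 44100 = 0xAC44 -> stored as 44 AC 00 00
        System.out.printf("%02X %02X %02X %02X%n", buf[0], buf[1], buf[2], buf[3]);
    }
}
```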
The code for adding the WAVE file header is as follows:
public class PcmToWavUtil {
/**
 * Size of the audio buffer
 */
private int mBufferSize;
/**
 * Sample rate
 */
private int mSampleRate;
/**
 * Channel configuration
 */
private int mChannel;
/**
 * @param sampleRate the sample rate
 * @param channel the channel configuration
 * @param encoding the audio data format
 */
PcmToWavUtil(int sampleRate, int channel, int encoding) {
this.mSampleRate = sampleRate;
this.mChannel = channel;
this.mBufferSize = AudioRecord.getMinBufferSize(mSampleRate, mChannel, encoding);
}
/**
 * Convert a pcm file to a wav file
 *
 * @param inFilename source file path
 * @param outFilename target file path
 */
public void pcmToWav(String inFilename, String outFilename) {
FileInputStream in;
FileOutputStream out;
long totalAudioLen;
long totalDataLen;
long longSampleRate = mSampleRate;
int channels = mChannel == AudioFormat.CHANNEL_IN_MONO ? 1 : 2;
long byteRate = 16 * mSampleRate * channels / 8;
byte[] data = new byte[mBufferSize];
try {
in = new FileInputStream(inFilename);
out = new FileOutputStream(outFilename);
totalAudioLen = in.getChannel().size();
totalDataLen = totalAudioLen + 36;
// write the wav header into the target file
writeWaveFileHeader(out, totalAudioLen, totalDataLen, longSampleRate, channels, byteRate);
// append the pcm data from the source file, writing only the bytes actually read
int len;
while ((len = in.read(data)) != -1) {
out.write(data, 0, len);
}
in.close();
out.close();
} catch (IOException e) {
e.printStackTrace();
}
}
/**
 * Write the 44-byte wav file header
 */
private void writeWaveFileHeader(FileOutputStream out, long totalAudioLen,
long totalDataLen, long longSampleRate, int channels, long byteRate)
throws IOException {
byte[] header = new byte[44];
// RIFF/WAVE header
header[0] = 'R';
header[1] = 'I';
header[2] = 'F';
header[3] = 'F';
header[4] = (byte) (totalDataLen & 0xff);
header[5] = (byte) ((totalDataLen >> 8) & 0xff);
header[6] = (byte) ((totalDataLen >> 16) & 0xff);
header[7] = (byte) ((totalDataLen >> 24) & 0xff);
//WAVE
header[8] = 'W';
header[9] = 'A';
header[10] = 'V';
header[11] = 'E';
// 'fmt ' chunk
header[12] = 'f';
header[13] = 'm';
header[14] = 't';
header[15] = ' ';
// 4 bytes: size of 'fmt ' chunk
header[16] = 16;
header[17] = 0;
header[18] = 0;
header[19] = 0;
// format = 1
header[20] = 1;
header[21] = 0;
header[22] = (byte) channels;
header[23] = 0;
header[24] = (byte) (longSampleRate & 0xff);
header[25] = (byte) ((longSampleRate >> 8) & 0xff);
header[26] = (byte) ((longSampleRate >> 16) & 0xff);
header[27] = (byte) ((longSampleRate >> 24) & 0xff);
header[28] = (byte) (byteRate & 0xff);
header[29] = (byte) ((byteRate >> 8) & 0xff);
header[30] = (byte) ((byteRate >> 16) & 0xff);
header[31] = (byte) ((byteRate >> 24) & 0xff);
// block align (channels * bitsPerSample / 8)
header[32] = (byte) (channels * 16 / 8);
header[33] = 0;
// bits per sample
header[34] = 16;
header[35] = 0;
//data
header[36] = 'd';
header[37] = 'a';
header[38] = 't';
header[39] = 'a';
header[40] = (byte) (totalAudioLen & 0xff);
header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
header[43] = (byte) ((totalAudioLen >> 24) & 0xff);
out.write(header, 0, 44);
}
}
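To double-check a header produced by the utility above, a small plain-Java helper can read the fields back. This is an illustrative sketch (`looksLikeWav` is a hypothetical name, not part of the original code); the field offsets follow the canonical 44-byte WAV layout used above.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sanity-check a 44-byte WAV header: magic strings plus two key fields.
public class WavHeaderCheck {
    static boolean looksLikeWav(byte[] header, int expectedRate, int expectedChannels) {
        if (header.length < 44) return false;
        ByteBuffer b = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
        boolean riff = header[0] == 'R' && header[1] == 'I' && header[2] == 'F' && header[3] == 'F';
        boolean wave = header[8] == 'W' && header[9] == 'A' && header[10] == 'V' && header[11] == 'E';
        int channels = b.getShort(22); // channel count at offset 22
        int rate = b.getInt(24);       // sample rate at offset 24
        return riff && wave && channels == expectedChannels && rate == expectedRate;
    }
}
```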
Playing WAV audio data with AudioTrack:
/**
 * Playback in STATIC mode.
 * In STATIC mode, write the data first, then call play.
 */
private void playInModeStatic() {
// STATIC mode: write all the audio data into AudioTrack's internal buffer in one go
new AsyncTask<Void, Void, Void>() {
@Override
protected Void doInBackground(Void... params) {
try {
// read the wav data
File file = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.wav");
InputStream in = new FileInputStream(file);
try {
ByteArrayOutputStream out = new ByteArrayOutputStream();
for (int b; (b = in.read()) != -1; ) {
out.write(b);
}
audioData = out.toByteArray();
} finally {
in.close();
}
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onPostExecute(Void v) {
audioTrack = new AudioTrack(
new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build(),
new AudioFormat.Builder().setSampleRate(SAMPLE_RATE_INHZ)
.setEncoding(AUDIO_FORMAT)
.setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
.build(),
audioData.length,
AudioTrack.MODE_STATIC,
AudioManager.AUDIO_SESSION_ID_GENERATE);
audioTrack.write(audioData, 0, audioData.length);
audioTrack.play();
}
}.execute();
}
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:gravity="center">
<Button
android:id="@+id/btn_control"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:text="@string/start_record" />
<Button
android:id="@+id/btn_play"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:text="@string/start_play" />
<Button
android:id="@+id/btn_convert"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:text="PCM to WAV" />
<Button
android:id="@+id/btn_play_wav"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:text="@string/start_play_wav" />
</LinearLayout>
strings.xml
<resources>
<string name="app_name">audio demo</string>
<string name="start_record">Start recording</string>
<string name="stop_record">Stop recording</string>
<string name="start_play">Play PCM</string>
<string name="start_play_wav">Play WAV</string>
<string name="stop_play">Stop</string>
</resources>
Add the required permissions:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
package com.lzacking.audiodemo;
import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import android.Manifest;
import android.content.pm.PackageManager;
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.AsyncTask;
import android.os.Build;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import static com.lzacking.audiodemo.GlobalConfig.AUDIO_FORMAT;
import static com.lzacking.audiodemo.GlobalConfig.CHANNEL_CONFIG;
import static com.lzacking.audiodemo.GlobalConfig.SAMPLE_RATE_INHZ;
public class MainActivity extends AppCompatActivity implements View.OnClickListener {
private static final int MY_PERMISSIONS_REQUEST = 1001;
private static final String TAG = "MainActivity";
private Button mBtnControl;
private Button mBtnPlay;
private Button mBtnPlayWav;
/**
 * Runtime permissions to request
 */
private String[] permissions = new String[] {
Manifest.permission.RECORD_AUDIO,
Manifest.permission.WRITE_EXTERNAL_STORAGE
};
/**
 * Permissions the user has denied
 */
private List<String> mPermissionList = new ArrayList<>();
private boolean isRecording;
private AudioRecord audioRecord;
private Button mBtnConvert;
private AudioTrack audioTrack;
private byte[] audioData;
private FileInputStream fileInputStream;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// record to a pcm file
mBtnControl = (Button) findViewById(R.id.btn_control);
mBtnControl.setOnClickListener(this);
// play the pcm file
mBtnPlay = (Button) findViewById(R.id.btn_play);
mBtnPlay.setOnClickListener(this);
// convert the recorded pcm file to a wav file
mBtnConvert = (Button) findViewById(R.id.btn_convert);
mBtnConvert.setOnClickListener(this);
// play the wav file
mBtnPlayWav = (Button) findViewById(R.id.btn_play_wav);
mBtnPlayWav.setOnClickListener(this);
checkPermissions();
}
@Override
public void onClick(View view) {
switch (view.getId()) {
case R.id.btn_control:
Button button = (Button) view;
if (button.getText().toString().equals(getString(R.string.start_record))) {
button.setText(getString(R.string.stop_record));
startRecord();
} else {
button.setText(getString(R.string.start_record));
stopRecord();
}
break;
case R.id.btn_play:
Button btn = (Button) view;
String string = btn.getText().toString();
Log.i(TAG, "onClick: " + string);
if (string.equals(getString(R.string.start_play))) {
btn.setText(getString(R.string.stop_play));
// play the pcm file
playInModeStream();
} else {
btn.setText(getString(R.string.start_play));
stopPlay();
}
break;
case R.id.btn_convert:
PcmToWavUtil pcmToWavUtil = new PcmToWavUtil(SAMPLE_RATE_INHZ, CHANNEL_CONFIG, AUDIO_FORMAT);
File pcmFile = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.pcm");
File wavFile = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.wav");
// remove any previous conversion result
if (wavFile.exists()) {
wavFile.delete();
}
pcmToWavUtil.pcmToWav(pcmFile.getAbsolutePath(), wavFile.getAbsolutePath());
break;
case R.id.btn_play_wav:
Button btnWav = (Button) view;
String stringWav = btnWav.getText().toString();
Log.i(TAG, "onClick: " + stringWav);
if (stringWav.equals(getString(R.string.start_play_wav))) {
btnWav.setText(getString(R.string.stop_play));
// play the wav file
playInModeStatic();
} else {
btnWav.setText(getString(R.string.start_play_wav));
stopPlay();
}
break;
default:
break;
}
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
if (requestCode == MY_PERMISSIONS_REQUEST) {
for (int i = 0; i < grantResults.length; i++) {
if (grantResults[i] != PackageManager.PERMISSION_GRANTED) {
Log.i(TAG, permissions[i] + " permission was denied by the user!");
}
}
// Runtime permission handling is not the focus of this demo, so nothing further is done here; please grant the requested permissions.
}
}
/**
 * Start recording
 */
public void startRecord() {
// get the minimum buffer size
final int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE_INHZ, CHANNEL_CONFIG, AUDIO_FORMAT);
// create the AudioRecord
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
SAMPLE_RATE_INHZ,
CHANNEL_CONFIG,
AUDIO_FORMAT,
minBufferSize);
// allocate a buffer for the reads
final byte data[] = new byte[minBufferSize];
final File file = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.pcm");
// remove any previous recording
if (file.exists()) {
file.delete();
}
// start recording
audioRecord.startRecording();
isRecording = true;
new Thread(new Runnable() {
@Override
public void run() {
FileOutputStream os = null;
try {
os = new FileOutputStream(file);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
if (null != os) {
while (isRecording) {
int read = audioRecord.read(data, 0, minBufferSize);
// if the read returned no error code, write the data to the file
if (AudioRecord.ERROR_INVALID_OPERATION != read) {
try {
os.write(data, 0, read);
} catch (IOException e) {
e.printStackTrace();
}
}
}
try {
Log.i(TAG, "run: close file output stream !");
os.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}).start();
}
/**
 * Stop recording
 */
public void stopRecord() {
isRecording = false;
// release the resources
if (null != audioRecord) {
audioRecord.stop();
audioRecord.release();
audioRecord = null;
}
}
private void checkPermissions() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
for (int i = 0; i < permissions.length; i++) {
if (ContextCompat.checkSelfPermission(this, permissions[i]) !=
PackageManager.PERMISSION_GRANTED) {
mPermissionList.add(permissions[i]);
}
}
if (!mPermissionList.isEmpty()) {
String[] permissions = mPermissionList.toArray(new String[mPermissionList.size()]);
ActivityCompat.requestPermissions(this, permissions, MY_PERMISSIONS_REQUEST);
}
}
}
/**
 * Playback in STREAM mode
 */
private void playInModeStream() {
/*
 * SAMPLE_RATE_INHZ: sample rate of the pcm audio
 * channelConfig: channel configuration of the pcm audio
 * AUDIO_FORMAT: format of the pcm audio
 */
int channelConfig = AudioFormat.CHANNEL_OUT_MONO;
final int minBufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE_INHZ, channelConfig, AUDIO_FORMAT);
audioTrack = new AudioTrack(
new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build(),
new AudioFormat.Builder().setSampleRate(SAMPLE_RATE_INHZ)
.setEncoding(AUDIO_FORMAT)
.setChannelMask(channelConfig)
.build(),
minBufferSize,
AudioTrack.MODE_STREAM,
AudioManager.AUDIO_SESSION_ID_GENERATE);
audioTrack.play();
File file = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.pcm");
try {
fileInputStream = new FileInputStream(file);
new Thread(new Runnable() {
@Override
public void run() {
try {
byte[] tempBuffer = new byte[minBufferSize];
while (fileInputStream.available() > 0) {
int readCount = fileInputStream.read(tempBuffer);
if (readCount == AudioTrack.ERROR_INVALID_OPERATION ||
readCount == AudioTrack.ERROR_BAD_VALUE) {
continue;
}
if (readCount != 0 && readCount != -1) {
audioTrack.write(tempBuffer, 0, readCount);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}).start();
} catch (IOException e) {
e.printStackTrace();
}
}
/**
 * Playback in STATIC mode.
 * In STATIC mode, write the data first, then call play.
 */
private void playInModeStatic() {
// STATIC mode: write all the audio data into AudioTrack's internal buffer in one go
new AsyncTask<Void, Void, Void>() {
@Override
protected Void doInBackground(Void... params) {
try {
// read the wav data
File file = new File(getExternalFilesDir(Environment.DIRECTORY_MUSIC), "test.wav");
InputStream in = new FileInputStream(file);
try {
ByteArrayOutputStream out = new ByteArrayOutputStream();
for (int b; (b = in.read()) != -1; ) {
out.write(b);
}
audioData = out.toByteArray();
} finally {
in.close();
}
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onPostExecute(Void v) {
audioTrack = new AudioTrack(
new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build(),
new AudioFormat.Builder().setSampleRate(SAMPLE_RATE_INHZ)
.setEncoding(AUDIO_FORMAT)
.setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
.build(),
audioData.length,
AudioTrack.MODE_STATIC,
AudioManager.AUDIO_SESSION_ID_GENERATE);
audioTrack.write(audioData, 0, audioData.length);
audioTrack.play();
}
}.execute();
}
/**
 * Stop playback
 */
private void stopPlay() {
if (audioTrack != null) {
audioTrack.stop();
audioTrack.release();
}
}
}
package com.lzacking.audiodemo;
import android.media.AudioFormat;
public class GlobalConfig {
/**
 * Sample rate. Currently 44100 Hz is the only rate guaranteed to work on all devices,
 * but other rates (22050, 16000, 11025) also work on some devices.
 */
public static final int SAMPLE_RATE_INHZ = 44100;
/**
 * Channel configuration: CHANNEL_IN_MONO or CHANNEL_IN_STEREO.
 * CHANNEL_IN_MONO is guaranteed to work on all devices.
 */
public static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
/**
 * Format of the returned audio data: ENCODING_PCM_8BIT, ENCODING_PCM_16BIT, or ENCODING_PCM_FLOAT.
 */
public static final int AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
}
package com.lzacking.audiodemo;
import android.media.AudioFormat;
import android.media.AudioRecord;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class PcmToWavUtil {
/**
 * Size of the audio buffer
 */
private int mBufferSize;
/**
 * Sample rate
 */
private int mSampleRate;
/**
 * Channel configuration
 */
private int mChannel;
/**
 * @param sampleRate the sample rate
 * @param channel the channel configuration
 * @param encoding the audio data format
 */
PcmToWavUtil(int sampleRate, int channel, int encoding) {
this.mSampleRate = sampleRate;
this.mChannel = channel;
this.mBufferSize = AudioRecord.getMinBufferSize(mSampleRate, mChannel, encoding);
}
/**
 * Convert a pcm file to a wav file
 *
 * @param inFilename source file path
 * @param outFilename target file path
 */
public void pcmToWav(String inFilename, String outFilename) {
FileInputStream in;
FileOutputStream out;
long totalAudioLen;
long totalDataLen;
long longSampleRate = mSampleRate;
int channels = mChannel == AudioFormat.CHANNEL_IN_MONO ? 1 : 2;
long byteRate = 16 * mSampleRate * channels / 8;
byte[] data = new byte[mBufferSize];
try {
in = new FileInputStream(inFilename);
out = new FileOutputStream(outFilename);
totalAudioLen = in.getChannel().size();
totalDataLen = totalAudioLen + 36;
writeWaveFileHeader(out, totalAudioLen, totalDataLen,
longSampleRate, channels, byteRate);
int len;
while ((len = in.read(data)) != -1) {
out.write(data, 0, len);
}
in.close();
out.close();
} catch (IOException e) {
e.printStackTrace();
}
}
/**
 * Write the 44-byte wav file header
 */
private void writeWaveFileHeader(FileOutputStream out, long totalAudioLen,
long totalDataLen, long longSampleRate, int channels, long byteRate)
throws IOException {
byte[] header = new byte[44];
// RIFF/WAVE header
header[0] = 'R';
header[1] = 'I';
header[2] = 'F';
header[3] = 'F';
header[4] = (byte) (totalDataLen & 0xff);
header[5] = (byte) ((totalDataLen >> 8) & 0xff);
header[6] = (byte) ((totalDataLen >> 16) & 0xff);
header[7] = (byte) ((totalDataLen >> 24) & 0xff);
// WAVE
header[8] = 'W';
header[9] = 'A';
header[10] = 'V';
header[11] = 'E';
// 'fmt ' chunk
header[12] = 'f';
header[13] = 'm';
header[14] = 't';
header[15] = ' ';
// 4 bytes: size of 'fmt ' chunk
header[16] = 16;
header[17] = 0;
header[18] = 0;
header[19] = 0;
// format = 1
header[20] = 1;
header[21] = 0;
header[22] = (byte) channels;
header[23] = 0;
header[24] = (byte) (longSampleRate & 0xff);
header[25] = (byte) ((longSampleRate >> 8) & 0xff);
header[26] = (byte) ((longSampleRate >> 16) & 0xff);
header[27] = (byte) ((longSampleRate >> 24) & 0xff);
header[28] = (byte) (byteRate & 0xff);
header[29] = (byte) ((byteRate >> 8) & 0xff);
header[30] = (byte) ((byteRate >> 16) & 0xff);
header[31] = (byte) ((byteRate >> 24) & 0xff);
// block align (channels * bitsPerSample / 8)
header[32] = (byte) (channels * 16 / 8);
header[33] = 0;
// bits per sample
header[34] = 16;
header[35] = 0;
//data
header[36] = 'd';
header[37] = 'a';
header[38] = 't';
header[39] = 'a';
header[40] = (byte) (totalAudioLen & 0xff);
header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
header[43] = (byte) ((totalAudioLen >> 24) & 0xff);
out.write(header, 0, 44);
}
}
Source code:
Android audio/video development basics (2): using AudioRecord and AudioTrack on the Android platform to capture and play PCM audio, and to read and write WAV audio files