This article builds on Camera capture with TextureView preview, manual focus and auto focus (part 1). On top of basic Camera usage, it adds recording and photo capture, modeled on WeChat's tap-to-shoot and long-press-to-record interaction.
There are plenty of WeChat-style camera clones online, but most use MediaRecorder, which is relatively simple. Unlike them, what I want here is to learn the whole MP4 pipeline: capture, encoding, muxing into MP4, then demuxing, decoding and playback. That deepens the understanding of audio and video and lays groundwork for further study.
1. Collect YUV data via the Camera.PreviewCallback callback interface:
public void onPreviewFrame(byte[] bytes, Camera camera)
2. For a photo, save the NV21 data to a file, or convert it to a Bitmap and show it in an ImageView.
You will find the image shows up with the wrong rotation; the NV21 data has to be rotated by an angle that depends on whether the front or the back camera is in use.
public static boolean saveNV21(byte[] data, int width, int height, String path) {
    try {
        FileOutputStream outputStream = new FileOutputStream(path);
        // back camera: rotate 90 degrees; front camera: rotate 270 degrees
        if (CameraUtil.isBackCamera()) {
            data = rotateYUV420Degree90(data, height, width);
        } else {
            data = rotateYUV420Degree270(data, height, width);
        }
        YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 70, outputStream);
        outputStream.flush();
        outputStream.close();
        return true;
    } catch (IOException e) { // FileNotFoundException is an IOException, one catch suffices
        e.printStackTrace();
    }
    return false;
}
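saveNV21() calls rotateYUV420Degree90()/rotateYUV420Degree270(), which are not listed in the article. Below is a sketch of the widely used 90-degree NV21 rotation (the 270-degree variant is analogous); this is my reconstruction, not necessarily the project's exact helper:

```java
class YuvRotate {
    // Rotate an NV21 frame 90 degrees clockwise.
    // The output frame has dimensions imageHeight x imageWidth.
    static byte[] rotateYUV420Degree90(byte[] data, int imageWidth, int imageHeight) {
        byte[] yuv = new byte[imageWidth * imageHeight * 3 / 2];
        // Y plane: source column x becomes destination row x, read bottom-up
        int i = 0;
        for (int x = 0; x < imageWidth; x++) {
            for (int y = imageHeight - 1; y >= 0; y--) {
                yuv[i++] = data[y * imageWidth + x];
            }
        }
        // Chroma plane: walk the interleaved V/U pairs the same way, filling from the end
        // so each pair stays together in V,U order
        i = imageWidth * imageHeight * 3 / 2 - 1;
        for (int x = imageWidth - 1; x > 0; x -= 2) {
            for (int y = 0; y < imageHeight / 2; y++) {
                yuv[i--] = data[imageWidth * imageHeight + y * imageWidth + x];
                yuv[i--] = data[imageWidth * imageHeight + y * imageWidth + (x - 1)];
            }
        }
        return yuv;
    }
}
```

Note the rotated frame swaps width and height, which is why the calls above pass (height, width) and why getting this pair wrong shows up as a garbled image.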
Taking a photo is straightforward. Recording splits into three parts: capturing and encoding video, capturing and encoding audio, and muxing both into an MP4. The pitfalls here far outnumber the photo path's.
1. Take the same YUV data from the onPreviewFrame() callback and encode it to H.264 with MediaCodec
2. Capture raw PCM audio with AudioRecord and encode it to AAC with MediaCodec
3. Mux the H.264 video and the AAC audio together with MediaMuxer
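Before step 1 can run, the H.264 encoder has to be configured. A configuration sketch follows; the bitrate, frame rate and I-frame interval are illustrative assumptions, not necessarily the project's settings:

```java
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
// COLOR_FormatYUV420SemiPlanar pairs with the NV21 chroma-swap conversion done before encoding
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_BIT_RATE, width * height * 4); // illustrative
format.setInteger(MediaFormat.KEY_FRAME_RATE, 25);               // illustrative
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
mMediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
mMediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
```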
1. Start a VideoRecordThread to process the NV21 data
// start recording
public void begin() {
    dataQueue.clear();
    isRecording = true;
    generateIndex = 0;
    start();
}

// dataQueue buffers the frames coming from the camera callback
public void frame(byte[] data) {
    if (isRecording) {
        dataQueue.offer(data);
    }
}
public void run() {
    while (isRecording) {
        byte[] data = dataQueue.poll();
        if (data != null) {
            // color conversion: swap NV21's interleaved V/U into the layout the encoder expects
            NV21toI420SemiPlanar(data, yuv420sp, width, height);
            encode(yuv420sp);
        }
    }
    release();
}
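One thing to watch: dataQueue as used above is unbounded, so if the encoder falls behind the camera, preview frames pile up in memory. A bounded drop-oldest buffer is one way to cap that; a sketch (FrameQueue is my name, not from the project):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Drop-oldest frame buffer: when full, evict the stalest frame instead of
// blocking the camera callback or growing without bound.
class FrameQueue {
    private final ArrayBlockingQueue<byte[]> queue;

    FrameQueue(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    void offerFrame(byte[] frame) {
        while (!queue.offer(frame)) {
            queue.poll(); // make room by discarding the oldest frame
        }
    }

    byte[] pollFrame() {
        return queue.poll();
    }

    int size() {
        return queue.size();
    }
}
```

Dropping old frames slightly lowers the effective frame rate under load, which is usually preferable to an OutOfMemoryError during a long recording.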
2. The encoding flow
private void encode(byte[] input) {
    if (input != null) {
        try {
            int inputBufferIndex = mMediaCodec.dequeueInputBuffer(TIMEOUT_S);
            if (inputBufferIndex >= 0) {
                long pts = getPts();
                ByteBuffer inputBuffer = mMediaCodec.getInputBuffer(inputBufferIndex);
                inputBuffer.clear(); // clear it, or data from the previous frame may linger
                inputBuffer.put(input);
                mMediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, pts, 0);
                generateIndex += 1;
            }
            MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
            int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_S);
            if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                Log.e(TAG, "video run: INFO_OUTPUT_FORMAT_CHANGED");
                MediaMuxerThread mediaMutex = mMutex.get();
                if (mediaMutex != null && !mediaMutex.isVideoTrackExist()) {
                    // add the video track to the MediaMuxer
                    mediaMutex.addVedioTrack(mMediaCodec.getOutputFormat());
                }
            }
            while (outputBufferIndex >= 0) {
                ByteBuffer outputBuffer = mMediaCodec.getOutputBuffer(outputBufferIndex);
                // codec config data (SPS/PPS) belongs in the track format, not the stream;
                // test the flag with a bitmask, since other flags can be set at the same time
                if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    Log.e(TAG, "video run: BUFFER_FLAG_CODEC_CONFIG");
                    bufferInfo.size = 0;
                }
                if (bufferInfo.size > 0) {
                    MediaMuxerThread mediaMuxer = this.mMutex.get();
                    if (mediaMuxer != null) {
                        // set position/limit first, then copy out exactly the valid range
                        outputBuffer.position(bufferInfo.offset);
                        outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
                        byte[] outData = new byte[bufferInfo.size];
                        outputBuffer.get(outData);
                        Log.e(TAG, "video presentationTimeUs : " + bufferInfo.presentationTimeUs);
                        bufferInfo.presentationTimeUs = getPts();
                        mediaMuxer.addMutexData(new MutexBean(true, outData, bufferInfo));
                    }
                }
                mMediaCodec.releaseOutputBuffer(outputBufferIndex, false);
                bufferInfo = new MediaCodec.BufferInfo();
                outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_S);
            }
        } catch (Throwable t) {
            t.printStackTrace();
            Log.e(TAG, "encode: " + t.toString());
        }
    }
}
private long getPts() {
    return System.nanoTime() / 1000L; // microseconds
}
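getPts() above stamps frames with wall-clock time. When the capture rate is fixed, deriving the timestamp from the frame counter (the generateIndex field above) is a common alternative that yields perfectly even spacing; a sketch, with names of my own:

```java
class PtsUtil {
    // Presentation timestamp in microseconds for frame N at a fixed frame rate.
    // The long literal keeps the multiplication from overflowing int.
    static long ptsForFrame(long frameIndex, int frameRate) {
        return frameIndex * 1_000_000L / frameRate;
    }
}
```

Index-based timestamps drift from real time if frames are dropped, while wall-clock timestamps jitter with scheduling; which is better depends on how steadily onPreviewFrame() delivers.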
1. Likewise, start an AudioRecordThread to do the work
public void begin() {
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
    prevOutputPTSUs = 0; // last timestamp, kept so timestamps never decrease
    isRecording = true;
    start();
}

public void run() {
    byte[] bufferBytes = new byte[minBufferSize];
    int len = 0;
    while (isRecording) {
        // read PCM data from the AudioRecord
        len = mAudioRecorder.read(bufferBytes, 0, minBufferSize);
        if (len > 0) {
            record(bufferBytes, len, getPTSUs());
        }
    }
    release();
}
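For reference, the AudioRecord and AAC encoder behind this thread are typically set up along these lines. This is a configuration sketch; 44.1 kHz mono PCM and a 64 kbps AAC-LC profile are illustrative assumptions, not necessarily the project's values:

```java
int sampleRate = 44100;
minBufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
mAudioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize);

MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, 1);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
mMediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
mMediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
mAudioRecorder.startRecording();
```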
2. Reading PCM data and encoding it: record() is very similar to the video encode() above
private void record(byte[] bufferBytes, final int len, final long presentationTimeUs) {
    int inputBufferIndex = mMediaCodec.dequeueInputBuffer(TIMEOUT_S);
    if (inputBufferIndex >= 0) {
        if (len <= 0) {
            // no more PCM: signal end of stream
            mMediaCodec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTimeUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            Log.i(TAG, "send BUFFER_FLAG_END_OF_STREAM");
        } else {
            ByteBuffer inputBuffer = mMediaCodec.getInputBuffer(inputBufferIndex);
            inputBuffer.clear();
            inputBuffer.put(bufferBytes, 0, len); // only the bytes actually read
            mMediaCodec.queueInputBuffer(inputBufferIndex, 0, len, presentationTimeUs, 0);
        }
    }
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_S);
    if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        Log.e(TAG, "audio run: INFO_OUTPUT_FORMAT_CHANGED");
        MediaMuxerThread mediaMutex = mMutex.get();
        if (mediaMutex != null && !mediaMutex.isAudioTrackExist()) {
            mediaMutex.addAudioTrack(mMediaCodec.getOutputFormat());
        }
    }
    while (outputBufferIndex >= 0) {
        ByteBuffer outputBuffer = mMediaCodec.getOutputBuffer(outputBufferIndex);
        // bitmask test, same as on the video side
        if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
            Log.e(TAG, "audio run: BUFFER_FLAG_CODEC_CONFIG");
            bufferInfo.size = 0;
        }
        if (bufferInfo.size > 0) {
            MediaMuxerThread mediaMuxer = mMutex.get();
            if (mediaMuxer != null) {
                // set position/limit first, then copy out the valid range
                outputBuffer.position(bufferInfo.offset);
                outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
                byte[] outData = new byte[bufferInfo.size];
                outputBuffer.get(outData);
                bufferInfo.presentationTimeUs = getPTSUs();
                Log.e(TAG, "audio presentationTimeUs : " + bufferInfo.presentationTimeUs);
                mediaMuxer.addMutexData(new MutexBean(false, outData, bufferInfo));
                prevOutputPTSUs = bufferInfo.presentationTimeUs;
            }
        }
        mMediaCodec.releaseOutputBuffer(outputBufferIndex, false);
        bufferInfo = new MediaCodec.BufferInfo();
        outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_S);
    }
}
private long getPTSUs() {
    long result = System.nanoTime() / 1000L;
    return result < prevOutputPTSUs ? prevOutputPTSUs : result;
}
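The clamp in getPTSUs() matters because MediaMuxer.writeSampleData can misbehave when a track's timestamps go backwards. The guard can be isolated into a tiny helper and checked on its own (a sketch; the class name is mine):

```java
// Produces a non-decreasing sequence of timestamps: any candidate that would
// go backwards is clamped to the last value handed out.
class MonotonicPts {
    private long prevUs = 0;

    long next(long candidateUs) {
        if (candidateUs < prevUs) {
            return prevUs;
        }
        prevUs = candidateUs;
        return candidateUs;
    }
}
```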
Muxing likewise gets a dedicated thread, MediaMuxerThread, which drives the MediaMuxer and also manages starting, stopping and synchronizing VideoRecordThread and AudioRecordThread.
1. Starting the audio and video threads:
The MediaMuxer itself is not started yet at this point, because the video and audio tracks both have to be added first; see problem 5 below.
public void begin(int width, int height) {
    prepareMediaMuxer(width, height);
    isRecording = true;
    isMediaMuxerStart = false;
    mVideoThread.begin();
    mAudioThread.begin();
}
2. Actually starting the MediaMuxer
private void startMediaMutex() {
    if (!isMediaMuxerStart && isVideoTrackExist() && isAudioTrackExist()) {
        Log.e(TAG, "run: MediaMuxerStart");
        mMediaMuxer.start();
        isMediaMuxerStart = true;
        start();
    }
}
3. Processing the data
mMutexBeanQueue holds the encoded output coming from VideoRecordThread and AudioRecordThread
public void run() {
    while (true) {
        if (!mMutexBeanQueue.isEmpty()) {
            MutexBean data = mMutexBeanQueue.poll();
            if (data.isVedio()) {
                mMediaMuxer.writeSampleData(mVideoTrack, data.getByteBuffer(), data.getBufferInfo());
            } else {
                mMediaMuxer.writeSampleData(mAudioTrack, data.getByteBuffer(), data.getBufferInfo());
            }
        } else {
            try {
                Thread.sleep(300);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (!isRecording && mMutexBeanQueue.isEmpty()) {
                break;
            }
        }
    }
    release();
    if (mMediaMuxerCallback != null) {
        mMediaMuxerCallback.onFinishMediaMutex(path);
    }
}
4. Stopping the recording
public void end() {
    try {
        isRecording = false;
        mVideoThread.end();
        mVideoThread.join();
        mVideoThread = null;
        mAudioThread.end();
        mAudioThread.join();
        mAudioThread = null;
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
That is the overall flow, and it is clear enough, but quite a few problems came up along the way. Here are my notes on how I solved them.
1. onPreviewFrame() keeps delivering YUV frames without pause; how should they be handled?
Buffer the YUV frames in a queue and encode them on a dedicated thread.
2. The recorded video is black or blurry
The first possibility is that the preview width/height and the width/height given to MediaCodec are swapped. Remember that the camera preview is rotated by a fixed angle, so the camera's width and height are the reverse of the screen's;
MediaFormat mediaFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
The second possibility is that the NV21 data was fed to the encoder directly, without color conversion;
private static void NV21toI420SemiPlanar(byte[] nv21bytes, byte[] i420bytes, int width, int height) {
    // the Y plane is identical; only the interleaved chroma order differs (V,U -> U,V)
    System.arraycopy(nv21bytes, 0, i420bytes, 0, width * height);
    for (int i = width * height; i < nv21bytes.length; i += 2) {
        i420bytes[i] = nv21bytes[i + 1];
        i420bytes[i + 1] = nv21bytes[i];
    }
}
The third possibility involves the frame rate and related encoder parameters.
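The NV21toI420SemiPlanar routine above is nothing more than a chroma swap; reproducing it on a tiny 2x2 frame makes that concrete (a self-contained demo, names are mine):

```java
class Nv21Demo {
    // NV21 stores chroma as ...V,U,V,U...; the semi-planar layout the encoder
    // expects wants ...U,V,U,V..., so each interleaved pair is swapped.
    static void nv21ToSemiPlanar(byte[] nv21, byte[] out, int width, int height) {
        System.arraycopy(nv21, 0, out, 0, width * height); // Y plane is identical
        for (int i = width * height; i < nv21.length; i += 2) {
            out[i] = nv21[i + 1];     // U
            out[i + 1] = nv21[i];     // V
        }
    }
}
```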
3. Audio and video are out of sync?
Synchronization is driven entirely by the presentationTimeUs timestamps, so they must be generated consistently for both streams.
4. In the recorded MP4, the audio plays far too fast and skips several seconds in the middle, and the video drops frames too
This one took days to track down. Encoding the H.264 alone was fine, yet the encoded AAC audio skipped several seconds during playback.
It turned out to be the TIMEOUT_S wait passed to the dequeue calls: it was set far too large, so each loop iteration stalled and whole stretches of time were skipped. The fix is a short timeout; note the parameter is in microseconds, so 10000 is 10 ms.
private static final int TIMEOUT_S = 10000; // 10 ms (dequeue timeouts are in microseconds)
.....
int inputBufferIndex = mMediaCodec.dequeueInputBuffer(TIMEOUT_S);
.....
int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_S);
5. MediaMuxer.start() crashes
Both the audio track and the video track must be added before mediaMuxer.start() is called; mind the ordering:
...
mVideoTrack = mMediaMuxer.addTrack(mediaFormat);
...
mAudioTrack = mMediaMuxer.addTrack(mediaFormat);
...
mMediaMuxer.start();
That covers the overall recording flow.
For playback, MediaCodec can decode directly into a Surface:
mVideoCodec.configure(mediaFormat, mSurface, null, 0);
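Before that configure() call, the MP4 has to be opened with MediaExtractor and its video track selected. A typical setup sketch; the variable names are assumptions matching the surrounding code:

```java
mExtractor = new MediaExtractor();
mExtractor.setDataSource(path);
for (int i = 0; i < mExtractor.getTrackCount(); i++) {
    MediaFormat format = mExtractor.getTrackFormat(i);
    String mime = format.getString(MediaFormat.KEY_MIME);
    if (mime != null && mime.startsWith("video/")) {
        mExtractor.selectTrack(i);
        mVideoCodec = MediaCodec.createDecoderByType(mime);
        // passing the Surface makes releaseOutputBuffer(..., true) render the frame directly
        mVideoCodec.configure(format, mSurface, null, 0);
        mVideoCodec.start();
        break;
    }
}
```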
The core decode code:
public void run() {
    long startMs = System.currentTimeMillis();
    int index = 0;
    while (true) {
        int inputIndex = mVideoCodec.dequeueInputBuffer(TIMEOUT_S);
        if (inputIndex < 0) {
            Log.e(TAG, "decode inputIndex < 0");
            SystemClock.sleep(50);
            continue;
        }
        ByteBuffer inputBuffer = mVideoCodec.getInputBuffer(inputIndex);
        inputBuffer.clear();
        int sampleSize = mExtractor.readSampleData(inputBuffer, 0);
        Log.e(TAG, "decode sampleSize: " + sampleSize);
        if (sampleSize <= 0) {
            break;
        }
        mVideoCodec.queueInputBuffer(inputIndex, 0, sampleSize, getPts(index++, mFrameRate), 0);
        int outputIndex = mVideoCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_S);
        Log.e(TAG, "decode: outputIndex " + outputIndex);
        while (outputIndex >= 0) { // 0 is a valid buffer index
            // frame pacing: wait until this frame's PTS catches up with wall-clock playback time
            while (mBufferInfo.presentationTimeUs / 1000 > System.currentTimeMillis() - startMs) {
                SystemClock.sleep(50);
            }
            mVideoCodec.releaseOutputBuffer(outputIndex, true); // true = render to the Surface
            outputIndex = mVideoCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_S);
        }
        if (!mExtractor.advance()) {
            break;
        }
    }
    release();
}
private long getPts(int index, int frameRate) {
    // long literal: index * 1000000 in int arithmetic overflows after a few thousand frames
    return index * 1000000L / frameRate;
}
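The frame-pacing wait in run() above can be factored into a pure function and sanity-checked on its own (a sketch; the helper name is mine):

```java
class Pacer {
    // How long (ms) to keep sleeping before a frame with the given PTS is due,
    // measured against the wall-clock moment playback started.
    static long sleepMsFor(long ptsUs, long startMs, long nowMs) {
        long dueMs = startMs + ptsUs / 1000;
        return Math.max(0, dueMs - nowMs);
    }
}
```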
Audio is played back with AudioTrack; the core code:
public void run() {
    boolean isFinish = false;
    while (!isFinish) {
        int inputIndex = mAudioCodec.dequeueInputBuffer(TIMEOUT_S);
        if (inputIndex < 0) {
            // no input buffer available: guard here, or getInputBuffer(-1) below would crash
            isFinish = true;
        } else {
            ByteBuffer inputBuffer = mAudioCodec.getInputBuffer(inputIndex);
            inputBuffer.clear();
            int sampleSize = mExtractor.readSampleData(inputBuffer, 0);
            if (sampleSize > 0) {
                mAudioCodec.queueInputBuffer(inputIndex, 0, sampleSize, mExtractor.getSampleTime(), 0);
                mExtractor.advance();
            } else {
                isFinish = true;
            }
        }
        int outputIndex = mAudioCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_S);
        ByteBuffer outputBuffer;
        byte[] chunkPCM;
        while (outputIndex >= 0) {
            outputBuffer = mAudioCodec.getOutputBuffer(outputIndex);
            chunkPCM = new byte[mBufferInfo.size];
            outputBuffer.get(chunkPCM);
            outputBuffer.clear();
            mAudioTrack.write(chunkPCM, 0, mBufferInfo.size); // blocking write paces playback
            mAudioCodec.releaseOutputBuffer(outputIndex, false);
            outputIndex = mAudioCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_S);
        }
    }
    release();
}
Problems encountered:
1. Playback inside the TextureView is rotated 90 degrees
TextureView supports rotation, which is the main reason it was chosen for display; it makes this convenient:
mTextureView = findViewById(R.id.video_view);
mTextureView.setRotation(90);
mTextureView.setScaleX((float) videoWidth/videoHeight);
mTextureView.setScaleY((float) videoHeight/videoWidth);
2. The video plays back too fast
Even after verifying the timestamps were correct, playback was still too fast. Adding frame pacing (see the decode code above for details) fixed it reasonably well.
Since the goal here is to learn Android's audio and video stack, little effort went into the UI, and no adaptation was done for the full range of Android devices and older OS versions; device behavior varies a lot in this area.
Next I hope to bring in OpenGL and add watermarking, effects recording and more.
https://github.com/ChyengJason/SCamera