The H264Stream class
H264Stream extends the VideoStream class and is the concrete implementation for encoding the video stream in H.264 format.
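To get oriented, the sketch below shows roughly what the constructor sets up, based on the libstreaming sources (field names may differ slightly between versions): the MIME type used by MediaCodec, the camera image format, the MediaRecorder encoder constant, and the RTP packetizer.
public H264Stream(int cameraId) {
    super(cameraId);
    // H.264 specifics: MIME type for MediaCodec, NV21 frames from the camera,
    // the MediaRecorder encoder id and the matching RTP packetizer
    mMimeType = "video/avc";
    mCameraImageFormat = ImageFormat.NV21;
    mVideoEncoder = MediaRecorder.VideoEncoder.H264;
    mPacketizer = new H264Packetizer();
}
The key piece of H264Stream, however, is testH264(), shown below: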
/**
 * Tests if streaming with the given configuration (bit rate, frame rate, resolution) is possible
 * and determines the pps and sps. Should not be called by the UI thread.
 **/
private MP4Config testH264() throws IllegalStateException, IOException {
    if (mMode != MODE_MEDIARECORDER_API) return testMediaCodecAPI();
    else return testMediaRecorderAPI();
}
The configure() method calls testH264() to check whether H.264 encoding works with the requested configuration; if it does, the H.264 PPS and SPS are obtained.
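For context, here is a simplified sketch of how the result is used, based on the libstreaming sources (the exact code may differ between versions): configure() keeps the MP4Config in mConfig, and getSessionDescription() later embeds the Base64 SPS/PPS into the SDP as sprop-parameter-sets, so the receiver gets the parameter sets out of band.
public synchronized void configure() throws IllegalStateException, IOException {
    super.configure();
    mQuality = mRequestedQuality.clone();
    mConfig = testH264();   // PPS, SPS and profile end up in mConfig
}

public synchronized String getSessionDescription() throws IllegalStateException {
    if (mConfig == null) throw new IllegalStateException("You need to call configure() first !");
    return "m=video " + getDestinationPorts()[0] + " RTP/AVP 96\r\n"
            + "a=rtpmap:96 H264/90000\r\n"
            + "a=fmtp:96 packetization-mode=1;"
            + "profile-level-id=" + mConfig.getProfileLevel()
            + ";sprop-parameter-sets=" + mConfig.getB64SPS() + "," + mConfig.getB64PPS() + ";\r\n";
}
testH264() itself dispatches to testMediaCodecAPI() or testMediaRecorderAPI() depending on the current mode; the MediaCodec branch looks like this: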
@SuppressLint("NewApi")
private MP4Config testMediaCodecAPI() throws RuntimeException, IOException {
createCamera();
updateCamera();
try {
if (mQuality.resX>=640) {
// Using the MediaCodec API with the buffer method for high resolutions is too slow
mMode = MODE_MEDIARECORDER_API;
}
EncoderDebugger debugger = EncoderDebugger.debug(mSettings, mQuality.resX, mQuality.resY);
return new MP4Config(debugger.getB64SPS(), debugger.getB64PPS());
} catch (Exception e) {
// Fallback on the old streaming method using the MediaRecorder API
Log.e(TAG,"Resolution not supported with the MediaCodec API, we fallback on the old streamign method.");
mMode = MODE_MEDIARECORDER_API;
return testH264();
}
}
The testMediaCodecAPI() method obtains the PPS and SPS for the MediaCodec case; the heavy lifting is delegated to EncoderDebugger, which is not covered here. Two details are worth noting: when the resolution width is 640 or more, pushing frames through MediaCodec's input buffers is too slow, so the mode is switched to MODE_MEDIACODEC_API_2 (the Surface input method described later); and if the resolution is not supported by the MediaCodec API at all, the catch block falls back to the old MediaRecorder-based method by setting MODE_MEDIARECORDER_API and calling testH264() again.
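Conceptually, what EncoderDebugger has to find is where the encoder exposes its parameter sets. The sketch below is not libstreaming's actual code, just the general idea on a plain MediaCodec AVC encoder (width, height, bitrate and framerate are placeholder variables): once the encoder has produced its first output, the SPS and PPS are available as the csd-0 and csd-1 buffers of the output format.
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitrate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, framerate);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
// ... queue a few camera frames into the encoder's input buffers ...
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int index = encoder.dequeueOutputBuffer(info, 500000);
if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    MediaFormat out = encoder.getOutputFormat();
    ByteBuffer sps = out.getByteBuffer("csd-0"); // SPS NAL unit
    ByteBuffer pps = out.getByteBuffer("csd-1"); // PPS NAL unit
}
When MediaCodec cannot be used at all, the fallback is testMediaRecorderAPI(), excerpted below: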
try {
    mMediaRecorder = new MediaRecorder();
    mMediaRecorder.setCamera(mCamera);
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    mMediaRecorder.setVideoEncoder(mVideoEncoder);
    mMediaRecorder.setPreviewDisplay(mSurfaceView.getHolder().getSurface());
    mMediaRecorder.setVideoSize(mRequestedQuality.resX,mRequestedQuality.resY);
    mMediaRecorder.setVideoFrameRate(mRequestedQuality.framerate);
    mMediaRecorder.setVideoEncodingBitRate((int)(mRequestedQuality.bitrate*0.8));
    mMediaRecorder.setOutputFile(TESTFILE);
    mMediaRecorder.setMaxDuration(3000);

    // We wait a little and stop recording
    mMediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
        public void onInfo(MediaRecorder mr, int what, int extra) {
            Log.d(TAG,"MediaRecorder callback called !");
            if (what==MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
                Log.d(TAG,"MediaRecorder: MAX_DURATION_REACHED");
            } else if (what==MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED) {
                Log.d(TAG,"MediaRecorder: MAX_FILESIZE_REACHED");
            } else if (what==MediaRecorder.MEDIA_RECORDER_INFO_UNKNOWN) {
                Log.d(TAG,"MediaRecorder: INFO_UNKNOWN");
            } else {
                Log.d(TAG,"WTF ?");
            }
            mLock.release();
        }
    });

    // Start recording
    mMediaRecorder.prepare();
    mMediaRecorder.start();

    if (mLock.tryAcquire(6,TimeUnit.SECONDS)) {
        Log.d(TAG,"MediaRecorder callback was called :)");
        Thread.sleep(400);
    } else {
        Log.d(TAG,"MediaRecorder callback was not called after 6 seconds... :(");
    }
} catch (IOException e) {
    throw new ConfNotSupportedException(e.getMessage());
} catch (RuntimeException e) {
    throw new ConfNotSupportedException(e.getMessage());
} catch (InterruptedException e) {
    e.printStackTrace();
} finally {
    try {
        mMediaRecorder.stop();
    } catch (Exception e) {}
    mMediaRecorder.release();
    mMediaRecorder = null;
    lockCamera();
    if (!cameraOpen) destroyCamera();
    // Restore flash state
    mFlashEnabled = savedFlashState;
}

// Retrieve SPS & PPS & ProfileId with MP4Config
MP4Config config = new MP4Config(TESTFILE);
The code above is excerpted from testMediaRecorderAPI(), which obtains the PPS and SPS with MediaRecorder: it records a short test clip to TESTFILE (capped at 3 seconds by setMaxDuration), and MP4Config then parses the PPS and SPS out of that file.
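The SPS and PPS live in the avcC box (the AVCDecoderConfigurationRecord) of the recorded file; libstreaming's MP4Config and MP4Parser walk the MP4 box tree to locate it. The hypothetical helper below is not that code, just a sketch of how the first SPS and PPS can be pulled out of an avcC payload, following the layout defined in ISO/IEC 14496-15 (uses java.nio.ByteBuffer and android.util.Base64):
// Hypothetical helper: extracts the first SPS and PPS from the payload of an
// "avcC" box and returns them Base64-encoded, ready for sprop-parameter-sets.
static String[] parseAvcC(byte[] avcC) {
    ByteBuffer b = ByteBuffer.wrap(avcC);
    b.position(5);                             // version, profile, compat, level, NAL length size
    b.get();                                   // 3 reserved bits + 5-bit SPS count (usually 1)
    byte[] sps = new byte[b.getShort() & 0xFFFF];
    b.get(sps);
    b.get();                                   // PPS count (usually 1)
    byte[] pps = new byte[b.getShort() & 0xFFFF];
    b.get(pps);
    return new String[] {
        Base64.encodeToString(sps, Base64.NO_WRAP),
        Base64.encodeToString(pps, Base64.NO_WRAP)
    };
}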
The H263Stream class only ever encodes with MediaRecorder, so its code is not shown here.
In the VideoStream class, besides grabbing each frame from the camera's preview callback, there is a second way to feed the encoder: letting it read directly from a Surface. Here is a brief walkthrough of the flow when a Surface is used as the encoder's data source:
// Excerpt from createCamera()
try {
    if (mMode == MODE_MEDIACODEC_API_2) {
        mSurfaceView.startGLThread();
        mCamera.setPreviewTexture(mSurfaceView.getSurfaceTexture());
    } else {
        mCamera.setPreviewDisplay(mSurfaceView.getHolder());
    }
} catch (IOException e) {
    throw new InvalidSurfaceException("Invalid surface !");
}
// Excerpt from encodeWithMediaCodecMethod2()
// The Surface created by MediaCodec is handed to mSurfaceView and becomes the encoder's data source, replacing the input buffers
Surface surface = mMediaCodec.createInputSurface();
((SurfaceView)mSurfaceView).addMediaCodecSurface(surface);
mMediaCodec.start();
- Calling Camera's setPreviewTexture() sets the SurfaceTexture passed in as the target the camera preview is rendered to.
- startGLThread() starts a thread that loops and reads the frames arriving on that Surface (see the SurfaceView code below for details).
- addMediaCodecSurface() wires the Surface created by MediaCodec to the camera's output Surface so that every camera frame is copied across; the MediaCodec's data source thereby becomes a Surface instead of the original input buffers (see the sketch after this list).
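For completeness, createInputSurface() only works if the encoder has been configured for Surface input beforehand. The sketch below is not the exact libstreaming code (width, height, bitrate and framerate are placeholders); it shows the configuration that has to precede the excerpt above, with createInputSurface() called between configure() and start():
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); // mandatory for Surface input
format.setInteger(MediaFormat.KEY_BIT_RATE, bitrate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, framerate);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
mMediaCodec = MediaCodec.createEncoderByType("video/avc");
mMediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
// must be called after configure() and before start()
Surface surface = mMediaCodec.createInputSurface();
((SurfaceView)mSurfaceView).addMediaCodecSurface(surface);
mMediaCodec.start();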
The SurfaceView class
private SurfaceManager mViewSurfaceManager = null;
private SurfaceManager mCodecSurfaceManager = null;
private TextureManager mTextureManager = null;
...

@Override
public void run() {
    mViewSurfaceManager = new SurfaceManager(getHolder().getSurface());
    mViewSurfaceManager.makeCurrent();
    mTextureManager.createTexture().setOnFrameAvailableListener(this);
    mLock.release();

    try {
        long ts = 0, oldts = 0;
        while (mRunning) {
            synchronized (mSyncObject) {
                mSyncObject.wait(2500);
                if (mFrameAvailable) {
                    mFrameAvailable = false;

                    // Draw the new camera frame to the on-screen preview
                    mViewSurfaceManager.makeCurrent();
                    mTextureManager.updateFrame();
                    mTextureManager.drawFrame();
                    mViewSurfaceManager.swapBuffer();

                    // Draw the same frame into the MediaCodec input surface, stamped with the capture time
                    if (mCodecSurfaceManager != null) {
                        mCodecSurfaceManager.makeCurrent();
                        mTextureManager.drawFrame();
                        oldts = ts;
                        ts = mTextureManager.getSurfaceTexture().getTimestamp();
                        //Log.d(TAG,"FPS: "+(1000000000/(ts-oldts)));
                        mCodecSurfaceManager.setPresentationTime(ts);
                        mCodecSurfaceManager.swapBuffer();
                    }

                } else {
                    Log.e(TAG,"No frame received !");
                }
            }
        }
    } catch (InterruptedException ignore) {
    } finally {
        mViewSurfaceManager.release();
        mTextureManager.release();
    }
}
The loop keeps reading frames from the camera's output SurfaceTexture (managed by mTextureManager) and draws each one both to the on-screen preview and into the MediaCodec input Surface (mCodecSurfaceManager). The TextureManager and SurfaceManager classes are mostly OpenGL ES/EGL plumbing and are not expanded on here.
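As a rough idea of that plumbing, the makeCurrent(), setPresentationTime() and swapBuffer() calls above map onto standard EGL operations, roughly as in the hypothetical sketch below (the real SurfaceManager also creates the EGLDisplay/EGLContext and shares the context between the preview surface and the codec surface):
// Assumed fields mEGLDisplay, mEGLSurface, mEGLContext are set up elsewhere (EGL14/EGLExt, API 18+)
public void makeCurrent() {
    EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
}

public void setPresentationTime(long nsecs) {
    // Timestamp attached to the frame; MediaCodec uses it as the presentation time
    EGLExt.eglPresentationTimeANDROID(mEGLDisplay, mEGLSurface, nsecs);
}

public void swapBuffer() {
    // Posts the rendered frame to the underlying Surface (preview or encoder input)
    EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);
}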
At this point we have walked through the whole video streaming pipeline: capture (the Camera operations), the data sources (per-frame buffers and the Surface), and encoding (MediaRecorder and MediaCodec). The raw video data formats and the OpenGL internals are only touched on in passing here; interested readers are encouraged to dig into them on their own.