This article builds on Previewing Camera Data via GLSurfaceView (Part 2). The previous article only previewed camera data through GLSurfaceView; this one continues from there, using more OpenGL ES features together with MediaCodec hardware encoding.
It mainly records the problems I ran into and how I solved them; the source code is linked at the bottom.
Since we will later add watermarking, recording, and other features, we need a sampler2D texture, but the camera delivers its data as an external (OES) texture.
So the first step is to convert the external texture into a sampler2D texture, which can be done with an FBO.
FBO (Frame Buffer Object): a framebuffer object. Previously we always rendered into the default framebuffer provided by the window system; an FBO lets us create our own framebuffer, so instead of rendering straight to the screen we render into a custom buffer. A texture can be attached to the FBO, after which all subsequent rendering is stored into that texture image.
The overall flow:
(1). CameraSurfaceRender creates the GL_TEXTURE_EXTERNAL_OES texture that receives the raw camera data
mCameraTextureId = GlesUtil.createCameraTexture();
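GlesUtil.createCameraTexture() lives in the repo and is not shown in the article; a minimal sketch of such an OES texture helper (assuming linear filtering and clamp-to-edge wrapping) looks like:

public static int createCameraTexture() {
    int[] tex = new int[1];
    GLES30.glGenTextures(1, tex, 0);
    // SurfaceTexture requires the GL_TEXTURE_EXTERNAL_OES target
    GLES30.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
    GLES30.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
    GLES30.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);
    GLES30.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
    return tex[0];
}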
(2). OriginalRenderDrawer creates a GL_TEXTURE_2D texture
mOutputTextureId = GlesUtil.createFrameTexture(width, height);
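Likewise, a sketch of what GlesUtil.createFrameTexture(width, height) plausibly does (RGBA storage assumed; the repo code may differ in details):

public static int createFrameTexture(int width, int height) {
    int[] tex = new int[1];
    GLES30.glGenTextures(1, tex, 0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, tex[0]);
    // Allocate empty RGBA storage; the FBO pass fills it every frame
    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA, width, height, 0,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, null);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
    return tex[0];
}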
(3). Create an FBO in the current EGL context
In this chapter's source code, RenderDrawerGroups holds and manages all the RenderDrawers: it creates the FBO, controls the draw order, and decides whether each pass renders into the FBO.
mFrameBuffer = GlesUtil.createFrameBuffer();
public static int createFrameBuffer() {
    int[] buffers = new int[1];
    GLES30.glGenFramebuffers(1, buffers, 0);
    checkError();
    return buffers[0];
}
(4). Bind DisplayRenderDrawer's GL_TEXTURE_2D input texture to the FBO; subsequent draw calls are then stored into that texture
public void bindFrameBuffer(int textureId) {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, mFrameBuffer);
    GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0, GLES30.GL_TEXTURE_2D, textureId, 0);
}
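A framebuffer completeness check right after attaching can save a lot of debugging; a minimal addition:

int status = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
if (status != GLES30.GL_FRAMEBUFFER_COMPLETE) {
    Log.e(TAG, "bindFrameBuffer: framebuffer incomplete, status = " + status);
}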
(5). Run OriginalRenderDrawer's draw pass; through the FBO it naturally renders onto DisplayRenderDrawer's texture image
RenderDrawerGroups
// timestamp: frame timestamp; transformMatrix: texture transform matrix
public void draw(long timestamp, float[] transformMatrix) {
    // Render into the FBO, converting the OES input into mOriginalDrawer's sampler2D output texture
    drawRender(mOriginalDrawer, true, timestamp, transformMatrix);
    ...
    // No FBO bound: draw directly to the screen
    drawRender(mDisplayDrawer, false, timestamp, transformMatrix);
}
RenderDrawerGroups controls the render flow:
public void drawRender(BaseRenderDrawer drawer, boolean useFrameBuffer,
                       long timestamp, float[] transformMatrix) {
    if (useFrameBuffer) { // render this pass into the FBO
        bindFrameBuffer(drawer.getOutputTextureId());
    }
    drawer.draw(timestamp, transformMatrix);
    if (useFrameBuffer) {
        unBindFrameBuffer();
    }
}
(6). Unbind the FBO, restoring the default framebuffer
public void unBindFrameBuffer() {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, 0);
}
With the conversion above in place, overlaying a watermark is not hard. It uses the FBO in the same way: an extra watermark pass is blended on top of the existing texture.
WaterMarkRenderDrawer is responsible for rendering the watermark image.
(1). Create the watermark bitmap texture
mMarkTextureId = GlesUtil.loadBitmapTexture(mBitmap);
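GlesUtil.loadBitmapTexture is also in the repo; it presumably uploads the Bitmap via GLUtils.texImage2D, roughly like this sketch:

public static int loadBitmapTexture(Bitmap bitmap) {
    int[] tex = new int[1];
    GLES30.glGenTextures(1, tex, 0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, tex[0]);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
    // GLUtils picks the matching GL format/type for the Bitmap's config
    GLUtils.texImage2D(GLES30.GL_TEXTURE_2D, 0, bitmap, 0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
    return tex[0];
}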
(2). Draw the watermark image
This uses OpenGL blend (color blending); for details see 轻松搞定 Blend 颜色混合.
public void draw(long timestamp, float[] transformMatrix) {
    useProgram();
    //clear();
    viewPort(40, 75, mBitmap.getWidth() * 2, mBitmap.getHeight() * 2);
    GLES30.glDisable(GLES30.GL_DEPTH_TEST);
    GLES30.glEnable(GLES30.GL_BLEND);
    GLES30.glBlendFunc(GLES30.GL_SRC_COLOR, GLES30.GL_DST_ALPHA);
    onDraw();
    GLES30.glDisable(GLES30.GL_BLEND);
}
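Note on the blend function: GL_SRC_COLOR / GL_DST_ALPHA works for this watermark, but the more common choice for overlaying a bitmap with transparency is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which weights the watermark by its own alpha channel.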
(3). Control the draw order
In the code this is handled by RenderDrawerGroups:
public void draw(long timestamp, float[] transformMatrix) {
    if (mInputTexture == 0 || mFrameBuffer == 0) {
        Log.e(TAG, "draw: mInputTexture or mFrameBuffer or list is zero");
        return;
    }
    drawRender(mOriginalDrawer, true, timestamp, transformMatrix);
    // The draw order controls which layer the watermark lands on
    //drawRender(mWaterMarkDrawer, true, timestamp, transformMatrix);
    drawRender(mDisplayDrawer, false, timestamp, transformMatrix);
    drawRender(mWaterMarkDrawer, true, timestamp, transformMatrix);
    ...
}
The draw order determines which layer the watermark is drawn onto: it can appear in the preview, or only in the recording layer.
In the same way, this approach can add all kinds of overlaid images, filters, and so on.
(Filters can also be implemented directly by editing the vertex and fragment shader source.)
Next comes recording, which uses MediaCodec for hardware encoding and MediaMuxer to write the MP4 file.
(1). Create the MediaCodec and MediaMuxer
public VideoEncoder(int width, int height, File outputFile)
        throws IOException {
    // Rough bitrate heuristic based on resolution and frame rate
    int bitRate = height * width * 3 * 8 * FRAME_RATE / 256;
    mBufferInfo = new MediaCodec.BufferInfo();
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
    // COLOR_FormatSurface: the encoder's input comes from a Surface, not byte buffers
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
    Log.d(TAG, "format: " + format);
    mEncoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    mInputSurface = mEncoder.createInputSurface();
    mEncoder.start();
    // The muxer cannot start until the encoder reports its output format
    mMuxer = new MediaMuxer(outputFile.toString(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    mTrackIndex = -1;
    mMuxerStarted = false;
}
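As a sanity check on the bitrate heuristic: at 1280x720 with FRAME_RATE = 30, bitRate = 1280 * 720 * 3 * 8 * 30 / 256 = 2,592,000, i.e. roughly 2.6 Mbps.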
(2). Create MediaCodec's input Surface
MediaCodec can decode video directly onto a Surface, and when encoding it can likewise take its input directly from a Surface.
I previously tried grabbing the Surface from GLSurfaceView's getHolder().getSurface() and feeding it to MediaCodec for hardware encoding, but that fails with a "not persistent surface" error.
Instead, MediaCodec can create a target Surface for its input data via createInputSurface(). Drawing directly into the Surface that MediaCodec creates gives us recording, and because GLSurfaceView's surface and MediaCodec's surface are drawn separately, they can show different content, for example a watermark that appears only in the recorded video.
Create an EGL environment and draw the texture into MediaCodec's Surface:
mInputSurface = mEncoder.createInputSurface();
public void drainEncoder(boolean endOfStream) {
    if (endOfStream) {
        mEncoder.signalEndOfInputStream();
    }
    while (true) {
        int encoderStatus = mEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
        if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
            // No output available yet; keep spinning only when flushing the end of stream
            if (!endOfStream) {
                break;
            }
        } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            // The output format is now known; start the muxer exactly once
            if (mMuxerStarted) {
                throw new RuntimeException("format changed twice");
            }
            MediaFormat newFormat = mEncoder.getOutputFormat();
            mTrackIndex = mMuxer.addTrack(newFormat);
            mMuxer.start();
            mMuxerStarted = true;
        } else if (encoderStatus < 0) {
            Log.d(TAG, "unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
        } else {
            ByteBuffer encodedData = mEncoder.getOutputBuffer(encoderStatus);
            if (encodedData == null) {
                throw new RuntimeException("encoderOutputBuffer " + encoderStatus + " was null");
            }
            if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                // Codec config (SPS/PPS) was already delivered through the format; skip it
                mBufferInfo.size = 0;
            }
            if (mBufferInfo.size != 0) {
                if (!mMuxerStarted) {
                    throw new RuntimeException("muxer hasn't started");
                }
                encodedData.position(mBufferInfo.offset);
                encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
                mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
                Log.d(TAG, "sent " + mBufferInfo.size + " bytes to muxer, ts=" + mBufferInfo.presentationTimeUs);
            } else {
                Log.d(TAG, "drainEncoder mBufferInfo: " + mBufferInfo.size);
            }
            mEncoder.releaseOutputBuffer(encoderStatus, false);
            if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                break;
            }
        }
    }
}
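When recording stops, call drainEncoder(true) once: signalEndOfInputStream() tells the encoder no more frames are coming, and the loop then keeps draining until the buffer flagged BUFFER_FLAG_END_OF_STREAM arrives, after which the encoder and muxer can safely be stopped and released.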
(3). Create the EGL environment by sharing GLSurfaceView's EGLContext
Note: the EGLContext must be fetched on GLSurfaceView's GLThread
@Override
public void create() {
    mEglContext = EGL14.eglGetCurrentContext();
}
Create our own GLThread:
@Override
public void run() {
    Looper.prepare();
    mMsgHandler = new MsgHandler();
    Looper.loop();
}
private class MsgHandler extends Handler {
    public static final int MSG_START_RECORD = 1;
    public static final int MSG_STOP_RECORD = 2;
    public static final int MSG_UPDATE_CONTEXT = 3;
    public static final int MSG_UPDATE_SIZE = 4;
    public static final int MSG_FRAME = 5;
    public static final int MSG_QUIT = 6;

    public MsgHandler() {
    }

    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case MSG_START_RECORD:
                prepareVideoEncoder((EGLContext) msg.obj, msg.arg1, msg.arg2);
                break;
            case MSG_STOP_RECORD:
                stopVideoEncoder();
                break;
            case MSG_UPDATE_CONTEXT:
                updateEglContext((EGLContext) msg.obj);
                break;
            case MSG_UPDATE_SIZE:
                updateChangedSize(msg.arg1, msg.arg2);
                break;
            case MSG_FRAME:
                drawFrame((long) msg.obj);
                break;
            case MSG_QUIT:
                quitLooper();
                break;
            default:
                break;
        }
    }
}
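For reference, kicking off a recording from the caller side could then look like this (a sketch matching the message layout handled above, with width/height as the encoder size):

mMsgHandler.sendMessage(mMsgHandler.obtainMessage(
        MsgHandler.MSG_START_RECORD, width, height, mEglContext));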
private void prepareVideoEncoder(EGLContext context, int width, int height) {
    try {
        mEglHelper = new EGLHelper();
        mEglHelper.createGL(context);
        mVideoPath = StorageUtil.getVedioPath(true) + "glvideo.mp4";
        mVideoEncoder = new VideoEncoder(width, height, new File(mVideoPath));
        mEglSurface = mEglHelper.createWindowSurface(mVideoEncoder.getInputSurface());
        boolean success = mEglHelper.makeCurrent(mEglSurface);
        if (!success) {
            Log.e(TAG, "prepareVideoEncoder: make current error");
        }
        onCreated();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
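EGLHelper is the repo's own wrapper and its source is not shown here; under the assumption that it uses EGL14, creating a recordable context that shares textures with the GLSurfaceView context looks roughly like this sketch:

static EGLContext createSharedGlContext(EGLContext sharedContext) {
    // EGL_RECORDABLE_ANDROID (0x3142) marks a config as usable with MediaCodec input surfaces
    final int EGL_RECORDABLE_ANDROID = 0x3142;
    EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    int[] version = new int[2];
    EGL14.eglInitialize(display, version, 0, version, 1);
    int[] configAttrs = {
            EGL14.EGL_RED_SIZE, 8,
            EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_ALPHA_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGLExt.EGL_OPENGL_ES3_BIT_KHR,
            EGL_RECORDABLE_ANDROID, 1,
            EGL14.EGL_NONE
    };
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    EGL14.eglChooseConfig(display, configAttrs, 0, configs, 0, 1, numConfigs, 0);
    int[] contextAttrs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 3, EGL14.EGL_NONE };
    // Passing the GLSurfaceView context as share_context lets this context sample its textures
    return EGL14.eglCreateContext(display, configs[0], sharedContext, contextAttrs, 0);
}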
private void drawFrame(long timeStamp) {
    Log.d(TAG, "drawFrame: " + timeStamp);
    mEglHelper.makeCurrent(mEglSurface);
    mVideoEncoder.drainEncoder(false);
    onDraw();
    mEglHelper.setPresentationTime(mEglSurface, timeStamp);
    mEglHelper.swapBuffers(mEglSurface);
}
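EGLHelper.setPresentationTime presumably wraps EGLExt.eglPresentationTimeANDROID, which takes the timestamp in nanoseconds; that stamp becomes the presentationTimeUs on the encoded buffers, so pass the frame timestamp from SurfaceTexture (already in nanoseconds) rather than a millisecond value.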
(4). Hook the recording into the draw control
The flow above is wrapped in RecordRenderDrawer, so RenderDrawerGroups can treat it as just another RenderDrawer and drive it uniformly:
public void draw(long timestamp, float[] transformMatrix) {
    if (mInputTexture == 0 || mFrameBuffer == 0) {
        Log.e(TAG, "draw: mInputTexture or mFrameBuffer or list is zero");
        return;
    }
    drawRender(mOriginalDrawer, true, timestamp, transformMatrix);
    // The draw order controls which layer the watermark lands on
    //drawRender(mWaterMarkDrawer, true, timestamp, transformMatrix);
    drawRender(mDisplayDrawer, false, timestamp, transformMatrix);
    drawRender(mWaterMarkDrawer, true, timestamp, transformMatrix);
    // mRecordDrawer draws into MediaCodec's surface for recording
    drawRender(mRecordDrawer, false, timestamp, transformMatrix);
}
Source code: https://github.com/ChyengJason/FboCamera
https://www.jianshu.com/p/b36b6e17e818