In my own use of this code, I found that EglSurfaceBase still lacked any management of the native Surface, so something always felt missing from the overall picture. So I derived WindowSurface from EglSurfaceBase. The code is trivially simple, but in terms of understanding it puts things on a whole different level.
public class WindowSurface extends EglSurfaceBase {
    private Surface mSurface;
    private boolean bReleaseSurface;

    // Associate a native Surface with EGL.
    public WindowSurface(EglCore eglCore, Surface surface, boolean isReleaseSurface) {
        super(eglCore);
        createWindowSurface(surface);
        mSurface = surface;
        bReleaseSurface = isReleaseSurface;
    }

    // Associate a SurfaceTexture with EGL.
    protected WindowSurface(EglCore eglCore, SurfaceTexture surfaceTexture) {
        super(eglCore);
        createWindowSurface(surfaceTexture);
    }

    // Release the surface associated with the current EGL context.
    public void release() {
        releaseEglSurface();
        if (mSurface != null && bReleaseSurface) {
            mSurface.release();
            mSurface = null;
        }
    }
    // That's all.
}
Next, let's get a camera preview up and running quickly. We pick up the code from the previous article's ContinuousRecordActivity:
public class ContinuousRecordActivity extends Activity implements SurfaceHolder.Callback {
    public static final String TAG = "ContinuousRecord";
    // The Android camera sensor is mounted landscape by default, so width > height.
    private static final int VIDEO_WIDTH = 1280;
    private static final int VIDEO_HEIGHT = 720;
    private static final int DESIRED_PREVIEW_FPS = 15;
    private Camera mCamera;
    SurfaceView sv;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.continuous_record);
        sv = (SurfaceView) findViewById(R.id.continuousRecord_surfaceView);
        SurfaceHolder sh = sv.getHolder();
        sh.addCallback(this);
    }

    @Override
    protected void onResume() {
        super.onResume();
        openCamera(VIDEO_WIDTH, VIDEO_HEIGHT, DESIRED_PREVIEW_FPS);
    }

    @Override
    protected void onPause() {
        super.onPause();
        releaseCamera();
    }
    private EglCore mEglCore;
    private WindowSurface mDisplaySurface;

    @Override
    public void surfaceCreated(SurfaceHolder surfaceHolder) {
        Log.d(TAG, "surfaceCreated holder=" + surfaceHolder);
        // Set up the EGL environment and create the render target mDisplaySurface.
        mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
        mDisplaySurface = new WindowSurface(mEglCore, surfaceHolder.getSurface(), false);
        mDisplaySurface.makeCurrent();

        //mTextureId = createTextureObject();
        //mCameraTexture = new SurfaceTexture(mTextureId);
        //mCameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        //    @Override
        //    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        //        mHandler.sendEmptyMessage(MSG_FRAME_AVAILABLE);
        //    }
        //});

        try {
            Log.d(TAG, "starting camera preview");
            //mCamera.setPreviewTexture(mCameraTexture);
            mCamera.startPreview();
        } catch (IOException ioe) {
            throw new RuntimeException(ioe);
        }
    }
    ... ... ...
    // (openCamera, releaseCamera, surfaceChanged, surfaceDestroyed and other
    // non-essential code omitted to save space; see the project on GitHub if needed.)
}
Let's analyze the code above. First, using the EGL wrapper from the previous article, we create a self-managed EGL environment at the right moment (surfaceCreated), and bind the Surface to EGL as this project's OpenGL render target, an EGLSurface. Next, look at the commented-out code; we need to understand that part well before we can continue.
After the EGLSurface is created, we create a texture object and save its texture ID into mTextureId. What is this texture for? It is used to create a SurfaceTexture object. And what is the SurfaceTexture for? It receives the camera mCamera's preview frames.
What's that, the wind's too loud to hear? Well, that really is how the logic works; for details see the reference here. The key passage is the following:
First, SurfaceTexture obtains frame data from an image stream (camera preview, video decoding, a GL-rendered scene, etc.). When updateTexImage() is called, the GL texture object bound to the SurfaceTexture is updated with the most recent image in the stream, after which it can be manipulated like any ordinary GL texture. SurfaceTexture.OnFrameAvailableListener lets the SurfaceTexture's consumer know that new data has arrived.
So we can understand it this way: we use the SurfaceTexture to receive the preview-frame image stream from the Camera, and then we can draw directly with the GL texture object bound to it, using it like any other texture.
But this texture is a bit special. Let's look at the code:
public class GlUtil {
    public static final String TAG = "ZZR-GL";

    public static void checkGlError(String op) {
        int error = GLES20.glGetError();
        if (error != GLES20.GL_NO_ERROR) {
            String msg = op + ": glError 0x" + Integer.toHexString(error);
            Log.e(TAG, msg);
            throw new RuntimeException(msg);
        }
    }

    public static int createExternalTextureObject() {
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        GlUtil.checkGlError("glGenTextures");

        int texId = textures[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texId);
        GlUtil.checkGlError("glBindTexture " + texId);

        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER,
                GLES20.GL_NEAREST);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER,
                GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S,
                GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T,
                GLES20.GL_CLAMP_TO_EDGE);
        GlUtil.checkGlError("glTexParameter");
        return texId;
    }
}
All the textures we created before were of type GLES20.GL_TEXTURE_2D; here we must use GLES11Ext.GL_TEXTURE_EXTERNAL_OES, an Android-specific target. It means the texture's data is stored externally (in memory or a stream) rather than in the normal GPU texture storage. This distinction matters, because a texture of this type cannot be rendered in the same shader pipeline as an ordinary GPU-resident texture; problems will occur. What problems? Most directly a black screen; we'll come back to that later.
Through createExternalTextureObject we now have the texture object needed to create the SurfaceTexture. Let's uncomment the code and, using the Handler messaging mechanism, check whether the OnFrameAvailable callback actually fires:
private static class MainHandler extends Handler {
    private WeakReference<ContinuousRecordActivity> mWeakActivity;
    public static final int MSG_FRAME_AVAILABLE = 1;

    MainHandler(ContinuousRecordActivity activity) {
        mWeakActivity = new WeakReference<>(activity);
    }

    @Override
    public void handleMessage(Message msg) {
        ContinuousRecordActivity activity = mWeakActivity.get();
        if (activity == null) {
            Log.d(TAG, "Got message for dead activity");
            return;
        }
        switch (msg.what) {
            case MSG_FRAME_AVAILABLE:
                activity.drawFrame();
                break;
            default:
                super.handleMessage(msg);
                break;
        }
    }
}
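MainHandler holds the Activity through a WeakReference so a message queued after the Activity is destroyed cannot leak it. A minimal plain-Java sketch of that pattern (the class and target names here are hypothetical, not from the project):

```java
import java.lang.ref.WeakReference;

// Sketch of the weak-reference pattern used by MainHandler: the holder can
// outlive its target without preventing garbage collection of the target.
class LeakSafeHolder {
    private final WeakReference<StringBuilder> mWeakTarget;

    LeakSafeHolder(StringBuilder target) {
        mWeakTarget = new WeakReference<>(target);
    }

    // Returns the target's content, or null if it has been collected --
    // the same null-check MainHandler performs before dispatching a message.
    String describeTarget() {
        StringBuilder target = mWeakTarget.get();
        if (target == null) {
            return null; // target is gone; drop the message
        }
        return target.toString();
    }
}
```

The null branch corresponds to the "Got message for dead activity" log line above.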
@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
    Log.d(TAG, "surfaceCreated holder=" + surfaceHolder);
    // Set up the EGL environment and create the render target mDisplaySurface.
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mDisplaySurface = new WindowSurface(mEglCore, surfaceHolder.getSurface(), false);
    mDisplaySurface.makeCurrent();

    mTextureId = GlUtil.createExternalTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    mCameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture surfaceTexture) {
            mHandler.sendEmptyMessage(MainHandler.MSG_FRAME_AVAILABLE);
        }
    });

    try {
        Log.d(TAG, "starting camera preview");
        mCamera.setPreviewTexture(mCameraTexture);
        mCamera.startPreview();
    } catch (IOException ioe) {
        throw new RuntimeException(ioe);
    }
}
private void drawFrame() {
    if (mEglCore == null) {
        Log.d(TAG, "Skipping drawFrame after shutdown");
        return;
    }
    Log.d(TAG, " MSG_FRAME_AVAILABLE");
    mDisplaySurface.makeCurrent();
    // Set the clear color (0xFF00FF) before clearing; glClear uses the
    // clear color currently in effect.
    GLES20.glClearColor(1.0f, 0.0f, 1.0f, 1.0f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    mDisplaySurface.swapBuffers();
}
Let's analyze the drawFrame method. First we lock the render target with EglSurface.makeCurrent; after that we can issue our own GL draw operations. Here it's the simplest drawing possible: clear the color and depth buffers and paint the surface the color value 0xFF00FF. Then we call swapBuffers to swap the read and write render buffers, letting Android composite the frame to screen. So... do you all see the 0xFF00FF screen? (smirk.jpg)
05-21 18:29:47.215 28583-28583/org.zzrblog.blogapp D/ZZR-GL: Trying GLES 2
05-21 18:29:47.217 28583-28583/org.zzrblog.blogapp D/ZZR-GL: Got GLES 2 config
05-21 18:29:47.219 28583-28583/org.zzrblog.blogapp D/ContinuousRecord: starting camera preview
05-21 18:29:47.299 28583-28616/org.zzrblog.blogapp D/OpenGLRenderer: endAllActiveAnimators on 0x72bd2d9000 (RippleDrawable) with handle 0x72bcd54c80
05-21 18:29:47.599 28583-28583/org.zzrblog.blogapp D/ContinuousRecord: MSG_FRAME_AVAILABLE
Now look at the log output: we created a GLES 2 EGL environment and started the camera preview, yet only a single MSG_FRAME_AVAILABLE message arrived. This is how Android's Camera/SurfaceTexture mechanism works. Since the system notifies you (via OnFrameAvailableListener) that a preview frame has been handed over, don't you in turn have a responsibility to tell the system that you have consumed that frame? Yes: we need to add a call to updateTexImage to acknowledge that the frame has been used. Modify drawFrame as follows:
private void drawFrame() {
    ... ...
    mDisplaySurface.makeCurrent();
    GLES20.glClearColor(1.0f, 0.0f, 1.0f, 1.0f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    mCameraTexture.updateTexImage(); // Tell Android.Camera we've consumed the frame.
    // Still suspect nothing is moving? Try counting the arriving frames and
    // using a different glClearColor for each one.
    mDisplaySurface.swapBuffers();
}
OK, the basic skeleton is in place. Now let's think about how to draw a rectangular frame image. With the earlier tutorials behind us, this shouldn't be a hard task: two steps, a ShaderProgram and the corresponding vertex model.
First, this article's render pipeline program, FrameRectSProgram:
public class FrameRectSProgram extends ShaderProgram {
    private static final String VERTEX_SHADER =
            "uniform mat4 uMVPMatrix;\n" +
            "attribute vec4 aPosition;\n" +
            "uniform mat4 uTexMatrix;\n" +
            "attribute vec4 aTextureCoord;\n" +
            "varying vec2 vTextureCoord;\n" +
            "void main() {\n" +
            "    gl_Position = uMVPMatrix * aPosition;\n" +
            "    vTextureCoord = (uTexMatrix * aTextureCoord).xy;\n" +
            "}\n";

    private static final String FRAGMENT_SHADER_EXT =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
            "}\n";

    int uMVPMatrixLoc;
    int aPositionLoc;
    int uTexMatrixLoc;
    int aTextureCoordLoc;

    public FrameRectSProgram() {
        super(VERTEX_SHADER, FRAGMENT_SHADER_EXT);
        uMVPMatrixLoc = GLES20.glGetUniformLocation(programId, "uMVPMatrix");
        GlUtil.checkLocation(uMVPMatrixLoc, "uMVPMatrix");
        aPositionLoc = GLES20.glGetAttribLocation(programId, "aPosition");
        GlUtil.checkLocation(aPositionLoc, "aPosition");
        uTexMatrixLoc = GLES20.glGetUniformLocation(programId, "uTexMatrix");
        GlUtil.checkLocation(uTexMatrixLoc, "uTexMatrix");
        aTextureCoordLoc = GLES20.glGetAttribLocation(programId, "aTextureCoord");
        GlUtil.checkLocation(aTextureCoordLoc, "aTextureCoord");
    }
}
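GlUtil.checkLocation is called above but not shown. A minimal sketch of what it does: glGetAttribLocation and glGetUniformLocation return -1 when the name is not an active attribute or uniform (for example when the GLSL compiler optimized it away), so we fail fast instead of silently rendering garbage.

```java
// Minimal sketch of GlUtil.checkLocation: fail fast on an invalid location.
class LocationChecker {
    static void checkLocation(int location, String label) {
        if (location < 0) {
            throw new RuntimeException("Unable to locate '" + label + "' in program");
        }
    }
}
```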
We again reuse our earlier base template class ShaderProgram and derive FrameRectSProgram to render the preview frames.
Look at the vertex shader first. Besides the familiar vertex position aPosition, texture coordinate aTextureCoord and the combined model/view/projection matrix uMVPMatrix, there is now an extra texture matrix, uTexMatrix. Think of it as a matrix that transforms texture coordinates; we will see later why such a matrix is needed.
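What the vertex shader line `vTextureCoord = (uTexMatrix * aTextureCoord).xy;` computes can be sketched in plain Java. The column-major 4x4 layout matches what SurfaceTexture.getTransformMatrix() fills in; the flip matrix in the test below is only illustrative of the kind of transform SurfaceTexture reports, not a value taken from the project.

```java
// Apply a column-major 4x4 matrix to a texture coordinate (u, v, 0, 1)
// and keep only the .xy of the result, mirroring the vertex shader.
class TexMatrix {
    static float[] transform(float[] m, float u, float v) {
        float x = m[0] * u + m[4] * v + m[12]; // row 0 of m * (u, v, 0, 1)
        float y = m[1] * u + m[5] * v + m[13]; // row 1 of m * (u, v, 0, 1)
        return new float[] {x, y};
    }
}
```

A vertical flip (v' = 1 - v), for instance, maps the bottom-left coordinate (0, 0) to (0, 1).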
Then the fragment shader: there is an extra #extension line and a new sampler type, samplerExternalOES. Both relate to the Android-specific target GLES11Ext.GL_TEXTURE_EXTERNAL_OES introduced above: to sample this kind of texture, we must first declare the extension with "#extension GL_OES_EGL_image_external : require" before we can use the samplerExternalOES type.
Now the corresponding vertex model:
public class FrameRect {
    public static final int SIZE_OF_FLOAT = 4;

    /**
     * A "full" square, extending from -1 to 1 in both dimensions.
     * When the model/view/projection matrices are all identity, this will
     * exactly cover the viewport.
     * The texture coordinates are y-inverted relative to the rectangle.
     * (This seems to work out right with external textures from SurfaceTexture.)
     */
    private static final float FULL_RECTANGLE_COORDS[] = {
            -1.0f, -1.0f,   // 0 bottom left
             1.0f, -1.0f,   // 1 bottom right
            -1.0f,  1.0f,   // 2 top left
             1.0f,  1.0f,   // 3 top right
    };
    private static final float FULL_RECTANGLE_TEX_COORDS[] = {
            0.0f, 0.0f,     // 0 bottom left
            1.0f, 0.0f,     // 1 bottom right
            0.0f, 1.0f,     // 2 top left
            1.0f, 1.0f      // 3 top right
    };

    // Declarations of mVertexArray, mTexCoordArray, mCoordsPerVertex,
    // mVertexCount, mVertexStride and mTexCoordStride
    ... ... ...

    public FrameRect() {
        mVertexArray = createFloatBuffer(FULL_RECTANGLE_COORDS);
        mTexCoordArray = createFloatBuffer(FULL_RECTANGLE_TEX_COORDS);
        mCoordsPerVertex = 2;
        mVertexCount = FULL_RECTANGLE_COORDS.length / mCoordsPerVertex; // 4
        mTexCoordStride = 2 * SIZE_OF_FLOAT;
        mVertexStride = 2 * SIZE_OF_FLOAT;
    }

    // ... getters for the fields above ...

    private FrameRectSProgram mProgram;
    public void setShaderProgram(FrameRectSProgram mProgram) {
        this.mProgram = mProgram;
    }
}
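The createFloatBuffer helper called in the constructor is elided above. One common implementation, and the one such GL code generally needs, is a direct buffer in native byte order so GLES can read the floats without copying:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch of createFloatBuffer: allocate a direct, native-order FloatBuffer,
// fill it with the vertex data, and rewind it so GLES reads from the start.
class BufferUtil {
    static final int SIZE_OF_FLOAT = 4;

    static FloatBuffer createFloatBuffer(float[] coords) {
        ByteBuffer bb = ByteBuffer.allocateDirect(coords.length * SIZE_OF_FLOAT);
        bb.order(ByteOrder.nativeOrder()); // match the platform's byte order
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(coords);
        fb.position(0); // rewind; glVertexAttribPointer reads from the position
        return fb;
    }
}
```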
Nothing surprising here, but note the English comment on the vertex coordinates; I dug it out of the Camera/SurfaceTexture source. Presumably Android knows its own quirks, which is why y is not flipped here. Some of you may ask: why only the rectangle's 4 points? Shouldn't it be two triangles, or indexed drawing? Actually, no; read on.
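Why 4 vertices suffice: the rectangle is presumably drawn with glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4) (an assumption here; the actual draw call appears in the next article). In a triangle strip, every vertex after the first two forms a triangle with the previous two, so 4 vertices already describe 2 triangles. This helper makes that rule concrete (the GPU also flips the winding of every other triangle, which is omitted here):

```java
// Expand the vertex indices of a triangle strip into the triangles the GPU
// assembles: vertex i (for i >= 2) forms a triangle with vertices i-2, i-1.
class TriangleStrip {
    static int[][] expand(int vertexCount) {
        int[][] tris = new int[vertexCount - 2][];
        for (int i = 0; i < vertexCount - 2; i++) {
            tris[i] = new int[] {i, i + 1, i + 2};
        }
        return tris;
    }
}
```

For the 4-vertex rectangle above this yields triangles (0,1,2) and (1,2,3), which together cover the quad.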
That's it for FrameRect for now. Back in the test page ContinuousRecordActivity, we create the FrameRect in onCreate, then create the render pipeline program FrameRectSProgram inside the EGL environment and set it on the FrameRect:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    ... ...
    frameRect = new FrameRect();
}

@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
    ... ...
    mDisplaySurface.makeCurrent();
    frameRect.setShaderProgram(new FrameRectSProgram());
    ... ...
}
// Why split things up like this? Remember the iron rule that OpenGL commands
// must execute within an EGL environment?
A quick summary of what we learned this time:
1. How to use a custom EGL environment.
2. The usage caveats of Camera with SurfaceTexture.
3. Android's special texture type GL_OES_EGL_image_external and how to use it.
Due to length, we'll stop here. In the next article we'll actually draw the preview image, plus add a signature-watermark effect.