OpenGL from Beginner to Giving Up 08: Camera Preview, Explained the Easy Way

This article covers how to implement camera preview. There are plenty of Camera2 preview articles online, and Camera1 is deprecated, so we won't worry about backward compatibility and will just use Camera2. A TextureView could serve as the preview surface, but later articles add filters, stickers, and other effects on top of the camera, so we use a GLSurfaceView instead.

Previous articles in this series:
OpenGL from Beginner to Giving Up 01: Some basic concepts
OpenGL from Beginner to Giving Up 02: GLSurfaceView and Renderer
OpenGL from Beginner to Giving Up 03: Camera and view
OpenGL from Beginner to Giving Up 04: Drawing a rectangle
OpenGL from Beginner to Giving Up 05: The shading language
OpenGL from Beginner to Giving Up 06: Displaying a texture image
OpenGL from Beginner to Giving Up 07: Filters (a must-read, you won't regret it)

Let's get into the code...

1. A simple layout file

A capture button is reserved here; the actual photo capture will be covered in a later article.
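The layout itself is nothing more than a full-screen GLSurfaceView plus that capture button. A minimal sketch of what it might look like (the id glsurfaceview and the onClick target takePicture are taken from the Activity code below; the container and attributes are assumptions):

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- preview surface; the renderer draws the camera frames here -->
    <android.opengl.GLSurfaceView
        android:id="@+id/glsurfaceview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!-- capture button, reserved for a later article -->
    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|center_horizontal"
        android:text="Take picture"
        android:onClick="takePicture" />

</FrameLayout>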

2. The Activity code

public class Camera2Demo_SurfaceView_Activity extends AppCompatActivity {

    private static final String TAG = "Camera2Demo_SurfaceView";

    private GLSurfaceView mCameraV2GLSurfaceView;
    private CameraV2Renderer mCameraV2Renderer;
    private CameraV2 mCamera;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.camera2_surfaceview_activity);
        initView();
    }

    private void initView() {
        // CameraV2 wraps the Camera2 API
        mCamera = new CameraV2(this);

        mCameraV2GLSurfaceView = findViewById(R.id.glsurfaceview);
        mCameraV2GLSurfaceView.setEGLContextClientVersion(2);

        mCameraV2Renderer = new CameraV2Renderer();
        mCameraV2Renderer.init(mCameraV2GLSurfaceView, mCamera, this);
        mCameraV2GLSurfaceView.setRenderer(mCameraV2Renderer);

    }

    @Override
    protected void onResume() {
        super.onResume();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (mCamera != null) {
            mCamera.closeCamera();
        }
    }

    public void takePicture(View view) {
        Log.d(TAG, "onclick takePicture: ");
    }
}

Notes:
There isn't much code in initView, because the Camera2 API is wrapped inside the CameraV2 class. Next we'll walk through the two classes that do the real work: CameraV2 and CameraV2Renderer.
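One small thing the Activity leaves out: GLSurfaceView has its own onResume()/onPause() pair that should be forwarded from the Activity lifecycle so the GL thread is paused and resumed properly. The empty onResume() override above is the natural place for it; a minimal addition (not in the original code):

    @Override
    protected void onResume() {
        super.onResume();
        // resume the GL rendering thread
        mCameraV2GLSurfaceView.onResume();
    }

    @Override
    protected void onPause() {
        // pause the GL rendering thread before the Activity goes to the background
        mCameraV2GLSurfaceView.onPause();
        super.onPause();
    }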

3. CameraV2

CameraV2 wraps the Camera2 API. Let me walk you through it~

3.1 The constructor

    public CameraV2(Activity activity) {
        mActivity = activity;
        // 1. Start the camera thread
        startCameraThread();

        // 2. Set up the camera: pick the cameraId and the preview size
        setupCamera();

        // 3. Open the camera
        openCamera();
    }

3.2 Starting the camera thread

    public void startCameraThread() {
        mCameraThread = new HandlerThread("CameraThread");
        mCameraThread.start();
        mCameraHandler = new Handler(mCameraThread.getLooper());
    }

The openCamera call later needs a Handler, and that Handler should be backed by a background thread so the camera callbacks don't run on the main thread; we'll get to it shortly.
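By the way, the Activity's onDestroy calls mCamera.closeCamera(), whose body isn't shown in this article. A minimal sketch of what it needs to do, based on the fields used in this class (close the CameraDevice obtained in the onOpened callback below and quit this camera thread):

    public void closeCamera() {
        if (mCameraDevice != null) {
            mCameraDevice.close();
            mCameraDevice = null;
        }
        if (mCameraThread != null) {
            // stop the handler thread started in startCameraThread()
            mCameraThread.quitSafely();
            mCameraThread = null;
            mCameraHandler = null;
        }
    }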

3.3 setupCamera

setupCamera mainly obtains the cameraId and the camera preview size. What is the preview size? It is the resolution (width × height) at which the camera outputs preview frames; later we use it as the default buffer size of the SurfaceTexture.

    public String setupCamera() {
        CameraManager cameraManager = (CameraManager) mActivity.getSystemService(Context.CAMERA_SERVICE);
        try {
            mCameraIdList = cameraManager.getCameraIdList();
            Log.d(TAG, "setupCamera: mCameraIdList:" + mCameraIdList.length);
            for (String id : mCameraIdList) {
                CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(id);
                // skip the front-facing camera; we want the back camera
                if (characteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT) {
                    continue;
                }
                // the sizes this camera can output to a SurfaceTexture; take the first one as the preview size
                StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                mPreviewSize = map.getOutputSizes(SurfaceTexture.class)[0];
                mCameraId = id;
                Log.d(TAG, "setupCamera: preview width = " + mPreviewSize.getWidth() + ", height = " + mPreviewSize.getHeight() + ", cameraId = " + mCameraId);
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
        return mCameraId;
    }

  1. CameraManager.getCameraIdList() returns the list of camera ids; on a device with both a front and a back camera the array has two entries. You can step into getCameraIdList and read the source if you're curious; we won't go into it here.

  2. The for loop skips the front-facing camera, so mCameraId ends up holding the id of the back camera.

  3. Getting the preview size mPreviewSize is boilerplate; getOutputSizes(SurfaceTexture.class)[0] simply takes the first supported size. A more careful selection is sketched right after this list.
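If you want something better than "the first size in the list", a common approach is to pick the supported size whose aspect ratio matches the view and whose area is closest to it. A rough sketch (the method name and parameters are mine, not from the original code):

    // Hypothetical helper: among the supported sizes, pick the one with the target
    // aspect ratio whose area is closest to targetWidth x targetHeight.
    private Size chooseOptimalSize(Size[] choices, int targetWidth, int targetHeight) {
        Size best = choices[0];
        long bestDiff = Long.MAX_VALUE;
        for (Size option : choices) {
            if (option.getWidth() * targetHeight != option.getHeight() * targetWidth) {
                continue; // aspect ratio doesn't match the target
            }
            long diff = Math.abs((long) option.getWidth() * option.getHeight()
                    - (long) targetWidth * targetHeight);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = option;
            }
        }
        return best;
    }

It would replace the getOutputSizes(...)[0] line, e.g. mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class), viewWidth, viewHeight).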

With that set up, we can open the camera.

3.4 openCamera: opening the camera

    public boolean openCamera() {
        CameraManager cameraManager = (CameraManager) mActivity.getSystemService(Context.CAMERA_SERVICE);
        try {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && ActivityCompat.checkSelfPermission(mActivity, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
                mActivity.requestPermissions(new String[]{Manifest.permission.CAMERA, Manifest
                        .permission.WRITE_EXTERNAL_STORAGE}, 1);
                return false;
            }
            cameraManager.openCamera(mCameraId, mStateCallback, mCameraHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }

Opening the camera is just a matter of getting the CameraManager and calling its openCamera method. Its three parameters are worth a closer look:

  1. mCameraId was obtained in the setupCamera step above.
  2. mStateCallback is a CameraDevice.StateCallback:
    public CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {
        @Override
        public void onOpened(@NonNull CameraDevice camera) {
            mCameraDevice = camera;
            startPreview();
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice camera) {
            camera.close();
            mCameraDevice = null;
        }

        @Override
        public void onError(@NonNull CameraDevice camera, int error) {
            camera.close();
            mCameraDevice = null;
        }
    };

This callback reports the camera's state: the three methods above are invoked when the camera opens successfully, is disconnected, or runs into an error. The most important one is onOpened, which hands us the CameraDevice; the usual pattern is to start the preview as soon as we have it.

  3. mCameraHandler was initialized in startCameraThread.
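One loose end in openCamera above: it requests the CAMERA permission and returns false, but nothing reopens the camera once the user grants it. A minimal sketch of handling that in the Activity (request code 1 matches the requestPermissions call; the retry itself is my addition, not part of the original code):

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                           @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == 1
                && grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            // camera permission granted: try opening the camera again
            mCamera.openCamera();
        }
    }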

3.5 startPreview: starting the preview

    public void startPreview() {
        if (mStartPreview || mSurfaceTexture == null || mCameraDevice == null) {
            return;
        }
        mStartPreview = true;

        // give the SurfaceTexture a default buffer size; mPreviewSize is the camera preview size
        mSurfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        // a Surface is built on top of the SurfaceTexture
        Surface surface = new Surface(mSurfaceTexture);
        try {
            // create a preview request via the CameraDevice
            mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            // the surface is the output target: preview frames are delivered into it
            mCaptureRequestBuilder.addTarget(surface);
            // create the session, pass the surface in, and just handle the callbacks
            mCameraDevice.createCaptureSession(Arrays.asList(surface), new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession session) {
                    try {
                        CaptureRequest mCaptureRequest = mCaptureRequestBuilder.build();
                        // keep capturing frames repeatedly; a single capture would give one frame, not a live preview
                        session.setRepeatingRequest(mCaptureRequest, null, mCameraHandler);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession session) {

                }
            }, mCameraHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

The comments should make this clear enough~
mCameraDevice, as explained above, is obtained in the openCamera callback.
The core of it is creating a session via mCameraDevice.createCaptureSession and passing in the surface; the preview frames are then written into that surface.

Note: in the onConfigured callback you must call session.setRepeatingRequest so that frames are captured repeatedly; otherwise you only get a single frame and there is no live preview.
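In practice you usually also want continuous auto-focus on the preview request. An optional addition before the request is built, not present in the original code:

            // continuous auto-focus suited to preview/still capture (optional)
            mCaptureRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);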

The Surface wraps a SurfaceTexture, and this is the key point:

    public void setSurfaceTexture(SurfaceTexture surfaceTexture) {
        this.mSurfaceTexture = surfaceTexture;
    }
    ...
    Surface surface = new Surface(mSurfaceTexture);

This SurfaceTexture is not created on a whim here; it is hooked up to OpenGL. setSurfaceTexture is called from CameraV2Renderer, so keep reading~

4. CameraV2Renderer

As usual, code first, explanation after.

public class CameraV2Renderer implements GLSurfaceView.Renderer {
    public static final String TAG = "CameraV2Renderer";
    private Context mContext;
    GLSurfaceView mGLSurfaceView;
    CameraV2 mCamera;
    private int mTextureId = -1;
    private SurfaceTexture mSurfaceTexture;
    private float[] mTransformMatrix = new float[16];

    Camera2BaseFilter mCamera2BaseFilter;

    public void init(GLSurfaceView surfaceView, CameraV2 camera, Context context) {
        mContext = context;
        mGLSurfaceView = surfaceView;
        mCamera = camera;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        ShaderManager.init(mContext);
        initSurfaceTexture();
        mCamera2BaseFilter = new Camera2FilterNone(mContext, mTextureId);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        glViewport(0, 0, width, height);
        Log.i(TAG, "onSurfaceChanged: " + width + ", " + height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {

        // updateTexImage() pulls the latest camera frame into the OES texture
        mSurfaceTexture.updateTexImage();
        // get the texture transform matrix straight from the SurfaceTexture
        mSurfaceTexture.getTransformMatrix(mTransformMatrix);

        glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
        // actually clear the color buffer (glClearColor alone only sets the clear color)
        glClear(GL_COLOR_BUFFER_BIT);
        mCamera2BaseFilter.draw(mTransformMatrix);

    }

    // create the SurfaceTexture; CameraV2.startPreview() needs it
    public boolean initSurfaceTexture() {
        // 1. generate an OES texture id
        mTextureId = GLUtil.getOESTextureId();
        // 2. create a SurfaceTexture backed by that texture id
        mSurfaceTexture = new SurfaceTexture(mTextureId);
        // An image is static, but camera frames keep changing, so every time a new frame arrives we call
        // surfaceTexture.updateTexImage() to refresh the texture. updateTexImage() should not be called directly in the
        // OnFrameAvailableListener callback; it belongs in onDrawFrame, and calling requestRender() triggers onDrawFrame.
        mSurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
            @Override
            public void onFrameAvailable(SurfaceTexture surfaceTexture) {
                Log.d(TAG, "onFrameAvailable: ");
                mGLSurfaceView.requestRender();
            }
        });

        // hand the SurfaceTexture to CameraV2, then start the preview
        mCamera.setSurfaceTexture(mSurfaceTexture);
        mCamera.startPreview();
        return true;
    }

}

There isn't much code in CameraV2Renderer; here's the flow:

4.1 Initialization in onSurfaceCreated

4.1.1 ShaderManager.init(mContext)

ShaderManager.init(mContext);
This call was actually added for the filter feature in the next article; it registers several camera shader programs:

        /** Camera shaders */
        insertParam(CAMERA_BASE_SHADER, GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_vertex_shader_base.glsl")
                , GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_fragment_shader_base.glsl"));
        //grayscale
        insertParam(CAMERA_GRAY_SHADER, GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_vertex_shader_base.glsl")
                , GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_fragment_shader_gray.glsl"));

        insertParam(CAMERA_COOL_SHADER, GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_vertex_shader_base.glsl")
                , GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_fragment_shader_cool.glsl"));

        insertParam(CAMERA_WARM_SHADER, GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_vertex_shader_base.glsl")
                , GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_fragment_shader_warm.glsl"));

        insertParam(CAMERA_FOUR_SHADER, GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_vertex_shader_base.glsl")
                , GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_fragment_shader_four.glsl"));

        insertParam(CAMERA_ZOOM_SHADER, GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_vertex_shader_base.glsl")
                , GLUtil.loadFromAssetsFile(context, "shader/camera2/camera2_fragment_shader_zoom.glsl"));


That's material for the next article, so this brief mention will do here.
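Still, for context while reading Camera2BaseFilter below, here is a rough sketch of what a ShaderManager.Param produced by insertParam might hold: the linked program plus the attribute/uniform handles looked up by name. The names aPosition, uMVPMatrix and aTexCoord match the vertex shader shown at the end of this article, and the field names match how Camera2BaseFilter uses them; everything else (GLUtil.createProgram, the sParams map) is assumed, and the real ShaderManager may differ:

    public static class Param {
        public int program;          // linked OpenGL program
        public int positionHandle;   // handle of aPosition
        public int mMVPMatrixHandle; // handle of uMVPMatrix
        public int mTexCoordHandle;  // handle of aTexCoord
    }

    private static final Map<String, Param> sParams = new HashMap<>();

    public static void insertParam(String key, String vertexSource, String fragmentSource) {
        Param param = new Param();
        // compile and link the two shaders into a program (createProgram is assumed from earlier articles)
        param.program = GLUtil.createProgram(vertexSource, fragmentSource);
        // look up the handles that Camera2BaseFilter needs
        param.positionHandle = GLES20.glGetAttribLocation(param.program, "aPosition");
        param.mTexCoordHandle = GLES20.glGetAttribLocation(param.program, "aTexCoord");
        param.mMVPMatrixHandle = GLES20.glGetUniformLocation(param.program, "uMVPMatrix");
        sParams.put(key, param);
    }

    public static Param getParam(String key) {
        return sParams.get(key);
    }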

4.1.2 initSurfaceTexture

This method matters a lot, so take the time to understand it.

    // create the SurfaceTexture; CameraV2.startPreview() needs it
    public boolean initSurfaceTexture() {
        // 1. generate an OES texture id
        mTextureId = GLUtil.getOESTextureId();
        // 2. create a SurfaceTexture backed by that texture id
        mSurfaceTexture = new SurfaceTexture(mTextureId);
        // An image is static, but camera frames keep changing, so every time a new frame arrives we call
        // surfaceTexture.updateTexImage() to refresh the texture. updateTexImage() should not be called directly in the
        // OnFrameAvailableListener callback; it belongs in onDrawFrame, and calling requestRender() triggers onDrawFrame.
        mSurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
            @Override
            public void onFrameAvailable(SurfaceTexture surfaceTexture) {
                Log.d(TAG, "onFrameAvailable: ");
                mGLSurfaceView.requestRender();
            }
        });

        // hand the SurfaceTexture to CameraV2, then start the preview
        mCamera.setSurfaceTexture(mSurfaceTexture);
        mCamera.startPreview();
        return true;
    }

    // class: GLUtil
    /**
     * The camera preview uses an EXTERNAL_OES texture; creating one is almost the same as creating a 2D texture.
     * @return the generated texture id
     */
    public static int getOESTextureId() {
        int[] texture = new int[1];
        GLES20.glGenTextures(1, texture, 0);
        // Since this is an external (OES) texture, it must be bound to the external texture target to work correctly.
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texture[0]);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
        return texture[0];
    }

This method creates the SurfaceTexture, hands it to CameraV2, and then starts the preview. Step by step:

  1. Create an external (OES) texture; getOESTextureId returns its texture id, mTextureId.
  2. With that texture id, create the SurfaceTexture.
  3. Set a frame listener on mSurfaceTexture: whenever new preview data arrives, it calls mGLSurfaceView.requestRender(), which triggers onDrawFrame (see the note right after this list).
  4. Hand the SurfaceTexture to CameraV2 and happily start the preview.
    From then on, every preview frame fires onFrameAvailable, which calls
    mGLSurfaceView.requestRender(), which triggers onDrawFrame ...
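One detail worth knowing: by default a GLSurfaceView renders continuously, in which case requestRender() is effectively redundant. The onFrameAvailable → requestRender() → onDrawFrame chain only drives rendering if the view is switched to on-demand rendering, which must be done after setRenderer (this call is not in the original initView):

    mCameraV2GLSurfaceView.setRenderer(mCameraV2Renderer);
    // render only when requestRender() is called
    mCameraV2GLSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);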

4.2 onDrawFrame

This method is straightforward:

        // updateTexImage() pulls the latest camera frame into the OES texture
        mSurfaceTexture.updateTexImage();
        // get the texture transform matrix straight from the SurfaceTexture
        mSurfaceTexture.getTransformMatrix(mTransformMatrix);

Update the preview image, update the transform matrix, and hand the rest of the drawing over to Camera2BaseFilter.

Why the xxFilter naming? Because the next article introduces filters, so the code shared by all filters is pulled up into Camera2BaseFilter.

5. Camera2BaseFilter

The comments in this class should make it self-explanatory:

public abstract class Camera2BaseFilter {
   private static final String TAG = "Camera2BaseFilter";

   private FloatBuffer mVertexBuffer;  // vertex position data, converted to a FloatBuffer
   private FloatBuffer mTexCoordBuffer;// texture coordinate data, converted to a FloatBuffer

   // handle of the vertex position attribute
   protected int vPositionHandle;
   // handle of the transform matrix uniform
   protected int mMVPMatrixHandle;
   // handle of the OpenGL program
   protected int mProgram;
   // handle of the texture coordinate attribute
   protected int mTexCoordHandle;
   // texture id
   protected int mTextureId;

   private Context mContext;


   public Camera2BaseFilter(Context context, int textureId) {
       this.mContext = context;
       this.mTextureId = textureId;
       // initialize the coordinate buffers and the shader program
       initBuffer();
       initShader();
   }

   // convert the coordinate arrays into Buffers
   private void initBuffer() {
       float vertices[] = new float[]{
               -1, 1, 0,
               -1, -1, 0,
               1, 1, 0,
               1, -1, 0,

       };// vertex positions (a full-screen quad drawn as a triangle strip)

       // In testing, the bottom-left corner is the texture-coordinate origin and the top-right is (1,1)
       float[] colors = new float[]{
               0, 1,
               0, 0,
               1, 1,
               1, 0,

       };// texture coordinates, one pair per vertex

       mVertexBuffer = GLUtil.floatArray2FloatBuffer(vertices);
       mTexCoordBuffer = GLUtil.floatArray2FloatBuffer(colors);
   }

   /**
    * Shader setup
    */
   private void initShader() {
       // get the program; ShaderManager wraps shader loading and linking
       ShaderManager.Param param = getProgram();
       mProgram = param.program;
       vPositionHandle = param.positionHandle;
       // handle of the transform matrix uniform
       mMVPMatrixHandle = param.mMVPMatrixHandle;
       // handle of the texture coordinate attribute
       mTexCoordHandle = param.mTexCoordHandle;


       Log.d(TAG, "initShader: mProgram = " + mProgram + ",vPositionHandle="+vPositionHandle +",mMVPMatrixHandle="+mMVPMatrixHandle+
               ",mTexCoordHandle="+mTexCoordHandle);
   }

   /**
    * Filter subclasses override this method to load a different OpenGL program.
    *
    * @return the shader program parameters for this filter
    */
   protected abstract ShaderManager.Param getProgram();


   public void draw(float[] transformMatrix) {
       // install the program into the OpenGL ES environment
       GLES20.glUseProgram(mProgram);

       // enable the vertex attribute arrays (disabled again at the end of this method)
       GLES20.glEnableVertexAttribArray(vPositionHandle);
       GLES20.glEnableVertexAttribArray(mTexCoordHandle);

       // bind the texture; unlike an image texture, this is an external (OES) texture.
       // glActiveTexture expects a texture unit; the sampler uniform defaults to unit 0.
       glActiveTexture(GLES20.GL_TEXTURE0);
       glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId);

       // upload the transform matrix
       glUniformMatrix4fv(mMVPMatrixHandle, 1, false, transformMatrix, 0);
       // vertex position data (three coordinates per vertex)
       GLES20.glVertexAttribPointer(vPositionHandle, 3,
               GLES20.GL_FLOAT, false,
               3 * 4, mVertexBuffer);
       // texture coordinate data (two coordinates per vertex)
       GLES20.glVertexAttribPointer(mTexCoordHandle, 2,
               GLES20.GL_FLOAT, false,
               2 * 4, mTexCoordBuffer);

       glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
       glBindFramebuffer(GL_FRAMEBUFFER, 0);

       // disable the attribute arrays enabled above
       GLES20.glDisableVertexAttribArray(vPositionHandle);
       GLES20.glDisableVertexAttribArray(mTexCoordHandle);
   }

   public void onDestroy() {
       GLES20.glDeleteProgram(mProgram);
       mProgram = 0;
   }
}
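The listing above relies on GLUtil.floatArray2FloatBuffer, which isn't shown in this article. It's the usual direct-ByteBuffer conversion, roughly like this sketch (the real helper from the earlier articles may differ slightly):

    // class: GLUtil
    public static FloatBuffer floatArray2FloatBuffer(float[] data) {
        // 4 bytes per float, allocated off-heap in native byte order so OpenGL can read it directly
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(data.length * 4);
        byteBuffer.order(ByteOrder.nativeOrder());
        FloatBuffer floatBuffer = byteBuffer.asFloatBuffer();
        floatBuffer.put(data);
        floatBuffer.position(0); // rewind so reads start at the first element
        return floatBuffer;
    }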

This is an abstract class: subclasses implement ShaderManager.Param getProgram() and return the OpenGL program parameters for their particular shader. Camera2FilterNone is the variant with no filter effect:

/**
 * No filter effect
 */
public class Camera2FilterNone extends Camera2BaseFilter {

    public Camera2FilterNone(Context context, int textureId) {
        super(context, textureId);
    }

    @Override
    protected ShaderManager.Param getProgram() {
        return ShaderManager.getParam(ShaderManager.CAMERA_BASE_SHADER);
    }
}
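By analogy, a filter subclass only has to return a different program. As a taste of the next article, a hypothetical grayscale variant could look like this (the actual class in the next article may differ):

public class Camera2FilterGray extends Camera2BaseFilter {

    public Camera2FilterGray(Context context, int textureId) {
        super(context, textureId);
    }

    @Override
    protected ShaderManager.Param getProgram() {
        // CAMERA_GRAY_SHADER was registered in ShaderManager.init above
        return ShaderManager.getParam(ShaderManager.CAMERA_GRAY_SHADER);
    }
}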

The shaders behind CAMERA_BASE_SHADER (used by Camera2FilterNone) are:
Vertex shader
assets/shader/camera2/camera2_vertex_shader_base.glsl

attribute vec4 aPosition;
uniform mat4 uMVPMatrix;
attribute vec4 aTexCoord;
varying vec2 vTextureCoord; // passed to the fragment shader
void main()
{
    vTextureCoord = (uMVPMatrix * aTexCoord).xy;
    gl_Position = aPosition;
}

Note that uMVPMatrix here is the texture transform matrix obtained from SurfaceTexture.getTransformMatrix, and it is applied to the texture coordinates rather than to the vertex positions.

Fragment shader
assets/shader/camera2/camera2_fragment_shader_base.glsl

#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES uTextureSampler;
varying vec2 vTextureCoord;
void main()
{
  vec4 vCameraColor = texture2D(uTextureSampler, vTextureCoord);
  float fGrayColor = (0.3*vCameraColor.r + 0.59*vCameraColor.g + 0.11*vCameraColor.b); // computed but unused in this no-filter shader
  gl_FragColor = vCameraColor;
}


Final result:

(screenshot of the live camera preview)

To wrap up:

It's already 12:22 at night,

so no summary this time; read it from the top, and leave a comment if anything is unclear.

I normally go to bed around 11, and writing this post kept me up late again...

The preview filters are saved for the next article. Stay tuned...
