Rendering YUV420p and NV12 with OpenGL in the Android native layer

《Using MediaCodec to decode in the NDK》
《Android MediaCodec encoding demo (Java)》
《Encoding H.264 with MediaCodec in the NDK》
《Rendering YUV420p and NV12 with OpenGL in the Android native layer》
《Rendering RGB video with NativeWindow on Android》
《Overlaying text with OpenGL》
《Building FreeType with Android Studio》
《The most primitive way to overlay text on a YUV image: manipulating pixels by hand》


Rendering YUV with OpenGL relies on the GPU's fast arithmetic. OpenGL ultimately wants RGBA, so the YUV data has to be converted to RGBA before it can be drawn. We could do that conversion in software on the CPU, but it is computationally heavy and hogs the CPU, so the usual approach is to upload the YUV data to the GPU and let OpenGL convert it to RGBA there.
So the real work here is getting the GPU to convert YUV into RGBA.
[Figure 1]

Some background first:


        YUV formats


            YUV420p (planar):
                             YV12: y0y1y2y3y4y5y6y7  v0v1  u0u1    all of Y first, then all of V, then all of U
                             YU12: y0y1y2y3y4y5y6y7  u0u1  v0v1    all of Y first, then all of U, then all of V; this one is also known as I420

           YUV420sp (planar + interleaved)
                             NV12: y0y1y2y3y4y5y6y7  u0v0 u1v1     all of Y first, then U and V interleaved (U first)
                             NV21: y0y1y2y3y4y5y6y7  v0u0 v1u1     all of Y first, then V and U interleaved (V first)
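These layouts translate directly into byte offsets. A small sketch (the function names are illustrative, not part of this project's code) of where each plane starts in a width x height 8-bit 4:2:0 frame:

```cpp
#include <cstddef>

// For an 8-bit 4:2:0 frame, Y is w*h bytes; U and V are each (w/2)*(h/2).
struct PlaneOffsets {
    size_t y;       // Y plane always starts at 0
    size_t chroma1; // U (YU12) or V (YV12), or the interleaved UV plane (NV12/NV21)
    size_t chroma2; // second chroma plane; unused (0) for semi-planar formats
    size_t total;   // full frame size: w*h*3/2
};

PlaneOffsets yuv420pOffsets(size_t w, size_t h) {
    // planar: Y, then one full chroma plane, then the other
    return { 0, w * h, w * h + (w / 2) * (h / 2), w * h * 3 / 2 };
}

PlaneOffsets nv12Offsets(size_t w, size_t h) {
    // semi-planar: Y, then one plane of (w/2)*(h/2) interleaved UV pairs
    return { 0, w * h, 0, w * h * 3 / 2 };
}
```

For the 960x544 file used later, the Y plane is 522240 bytes and the whole frame is 960*544*3/2 = 783360 bytes, which is exactly the buflen the render loop reads per frame.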


       Texture color formats in OpenGL


Taking YUV420p (YU12) as the example, the approach is: prepare three textures and store the Y, U and V planes one per texture; then, in the GPU program, sample the three components and run one matrix multiplication, the standard YUV-to-RGB formula, to get RGBA, and display that. One thing to pay attention to is the texture's GLenum format.
There are several choices for this value; gl2.h defines these macros:

#define GL_ALPHA                          0x1906
#define GL_RGB                            0x1907
#define GL_RGBA                           0x1908
#define GL_LUMINANCE                      0x1909
#define GL_LUMINANCE_ALPHA                0x190A
A summary, pieced together from around the web:

/*
The difference between the three single-component texture formats, GL_ALPHA, GL_LUMINANCE and GL_INTENSITY, is in the way the four-component RGBA color vector is generated. If the value for a given texel is X, then the RGBA color vector generated is:

    * GL_ALPHA: RGBA = (0, 0, 0, X0)
    * GL_LUMINANCE: RGBA = (X0, X0, X0, 1)
    * GL_INTENSITY: RGBA = (X0, X0, X0, X0)
    * GL_LUMINANCE_ALPHA: RGBA = (X0, X0, X0, X1), the two-component interleaved one
    * GL_RGB :          RGBA = (X0, X1, X2, 1), three components interleaved
    * GL_RGBA :         RGBA = (X0, X1, X2, X3), four components interleaved
    GL_LUMINANCE is used here: for each texel, r, g and b all carry the same uploaded value.
    (We could just as well use GL_LUMINANCE_ALPHA; only one component is actually needed.)
*/
For the Y, U and V planes of YUV420p, GL_LUMINANCE or GL_ALPHA works (GL_INTENSITY exists only in desktop OpenGL; note it is not among the gl2.h macros above): the data is planar, one component per texel. For NV12's interleaved chroma we need GL_LUMINANCE_ALPHA: any one of r/g/b yields the first byte and alpha yields the second, which gives us exactly the two interleaved values, i.e. the U and V stored as a pair.
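The two formats can be modeled on the CPU to see why this works. A sketch (the helper names are hypothetical, not GL API) of how texture2D() presents one or two uploaded bytes as an RGBA vector:

```cpp
#include <array>

// RGBA as four floats in [0,1], as texture2D() returns in the shader
using RGBA = std::array<float, 4>;

// GL_LUMINANCE: one byte X -> (X, X, X, 1)
RGBA expandLuminance(unsigned char x) {
    float f = x / 255.0f;
    return { f, f, f, 1.0f };
}

// GL_LUMINANCE_ALPHA: two bytes X0,X1 -> (X0, X0, X0, X1)
// For NV12's UV plane, X0 is U (read via .r) and X1 is V (read via .a).
RGBA expandLuminanceAlpha(unsigned char x0, unsigned char x1) {
    float f0 = x0 / 255.0f, f1 = x1 / 255.0f;
    return { f0, f0, f0, f1 };
}
```

Sampling .r of the GL_LUMINANCE Y texture returns Y; sampling .r and .a of the GL_LUMINANCE_ALPHA UV texture returns the interleaved U and V, which is exactly what the NV12 fragment shader in this post relies on.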

Shader parameters explained:

(Taken from a blog post: https://blog.csdn.net/u012861978/article/details/97117321)

  • Inputs:
    1. Shader program (not drawn in the figure): a piece of source code or an executable, declared with main, describing the per-vertex operations: coordinate transforms, computing each vertex's color from a lighting equation, computing texture coordinates.

    2. Attributes: per-vertex data supplied through vertex arrays, such as position, normal, texture coordinates and vertex color; think of an attribute as input bound to each individual vertex. Attributes exist only in the vertex shader; the fragment shader has none.

    3. Uniforms: read-only constant data the application passes to the shaders. In a vertex shader these are typically transform matrices, lighting parameters, colors and so on. A variable with the uniform qualifier is global, and that globality spans both the vertex and fragment shader: if the two shaders are linked into one program they share a single set of uniforms, so if both declare a uniform with the same name the pair must match exactly (same name and same type), because it really is one variable.

    4. Samplers: a special kind of uniform used to access textures. Samplers may be used in both vertex and fragment shaders.

  • Outputs:
    1. Varyings: varying variables hold the vertex shader's outputs, which become the fragment shader's inputs, and they are linearly interpolated during rasterization. A varying declared in the vertex shader must be passed on to the fragment shader to reach the next stage, so every varying declared in the vertex shader should be redeclared in the fragment shader with the same name and type. (That is why our program declares the same variable, varying vec2 vTextCoord, in both the vertex shader and the fragment shader: it is one variable, written with the texture coordinate in the vertex shader and read in the fragment shader to compute the RGB color.)

    2. gl_Position: the vertex shader must output at least the position, via this built-in variable.

    3. gl_FrontFacing: generated by the back-face culling stage; it is produced whether or not culling is enabled.

    4. gl_PointSize: the point size.

Finally, where things are drawn (the vertex shader determines each vertex's position), using a triangle strip:

(It is a little odd that this is called a vertex "shader" rather than a vertex generator; it has little to do with color and mainly determines coordinates.)
The vertex shader here 1. does no real computation, it simply copies our input into the built-in variable gl_Position, and 2. flips the texture coordinate vertically (the texture's S/T coordinate system and the image's X/Y coordinate system point in opposite vertical directions).
const char *vertexShader_ = GET_STR(
attribute vec4 aPosition;//input vertex position; the program feeds data into this field
attribute vec2 aTextCoord;//input texture coordinate; the program feeds data into this field
varying vec2 vTextCoord;//texture coordinate passed on to the fragment shader
void main() {
    //this flips the texture coordinate vertically
    vTextCoord = vec2(aTextCoord.x, 1.0 - aTextCoord.y);
    //pass the incoming position straight into the pipeline; gl_Position is an OpenGL built-in
    gl_Position = aPosition;
}
);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); With a triangle strip, the first three vertices form the first triangle; after that, each new vertex forms a triangle with the previous two. So vertices 0,1,2 make triangle one, vertices 1,2,3 make triangle two, and so on.
The vertex data used here:

 static float ver[] = {
            1.0f, -1.0f, 0.0f,
            -1.0f, -1.0f, 0.0f,
            1.0f, 1.0f, 0.0f,
            -1.0f, 1.0f, 0.0f
    };
This draws the figure below: two triangles, ABC plus BCD, make up the rectangle.
[Figure 2]
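The strip order can be made explicit: with GL_TRIANGLE_STRIP, triangle i uses vertices i, i+1 and i+2. A sketch (the helper name is illustrative) that enumerates the index triples for n vertices:

```cpp
#include <array>
#include <vector>

// Enumerate the index triples GL_TRIANGLE_STRIP draws for n vertices.
std::vector<std::array<int, 3>> stripTriangles(int n) {
    std::vector<std::array<int, 3>> tris;
    for (int i = 0; i + 2 < n; ++i)
        tris.push_back({ i, i + 1, i + 2 });  // GL alternates the winding; ignored here
    return tris;
}
```

For the 4 vertices above this yields {0,1,2} and {1,2,3}: triangle ABC and triangle BCD.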

The texture color (the fragment shader determines the final color):

The fragment shader's main job here is to fetch the YUV values from the textures and run the matrix multiplication that converts them to RGBA. That is the final color of each fragment, assigned to the built-in variable gl_FragColor.
const char *fragNV12_ = GET_STR(
    precision mediump float;
    varying vec2 vTextCoord;
    uniform sampler2D yTexture;
    uniform sampler2D uvTexture;
    void main(void)
    {
        vec3 yuv; 
        vec3 rgb; 
        yuv.x = texture2D(yTexture, vTextCoord.st).r; 
        yuv.y = texture2D(uvTexture, vTextCoord.st).r - 0.5; 
        yuv.z = texture2D(uvTexture, vTextCoord.st).a - 0.5; 
        rgb = mat3( 1,       1,         1, 
                    0,       -0.39465,  2.03211, 
                    1.13983, -0.58060,  0) * yuv; 
        gl_FragColor = vec4(rgb, 1.0); 
    }
);
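The shader's matrix can be sanity-checked on the CPU. GLSL mat3 constructors are column-major, so the mat3 above computes r = y + 1.13983*v, g = y - 0.39465*u - 0.58060*v, b = y + 2.03211*u. A per-pixel sketch of the same arithmetic (the clamp stands in for what the pipeline does to gl_FragColor; the function name is illustrative):

```cpp
#include <algorithm>

struct RGB { float r, g, b; };

// Same arithmetic as the fragment shader; u and v are already centered
// (the shader subtracts 0.5 before the multiply).
RGB yuvToRgb(float y, float u, float v) {
    RGB c;
    c.r = y + 1.13983f * v;
    c.g = y - 0.39465f * u - 0.58060f * v;
    c.b = y + 2.03211f * u;
    // the pipeline clamps gl_FragColor components to [0,1]; mirror that here
    c.r = std::clamp(c.r, 0.0f, 1.0f);
    c.g = std::clamp(c.g, 0.0f, 1.0f);
    c.b = std::clamp(c.b, 0.0f, 1.0f);
    return c;
}
```

A neutral sample (y = 0.5, centered u = v = 0) comes out mid-gray, as expected for a chroma-free pixel.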


Now the full code. (It reads the file "/storage/emulated/0/960X544_nv12.yuv" in a loop and renders it at 30 frames per second.)

File jni_NativeOpengl.cpp

//canok 20210528
//jni_NativeOpengl.cpp: the JNI glue layer
#include <jni.h>
#include <cstdlib>
#include <cstring>
#include <cassert>
#include <android/native_window_jni.h>
#include "Openglfuncs.h"
#include "GlPragrame.h"
#include "logs.h"

static char* jstringToChar(JNIEnv* env, jstring jstr) {
    char* rtn = NULL;
    jclass clsstring = env->FindClass("java/lang/String");
    //LOGD("fine String class------clsstring==NULL?%d",clsstring==NULL);
    jstring strencode = env->NewStringUTF("utf-8");
    jmethodID mid = env->GetMethodID(clsstring, "getBytes", "(Ljava/lang/String;)[B");
    jbyteArray barr = (jbyteArray) env->CallObjectMethod(jstr, mid, strencode);
    jsize alen = env->GetArrayLength(barr);
    jbyte* ba = env->GetByteArrayElements(barr, NULL);//second arg is a jboolean* (isCopy), not a flag
    if (alen > 0) {
        rtn = (char*) malloc(alen + 1);
        memcpy(rtn, ba, alen);
        rtn[alen] = 0;
    }
    env->ReleaseByteArrayElements(barr, ba, 0);
    return rtn;
}

void jStart(JNIEnv *env, jobject obj,jobject jsurface, jstring srcfile, int type, jint w, jint h, int fps){
    ANativeWindow *pWind= ANativeWindow_fromSurface(env, jsurface);
    char*file_in = jstringToChar(env,srcfile);
    //This starts another thread internally; the EGL environment is initialized on that new thread and rendering also happens there.
    //Note: EGL initialization and rendering must be on the same thread, because eglMakeCurrent is called during init.
    //If EGL initialization were exposed as its own JNI interface called from a Java thread, do not spawn a separate render thread here; run the render loop on that Java thread instead.
    mygl_start(pWind,file_in,type,w,h,fps);
}

void jStop(JNIEnv *env, jobject obj){
    mygl_stop();
}
static JNINativeMethod gMethods[] = {
        {"native_start",     "(Landroid/view/Surface;Ljava/lang/String;IIII)V",      (void*)jStart},
        {"native_stop",  "()V",   (void*)jStop},
};


static const char* const kClassPathName = "com/example/opengl_native/NativeOpengl";
static int registerNativeMethods(JNIEnv* env
        , const char* className
        , JNINativeMethod* gMethods, int numMethods) {
    jclass clazz;
    clazz = env->FindClass(className);
    if (clazz == NULL) {
        return JNI_FALSE;
    }
    if (env->RegisterNatives(clazz, gMethods, numMethods) < 0) {
        return JNI_FALSE;
    }
    return JNI_TRUE;
}
// This function only registers the native methods
static int registerFunctios(JNIEnv *env)
{
    ALOGD("register [%s]%d",__FUNCTION__,__LINE__);
    return registerNativeMethods(env,
                                 kClassPathName, gMethods, sizeof(gMethods)/sizeof(gMethods[0]));
}

jint JNI_OnLoad(JavaVM* vm, void* reserved)
{
    ALOGD("onloader");
    JNIEnv* env = NULL;
    jint result = -1;

    if (vm->GetEnv((void**) &env, JNI_VERSION_1_4) != JNI_OK) {
        ALOGD("ERROR: GetEnv failed\n");
        goto bail;
    }
    assert(env != NULL);
    if (registerFunctios(env) < 0) {
        ALOGE(" onloader ERROR: MediaPlayer native registration failed\n");
        goto bail;
    }
    ALOGD("onloader register ok ![%s]%d",__FUNCTION__,__LINE__);

    result = JNI_VERSION_1_4;

    bail:
    return result;
}

File Openglfuncs.cpp
 

//canok 20210528
//Openglfuncs.cpp: sets up the EGL environment and reads the file to render
#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <pthread.h>
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include "logs.h"
#include "GlPragrame.h"
#include "Openglfuncs.h"
#define MIN(x,y) ((x)<(y)?(x):(y))

#define DEBUG_FROME_FILE 1
EGLDisplay megldisplay =NULL;
EGLSurface meglSurface =NULL;
EGLContext meglContext=NULL;
int mbRun=0;
void initGles(){
    // 1. initialize EGL
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if(display == EGL_NO_DISPLAY){
        ALOGE("eglGetDisplay failed %d", eglGetError());
        return;
    }
    megldisplay = display;
    if(!eglInitialize(display,NULL,NULL)){
        ALOGE("eglInitialize failed %d", eglGetError());
    }

    //2. choose a display config
    EGLConfig config;
    EGLint configNum;
    EGLint configSpec[] = {
            EGL_RED_SIZE,8,
            EGL_GREEN_SIZE, 8,
            EGL_BLUE_SIZE, 8,
            EGL_ALPHA_SIZE,8,
            EGL_RENDERABLE_TYPE,EGL_OPENGL_ES2_BIT,
            EGL_SURFACE_TYPE,EGL_WINDOW_BIT,EGL_NONE
    };

    if(EGL_TRUE != eglChooseConfig(display, configSpec, &config, 1, &configNum)){
        ALOGD("eglChooseConfig failed!");
        return;
    }


    //3. create the EGL context
    const EGLint ctxAttr[] = {
            EGL_CONTEXT_CLIENT_VERSION, 2,EGL_NONE
    };
    EGLContext  context = eglCreateContext(display, config, EGL_NO_CONTEXT, ctxAttr);
    if(context == EGL_NO_CONTEXT){
        ALOGD("eglCreateContext failed!");
        return;
    }
    meglContext=context;


    //4. create the EGLSurface, binding the Surface passed from Java to EGL
    EGLSurface  eglsurface = eglCreateWindowSurface(display, config, global_Context.nwin, 0);
    if(eglsurface == EGL_NO_SURFACE){
        ALOGD("eglCreateWindowSurface failed!");
        return;
    }
    meglSurface=eglsurface;

    if(EGL_TRUE != eglMakeCurrent(display, eglsurface, eglsurface, context)){
        ALOGD("eglMakeCurrent failed!");
        return ;
    }

    glClearColor(0.0f,0.0f,0.0f,1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    eglSwapBuffers(megldisplay, meglSurface);
}


void *renderthread(void * prame){
#if DEBUG_FROME_FILE
    //step 1: set up the ES environment
    initGles();
    ALOGD("mygl_init intglescontext ok ");
    //step 2: build the GPU program
    createprograme();
    ALOGD("mygl_init createprograme ok %d",global_Context.type);

    //warning!!!
    //glViewport(0,0,WIDTH,HEIGHT);

    FILE *fp = fopen(global_Context.file_in,"rb");//binary read
    if(fp ==NULL){
        ALOGD("fopen failed!");
        return NULL;
    }
    global_Context.buflen=global_Context.mW*global_Context.mH*3/2;
    global_Context.buf = (unsigned char*)malloc( global_Context.buflen);

    int ret =0;
    while(mbRun){
        //draw here
        if( (ret = fread(global_Context.buf,1,global_Context.buflen,fp)) ==global_Context.buflen ){
            drawFrame(global_Context.buf,global_Context.buflen);
            eglSwapBuffers(megldisplay, meglSurface);
            usleep(1000*1000/global_Context.mfps);
        }else{//loop the file
            fseek(fp,0,SEEK_SET);
        }
    }
    //tear down
    if(megldisplay!=NULL && meglSurface!=NULL && meglContext!=NULL){
        eglDestroySurface(megldisplay,meglSurface);
        eglDestroyContext(megldisplay,meglContext);
    }
    fclose(fp);
    free(global_Context.file_in);
    free(global_Context.buf);
#else
    initGles();
    ALOGD(" intglescontext ok ");

    createprograme();
    ALOGD(" createprograme ok ");

    global_Context.buflen=global_Context.mW*global_Context.mH*3/2;
    global_Context.buf = (unsigned char*)malloc( global_Context.buflen);
    if(global_Context.buf == NULL){
        ALOGE("err to malloc!");
        return NULL;
    }

    while(mbRun){
        //wait on the condition
        pthread_mutex_lock(&global_Context.mCondMutex);
        pthread_cond_wait(&global_Context.mCond,&global_Context.mCondMutex);
        pthread_mutex_unlock(&global_Context.mCondMutex);


        pthread_mutex_lock(&global_Context.bufMutex);
        do {//render; the buffer must not be modified during this
            drawFrame(global_Context.buf, global_Context.buflen);
            eglSwapBuffers(megldisplay, meglSurface);
        }while(0);
        pthread_mutex_unlock(&global_Context.bufMutex);

    }
    //tear down
    if(megldisplay!=NULL && meglSurface!=NULL && meglContext!=NULL){
        eglDestroySurface(megldisplay,meglSurface);
        eglDestroyContext(megldisplay,meglContext);
    }
    free(global_Context.buf);
#endif
    pthread_exit(NULL);
    return NULL;
}
void mygl_start(ANativeWindow *pWnid, char*file_in,int type, int w, int h,int fps){
    global_Context.file_in = file_in;
    global_Context.type = type;
    global_Context.nwin = pWnid;
    global_Context.mW = w;
    global_Context.mH = h;
    global_Context.mfps =fps;
    ALOGD("start run");
    //spawn the render thread:
    if(mbRun){
       ALOGE("had run, return");
       return;
    }
    mbRun =true;
    int ret =0;
    if(0 != (ret = pthread_create(&global_Context.mThread,NULL,renderthread,NULL))){
        ALOGE("pthread_create error");
    }
    //pthread_detach(pd);
}
void mygl_render(unsigned char *buf,int size){

    pthread_mutex_lock(&global_Context.bufMutex);
    do {//copy; the buffer must not be modified during this
        memcpy(global_Context.buf,buf,MIN(size,global_Context.buflen));
    }while(0);
    pthread_mutex_unlock(&global_Context.bufMutex);

    //signal the condition
    pthread_mutex_lock(&global_Context.mCondMutex);
    pthread_cond_signal(&global_Context.mCond);
    pthread_mutex_unlock(&global_Context.mCondMutex);

}
void mygl_stop(){
    mbRun = false;
    //signal the condition
    pthread_mutex_lock(&global_Context.mCondMutex);
    pthread_cond_signal(&global_Context.mCond);
    pthread_mutex_unlock(&global_Context.mCondMutex);

    //wait for the thread to exit, avoiding "FORTIFY: pthread_mutex_lock called on a destroyed mutex"
    pthread_join(global_Context.mThread,NULL);
}

File GlPragrame.cpp
 

//canok 20210528
//GlPragrame.cpp: the GL program itself
#include <cstdlib>
#include <GLES2/gl2.h>
#include "GlPragrame.h"
#include "Openglfuncs.h"
#include "logs.h"
#define GET_STR(x) #x
struct S_CONTEXT global_Context = {.bufMutex=PTHREAD_MUTEX_INITIALIZER,.mCondMutex=PTHREAD_MUTEX_INITIALIZER,
.mCond=PTHREAD_COND_INITIALIZER};

const char *vertexShader_ = GET_STR(
attribute vec4 aPosition;//input vertex position; the program feeds data into this field
attribute vec2 aTextCoord;//input texture coordinate; the program feeds data into this field
varying vec2 vTextCoord;//texture coordinate passed on to the fragment shader
void main() {
    //this flips the texture coordinate vertically
    vTextCoord = vec2(aTextCoord.x, 1.0 - aTextCoord.y);
    //pass the incoming position straight into the pipeline; gl_Position is an OpenGL built-in
    gl_Position = aPosition;
}
);
/*
The difference between the three single-component texture formats, GL_ALPHA, GL_LUMINANCE and GL_INTENSITY, is in the way the four-component RGBA color vector is generated. If the value for a given texel is X, then the RGBA color vector generated is:

    * GL_ALPHA: RGBA = (0, 0, 0, X0)

    * GL_LUMINANCE: RGBA = (X0, X0, X0, 1)

    * GL_INTENSITY: RGBA = (X0, X0, X0, X0)

    * GL_LUMINANCE_ALPHA: RGBA = (X0, X0, X0, X1), the two-component interleaved one
    * GL_RGB :          RGBA = (X0, X1, X2, 1), three components interleaved
    * GL_RGBA :          RGBA = (X0, X1, X2, X3), four components interleaved
    GL_LUMINANCE is used here: for each texel, r, g and b all carry the same uploaded value.
    (We could just as well use GL_LUMINANCE_ALPHA; only one component is actually needed.)

      The whole idea: treat each plane of the YUV420 frame as one texture, then in the
      fragment shader use the GPU's fast arithmetic to turn the three sampled components
      into RGB via the YUV-to-RGB matrix, and draw the resulting RGBA point.
*/

const char *fragYUV420P_ = GET_STR(
    precision mediump float;
    varying vec2 vTextCoord;
    //the three input YUV textures
    uniform sampler2D yTexture;//sampler
    uniform sampler2D uTexture;//sampler
    uniform sampler2D vTexture;//sampler
    void main() {
        vec3 yuv;
        vec3 rgb;
        //sample each of the YUV components from its texture
        //below is a YUV-to-RGB matrix multiplication
        //the texture's r, g and b are equal: one value per texel
        /*
        *matrix form:
        *M x [ [y,u,v] - [0, 0.5, 0.5] ]
        */
        yuv.x = texture2D(yTexture, vTextCoord).r;
        yuv.y = texture2D(uTexture, vTextCoord).r - 0.5;
        yuv.z = texture2D(vTexture, vTextCoord).r - 0.5;
        rgb = mat3(
                1.0, 1.0, 1.0,
                0.0, -0.39465, 2.03211,
                1.13983, -0.5806, 0.0
        ) * yuv;
        //gl_FragColor is an OpenGL built-in
        gl_FragColor = vec4(rgb, 1.0);
    }
);

const char *fragNV12_ = GET_STR(
    precision mediump float;
    varying vec2 vTextCoord;
    uniform sampler2D yTexture;
    uniform sampler2D uvTexture;
    void main(void)
    {
        vec3 yuv; 
        vec3 rgb; 
        yuv.x = texture2D(yTexture, vTextCoord.st).r; 
        yuv.y = texture2D(uvTexture, vTextCoord.st).r - 0.5; 
        yuv.z = texture2D(uvTexture, vTextCoord.st).a - 0.5; 
        rgb = mat3( 1,       1,         1, 
                    0,       -0.39465,  2.03211, 
                    1.13983, -0.58060,  0) * yuv; 
        gl_FragColor = vec4(rgb, 1.0); 
    }
);


/**
 * Load a shader
 * @param type       shader type
 * @param shaderSrc  shader source
 * @return
 */
GLuint LoadShader(GLenum type, const char *shaderSrc) {
    GLuint shader;
    GLint compiled;
    shader = glCreateShader(type);  // create the shader handle
    if (shader == 0) {
        ALOGE("create shader error");
        return 0;
    }

    // attach the shader source
    glShaderSource(shader, 1, &shaderSrc, 0);

    // compile the shader
    glCompileShader(shader);

    // check the compile status
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);

    if (!compiled) {
        GLint infoLen = 0;
        // get the log length
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
        if (infoLen > 1) {
            GLchar *infoLog = (GLchar *)(malloc(sizeof(GLchar) * infoLen));
            // fetch the log
            glGetShaderInfoLog(shader, infoLen, 0, infoLog);
            ALOGE("%s", infoLog);
            free(infoLog);
        }
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}

GLuint LoadProgramYUV(int type){

    GLint vertexShader = LoadShader(GL_VERTEX_SHADER,vertexShader_);
    GLint fragShader;
    if(TYPE_YUV420SP_NV12 == type){
       fragShader = LoadShader(GL_FRAGMENT_SHADER,fragNV12_);
    }else{
    //default to YUV420p
        fragShader = LoadShader(GL_FRAGMENT_SHADER,fragYUV420P_);
    }
    // create the program
    GLint program = glCreateProgram();
    if (program == 0){
        ALOGE("YUV : CreateProgram failure");
        glDeleteShader(vertexShader);
        glDeleteShader(fragShader);
        return 0;
    }

    // attach the shaders to the program
    glAttachShader(program,vertexShader);
    glAttachShader(program,fragShader);

    // link the program
    glLinkProgram(program);
    GLint  status = 0;
    glGetProgramiv(program,GL_LINK_STATUS,&status);
    if (status == 0){
        ALOGE("glLinkProgram failure");
        glDeleteProgram(program);
        return 0;
    }
    global_Context.glProgram = program;
    glUseProgram(program);
    return 1;

}

void RenderYUVConfig(GLuint width, GLuint height, int type) {
    static float ver[] = {
            1.0f, -1.0f, 0.0f,
            -1.0f, -1.0f, 0.0f,
            1.0f, 1.0f, 0.0f,
            -1.0f, 1.0f, 0.0f
    };

    GLuint apos = (GLuint)(glGetAttribLocation(global_Context.glProgram, "aPosition"));
    glEnableVertexAttribArray(apos);
    glVertexAttribPointer(apos, 3, GL_FLOAT, GL_FALSE, 0, ver);

   //texture coordinate data
    static float fragment[] = {
            1.0f, 0.0f,
            0.0f, 0.0f,
            1.0f, 1.0f,
            0.0f, 1.0f
    };
    GLuint aTex = (GLuint)(glGetAttribLocation(global_Context.glProgram, "aTextCoord"));
    glEnableVertexAttribArray(aTex);
    glVertexAttribPointer(aTex, 2, GL_FLOAT, GL_FALSE, 0, fragment);

    ALOGD("can_ok[%s%d] type:%d",__FUNCTION__,__LINE__,type);

    if(TYPE_YUV420SP_NV12 == type){
        ALOGD("can_ok[%s%d] type:%d",__FUNCTION__,__LINE__,type);
        glUniform1i(glGetUniformLocation(global_Context.glProgram, "yTexture"), 0);
        glUniform1i(glGetUniformLocation(global_Context.glProgram, "uvTexture"), 1);

        glGenTextures(2, global_Context.mTextures);

        glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D,
                     0,
                     GL_LUMINANCE,
                     width,
                     height,
                     0,
                     GL_LUMINANCE,
                     GL_UNSIGNED_BYTE,
                     NULL 
        );

        glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[1]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D,
                     0,
                     GL_LUMINANCE_ALPHA, 
                     width / 2,
                     height / 2,
                     0, 
                     GL_LUMINANCE_ALPHA,
                     GL_UNSIGNED_BYTE,
                     NULL 
        );
    }else{
        glUniform1i(glGetUniformLocation(global_Context.glProgram, "yTexture"), 0);
        glUniform1i(glGetUniformLocation(global_Context.glProgram, "uTexture"), 1);
        glUniform1i(glGetUniformLocation(global_Context.glProgram, "vTexture"), 2);

        glGenTextures(3, global_Context.mTextures);

        glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D,
                     0,
                     GL_LUMINANCE,
                     width,
                     height,
                     0,
                     GL_LUMINANCE,
                     GL_UNSIGNED_BYTE,
                     NULL
        );

        glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[1]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D,
                     0,
                     GL_LUMINANCE,
                     width / 2,
                     height / 2,
                     0,
                     GL_LUMINANCE,
                     GL_UNSIGNED_BYTE,
                     NULL
        );

        glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[2]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D,
                     0,
                     GL_LUMINANCE,
                     width / 2,
                     height / 2,
                     0,
                     GL_LUMINANCE,
                     GL_UNSIGNED_BYTE,
                     NULL
        );
 
    }
}

//planar format
void RenderYUV420P(GLuint width, GLuint height, unsigned char *buf) {

    //Y
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[0]);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,
                    width, height,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE,
                    buf);


    //U
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[1]);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,
                    width / 2, height / 2,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE,
                    buf+width*height);


    //V
    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[2]);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,
                    width / 2, height / 2,
                    GL_LUMINANCE, 
                    GL_UNSIGNED_BYTE,
                    buf+width*height*5/4);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    //eglSwapBuffers(global_Context.eglDisplay, global_Context.eglSurface);

}

void RenderNV12(GLuint width, GLuint height, unsigned char * buf){
     //Y
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[0]);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,
                    width, height,
                    GL_LUMINANCE,
                    GL_UNSIGNED_BYTE,
                    buf);

    //UV
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, global_Context.mTextures[1]);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,
                    width/2, height / 2,
                    GL_LUMINANCE_ALPHA,
                    GL_UNSIGNED_BYTE,
                    buf+width*height);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
void createprograme(){
    LoadProgramYUV(global_Context.type);
    RenderYUVConfig(global_Context.mW,global_Context.mH,global_Context.type);
}
void drawFrame(unsigned char* buf, int size) {
    if(global_Context.type == TYPE_YUV420SP_NV12){
         ALOGD("[%s%d] render NV12 %dx%d",__FUNCTION__,__LINE__,global_Context.mW,global_Context.mH);
         RenderNV12(global_Context.mW,global_Context.mH,buf);
    }else{
         ALOGD("[%s%d]render YUV420P %dx%d",__FUNCTION__,__LINE__,global_Context.mW,global_Context.mH);
         RenderYUV420P(global_Context.mW,global_Context.mH,buf);
    }
}

void changeLayout(int width, int height){
    glViewport(0,0,width,height);
}
//canok 20210528
//Header files:
logs.h
#ifndef __MY_LOGS_HEADER__
#define __MY_LOGS_HEADER__
#ifdef __cplusplus
extern "C" {
#endif
#include <android/log.h>
// Log macros mirroring the Java-side levels: LOGI, LOGD, LOGW, LOGE, LOGF, i.e. Log.i, Log.d in Java
#define LOG_TAG    "hpc -- JNILOG" // custom log tag
//#undef LOG // drop the default LOG
#define ALOGI(...)  __android_log_print(ANDROID_LOG_INFO,LOG_TAG, __VA_ARGS__)
#define ALOGD(...)  __android_log_print(ANDROID_LOG_DEBUG,LOG_TAG, __VA_ARGS__)
#define ALOGW(...)  __android_log_print(ANDROID_LOG_WARN,LOG_TAG, __VA_ARGS__)
#define ALOGE(...)  __android_log_print(ANDROID_LOG_ERROR,LOG_TAG, __VA_ARGS__)
#define ALOGF(...)  __android_log_print(ANDROID_LOG_FATAL,LOG_TAG, __VA_ARGS__)

#ifdef __cplusplus
}
#endif

#endif


Openglfuncs.h
#ifndef __OPENGLFUNCS_HH
#define __OPENGLFUNCS_HH
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define TYPE_YUV420P 19
#define TYPE_YUV420SP_NV12 21
void mygl_start(ANativeWindow *pWnid,char*file_in, int type,int w, int h,int fps);
void mygl_stop();

//called by other modules to feed in render data
void mygl_render(unsigned char *buf,int size);
#endif


GlPragrame.h
#ifndef __GLPRAGRAME2__H__
#define __GLPRAGRAME2__H__
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>

#include <pthread.h>
#include <stdio.h>
struct S_CONTEXT{
    GLint glProgram;
    GLuint mTextures[3];
    int mW;
    int mH;
    int type;
    int mfps;
    ANativeWindow *nwin ;
    unsigned char* buf;
    int buflen;
    pthread_mutex_t bufMutex;
    pthread_mutex_t mCondMutex;
    pthread_cond_t mCond;
    pthread_t mThread;
    char *file_in;
} ;
extern struct S_CONTEXT global_Context;
void createprograme();
void drawFrame(unsigned char* buf, int size) ;
#endif

And the Java code:
 

package com.example.opengl_native;

import android.util.Log;
import android.view.Surface;
public class NativeOpengl {
    private static final String TAG = "NativeOpengl";
    private Surface mSurface;
    NativeOpengl(){

    }
    private native void native_start(Surface surface,String file_in,int type,int w, int h, int fps);
    private native void native_stop();

    static {
        System.loadLibrary("NativeOpengl");
    }

    public void play(Surface surface,String file_in,int type,int w, int h,int fps){
        String colortype="unsupport_type";
        if(type ==21){
            colortype = "NV12_YUV420SP";
        }else if(type==19){
            colortype = "YUV420P";
        }
        Log.d(TAG, "play: "+surface+"file_in:"+file_in+" type,"+colortype+",size:"+w+"x"+h+"@"+fps);
        mSurface = surface;
        native_start(surface,file_in,type,w,h,fps);
    }

}
package com.example.opengl_native;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.os.Build;
import android.os.Bundle;
import android.util.Log;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.LinearLayout;

public class MainActivity extends AppCompatActivity {
    private static final String TAG = "MainActivity";

    private Button mButtonPlay;
    private SurfaceView mView;
    private NativeOpengl mOpengl;
    private Surface mSurface;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        verifyStoragePermissions(this);

        mView = findViewById(R.id.surface);
        SurfaceHolder holder = mView.getHolder();
        holder.addCallback(new SurfaceHolder.Callback() {
            @Override
            public void surfaceDestroyed(SurfaceHolder holder) {
                // TODO Auto-generated method stub
                Log.i("SURFACE","destroyed");
            }

            @Override
            public void surfaceCreated(SurfaceHolder holder) {
                // TODO Auto-generated method stub
                Log.i("SURFACE","create");
                mSurface =  holder.getSurface();
            }

            @Override
            public void surfaceChanged(SurfaceHolder holder, int format, int width,
                                       int height) {
                // TODO Auto-generated method stub
            }
        });
        mButtonPlay = findViewById(R.id.button);
        mButtonPlay.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if(mSurface!=null && mOpengl==null) {
                    mOpengl = new NativeOpengl();
                    mOpengl.play(mSurface,"/storage/emulated/0/960X544_nv12.yuv",21,960,544,30);
                }
            }
        });
    }

    private static final int REQUEST_EXTERNAL_STORAGE = 1;
    private static String[] PERMISSIONS_STORAGE = {
            Manifest.permission.READ_EXTERNAL_STORAGE,
            Manifest.permission.WRITE_EXTERNAL_STORAGE};
    public static boolean verifyStoragePermissions(Activity activity) {

        if(Build.VERSION.SDK_INT < 23)
        {
            Log.d(TAG, "verifyStoragePermissions: <23");
            return true;
        }
        // Check if we have write permission
        int permission = ActivityCompat.checkSelfPermission(activity,
                Manifest.permission.WRITE_EXTERNAL_STORAGE);
        if (permission != PackageManager.PERMISSION_GRANTED) {
            // We don't have permission so prompt the user
            ActivityCompat.requestPermissions(activity,PERMISSIONS_STORAGE,
                    REQUEST_EXTERNAL_STORAGE);
            return false;
        }
        else
        {
            return true;
        }
    }
}

Android.mk
 

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := NativeOpengl
LOCAL_SRC_FILES :=  Openglfuncs.cpp \
					GlPragrame.cpp \
					jni_NativeOpengl.cpp

LOCAL_LDLIBS += -lGLESv2 -lEGL \
				-llog -landroid
LOCAL_CFLAGS += -DGL_GLEXT_PROTOTYPES

include $(BUILD_SHARED_LIBRARY)
Layout: activity_main.xml (the layout XML was stripped when this post was exported to HTML).

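Since the original layout XML did not survive the export, here is a minimal reconstruction, assuming only the two views MainActivity looks up (R.id.surface and R.id.button); all attribute values are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <SurfaceView
        android:id="@+id/surface"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />

    <Button
        android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="play" />
</LinearLayout>
```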
