In video editing, a common way to make the main content stand out is to fill the background with a blurred and darkened copy of the frame. How do we implement this effect with OpenGL? It breaks down into the following steps.
1. Based on the video's display aspect ratio (1:1, 4:3, 16:9, 3:4, 9:16, etc.), compute the background display size and the image display size.
For example, when the display area is 1:1 and the image is 9:16, scale the image to fit inside the background and center it.
The code is as follows:
public void adjustImageScaling(int width, int height) {
    float outputWidth = width;
    float outputHeight = height;
    if (mBitmap != null) {
        int frameWidth = mBitmap.getWidth();
        int frameHeight = mBitmap.getHeight();
        int rotation = (matrixAngle + bitmapAngle) % 360;
        // Swap width and height when the frame is rotated by 90 or 270 degrees
        if (rotation == 90 || rotation == 270) {
            frameWidth = mBitmap.getHeight();
            frameHeight = mBitmap.getWidth();
        }
        float ratio1 = outputWidth / frameWidth;
        float ratio2 = outputHeight / frameHeight;
        // Use the smaller ratio so the whole image fits inside the display area
        float ratioMax = Math.min(ratio1, ratio2);
        // Size of the image once it is scaled and centered
        int imageWidthNew = Math.round(frameWidth * ratioMax);
        int imageHeightNew = Math.round(frameHeight * ratioMax);
        Log.i(TAG, "outputWidth:" + outputWidth + ",outputHeight:" + outputHeight
                + ",imageWidthNew:" + imageWidthNew + ",imageHeightNew:" + imageHeightNew);
        // How much the full-area quad has to be shrunk on each axis
        float ratioWidth = outputWidth / imageWidthNew;
        float ratioHeight = outputHeight / imageHeightNew;
        Log.i(TAG, "isDoRotate:" + isDoRotate + ",ratioWidth:" + ratioWidth + ",ratioHeight:" + ratioHeight);
        // Scale the base vertices back down according to the stretch ratios
        cube = new float[]{
                pos[0] / ratioWidth, pos[1] / ratioHeight,
                pos[2] / ratioWidth, pos[3] / ratioHeight,
                pos[4] / ratioWidth, pos[5] / ratioHeight,
                pos[6] / ratioWidth, pos[7] / ratioHeight,
        };
        mVerBuffer.clear();
        mVerBuffer.put(cube).position(0);
    }
}
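For context, pos above is the filter's base vertex array. Assuming it is the usual full-screen quad in normalized device coordinates (the field itself is not shown, so this is my assumption), the setup around adjustImageScaling() might look like the sketch below; only pos and mVerBuffer are taken from the snippet above, the rest is illustrative.

// Needs java.nio.ByteBuffer, java.nio.ByteOrder, java.nio.FloatBuffer.
// Assumed full-screen quad in normalized device coordinates (triangle-strip order);
// pos and mVerBuffer are the fields used by adjustImageScaling() above.
private final float[] pos = {
        -1.0f, -1.0f,   // bottom-left
         1.0f, -1.0f,   // bottom-right
        -1.0f,  1.0f,   // top-left
         1.0f,  1.0f,   // top-right
};
private final FloatBuffer mVerBuffer = ByteBuffer
        .allocateDirect(pos.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();

// Called with the display-area size, e.g. 1080x1080 for a 1:1 layout,
// so a 9:16 bitmap ends up scaled down and centered inside it.
public void onSizeChanged(int width, int height) {
    adjustImageScaling(width, height);
}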
2. Blur and darken the image / video frame
The blur is a Gaussian blur, but the per-pixel traversal runs in Java on the CPU, and only the final rendering is done on the GPU; in my case this turned out a bit faster than doing the traversal directly in the OpenGL shader. The background quad's vertices fill the whole display area. The blur implementation is borrowed from:
https://github.com/Martin20150405/In77Camera/blob/e0b2ab455985db43e850244d5c789ce4ebcb49c3/In77Camera/omoshiroilib/src/main/java/com/martin/ads/omoshiroilib/filter/imgproc/CustomizedGaussianBlurFilter.java
I only added a darkening step on top of it; the relevant fragment-shader excerpt is below, followed by a rough sketch of the CPU-side idea.
vec3 hsv2rgb(vec3 c) {
    const vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}
vec3 rgb2hsv(vec3 c) {
    const vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + 0.001)), d / (q.x + 0.001), q.x);
}
// In main(), after the blurred color has been computed:
vec3 a = rgb2hsv(color);
vec3 m = a - vec3(0.0, 0.0, 0.04);   // lower the V (brightness) channel slightly
gl_FragColor = vec4(hsv2rgb(m), orAlpha);
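The CPU-side blur itself lives in the linked filter class. Purely to illustrate the "loop over pixels in Java, then let the GPU render the result" idea, a minimal sketch could look like this; the class name CpuBlurSketch, the box-average kernel and the 1/4 downscale are my assumptions, not code from the linked project.

import android.graphics.Bitmap;

// Illustrative only: a naive neighbourhood-average blur (a rough stand-in for a
// Gaussian) that runs entirely in Java before the result is uploaded as a texture.
public final class CpuBlurSketch {

    public static Bitmap blur(Bitmap src, int radius) {
        // Blur a downscaled copy: the background is blurred anyway,
        // and fewer pixels keep the CPU pass cheap.
        Bitmap small = Bitmap.createScaledBitmap(
                src, Math.max(1, src.getWidth() / 4), Math.max(1, src.getHeight() / 4), true);
        int w = small.getWidth();
        int h = small.getHeight();
        int[] in = new int[w * h];
        int[] out = new int[w * h];
        small.getPixels(in, 0, w, 0, 0, w, h);

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int r = 0, g = 0, b = 0, count = 0;
                // Average the square neighbourhood around (x, y), clamping at the edges.
                for (int dy = -radius; dy <= radius; dy++) {
                    for (int dx = -radius; dx <= radius; dx++) {
                        int xi = Math.min(w - 1, Math.max(0, x + dx));
                        int yi = Math.min(h - 1, Math.max(0, y + dy));
                        int c = in[yi * w + xi];
                        r += (c >> 16) & 0xFF;
                        g += (c >> 8) & 0xFF;
                        b += c & 0xFF;
                        count++;
                    }
                }
                out[y * w + x] = 0xFF000000
                        | ((r / count) << 16) | ((g / count) << 8) | (b / count);
            }
        }

        Bitmap result = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        result.setPixels(out, 0, w, 0, 0, w, h);
        return result;
    }
}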
3. Draw the original image / frame, using the vertices computed in step 1
The overall call sequence is as follows:
// Black background for the whole surface
GLES20.glViewport(0, 0, ConstantMediaSize.currentScreenWidth, ConstantMediaSize.currentScreenHeight);
GLES20.glClearColor(0, 0, 0, 1.0f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Background of the display area
GLES20.glViewport(0, 0, ConstantMediaSize.showViewWidth, ConstantMediaSize.showViewHeight);
if (currentMediaItem == null) {
    return;
}
int rotation = currentMediaItem.getRotation();
imageBackgroundFilter.setmBitmap(currentMediaItem.getFirstFrame(), rotation);
imageBackgroundFilter.initTexture();
imageBackgroundFilter.onSizeChanged(mWidth, mHeight);
imageBackgroundFilter.drawBackgrondPrepare();
EasyGlUtils.bindFrameTexture(fFrame[0], fTexture[0]);
imageBackgroundFilter.drawBackground();
EasyGlUtils.unBindFrameBuffer();
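After the blurred, darkened background has been rendered, the original frame still has to be drawn on top with the centered vertices from step 1. The exact filter classes depend on the project; a rough continuation in the same naming style might look like this, where imageShowFilter, showFilter and their methods are assumptions of mine rather than code from the snippet above.

// Continue rendering into the same offscreen framebuffer so the foreground
// lands on top of the blurred background (hypothetical filter/method names).
EasyGlUtils.bindFrameTexture(fFrame[0], fTexture[0]);
imageShowFilter.setmBitmap(currentMediaItem.getFirstFrame(), rotation);
imageShowFilter.initTexture();
// Reuses adjustImageScaling() from step 1 so the quad is scaled down and centered.
imageShowFilter.onSizeChanged(mWidth, mHeight);
imageShowFilter.draw();
EasyGlUtils.unBindFrameBuffer();

// Finally draw fTexture[0] to the screen (or hand it to the encoder).
showFilter.setTextureId(fTexture[0]);
showFilter.draw();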