This article analyzes how hwui uses OpenGL ES on Android P. OpenGL ES is also used on the SurfaceFlinger side, but that is not covered here.
Note that on 9.0 the debugging walked through below only works after switching HWUI to the OpenGL renderer first:
adb root;adb remount
adb shell setprop debug.hwui.renderer opengl
adb shell stop;adb shell start
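If needed, you can confirm the switch took effect with adb shell getprop debug.hwui.renderer, which should now print opengl.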
Each onXxxOp handler first constructs a GlopBuilder, immediately calls build() on it, and then calls renderGlop() to do the actual rendering; renderGlop is covered at the end of this article.
(Figure: GlopBuilder build flow)
For example:
frameworks/base/libs/hwui/BakedOpDispatcher.cpp
static void renderVertexBuffer(BakedOpRenderer& renderer, const BakedOpState& state,
                               const VertexBuffer& vertexBuffer, float translateX, float translateY,
                               const SkPaint& paint, int vertexBufferRenderFlags) {
    ...
    Glop glop;
    GlopBuilder(renderer.renderState(), renderer.caches(), &glop)
            .setRoundRectClipState(state.roundRectClipState)
            .setMeshVertexBuffer(vertexBuffer)
            .setFillPaint(paint, state.alpha, shadowInterp)
            .setTransform(state.computedState.transform, transformFlags)
            .setModelViewOffsetRect(translateX, translateY, vertexBuffer.getBounds())
            .build();
    renderer.renderGlop(state, glop);
}
I. The build process
1. The GlopBuilder build process
When GlopBuilder::build() runs, it fills in the ProgramDescription held in mDescription, passes it to ProgramCache::get(), and stores the resulting Program in mOutGlop->fill.program:
frameworks/base/libs/hwui/GlopBuilder.cpp
Caches& mCaches;
void GlopBuilder::build() {
    REQUIRE_STAGES(kAllStages);
    if (mOutGlop->mesh.vertices.attribFlags & VertexAttribFlags::TextureCoord) {
        Texture* texture = mOutGlop->fill.texture.texture;
        if (texture->target() == GL_TEXTURE_2D) {
            mDescription.hasTexture = true;
        } else {
            mDescription.hasExternalTexture = true;
        }
        mDescription.hasLinearTexture = texture->isLinear();
        mDescription.hasColorSpaceConversion = texture->hasColorSpaceConversion();
        mDescription.transferFunction = texture->getTransferFunctionType();
        mDescription.hasTranslucentConversion = texture->blend;
    }

    mDescription.hasColors = mOutGlop->mesh.vertices.attribFlags & VertexAttribFlags::Color;
    mDescription.hasVertexAlpha = mOutGlop->mesh.vertices.attribFlags & VertexAttribFlags::Alpha;

    // Enable debug highlight when what we're about to draw is tested against
    // the stencil buffer and if stencil highlight debugging is on
    mDescription.hasDebugHighlight =
            !Properties::debugOverdraw &&
            Properties::debugStencilClip == StencilClipDebug::ShowHighlight &&
            mRenderState.stencil().isTestEnabled();

    // serialize shader info into ShaderData
    GLuint textureUnit = mOutGlop->fill.texture.texture ? 1 : 0;
    if (CC_LIKELY(!mShader)) {
        mOutGlop->fill.skiaShaderData.skiaShaderType = kNone_SkiaShaderType;
    } else {
        Matrix4 shaderMatrix;
        if (mOutGlop->transform.transformFlags & TransformFlags::MeshIgnoresCanvasTransform) {
            // canvas level transform was built into the modelView and geometry,
            // so the shader matrix must reverse this
            shaderMatrix.loadInverse(mOutGlop->transform.canvas);
            shaderMatrix.multiply(mOutGlop->transform.modelView);
        } else {
            shaderMatrix = mOutGlop->transform.modelView;
        }
        SkiaShader::store(mCaches, *mShader, shaderMatrix, &textureUnit, &mDescription,
                          &(mOutGlop->fill.skiaShaderData));
    }

    // duplicates ProgramCache's definition of color uniform presence
    const bool singleColor = !mDescription.hasTexture && !mDescription.hasExternalTexture &&
                             !mDescription.hasGradient && !mDescription.hasBitmap;
    mOutGlop->fill.colorEnabled = mDescription.modulate || singleColor;

    verify(mDescription, *mOutGlop);

    // Final step: populate program and map bounds into render target space
    mOutGlop->fill.program = mCaches.programCache.get(mDescription);
}
Note that Caches is a singleton: no matter how many drawOps or GlopBuilders there are, they all share the same Caches instance, and therefore the same ProgramCache. See the code:
frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
bool OpenGLPipeline::draw(...) {
    ...
    auto& caches = Caches::getInstance();
    ...
    BakedOpRenderer renderer(caches, mRenderThread.renderState(), opaque, wideColorGamut,
                             lightInfo);
    ...
}
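The accessor behaves like a typical process-wide singleton. Here is a minimal sketch of the pattern only; the real Caches owns the ProgramCache, texture state and much more, and the class name below is purely illustrative:

// Minimal, illustrative sketch of the singleton pattern implied by Caches::getInstance().
// Not the real Caches class -- it only shows the "one shared instance" behavior.
class CachesSketch {
public:
    static CachesSketch& getInstance() {
        static CachesSketch sInstance;  // constructed once, shared by every drawOp/GlopBuilder
        return sInstance;
    }
    // the real Caches keeps programCache, textureState, ... here
private:
    CachesSketch() = default;
};

So every GlopBuilder::build() call ends up in the same ProgramCache, which is what makes the program caching in the next section effective.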
2. Generating the Program
ProgramCache::get() first derives a key from the ProgramDescription (the key is essentially a bitwise OR of the description's attribute bits; see the sketch after the code below).
It then looks the key up in mCache: on a miss (iter == mCache.end()) it calls generateProgram() to build a new Program (how that works is analyzed in the next section); on a hit the cached Program is reused and nothing is regenerated.
Finally the Program is passed back up; by then it already embodies the generated GLSL (compiled and linked, as section 4 shows).
frameworks/base/libs/hwui/ProgramCache.cpp
std::map<programid, std::unique_ptr<Program>> mCache;

Program* ProgramCache::get(const ProgramDescription& description) {
    programid key = description.key();
    if (key == (PROGRAM_KEY_TEXTURE | PROGRAM_KEY_A8_TEXTURE)) {
        // program for A8, unmodulated, texture w/o shader (black text/path textures) is equivalent
        // to standard texture program (bitmaps, patches). Consider them equivalent.
        key = PROGRAM_KEY_TEXTURE;
    }
    // Look up the program matching this ProgramDescription key in mCache
    auto iter = mCache.find(key);
    Program* program = nullptr;
    if (iter == mCache.end()) {
        description.log("Could not find program");
        program = generateProgram(description, key);
        mCache[key] = std::unique_ptr<Program>(program);
    } else {
        program = iter->second.get();
    }
    return program;
}
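To make the "key is the OR of the description's attributes" point concrete, here is a hedged sketch of how such a key can be packed. PROGRAM_KEY_TEXTURE and PROGRAM_KEY_A8_TEXTURE are the constants already used in get() above; the helper itself and the field names are illustrative, the real packing lives in ProgramDescription::key() and covers many more fields:

// Illustrative sketch only, not the real ProgramDescription::key().
programid makeKeySketch(const ProgramDescription& d) {
    programid key = 0;
    if (d.hasTexture) key |= PROGRAM_KEY_TEXTURE;
    if (d.hasAlpha8Texture) key |= PROGRAM_KEY_A8_TEXTURE;  // assumed field name
    // ... every other boolean/enum of the description contributes its own bits,
    // so two draws that need the same GLSL map to the same key and share a Program.
    return key;
}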
3. Generating the shader GLSL
This step is optional in the sense that it only runs on a cache miss: if nothing changed, there is no need to regenerate the GLSL, recompile the shaders, or link a new program.
generateProgram() produces the vertex-shader and fragment-shader GLSL and hands both to the Program constructor:
frameworks/base/libs/hwui/ProgramCache.cpp
Program* ProgramCache::generateProgram(const ProgramDescription& description, programid key) {
    String8 vertexShader = generateVertexShader(description);
    String8 fragmentShader = generateFragmentShader(description);
    return new Program(description, vertexShader.string(), fragmentShader.string());
}
Now look at how the fragment-shader GLSL is put together:
generateFragmentShader() picks the matching pieces from the string constants defined in ProgramCache.cpp, appends them into the final fragment shader source, and returns the result to the caller.
Part of the code is shown below:
frameworks/base/libs/hwui/ProgramCache.cpp
// the "projection" and "transform" uniforms declared here are looked up later via addUniform()
const char* gVS_Header_Uniforms =
        "uniform mat4 projection;\n"
        "uniform mat4 transform;\n";

String8 ProgramCache::generateFragmentShader(const ProgramDescription& description) {
    String8 shader(gFS_Header_Start);
    ...
    shader.append(gFS_Header);
    // Varyings
    if (description.hasTexture || description.hasExternalTexture) {
        shader.append(gVS_Header_Varyings_HasTexture);
    }
    ...
    // Uniforms
    ...
    if (description.hasTexture || description.useShadowAlphaInterp) {
        shader.append(gFS_Uniforms_TextureSampler);
        ...
    // Generate required functions
    if (description.hasGradient && description.hasBitmap) {
        generateBlend(shader, "blendShaders", description.shadersMode);
    }
    ...
    // Begin the shader
    shader.append(gFS_Main);
    ...
    bool applyModulate = false;
    // Case when we have two shaders set
    if (description.hasGradient && description.hasBitmap) {
        if (description.isBitmapFirst) {
            shader.append(gFS_Main_BlendShadersBG);
            ...
    // Apply the color op if needed
    shader.append(gFS_Main_ApplyColorOp[static_cast<int>(description.colorOp)]);
    ...
    // Output the fragment
    if (!blendFramebuffer) {
        shader.append(gFS_Main_FragColor);
        ...
    // End the shader
    shader.append(gFS_Footer);
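For intuition, the assembled result for the simplest textured, unmodulated case looks roughly like the string below. This is illustrative only: it is pieced together from the gFS_* / gVS_* fragments named above, and the exact text emitted by ProgramCache depends on the description (names such as baseSampler and outTexCoords follow hwui's conventions, but treat the listing as an approximation):

// Illustrative only -- roughly what generateFragmentShader() assembles for a
// plain textured draw; not copied verbatim from ProgramCache.cpp.
const char* kIllustrativeFragmentShader =
        "precision mediump float;\n"                          // gFS_Header
        "varying vec2 outTexCoords;\n"                        // gVS_Header_Varyings_HasTexture
        "uniform sampler2D baseSampler;\n"                    // gFS_Uniforms_TextureSampler
        "void main(void) {\n"                                 // gFS_Main
        "    vec4 fragColor;\n"
        "    fragColor = texture2D(baseSampler, outTexCoords);\n"
        "    gl_FragColor = fragColor;\n"                     // gFS_Main_FragColor
        "}\n";                                                // gFS_Footer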
4. Compiling the shaders and linking the program
Now that the GLSL source exists, the shaders need to be compiled and linked into the program:
frameworks/base/libs/hwui/Program.cpp
Program::Program(const ProgramDescription& description, const char* vertex, const char* fragment) {
    mInitialized = false;
    mHasColorUniform = false;
    mHasSampler = false;
    mUse = false;

    // No need to cache compiled shaders, rely instead on Android's
    // persistent shaders cache
    // 1. Create and compile the two shaders
    mVertexShader = buildShader(vertex, GL_VERTEX_SHADER);
    if (mVertexShader) {
        mFragmentShader = buildShader(fragment, GL_FRAGMENT_SHADER);
        if (mFragmentShader) {
            // 2. Create the program object
            mProgramId = glCreateProgram();
            // 3. Attach both shaders to the program
            glAttachShader(mProgramId, mVertexShader);
            glAttachShader(mProgramId, mFragmentShader);
            // 4. Bind the position and texture-coordinate attributes
            bindAttrib("position", kBindingPosition);
            if (description.hasTexture || description.hasExternalTexture) {
                texCoords = bindAttrib("texCoords", kBindingTexCoords);
            } else {
                texCoords = -1;
            }

            ATRACE_BEGIN("linkProgram");
            // 5. Link the program
            glLinkProgram(mProgramId);
            ATRACE_END();

            // Check the link status, analogous to the glGetShaderiv check in buildShader()
            GLint status;
            glGetProgramiv(mProgramId, GL_LINK_STATUS, &status);
            if (status != GL_TRUE) {
                GLint infoLen = 0;
                glGetProgramiv(mProgramId, GL_INFO_LOG_LENGTH, &infoLen);
                if (infoLen > 1) {
                    GLchar log[infoLen];
                    glGetProgramInfoLog(mProgramId, infoLen, nullptr, &log[0]);
                    ALOGE("%s", log);
                }
                LOG_ALWAYS_FATAL("Error while linking shaders");
            } else {
                mInitialized = true;
            }
        } else {
            glDeleteShader(mVertexShader);
        }
    }

    // At this point the program has been fully initialized
    if (mInitialized) {
        // Look up the "transform" and "projection" uniforms declared in the GLSL
        // (see gVS_Header_Uniforms / generateVertexShader in the previous section)
        transform = addUniform("transform");
        projection = addUniform("projection");
    }
}
// Compile a single shader
GLuint Program::buildShader(const char* source, GLenum type) {
    ATRACE_NAME("Build GL Shader");

    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);

    // Check the compile status, analogous to the glGetProgramiv check above
    GLint status;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE) {
        ALOGE("Error while compiling this shader:\n===\n%s\n===", source);
        // Some drivers return wrong values for GL_INFO_LOG_LENGTH
        // use a fixed size instead
        GLchar log[512];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, &log[0]);
        LOG_ALWAYS_FATAL("Shader info log: %s", log);
        return 0;
    }
    return shader;
}

// Bind an attribute location so it can be queried later
KeyedVector<const char*, int> mAttributes;
KeyedVector<const char*, int> mUniforms;
enum ShaderBindings { kBindingPosition, kBindingTexCoords };

int Program::bindAttrib(const char* name, ShaderBindings bindingSlot) {
    glBindAttribLocation(mProgramId, bindingSlot, name);
    mAttributes.add(name, bindingSlot);
    return bindingSlot;
}

// Query the location of a uniform declared in the GLSL
int Program::addUniform(const char* name) {
    int slot = glGetUniformLocation(mProgramId, name);
    mUniforms.add(name, slot);
    return slot;
}

// Destruction: detach and delete the shaders and the program
Program::~Program() {
    if (mInitialized) {
        // This would ideally happen after linking the program
        // but Tegra drivers, especially when perfhud is enabled,
        // sometimes crash if we do so
        glDetachShader(mProgramId, mVertexShader);
        glDetachShader(mProgramId, mFragmentShader);
        glDeleteShader(mVertexShader);
        glDeleteShader(mFragmentShader);
        glDeleteProgram(mProgramId);
    }
}
II. The renderGlop process
This is the final draw stage of hwui. It boils down to the following GL work:
1. setProgram -> glUseProgram.
2. Set the uniforms. A uniform can be reached in two ways: 1) fill.program->setXxx(); 2) mCaches->program().getUniform(). Once the location is obtained, the value is uploaded with glUniformXxx (a hedged sketch of this step follows the list).
3. glBindBuffer binds the vertex and index buffers, and the vertex layout is set with glVertexAttribPointer.
4. Texture handling: glActiveTexture, glBindTexture, glTexParameteri.
5. glEnableVertexAttribArray and glVertexAttribPointer for the remaining vertex attributes.
6. Blend handling: glEnable(GL_BLEND), glBlendFunc.
7. The actual draw: glDrawElements / glDrawArrays.
8. glDisableVertexAttribArray to tear the mesh state down.
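Step 2 deserves a slightly closer look. Once addUniform() has cached the uniform locations (the transform/projection lookups at the end of the Program constructor), uploading a matrix is just a glUniformMatrix4fv call on that location. Here is a hedged sketch of roughly what fill.program->set() boils down to; it is not the literal Program::set(), which also applies the OffsetByFudgeFactor adjustment visible in the call below and may skip redundant uploads:

// Hedged sketch of how the cached uniform locations are used; illustrative only.
void programSetSketch(GLint projectionLoc, GLint transformLoc,
                      const Matrix4& projection, const Matrix4& modelView,
                      const Matrix4& meshTransform) {
    glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, &projection.data[0]);
    Matrix4 t(meshTransform);
    t.multiply(modelView);  // combine mesh and model-view transforms
    glUniformMatrix4fv(transformLoc, 1, GL_FALSE, &t.data[0]);
}

With that in mind, here is RenderState::render() in full: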
frameworks/base/libs/hwui/renderstate/RenderState.cpp
void RenderState::render(const Glop& glop, const Matrix4& orthoMatrix,
                         bool overrideDisableBlending) {
    // 1. Pull the mesh and fill data out of the glop
    const Glop::Mesh& mesh = glop.mesh;
    const Glop::Mesh::Vertices& vertices = mesh.vertices;
    const Glop::Mesh::Indices& indices = mesh.indices;
    const Glop::Fill& fill = glop.fill;

    // Check whether the GL pipeline has reported any error so far
    GL_CHECKPOINT(MODERATE);

    // ---------------------------------------------
    // ---------- Program + uniform setup ----------
    // ---------------------------------------------
    // 2. setProgram ends up calling glUseProgram
    mCaches->setProgram(fill.program);

    // 3. Set the uniforms
    if (fill.colorEnabled) {
        fill.program->setColor(fill.color);
    }

    fill.program->set(orthoMatrix, glop.transform.modelView, glop.transform.meshTransform(),
                      glop.transform.transformFlags & TransformFlags::OffsetByFudgeFactor);

    // Color filter uniforms
    if (fill.filterMode == ProgramDescription::ColorFilterMode::Blend) {
        const FloatColor& color = fill.filter.color;
        glUniform4f(mCaches->program().getUniform("colorBlend"), color.r, color.g, color.b,
                    color.a);
    } else if (fill.filterMode == ProgramDescription::ColorFilterMode::Matrix) {
        glUniformMatrix4fv(mCaches->program().getUniform("colorMatrix"), 1, GL_FALSE,
                           fill.filter.matrix.matrix);
        glUniform4fv(mCaches->program().getUniform("colorMatrixVector"), 1,
                     fill.filter.matrix.vector);
    }

    // Round rect clipping uniforms
    if (glop.roundRectClipState) {
        // TODO: avoid query, and cache values (or RRCS ptr) in program
        const RoundRectClipState* state = glop.roundRectClipState;
        const Rect& innerRect = state->innerRect;

        // add half pixel to round out integer rect space to cover pixel centers
        float roundedOutRadius = state->radius + 0.5f;

        // Divide by the radius to simplify the calculations in the fragment shader
        // roundRectPos is also passed from vertex shader relative to top/left & radius
        glUniform4f(fill.program->getUniform("roundRectInnerRectLTWH"),
                    innerRect.left / roundedOutRadius, innerRect.top / roundedOutRadius,
                    (innerRect.right - innerRect.left) / roundedOutRadius,
                    (innerRect.bottom - innerRect.top) / roundedOutRadius);

        glUniformMatrix4fv(fill.program->getUniform("roundRectInvTransform"), 1, GL_FALSE,
                           &state->matrix.data[0]);

        glUniform1f(fill.program->getUniform("roundRectRadius"), roundedOutRadius);
    }

    // Uniforms are all set; check for GL errors again
    GL_CHECKPOINT(MODERATE);

    // --------------------------------
    // ---------- Mesh setup ----------
    // --------------------------------
    // vertices
    // 4. glBindBuffer for the vertex buffer
    meshState().bindMeshBuffer(vertices.bufferObject);
    // 5. glVertexAttribPointer for the position attribute
    meshState().bindPositionVertexPointer(vertices.position, vertices.stride);

    // indices
    // 6. glBindBuffer for the index buffer
    meshState().bindIndicesBuffer(indices.bufferObject);

    // texture
    if (fill.texture.texture != nullptr) {
        const Glop::Fill::TextureData& texture = fill.texture;
        // texture always takes slot 0, shader samplers increment from there
        // 7. glActiveTexture
        mCaches->textureState().activateTexture(0);
        // 8. glBindTexture
        mCaches->textureState().bindTexture(texture.texture->target(), texture.texture->id());
        // 9. glTexParameteri for wrap/filter modes
        if (texture.clamp != GL_INVALID_ENUM) {
            texture.texture->setWrap(texture.clamp, false, false);
        }
        if (texture.filter != GL_INVALID_ENUM) {
            texture.texture->setFilter(texture.filter, false, false);
        }

        if (texture.textureTransform) {
            glUniformMatrix4fv(fill.program->getUniform("mainTextureTransform"), 1, GL_FALSE,
                               &texture.textureTransform->data[0]);
        }
    }

    // vertex attributes (tex coord, color, alpha)
    if (vertices.attribFlags & VertexAttribFlags::TextureCoord) {
        // 10. glEnableVertexAttribArray
        meshState().enableTexCoordsVertexArray();
        // 11. glVertexAttribPointer for the texture coordinates
        meshState().bindTexCoordsVertexPointer(vertices.texCoord, vertices.stride);
    } else {
        meshState().disableTexCoordsVertexArray();
    }

    // 12. Enable the per-vertex color attribute if present
    int colorLocation = -1;
    if (vertices.attribFlags & VertexAttribFlags::Color) {
        colorLocation = fill.program->getAttrib("colors");
        glEnableVertexAttribArray(colorLocation);
        glVertexAttribPointer(colorLocation, 4, GL_FLOAT, GL_FALSE, vertices.stride,
                              vertices.color);
    }

    // 13. Enable the per-vertex alpha attribute if present
    int alphaLocation = -1;
    if (vertices.attribFlags & VertexAttribFlags::Alpha) {
        // NOTE: alpha vertex position is computed assuming no VBO
        const void* alphaCoords = ((const GLbyte*)vertices.position) + kVertexAlphaOffset;
        alphaLocation = fill.program->getAttrib("vtxAlpha");
        glEnableVertexAttribArray(alphaLocation);
        glVertexAttribPointer(alphaLocation, 1, GL_FLOAT, GL_FALSE, vertices.stride, alphaCoords);
    }

    // Shader uniforms
    SkiaShader::apply(*mCaches, fill.skiaShaderData, mViewportWidth, mViewportHeight);

    GL_CHECKPOINT(MODERATE);

    Texture* texture = (fill.skiaShaderData.skiaShaderType & kBitmap_SkiaShaderType)
                               ? fill.skiaShaderData.bitmapData.bitmapTexture
                               : nullptr;
    const AutoTexture autoCleanup(texture);

    // If we have a shader and a base texture, the base texture is assumed to be an alpha mask
    // which means the color space conversion applies to the shader's bitmap
    Texture* colorSpaceTexture = texture != nullptr ? texture : fill.texture.texture;
    if (colorSpaceTexture != nullptr) {
        if (colorSpaceTexture->hasColorSpaceConversion()) {
            const ColorSpaceConnector* connector = colorSpaceTexture->getColorSpaceConnector();
            glUniformMatrix3fv(fill.program->getUniform("colorSpaceMatrix"), 1, GL_FALSE,
                               connector->getTransform().asArray());
        }

        TransferFunctionType transferFunction = colorSpaceTexture->getTransferFunctionType();
        if (transferFunction != TransferFunctionType::None) {
            const ColorSpaceConnector* connector = colorSpaceTexture->getColorSpaceConnector();
            const ColorSpace& source = connector->getSource();

            switch (transferFunction) {
                case TransferFunctionType::None:
                    break;
                case TransferFunctionType::Full:
                    glUniform1fv(fill.program->getUniform("transferFunction"), 7,
                                 reinterpret_cast<const float*>(&source.getTransferParameters().g));
                    break;
                case TransferFunctionType::Limited:
                    glUniform1fv(fill.program->getUniform("transferFunction"), 5,
                                 reinterpret_cast<const float*>(&source.getTransferParameters().g));
                    break;
                case TransferFunctionType::Gamma:
                    glUniform1f(fill.program->getUniform("transferFunctionGamma"),
                                source.getTransferParameters().g);
                    break;
            }
        }
    }

    // ------------------------------------
    // ---------- GL state setup ----------
    // ------------------------------------
    // 14. Blend setup: glEnable(GL_BLEND) / glBlendFunc
    if (CC_UNLIKELY(overrideDisableBlending)) {
        blend().setFactors(GL_ZERO, GL_ZERO);
    } else {
        blend().setFactors(glop.blend.src, glop.blend.dst);
    }

    GL_CHECKPOINT(MODERATE);

    // ------------------------------------
    // ---------- Actual drawing ----------
    // ------------------------------------
    // 15. The glDrawElements / glDrawArrays calls
    if (indices.bufferObject == meshState().getQuadListIBO()) {
        // Since the indexed quad list is of limited length, we loop over
        // the glDrawXXX method while updating the vertex pointer
        GLsizei elementsCount = mesh.elementCount;
        const GLbyte* vertexData = static_cast<const GLbyte*>(vertices.position);
        while (elementsCount > 0) {
            GLsizei drawCount = std::min(elementsCount, (GLsizei)kMaxNumberOfQuads * 6);
            GLsizei vertexCount = (drawCount / 6) * 4;
            meshState().bindPositionVertexPointer(vertexData, vertices.stride);
            if (vertices.attribFlags & VertexAttribFlags::TextureCoord) {
                meshState().bindTexCoordsVertexPointer(vertexData + kMeshTextureOffset,
                                                       vertices.stride);
            }

            if (mCaches->extensions().getMajorGlVersion() >= 3) {
                glDrawRangeElements(mesh.primitiveMode, 0, vertexCount - 1, drawCount,
                                    GL_UNSIGNED_SHORT, nullptr);
            } else {
                glDrawElements(mesh.primitiveMode, drawCount, GL_UNSIGNED_SHORT, nullptr);
            }
            elementsCount -= drawCount;
            vertexData += vertexCount * vertices.stride;
        }
    } else if (indices.bufferObject || indices.indices) {
        if (mCaches->extensions().getMajorGlVersion() >= 3) {
            // use glDrawRangeElements to reduce CPU overhead (otherwise the driver has to determine
            // the min/max index values)
            glDrawRangeElements(mesh.primitiveMode, 0, mesh.vertexCount - 1, mesh.elementCount,
                                GL_UNSIGNED_SHORT, indices.indices);
        } else {
            glDrawElements(mesh.primitiveMode, mesh.elementCount, GL_UNSIGNED_SHORT,
                           indices.indices);
        }
    } else {
        glDrawArrays(mesh.primitiveMode, 0, mesh.elementCount);
    }

    GL_CHECKPOINT(MODERATE);

    // -----------------------------------
    // ---------- Mesh teardown ----------
    // -----------------------------------
    // 16. glDisableVertexAttribArray
    if (vertices.attribFlags & VertexAttribFlags::Alpha) {
        glDisableVertexAttribArray(alphaLocation);
    }
    if (vertices.attribFlags & VertexAttribFlags::Color) {
        glDisableVertexAttribArray(colorLocation);
    }

    GL_CHECKPOINT(MODERATE);
}
At this point the draw has finished. Stepping back, the whole draw call chain looks like this:
OpenGLPipeline::draw
-> FrameBuilder::replayBakedOps
-> LayerBuilder::replayBakedOpsImpl
-> BakedOpDispatcher::onMergedXxxOp / onXxxOp
-> (GlopBuilder build completes) -> BakedOpRenderer::renderGlop
-> mGlopReceiver
-> DefaultGlopReceiver
-> BakedOpRenderer::renderGlopImpl
-> RenderState::render
Now look at how replayBakedOpsImpl executes:
unmergedReceivers and mergedReceivers are arrays of function pointers; invoking them actually calls the BakedOpDispatcher onMergedXxxOp / onXxxOp functions.
It is inside those functions that the GlopBuilder is constructed and the Glop is built for the final render (see the renderVertexBuffer example at the beginning).
frameworks/base/libs/hwui/LayerBuilder.cpp
void LayerBuilder::replayBakedOpsImpl(void* arg, BakedOpReceiver* unmergedReceivers,
                                      MergedOpReceiver* mergedReceivers) const {
    if (renderNode) {
        ATRACE_FORMAT_BEGIN("Issue HW Layer DisplayList %s %ux%u", renderNode->getName(), width,
                            height);
    } else {
        ATRACE_BEGIN("flush drawing commands");
    }

    for (const BatchBase* batch : mBatches) {
        size_t size = batch->getOps().size();
        if (size > 1 && batch->isMerging()) {
            int opId = batch->getOps()[0]->op->opId;
            const MergingOpBatch* mergingBatch = static_cast<const MergingOpBatch*>(batch);
            MergedBakedOpList data = {batch->getOps().data(), size,
                                      mergingBatch->getClipSideFlags(),
                                      mergingBatch->getClipRect()};
            mergedReceivers[opId](arg, data);
        } else {
            for (const BakedOpState* op : batch->getOps()) {
                unmergedReceivers[op->op->opId](arg, *op);
            }
        }
    }
    ATRACE_END();
}
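For reference, unmergedReceivers/mergedReceivers boil down to tables of function pointers indexed by opId, so that unmergedReceivers[op->op->opId](arg, *op) lands in the matching BakedOpDispatcher entry point. The real tables are generated by macros in FrameBuilder::replayBakedOps with one entry per RecordedOpId; the single entry below is only an illustrative sketch of the idea:

// Simplified, illustrative sketch -- not the actual macro-generated tables.
static BakedOpReceiver sUnmergedReceiversSketch[] = {
        // one entry per RecordedOpId; e.g. a RectOp slot would forward like this:
        [](void* renderer, const BakedOpState& state) {
            BakedOpDispatcher::onRectOp(*static_cast<BakedOpRenderer*>(renderer),
                                        static_cast<const RectOp&>(*state.op), state);
        },
        // ...
};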
During animations, buildLayer is triggered first, then OpenGLPipeline::renderLayers runs; that path also goes through FrameBuilder::replayBakedOps and ends in the same RenderState::render shown above.