1. How a sprite gets rendered
Cocos2d-x renders with OpenGL. Every sprite vertex position is a real coordinate in the OpenGL world coordinate system. The vertices lie on the XOY plane with z = 0. Each vertex goes through the model-view transform, then the projection transform; in clip space it is normalized by the perspective divide, and finally the viewport transform maps it to a point in the window.
void CCDirector::setProjection(ccDirectorProjection kProjection)
{
CCSize size = m_obWinSizeInPoints;
setViewport();
switch (kProjection)
{
case kCCDirectorProjection2D:
{
kmGLMatrixMode(KM_GL_PROJECTION);
kmGLLoadIdentity();
#if CC_TARGET_PLATFORM == CC_PLATFORM_WP8
kmGLMultMatrix(CCEGLView::sharedOpenGLView()->getOrientationMatrix());
#endif
kmMat4 orthoMatrix;
kmMat4OrthographicProjection(&orthoMatrix, 0, size.width, 0, size.height, -1024, 1024 );
kmGLMultMatrix(&orthoMatrix);
kmGLMatrixMode(KM_GL_MODELVIEW);
kmGLLoadIdentity();
}
break;
case kCCDirectorProjection3D:
{
float zeye = this->getZEye();
kmMat4 matrixPerspective, matrixLookup;
kmGLMatrixMode(KM_GL_PROJECTION);
kmGLLoadIdentity();
#if CC_TARGET_PLATFORM == CC_PLATFORM_WP8
//if needed, we need to add a rotation for Landscape orientations on Windows Phone 8 since it is always in Portrait Mode
kmGLMultMatrix(CCEGLView::sharedOpenGLView()->getOrientationMatrix());
#endif
// issue #1334
kmMat4PerspectiveProjection( &matrixPerspective, 60, (GLfloat)size.width/size.height, 0.1f, zeye*2);
// kmMat4PerspectiveProjection( &matrixPerspective, 60, (GLfloat)size.width/size.height, 0.1f, 1500);
kmGLMultMatrix(&matrixPerspective);
kmGLMatrixMode(KM_GL_MODELVIEW);
kmGLLoadIdentity();
kmVec3 eye, center, up;
kmVec3Fill( &eye, size.width/2, size.height/2, zeye );
kmVec3Fill( &center, size.width/2, size.height/2, 0.0f );
kmVec3Fill( &up, 0.0f, 1.0f, 0.0f);
kmMat4LookAt(&matrixLookup, &eye, &center, &up);
kmGLMultMatrix(&matrixLookup);
}
The code above sets up the vertex transform matrices. kmMat4PerspectiveProjection builds the perspective projection matrix P, which takes vertices into clip space. kmMat4LookAt positions the eye and produces the view matrix V. OpenGL merges model and view into a single model-view matrix, and cocos does the same: V gets multiplied together with the model matrix M on the KM_GL_MODELVIEW stack. Note that OpenGL vertices are column vectors and are left-multiplied by the transforms, so for a vertex p the transformed point is p' = P * V * M * p, meaning p is first multiplied by M (model transform), then by V (view transform), then by P (projection). In the cocos shaders the combined matrix is the uniform CC_MVPMatrix.
The matrices built above live on two stacks, a projection stack and a model-view stack. Multiplying the view matrix V by the identity sitting at the top of the model-view stack still gives V, so at this point we have P and V; what is still missing is M.
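To see how the two stacks end up in the shader, here is a small sketch using the same kazmath helpers the engine ships with (it mirrors what CCGLProgram::setUniformsForBuiltins does before each draw; treat it as an illustration rather than a verbatim quote):
kmMat4 matrixP, matrixMV, matrixMVP;
kmGLGetMatrix(KM_GL_PROJECTION, &matrixP);        // P, top of the projection stack
kmGLGetMatrix(KM_GL_MODELVIEW, &matrixMV);        // V * M, top of the model-view stack
kmMat4Multiply(&matrixMVP, &matrixP, &matrixMV);  // MVP = P * V * M, handed to the shader as CC_MVPMatrix
The missing M is contributed by each node's own transform(), which is what the next function does: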
void CCNode::transform()
{
kmMat4 transfrom4x4;
// Convert 3x3 into 4x4 matrix
CCAffineTransform tmpAffine = this->nodeToParentTransform();
CGAffineToGL(&tmpAffine, transfrom4x4.mat);
// Update Z vertex manually
transfrom4x4.mat[14] = m_fVertexZ;
kmGLMultMatrix( &transfrom4x4 );
// XXX: Expensive calls. Camera should be integrated into the cached affine matrix
if ( m_pCamera != NULL && !(m_pGrid != NULL && m_pGrid->isActive()) )
{
bool translate = (m_obAnchorPointInPoints.x != 0.0f || m_obAnchorPointInPoints.y != 0.0f);
if( translate )
kmGLTranslatef(RENDER_IN_SUBPIXEL(m_obAnchorPointInPoints.x), RENDER_IN_SUBPIXEL(m_obAnchorPointInPoints.y), 0 );
m_pCamera->locate();
if( translate )
kmGLTranslatef(RENDER_IN_SUBPIXEL(-m_obAnchorPointInPoints.x), RENDER_IN_SUBPIXEL(-m_obAnchorPointInPoints.y), 0 );
}
}
The code above computes the node's model transform. As for when it is called: set a breakpoint and trace it yourself; every frame drawScene visits every node, and each node's visit() calls this method. The key call inside it is this->nodeToParentTransform().
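For orientation, here is a condensed sketch of where transform() sits in the per-frame traversal; it is simplified, the real CCNode::visit also deals with grids, cameras and child sorting:
void CCNode::visit()
{
    kmGLPushMatrix();      // save the parent's model-view matrix
    this->transform();     // multiply this node's model matrix onto the model-view stack
    // ... children with zOrder < 0 are visited here ...
    this->draw();          // draws using the accumulated model-view and projection matrices
    // ... children with zOrder >= 0 are visited here ...
    kmGLPopMatrix();       // restore the parent's model-view matrix
}
With that context, here is nodeToParentTransform():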
CCAffineTransform CCNode::nodeToParentTransform(void)
{
if (m_bTransformDirty)
{
// Translate values
float x = m_obPosition.x;
float y = m_obPosition.y;
if (m_bIgnoreAnchorPointForPosition)
{
x += m_obAnchorPointInPoints.x;
y += m_obAnchorPointInPoints.y;
}
// Rotation values
// Change rotation code to handle X and Y
// If we skew with the exact same value for both x and y then we're simply just rotating
float cx = 1, sx = 0, cy = 1, sy = 0;
if (m_fRotationX || m_fRotationY)
{
float radiansX = -CC_DEGREES_TO_RADIANS(m_fRotationX);
float radiansY = -CC_DEGREES_TO_RADIANS(m_fRotationY);
cx = cosf(radiansX);
sx = sinf(radiansX);
cy = cosf(radiansY);
sy = sinf(radiansY);
}
bool needsSkewMatrix = ( m_fSkewX || m_fSkewY );
// optimization:
// inline anchor point calculation if skew is not needed
// Adjusted transform calculation for rotational skew
if (! needsSkewMatrix && !m_obAnchorPointInPoints.equals(CCPointZero))
{
x += cy * -m_obAnchorPointInPoints.x * m_fScaleX + -sx * -m_obAnchorPointInPoints.y * m_fScaleY;
y += sy * -m_obAnchorPointInPoints.x * m_fScaleX + cx * -m_obAnchorPointInPoints.y * m_fScaleY;
}
m_obAnchorPointInPoints is the anchor point: rotation and scaling need a pivot, and the anchor plays that role. Mind the order when building the model transform. First comes a translation driven by the anchor, which adjusts where the content sits in local coordinates: initially the content's bottom-left corner is at the local origin, and an anchor of (0.5, 0.5) moves the content's center onto the local origin, so both x and y shrink, which is where -m_obAnchorPointInPoints comes from. That gives a translation matrix T1; the rotation and scale give R and S. In local coordinates a sprite vertex p therefore becomes p' = S * R * T1 * p, still a local coordinate. m_obPosition is the sprite's position in its parent's coordinate system (for a node added straight to the scene this is effectively its world position), and from it comes the translation T2 from local to parent coordinates, so the final point is p'' = T2 * p' = T2 * S * R * T1 * p. The function above fills in the final matrix directly instead of deriving it step by step; the derivation is worth working through yourself.
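As a quick numeric check of that composition, here is a sketch using the engine's CCAffineTransform helpers with made-up values: anchor at (50, 50) points, a 90-degree counter-clockwise rotation (ignoring the sign convention the engine applies to rotation degrees), scale 2, and a node position of (200, 100) in its parent:
// CCAffineTransformMake(a, b, c, d, tx, ty) maps (x, y) to (a*x + c*y + tx, b*x + d*y + ty).
float rot = CC_DEGREES_TO_RADIANS(90.0f);
CCAffineTransform T1 = CCAffineTransformMake(1, 0, 0, 1, -50, -50);                              // move the anchor to the local origin
CCAffineTransform R  = CCAffineTransformMake(cosf(rot), sinf(rot), -sinf(rot), cosf(rot), 0, 0); // rotate
CCAffineTransform S  = CCAffineTransformMake(2, 0, 0, 2, 0, 0);                                  // scale
CCAffineTransform T2 = CCAffineTransformMake(1, 0, 0, 1, 200, 100);                              // place the node in its parent
// CCAffineTransformConcat(a, b) applies a first and b second, so the chain T1 -> R -> S -> T2 reads left to right:
CCAffineTransform m = CCAffineTransformConcat(CCAffineTransformConcat(CCAffineTransformConcat(T1, R), S), T2);
CCPoint p = CCPointApplyAffineTransform(ccp(0, 0), m);   // the bottom-left corner lands at roughly (300, 0)
Next, how the sprite fills in its vertex colors, vertex positions and texture coordinates, starting from initWithTexture: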
bool CCSprite::initWithTexture(CCTexture2D *pTexture, const CCRect& rect, bool rotated)
{
if (CCNodeRGBA::init())
{
m_pobBatchNode = NULL;
m_bRecursiveDirty = false;
setDirty(false);
m_bOpacityModifyRGB = true;
m_sBlendFunc.src = CC_BLEND_SRC;// blend source factor, GL_ONE by default
m_sBlendFunc.dst = CC_BLEND_DST;// blend destination factor, GL_ONE_MINUS_SRC_ALPHA by default
m_bFlipX = m_bFlipY = false;
// default transform anchor: center
setAnchorPoint(ccp(0.5f, 0.5f)); // anchor point: the sprite's center by default
// zwoptex default values
m_obOffsetPosition = CCPointZero;
m_bHasChildren = false;
// clean the Quad
memset(&m_sQuad, 0, sizeof(m_sQuad));
// Atlas: Color
ccColor4B tmpColor = { 255, 255, 255, 255 };
//Color of the sprite's four vertices. What is it for? CCSprite::setColor() is what changes this value;
//in the sprite's fragment shader, the color sampled from the texture is multiplied by (this value / 255) to produce the final fragment color.
m_sQuad.bl.colors = tmpColor;
m_sQuad.br.colors = tmpColor;
m_sQuad.tl.colors = tmpColor;
m_sQuad.tr.colors = tmpColor;
// shader program
setShaderProgram(CCShaderCache::sharedShaderCache()->programForKey(kCCShader_PositionTextureColor));// set the shader program
// update texture (calls updateBlendFunc)
setTexture(pTexture);// sets the texture; analyzed below
setTextureRect(rect, rotated, rect.size);// sets the sprite's vertex positions and texture coordinates; analyzed below
// by default use "Self Render".
// if the sprite is added to a batchnode, then it will automatically switch to "batchnode Render"
setBatchNode(NULL);
return true;
}
else
{
return false;
}
}
void CCSprite::setTextureCoords(CCRect rect)
{
rect = CC_RECT_POINTS_TO_PIXELS(rect);
CCTexture2D *tex = m_pobBatchNode ? m_pobTextureAtlas->getTexture() : m_pobTexture;
if (! tex)
{
return;
}
float atlasWidth = (float)tex->getPixelsWide();
float atlasHeight = (float)tex->getPixelsHigh();
float left, right, top, bottom;
if (m_bRectRotated)
{
#if CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
left = (2*rect.origin.x+1)/(2*atlasWidth);
right = left+(rect.size.height*2-2)/(2*atlasWidth);
top = (2*rect.origin.y+1)/(2*atlasHeight);
bottom = top+(rect.size.width*2-2)/(2*atlasHeight);
#else
left = rect.origin.x/atlasWidth;
right = (rect.origin.x+rect.size.height) / atlasWidth;
top = rect.origin.y/atlasHeight;
bottom = (rect.origin.y+rect.size.width) / atlasHeight;
#endif // CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
if (m_bFlipX)
{
CC_SWAP(top, bottom, float);
}
if (m_bFlipY)
{
CC_SWAP(left, right, float);
}
m_sQuad.bl.texCoords.u = left;
m_sQuad.bl.texCoords.v = top;
m_sQuad.br.texCoords.u = left;
m_sQuad.br.texCoords.v = bottom;
m_sQuad.tl.texCoords.u = right;
m_sQuad.tl.texCoords.v = top;
m_sQuad.tr.texCoords.u = right;
m_sQuad.tr.texCoords.v = bottom;
}
else
{
#if CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
left = (2*rect.origin.x+1)/(2*atlasWidth);
right = left + (rect.size.width*2-2)/(2*atlasWidth);
top = (2*rect.origin.y+1)/(2*atlasHeight);
bottom = top + (rect.size.height*2-2)/(2*atlasHeight);
#else
left = rect.origin.x/atlasWidth;
right = (rect.origin.x + rect.size.width) / atlasWidth;
top = rect.origin.y/atlasHeight;
bottom = (rect.origin.y + rect.size.height) / atlasHeight;
#endif // ! CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
if(m_bFlipX)
{
CC_SWAP(left,right,float);
}
if(m_bFlipY)
{
CC_SWAP(top,bottom,float);
}
m_sQuad.bl.texCoords.u = left;
m_sQuad.bl.texCoords.v = bottom;
m_sQuad.br.texCoords.u = right;
m_sQuad.br.texCoords.v = bottom;
m_sQuad.tl.texCoords.u = left;
m_sQuad.tl.texCoords.v = top;
m_sQuad.tr.texCoords.u = right;
m_sQuad.tr.texCoords.v = top;
}
}
void CCSprite::setTextureRect(const CCRect& rect, bool rotated, const CCSize& untrimmedSize)
{
m_bRectRotated = rotated;
setContentSize(untrimmedSize);// the node's original (untrimmed) size
setVertexRect(rect);// the texture rect
setTextureCoords(rect);// compute the texture coordinates
CCPoint relativeOffset = m_obUnflippedOffsetPositionFromCenter;
// issue #732
if (m_bFlipX)
{
relativeOffset.x = -relativeOffset.x;
}
if (m_bFlipY)
{
relativeOffset.y = -relativeOffset.y;
}
//compute the offset that centers the (trimmed) texture rect inside the original content size
m_obOffsetPosition.x = relativeOffset.x + (m_obContentSize.width - m_obRect.size.width) / 2;
m_obOffsetPosition.y = relativeOffset.y + (m_obContentSize.height - m_obRect.size.height) / 2;
// rendering using batch node
if (m_pobBatchNode)
{
// update dirty_, don't update recursiveDirty_
setDirty(true);
}
else
{
// self rendering
// Atlas: Vertex
float x1 = 0 + m_obOffsetPosition.x;
float y1 = 0 + m_obOffsetPosition.y;
float x2 = x1 + m_obRect.size.width;
float y2 = y1 + m_obRect.size.height;
// Don't update Z.
m_sQuad.bl.vertices = vertex3(x1, y1, 0);
m_sQuad.br.vertices = vertex3(x2, y1, 0);
m_sQuad.tl.vertices = vertex3(x1, y2, 0);
m_sQuad.tr.vertices = vertex3(x2, y2, 0);
}
}
The code above set the texture coordinates first and then the vertex positions, so everything needed for rendering is now in place. Later on, when the node runs actions, they change the variables used to build the vertex transform matrix, as well as the color value used to compute the fragment color.
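As a quick numeric check of setTextureCoords above, take a hypothetical non-rotated 100x50 sub-rect at origin (20, 30) inside a 256x256 atlas, with CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL off: left = 20 / 256 = 0.078125, right = (20 + 100) / 256 = 0.46875, top = 30 / 256 = 0.1171875, bottom = (30 + 50) / 256 = 0.3125. The bottom-left vertex receives (left, bottom) and the top-right vertex receives (right, top): because image rows are stored top to bottom, larger v values sample lower rows of the picture.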
All of the code above showed how the projection, view and model matrices are obtained and how a sprite fills in its vertex colors, vertex positions and texture coordinates. The vertex positions are what the model matrix produced by nodeToParentTransform ultimately gets applied to. And the texture coordinates? They are used for shading, of course. Here is the sprite's draw method:
void CCSprite::draw(void)
{
CC_PROFILER_START_CATEGORY(kCCProfilerCategorySprite, "CCSprite - draw");
CCAssert(!m_pobBatchNode, "If CCSprite is being rendered by CCSpriteBatchNode, CCSprite#draw SHOULD NOT be called");
CC_NODE_DRAW_SETUP();
ccGLBlendFunc( m_sBlendFunc.src, m_sBlendFunc.dst );// re-apply this sprite's blend function with the values stored earlier
ccGLBindTexture2D( m_pobTexture->getName() );// bind the texture for the texture operations that follow
ccGLEnableVertexAttribs( kCCVertexAttribFlag_PosColorTex );// enable the vertex attributes so the data can be handed to OpenGL
#define kQuadSize sizeof(m_sQuad.bl)
#ifdef EMSCRIPTEN
long offset = 0;
setGLBufferData(&m_sQuad, 4 * kQuadSize, 0);
#else
long offset = (long)&m_sQuad;
#endif // EMSCRIPTEN
// vertex
int diff = offsetof( ccV3F_C4B_T2F, vertices);// pass the vertex positions
glVertexAttribPointer(kCCVertexAttrib_Position, 3, GL_FLOAT, GL_FALSE, kQuadSize, (void*) (offset + diff));
// texCoods
diff = offsetof( ccV3F_C4B_T2F, texCoords);// pass the texture coordinates
glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, kQuadSize, (void*)(offset + diff));
// color
diff = offsetof( ccV3F_C4B_T2F, colors);// pass the colors
glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_UNSIGNED_BYTE, GL_TRUE, kQuadSize, (void*)(offset + diff));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);// draw a triangle strip: the 4 vertices form two triangles, 0-1-2 and 1-2-3
CHECK_GL_ERROR_DEBUG();
//the code below draws debug boxes; games normally don't enable it
#if CC_SPRITE_DEBUG_DRAW == 1
// draw bounding box
CCPoint vertices[4]={
ccp(m_sQuad.tl.vertices.x,m_sQuad.tl.vertices.y),
ccp(m_sQuad.bl.vertices.x,m_sQuad.bl.vertices.y),
ccp(m_sQuad.br.vertices.x,m_sQuad.br.vertices.y),
ccp(m_sQuad.tr.vertices.x,m_sQuad.tr.vertices.y),
};
ccDrawPoly(vertices, 4, true);
#elif CC_SPRITE_DEBUG_DRAW == 2
// draw texture box
CCSize s = this->getTextureRect().size;
CCPoint offsetPix = this->getOffsetPosition();
CCPoint vertices[4] = {
ccp(offsetPix.x,offsetPix.y), ccp(offsetPix.x+s.width,offsetPix.y),
ccp(offsetPix.x+s.width,offsetPix.y+s.height), ccp(offsetPix.x,offsetPix.y+s.height)
};
ccDrawPoly(vertices, 4, true);
#endif // CC_SPRITE_DEBUG_DRAW
CC_INCREMENT_GL_DRAWS(1);
CC_PROFILER_STOP_CATEGORY(kCCProfilerCategorySprite, "CCSprite - draw");
}
2. Textures: management, creation, use, and destruction
Textures are what sprites are rendered with: by interpolation they ultimately decide the color of every pixel inside the sprite's quad. In everyday code we never touch CCTexture2D directly and never create or destroy textures by hand. Let's look at how a sprite is actually created.
CCSprite* CCSprite::create(const char *pszFileName)
{
CCSprite *pobSprite = new CCSprite();
if (pobSprite && pobSprite->initWithFile(pszFileName))
{
pobSprite->autorelease();
return pobSprite;
}
CC_SAFE_DELETE(pobSprite);
return NULL;
}
bool CCSprite::initWithFile(const char *pszFilename)
{
CCAssert(pszFilename != NULL, "Invalid filename for sprite");
CCTexture2D *pTexture = CCTextureCache::sharedTextureCache()->addImage(pszFilename);
if (pTexture)
{
CCRect rect = CCRectZero;
rect.size = pTexture->getContentSize();
return initWithTexture(pTexture, rect);
}
// don't release here.
// when load texture failed, it's better to get a "transparent" sprite then a crashed program
// this->release();
return false;
}
The line CCTexture2D *pTexture = CCTextureCache::sharedTextureCache()->addImage(pszFilename); is where the texture gets created. Why not simply build a CCTexture2D directly and use it? If many sprites share the same texture and every creation re-read the image file and generated a fresh OpenGL texture, the efficiency would be terrible. The better approach is to add the texture to a texture cache and let the cache manage it. initWithTexture(pTexture, rect) needs no further analysis; as covered earlier it stores the texture object (which holds the texture ID allocated by OpenGL, and through which the texture data can be changed or the texture deleted).
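A small usage sketch of what the cache buys us ("hero.png" is a hypothetical file name):
CCSprite* a = CCSprite::create("hero.png");
CCSprite* b = CCSprite::create("hero.png");
// Both sprites point at the same cached CCTexture2D, and therefore at the same OpenGL texture id.
CCAssert(a->getTexture() == b->getTexture(), "the cache returns the same texture object");
CCLOG("gl texture id: %u, retain count: %u", a->getTexture()->getName(), a->getTexture()->retainCount());
Here is CCTextureCache::addImage itself: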
CCTexture2D * CCTextureCache::addImage(const char * path)
{
CCAssert(path != NULL, "TextureCache: fileimage MUST not be NULL");
CCTexture2D * texture = NULL;
CCImage* pImage = NULL;
// Split up directory and filename
// MUTEX:
// Needed since addImageAsync calls this method from a different thread
//pthread_mutex_lock(m_pDictLock);
std::string pathKey = path;
pathKey = CCFileUtils::sharedFileUtils()->fullPathForFilename(pathKey.c_str());
if (pathKey.size() == 0)
{
return NULL;
}
texture = (CCTexture2D*)m_pTextures->objectForKey(pathKey.c_str());
std::string fullpath = pathKey; // (CCFileUtils::sharedFileUtils()->fullPathFromRelativePath(path));
if (! texture)
{
std::string lowerCase(pathKey);
for (unsigned int i = 0; i < lowerCase.length(); ++i)
{
lowerCase[i] = tolower(lowerCase[i]);
}
// all images are handled by UIImage except PVR extension that is handled by our own handler
do
{
if (std::string::npos != lowerCase.find(".pvr"))
{
texture = this->addPVRImage(fullpath.c_str());
}
else if (std::string::npos != lowerCase.find(".pkm"))
{
// ETC1 file format, only supported on Android
texture = this->addETCImage(fullpath.c_str());
}
else
{
CCImage::EImageFormat eImageFormat = CCImage::kFmtUnKnown;
if (std::string::npos != lowerCase.find(".png"))
{
eImageFormat = CCImage::kFmtPng;
}
else if (std::string::npos != lowerCase.find(".jpg") || std::string::npos != lowerCase.find(".jpeg"))
{
eImageFormat = CCImage::kFmtJpg;
}
else if (std::string::npos != lowerCase.find(".tif") || std::string::npos != lowerCase.find(".tiff"))
{
eImageFormat = CCImage::kFmtTiff;
}
else if (std::string::npos != lowerCase.find(".webp"))
{
eImageFormat = CCImage::kFmtWebp;
}
pImage = new CCImage();
CC_BREAK_IF(NULL == pImage);
bool bRet = pImage->initWithImageFile(fullpath.c_str(), eImageFormat);
CC_BREAK_IF(!bRet);
texture = new CCTexture2D();
if( texture &&
texture->initWithImage(pImage) )
{
#if CC_ENABLE_CACHE_TEXTURE_DATA
// cache the texture file name
VolatileTexture::addImageTexture(texture, fullpath.c_str(), eImageFormat);
#endif
m_pTextures->setObject(texture, pathKey.c_str());
texture->release();
}
else
{
CCLOG("cocos2d: Couldn't create texture for file:%s in CCTextureCache", path);
}
}
} while (0);
}
CC_SAFE_RELEASE(pImage);
//pthread_mutex_unlock(m_pDictLock);
return texture;
}
The code above is CCTextureCache::addImage. pathKey = CCFileUtils::sharedFileUtils()->fullPathForFilename(pathKey.c_str()) resolves the absolute path of the image file the texture will be built from; I may walk through that code in detail some other time. texture = (CCTexture2D*)m_pTextures->objectForKey(pathKey.c_str()) asks the CCDictionary m_pTextures whether an object already exists for that file path: if so, it is used directly; if not, the code that follows creates the texture by reading the external file, building a CCImage from it and then a CCTexture2D. Depending on the extension it may instead go through texture = this->addPVRImage(fullpath.c_str()) or texture = this->addETCImage(fullpath.c_str()), which build the CCTexture2D from those other formats. Either way, starting from nothing more than the image file name we passed in, a texture is created and added to the texture cache.
texture->initWithImage(pImage) is the texture's initialization; its code is as follows:
bool CCTexture2D::initWithImage(CCImage *uiImage)
{
if (uiImage == NULL)
{
CCLOG("cocos2d: CCTexture2D. Can't create Texture. UIImage is nil");
return false;
}
unsigned int imageWidth = uiImage->getWidth();
unsigned int imageHeight = uiImage->getHeight();
CCConfiguration *conf = CCConfiguration::sharedConfiguration();
unsigned maxTextureSize = conf->getMaxTextureSize();
if (imageWidth > maxTextureSize || imageHeight > maxTextureSize)
{
CCLOG("cocos2d: WARNING: Image (%u x %u) is bigger than the supported %u x %u", imageWidth, imageHeight, maxTextureSize, maxTextureSize);
return false;
}
// always load premultiplied images
return initPremultipliedATextureWithImage(uiImage, imageWidth, imageHeight);
}
The code above fetches the image's width and height. The check if (imageWidth > maxTextureSize || imageHeight > maxTextureSize) rejects textures larger than the GPU supports (commonly 4096); anything beyond that limit cannot be uploaded to OpenGL. CCConfiguration is worth a look: it is full of functions that query this kind of GPU information.
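For example, the limit this check compares against can be read directly; a minimal sketch (getMaxTextureSize() appears in the code above, and supportsNPOT() should be another such query on CCConfiguration in this engine version):
CCConfiguration* conf = CCConfiguration::sharedConfiguration();
CCLOG("max texture size: %d", conf->getMaxTextureSize());
CCLOG("supports NPOT textures: %d", conf->supportsNPOT());
Continuing, here is the code of initPremultipliedATextureWithImage(uiImage, imageWidth, imageHeight):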
bool CCTexture2D::initPremultipliedATextureWithImage(CCImage *image, unsigned int width, unsigned int height)
{
unsigned char* tempData = image->getData();
unsigned int* inPixel32 = NULL;
unsigned char* inPixel8 = NULL;
unsigned short* outPixel16 = NULL;
bool hasAlpha = image->hasAlpha();
CCSize imageSize = CCSizeMake((float)(image->getWidth()), (float)(image->getHeight()));
CCTexture2DPixelFormat pixelFormat;
size_t bpp = image->getBitsPerComponent();
// compute pixel format
if (hasAlpha)
{
pixelFormat = g_defaultAlphaPixelFormat;
}
else
{
if (bpp >= 8)
{
pixelFormat = kCCTexture2DPixelFormat_RGB888;
}
else
{
pixelFormat = kCCTexture2DPixelFormat_RGB565;
}
}
// Repack the pixel data into the right format
unsigned int length = width * height;
if (pixelFormat == kCCTexture2DPixelFormat_RGB565)
{
if (hasAlpha)
{
// Convert "RRRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRRGGGGGGBBBBB"
tempData = new unsigned char[width * height * 2];
outPixel16 = (unsigned short*)tempData;
inPixel32 = (unsigned int*)image->getData();
for(unsigned int i = 0; i < length; ++i, ++inPixel32)
{
*outPixel16++ =
((((*inPixel32 >> 0) & 0xFF) >> 3) << 11) | // R
((((*inPixel32 >> 8) & 0xFF) >> 2) << 5) | // G
((((*inPixel32 >> 16) & 0xFF) >> 3) << 0); // B
}
}
else
{
// Convert "RRRRRRRRRGGGGGGGGBBBBBBBB" to "RRRRRGGGGGGBBBBB"
tempData = new unsigned char[width * height * 2];
outPixel16 = (unsigned short*)tempData;
inPixel8 = (unsigned char*)image->getData();
for(unsigned int i = 0; i < length; ++i)
{
*outPixel16++ =
(((*inPixel8++ & 0xFF) >> 3) << 11) | // R
(((*inPixel8++ & 0xFF) >> 2) << 5) | // G
(((*inPixel8++ & 0xFF) >> 3) << 0); // B
}
}
}
else if (pixelFormat == kCCTexture2DPixelFormat_RGBA4444)
{
// Convert "RRRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
inPixel32 = (unsigned int*)image->getData();
tempData = new unsigned char[width * height * 2];
outPixel16 = (unsigned short*)tempData;
for(unsigned int i = 0; i < length; ++i, ++inPixel32)
{
*outPixel16++ =
((((*inPixel32 >> 0) & 0xFF) >> 4) << 12) | // R
((((*inPixel32 >> 8) & 0xFF) >> 4) << 8) | // G
((((*inPixel32 >> 16) & 0xFF) >> 4) << 4) | // B
((((*inPixel32 >> 24) & 0xFF) >> 4) << 0); // A
}
}
else if (pixelFormat == kCCTexture2DPixelFormat_RGB5A1)
{
// Convert "RRRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRRGGGGGBBBBBA"
inPixel32 = (unsigned int*)image->getData();
tempData = new unsigned char[width * height * 2];
outPixel16 = (unsigned short*)tempData;
for(unsigned int i = 0; i < length; ++i, ++inPixel32)
{
*outPixel16++ =
((((*inPixel32 >> 0) & 0xFF) >> 3) << 11) | // R
((((*inPixel32 >> 8) & 0xFF) >> 3) << 6) | // G
((((*inPixel32 >> 16) & 0xFF) >> 3) << 1) | // B
((((*inPixel32 >> 24) & 0xFF) >> 7) << 0); // A
}
}
else if (pixelFormat == kCCTexture2DPixelFormat_A8)
{
// Convert "RRRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "AAAAAAAA"
inPixel32 = (unsigned int*)image->getData();
tempData = new unsigned char[width * height];
unsigned char *outPixel8 = tempData;
for(unsigned int i = 0; i < length; ++i, ++inPixel32)
{
*outPixel8++ = (*inPixel32 >> 24) & 0xFF; // A
}
}
if (hasAlpha && pixelFormat == kCCTexture2DPixelFormat_RGB888)
{
// Convert "RRRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRRRRRGGGGGGGGBBBBBBBB"
inPixel32 = (unsigned int*)image->getData();
tempData = new unsigned char[width * height * 3];
unsigned char *outPixel8 = tempData;
for(unsigned int i = 0; i < length; ++i, ++inPixel32)
{
*outPixel8++ = (*inPixel32 >> 0) & 0xFF; // R
*outPixel8++ = (*inPixel32 >> 8) & 0xFF; // G
*outPixel8++ = (*inPixel32 >> 16) & 0xFF; // B
}
}
initWithData(tempData, pixelFormat, width, height, imageSize);
if (tempData != image->getData())
{
delete [] tempData;
}
m_bHasPremultipliedAlpha = image->isPremultipliedAlpha();
return true;
}
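To make the repacking concrete, here is a single RGBA8888 pixel collapsed into RGB565 with the same shifts used above (hypothetical pixel R=200, G=100, B=50; the loop reads the bytes little-endian, so R sits in the low byte of the 32-bit value):
unsigned int inPixel32 = (50 << 16) | (100 << 8) | 200;   // B, G, R packed the way the loop sees them (alpha omitted)
unsigned short outPixel16 =
    ((((inPixel32 >> 0)  & 0xFF) >> 3) << 11) |           // keep the top 5 bits of R
    ((((inPixel32 >> 8)  & 0xFF) >> 2) << 5)  |           // keep the top 6 bits of G
    ((((inPixel32 >> 16) & 0xFF) >> 3) << 0);             // keep the top 5 bits of B
// 200 >> 3 == 25, 100 >> 2 == 25, 50 >> 3 == 6, so outPixel16 == (25 << 11) | (25 << 5) | 6 == 0xCB26.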
initPremultipliedATextureWithImage thus takes the CCImage apart, determines the pixel data and pixel format, repacks the bytes when necessary, and finally calls initWithData(tempData, pixelFormat, width, height, imageSize), whose code is:
bool CCTexture2D::initWithData(const void *data, CCTexture2DPixelFormat pixelFormat, unsigned int pixelsWide, unsigned int pixelsHigh, const CCSize& contentSize)
{
unsigned int bitsPerPixel;
//Hack: bitsPerPixelForFormat returns wrong number for RGB_888 textures. See function.
if(pixelFormat == kCCTexture2DPixelFormat_RGB888)
{
bitsPerPixel = 24;
}
else
{
bitsPerPixel = bitsPerPixelForFormat(pixelFormat);
}
unsigned int bytesPerRow = pixelsWide * bitsPerPixel / 8;
if(bytesPerRow % 8 == 0)
{
glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
}
else if(bytesPerRow % 4 == 0)
{
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
}
else if(bytesPerRow % 2 == 0)
{
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
}
else
{
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
}
glGenTextures(1, &m_uName);
ccGLBindTexture2D(m_uName);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Specify OpenGL texture image
switch(pixelFormat)
{
case kCCTexture2DPixelFormat_RGBA8888:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
break;
case kCCTexture2DPixelFormat_RGB888:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
break;
case kCCTexture2DPixelFormat_RGBA4444:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, data);
break;
case kCCTexture2DPixelFormat_RGB5A1:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, data);
break;
case kCCTexture2DPixelFormat_RGB565:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, data);
break;
case kCCTexture2DPixelFormat_AI88:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, data);
break;
case kCCTexture2DPixelFormat_A8:
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_ALPHA, GL_UNSIGNED_BYTE, data);
break;
case kCCTexture2DPixelFormat_I8:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
break;
default:
CCAssert(0, "NSInternalInconsistencyException");
}
m_tContentSize = contentSize;
m_uPixelsWide = pixelsWide;
m_uPixelsHigh = pixelsHigh;
m_ePixelFormat = pixelFormat;
m_fMaxS = contentSize.width / (float)(pixelsWide);
m_fMaxT = contentSize.height / (float)(pixelsHigh);
m_bHasPremultipliedAlpha = false;
m_bHasMipmaps = false;
setShaderProgram(CCShaderCache::sharedShaderCache()->programForKey(kCCShader_PositionTexture));
return true;
}
Above you can see plenty of functions starting with gl; those are OpenGL calls. glGenTextures(1, &m_uName) together with ccGLBindTexture2D(m_uName) generates a texture and binds it to an OpenGL texture target. Why, after asking OpenGL to generate the texture, do we still have to bind it before we can work on it? glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR) selects linear filtering when the texture is minified, and glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)pixelsWide, (GLsizei)pixelsHigh, 0, GL_RGBA, GL_UNSIGNED_BYTE, data) uploads the texture data. Notice that every call takes GL_TEXTURE_2D: that is the texture target, and OpenGL's texture functions always operate on whatever texture is currently bound to that target, which is exactly why the texture has to be bound first.
setShaderProgram(CCShaderCache::sharedShaderCache()->programForKey(kCCShader_PositionTexture)) assigns the texture its own shader, and since it has a shader, CCTexture2D also has draw-related functions of its own; have a look at the source if you are curious. With the texture created, look at class CC_DLL CCTexture2D : public CCObject: a freshly constructed CCTexture2D starts with m_uReference(1), a reference count of 1. What is that for? CCTexture2D does not need autorelease and is not managed by CCPoolManager; the earlier line CCTextureCache::sharedTextureCache()->addImage(pszFilename) shows it is managed by CCTextureCache.
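The filtering and wrap parameters set with glTexParameteri above are also exposed on CCTexture2D itself, so they can be changed after the texture exists. A small sketch, where tex is assumed to be a CCTexture2D* you already hold (for example from sprite->getTexture()):
tex->setAliasTexParameters();      // switch MIN/MAG filtering to GL_NEAREST for a crisp pixel-art look
tex->setAntiAliasTexParameters();  // back to GL_LINEAR, the default chosen in initWithData
ccTexParams params = { GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT };
tex->setTexParameters(&params);    // GL_REPEAT wrapping requires a power-of-two texture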
That covers how a CCTexture2D comes into being; next, how it is destroyed, and not just the CCTexture2D object itself but also the OpenGL texture associated with it.
void removeAllTextures();
/** Removes unused textures
* Textures that have a retain count of 1 will be deleted
* It is convenient to call this method after when starting a new Scene
* @since v0.8
*/
void removeUnusedTextures();
/** Deletes a texture from the cache given a texture
*/
void removeTexture(CCTexture2D* texture);
/** Deletes a texture from the cache given a its key name
@since v0.99.4
*/
void removeTextureForKey(const char *textureKeyName);
The declarations above are the ways of destroying a CCTexture2D; here we look at the first two, which are the ones used most.
void CCTextureCache::removeAllTextures()
{
m_pTextures->removeAllObjects();
}
void CCDictionary::removeAllObjects()
{
CCDictElement *pElement, *tmp;
HASH_ITER(hh, m_pElements, pElement, tmp)
{
HASH_DEL(m_pElements, pElement);
pElement->m_pObject->release();
CC_SAFE_DELETE(pElement);
}
}
The function above walks the hash table of CCDictElement entries and releases each contained object one by one. Here is the release code:
void CCObject::release(void)
{
CCAssert(m_uReference > 0, "reference count should greater than 0");
--m_uReference;
if (m_uReference == 0)
{
delete this;
}
}
release decrements the count and deletes the object once it reaches 0, so every CCTexture2D managed by CCTextureCache whose count was only 1 (held by nothing but the cache) gets deleted here. Have a look at CCTexture2D's destructor:
CCTexture2D::~CCTexture2D()
{
#if CC_ENABLE_CACHE_TEXTURE_DATA
VolatileTexture::removeTexture(this);
#endif
CCLOGINFO("cocos2d: deallocing CCTexture2D %u.", m_uName);
CC_SAFE_RELEASE(m_pShaderProgram);
if(m_uName)
{
ccGLDeleteTexture(m_uName);
}
}
if(m_uName) ccGLDeleteTexture(m_uName); means that if a texture ID exists the texture is deleted: it calls the OpenGL function glDeleteTextures(1, &textureId) and releases the texture that was requested from GL.
That is what CCTextureCache::removeAllTextures() does: it releases every object it holds. Note, however, that although CCTexture2D is managed by the texture cache, removeAllTextures alone does not decide when it is deleted. The function simply clears CCTextureCache's hash table and, by releasing each CCTexture2D, destroys only those whose count was 1. If the count is higher, the texture is still being used by some sprite or has been retained elsewhere; either way the count is not 1 and it survives.
void CCSprite::setTexture(CCTexture2D *texture)
{
// If batchnode, then texture id should be the same
CCAssert(! m_pobBatchNode || texture->getName() == m_pobBatchNode->getTexture()->getName(), "CCSprite: Batched sprites should use the same texture as the batchnode");
// accept texture==nil as argument
CCAssert( !texture || dynamic_cast<CCTexture2D*>(texture), "setTexture expects a CCTexture2D. Invalid argument");
if (NULL == texture)
{
// Gets the texture by key firstly.
texture = CCTextureCache::sharedTextureCache()->textureForKey(CC_2x2_WHITE_IMAGE_KEY);
// If texture wasn't in cache, create it from RAW data.
if (NULL == texture)
{
CCImage* image = new CCImage();
bool isOK = image->initWithImageData(cc_2x2_white_image, sizeof(cc_2x2_white_image), CCImage::kFmtRawData, 2, 2, 8);
CCAssert(isOK, "The 2x2 empty texture was created unsuccessfully.");
texture = CCTextureCache::sharedTextureCache()->addUIImage(image, CC_2x2_WHITE_IMAGE_KEY);
CC_SAFE_RELEASE(image);
}
}
if (!m_pobBatchNode && m_pobTexture != texture)
{
CC_SAFE_RETAIN(texture);
CC_SAFE_RELEASE(m_pobTexture);
m_pobTexture = texture;
updateBlendFunc();
}
}
The function above is what CCSprite uses to set its texture when it is created. The CC_SAFE_RETAIN(texture) near the end bumps the texture's count by one, so a texture that has just been taken up by a sprite now has a count of 2.
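Putting the counts together, a sketch with a hypothetical file name (the comments track m_uReference):
CCTexture2D* tex = CCTextureCache::sharedTextureCache()->addImage("hero.png");
// count == 1: created with 1, retained by the cache's dictionary, then released once inside addImage
CCSprite* sprite = CCSprite::createWithTexture(tex);              // setTexture() retains -> count == 2
CCTextureCache::sharedTextureCache()->removeUnusedTextures();     // count != 1, so the texture survives
// Only when the sprite is destroyed (dropping its retain) does the count fall back to 1,
// after which a later removeUnusedTextures() or removeAllTextures() can actually delete it.
Here is removeUnusedTextures: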
void CCTextureCache::removeUnusedTextures()
{
/** Inter engineer zhuoshi sun finds that this way will get better performance
*/
if (m_pTextures->count())
{
// find elements to be removed
CCDictElement* pElement = NULL;
list<CCDictElement*> elementToRemove;
CCDICT_FOREACH(m_pTextures, pElement)
{
CCLOG("[cocos2d: CCTextureCache: texture:] [%s]", pElement->getStrKey());
CCTexture2D *value = (CCTexture2D*)pElement->getObject();
if (value->retainCount() == 1)
{
elementToRemove.push_back(pElement);
}
}
// remove elements
for (list<CCDictElement*>::iterator iter = elementToRemove.begin(); iter != elementToRemove.end(); ++iter)
{
CCLOG("cocos2d: CCTextureCache: removing unused texture: %s", (*iter)->getStrKey());
m_pTextures->removeObjectForElememt(*iter);
}
}
}
That is the code of CCTextureCache::removeUnusedTextures: it walks the m_pTextures dictionary, collects every object whose count is 1, and then destroys them. That wraps up texture destruction. removeUnusedTextures can be called whenever needed to drop textures that nothing is currently using; a texture that is in use has a count greater than 1 and cannot be destroyed by it. If textures are still in use and you call CCTextureCache::removeAllTextures() instead, every cached texture's count is decremented and CCTextureCache stops managing them; once all the CCSprites using a given texture are gone, its count drops to 0 and if (m_uReference == 0) delete this kicks in. Call removeUnusedTextures when you know some textures will not be needed for a while, or when memory is tight. CCTexture2D *CCTextureCache::addImage(const char * path) can also be used to preload textures; the usual pattern is to preload before a scene starts and call removeUnusedTextures after it ends.
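In code the pattern looks roughly like this ("level1_atlas.png" is a hypothetical file name):
// before (or while) building the scene: warm the cache so sprite creation is cheap
CCTextureCache::sharedTextureCache()->addImage("level1_atlas.png");
// ... the scene runs; CCSprite::create("level1_atlas.png") now hits the cache ...
// after leaving the scene: drop every texture that nothing references any more
CCTextureCache::sharedTextureCache()->removeUnusedTextures();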
The texture itself is consumed inside CCSprite's draw function; see the code above.