2020-03-02 · mpp region osd
References:
海思3516A区域管理操作汇总及位图填充示例_海思,OSD,点阵传位图_mill_li的博客-CSDN博客
【已解决】关于Hi3516A做OSD的问题 - 海思平台开放论坛 - 易百纳论坛
海思多媒体(MPP)开发(5)——区域管理(REGION&OSD字符显示)_mpp,OSD,水印_Biao-CSDN博客
HI3559v200/hi_osd.c at 32079ae65ff6a1a683383a6eea73ffb1a153a857 · LiuyunlongLorin/HI3559v200
Video overlay means superimposing pictures and text onto a video signal: a TV station's logo, film subtitles and a TV set's menu are all displayed in the video image through overlay. Distinct from the caption generators used in professional film and TV editing, the name "character overlay device" is now more commonly applied to the relatively cheap devices in surveillance systems that provide basic video captioning.
By function, character overlay devices are divided into dynamic and static types.
A dynamic character overlay device works together with a PC or other intelligent equipment and displays character information that changes with the scene; combined with the live video signal, it gives the operator more detailed and accurate information.
A static character overlay device only displays relatively fixed character information on the video signal, mainly the camera's location. Low price is one of its characteristics.
Next I will describe the OSD overlay I implemented on the Hi3516 platform. I chose to overlay on the VI channel; the initialization and channel configuration will be covered in the next article. Here I show how to build your own Chinese-character or ASCII dot-matrix glyphs, express them in binary, read the glyph data and draw it into the corresponding image region; after encoding, the overlaid content appears in the stream.
The area/position ("qu/wei") code maps one-to-one to a Chinese character and is written as four digits:
the first two, 01 to 94, are the area code, and the last two, 01 to 94, are the position code. The first byte of the encoded character has the ASCII value 160 + area,
and the second byte has the value 160 + position. For example, the qu/wei code of "刘" is 3385,
meaning area 33, position 85, so it is encoded as the two bytes 160+33=193 and 160+85=245.
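The two-byte encoding described above is a one-liner per byte; a minimal sketch (the helper name is mine, the area/position pair for "刘" is the one given in the text):

```c
#include <assert.h>

/* Convert a GB2312 area/position ("qu/wei") code to its two encoded
 * bytes: each half is 160 plus the area or the position number. */
static void qw_to_gb2312(int area, int position,
                         unsigned char *hi, unsigned char *lo)
{
    *hi = (unsigned char)(160 + area);
    *lo = (unsigned char)(160 + position);
}
```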
First, the layout of a font library.
We usually use a 16×16 dot-matrix Song-style font: each character is rendered in a grid of 16 dots vertically and 16 horizontally.
Later came the HZK12, HZK24, HZK32 and HZK48 libraries, plus Hei (bold), Kai and Li typefaces. Although the libraries are many, they are all arranged in area/position order.
The first byte identifies the character's area and the second its position; each area holds 94 characters, and the position number is the character's place within its area.
A character's index within the library is therefore 94×(area−1) + (position−1); the −1 is because arrays start at 0 while area and position numbers start at 1. That index is in units of characters; to get the byte offset, multiply by the number of bytes one glyph occupies: (94×(area−1) + (position−1)) × bytes_per_glyph, which differs from library to library.
For a 16×16 dot-matrix library,
the formula is (94×(area−1) + (position−1)) × 32; the 32 bytes in the font file starting at that offset are the glyph data.
For a 32×32 dot-matrix library,
the formula is (94×(area−1) + (position−1)) × 128; the 128 bytes in the font file starting at that offset are the glyph data.
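The offset formulas above collapse into one helper; a sketch (the function name is mine), with `bytes_per_glyph` being 32 for HZK16 and 128 for HZK32:

```c
#include <assert.h>
#include <stddef.h>

/* Byte offset of a glyph inside an HZK-style font file:
 * (94 * (area - 1) + (position - 1)) * bytes_per_glyph. */
static size_t hzk_glyph_offset(int area, int position, size_t bytes_per_glyph)
{
    return (size_t)(94 * (area - 1) + (position - 1)) * bytes_per_glyph;
}
```

For "刘" (area 33, position 85) in a 16×16 library this gives (94×32 + 84) × 32 = 98944.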
Copyright notice: the passage above is from a CSDN post by 简单同学, licensed under CC 4.0 BY-SA; reproduction requires this notice and the source link.
Source: https://blog.csdn.net/l471094842/article/details/95171397
1. Create the RGN (overlay)
HI_MPI_RGN_Create()
0x8000 == 0x0000 black (alpha bit set)
0x7FFF == 0xFFFF white (alpha bit clear)
Conclusions:
A. To make the background color transparent:
1. stChnAttr.unChnAttr.stOverlayExChn.u32BgAlpha = 0; // set the background opacity to 0
2. stRegion.unAttr.stOverlay.u32BgColor = 0xXXXX & 0x7FFF; // clear the alpha bit of the pixel value
B. To make the foreground color half-transparent:
1. stChnAttr.unChnAttr.stOverlayExChn.u32FgAlpha = 64; // set the foreground opacity to roughly 50%
2. set the pixel's alpha bit to 1 [argb1555 | 0x8000]
A pixel whose alpha bit is 1 is controlled by the foreground alpha:
stRegion.unAttr.stOverlay.u32BgColor = 0xFFFF; // white background with alpha bit 1, so its opacity follows u32FgAlpha:
u32FgAlpha = 64 gives roughly 50% opacity,
u32FgAlpha = 128 gives full opacity.
Note: in this case background and foreground always share the same opacity, i.e. when the BMP is converted to ARGB1555 the alpha bit is always 1.
A pixel whose alpha bit is 0 is controlled by the background alpha:
stRegion.unAttr.stOverlay.u32BgColor = 0x7FFF; // white background with alpha bit 0, so its opacity follows u32BgAlpha:
u32BgAlpha = 0 makes the background fully transparent,
u32BgAlpha = 128 makes it fully opaque.
stChnAttr.unChnAttr.stOverlayExChn.u32BgAlpha = 0; // [argb1555 & 0x8000 == 0] opacity of pixels with alpha bit 0, a.k.a. background alpha
stChnAttr.unChnAttr.stOverlayExChn.u32FgAlpha = 10; // [argb1555 & 0x8000 == 0x8000] opacity of pixels with alpha bit 1, a.k.a. foreground alpha
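The alpha-bit handling in conclusions A and B is just two masks; a small sketch of helpers (names are mine, not SDK APIs):

```c
#include <assert.h>
#include <stdint.h>

/* ARGB1555: bit 15 is the alpha bit.  Pixels with the bit set are
 * blended with u32FgAlpha, pixels with it clear with u32BgAlpha. */
static uint16_t argb1555_as_fg(uint16_t px) { return (uint16_t)(px | 0x8000); } /* set alpha bit */
static uint16_t argb1555_as_bg(uint16_t px) { return (uint16_t)(px & 0x7FFF); } /* clear alpha bit */
static int      argb1555_is_fg(uint16_t px) { return (px & 0x8000) != 0; }
```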
mpp/sample/common/sample_comm_region.c
HI_S32 SAMPLE_COMM_REGION_AttachToChn(HI_S32 HandleNum,RGN_TYPE_E enType, MPP_CHN_S *pstMppChn)
{
    ...
    stChnAttr.unChnAttr.stOverlayChn.u32BgAlpha = 128;
    stChnAttr.unChnAttr.stOverlayChn.u32FgAlpha = 128;
    ...
}
HI_S32 SAMPLE_REGION_CreateOverLay(HI_S32 HandleNum)
{
HI_S32 s32Ret;
HI_S32 i;
RGN_ATTR_S stRegion;
stRegion.enType = OVERLAY_RGN;
stRegion.unAttr.stOverlay.enPixelFmt = PIXEL_FORMAT_ARGB_1555;
stRegion.unAttr.stOverlay.stSize.u32Height = 200;
stRegion.unAttr.stOverlay.stSize.u32Width = 200;
stRegion.unAttr.stOverlay.u32BgColor = 0x00ff00ff;
...
return s32Ret;
}
u32BgAlpha — opacity of pixels whose image alpha bit is 0; also called background alpha.
u32FgAlpha — opacity of pixels whose image alpha bit is 1; also called foreground alpha.
An example:
say u32FgAlpha = 128 and u32BgAlpha = 0, where 0 means transparent and 128 means opaque.
In ARGB1555, pixel A has value 0x7fff and pixel B has value 0x8000,
so pixel A's alpha bit is 0 and pixel B's alpha bit is 1.
Then:
u32FgAlpha has no effect on pixel A; u32BgAlpha does, and the effect is that A becomes transparent.
u32FgAlpha applies to pixel B; u32BgAlpha does not, so B is displayed.
So the per-pixel alpha in the OSD matters: it is what separates foreground from background (e.g. foreground alpha = 1, background alpha = 0), and together with u32FgAlpha and u32BgAlpha you get the effect you want.
Awkward...
My plotted pixel values are:
foreground black 0x8000,
background white 0x7fff;
rgn_attr.unAttr.stOverlay.u32BgColor = 0x7FFF;
stChnAttr.unChnAttr.stOverlayChn.u32BgAlpha = 0;
stChnAttr.unChnAttr.stOverlayChn.u32FgAlpha = 128;
Result: no OSD on the screen at all; with u32BgAlpha = 128 it shows, but the background comes out blue, which is strange.
rgn_attr.unAttr.stOverlay.u32BgColor = 0x7FFF;
stChnAttr.unChnAttr.stOverlayChn.u32BgAlpha = 128; // background opacity
stChnAttr.unChnAttr.stOverlayChn.u32FgAlpha = 128;
stRegion.unAttr.stOverlay.u32BgColor = 0x7FFF; // white background
stChnAttr.unChnAttr.stOverlayChn.u32BgAlpha = 0; // transparent background
stChnAttr.unChnAttr.stOverlayChn.u32FgAlpha = 50;
stRegion.unAttr.stOverlay.u32BgColor = 0x8000; // black background
stChnAttr.unChnAttr.stOverlayChn.u32BgAlpha = 0; // transparent background
stChnAttr.unChnAttr.stOverlayChn.u32FgAlpha = 128;
Convert the glyphs directly to ARGB1555
If you are using HI_MPI_RGN_SetBitMap: you already have the dot matrix, so why bother with a BMP? Just convert the dot matrix into an ARGB1555 buffer (the common format for regions).
Take the OP's "0", 13×23: the dot matrix is 46 bytes, two bytes per row for 23 rows, one bit per pixel, 1 meaning foreground and 0 meaning background.
A simple conversion:
int height, width, bytes, bits;
const unsigned char *src;
unsigned short *dst;  /* ARGB1555 pixels are 16 bits wide */
for ( height = 0; height < 23; height++ )
{
    src = data[0] + height * 2;  /* two bytes of dot matrix per row */
    dst = (unsigned short *)(buffer + height * stride);  /* stride is in bytes */
    for ( width = 0; width < 13 ; width++ )
    {
        bytes = width / 8;
        bits = width % 8;
        /* LSB-first bit order; use (0x80 >> bits) if your font dump is MSB-first */
        if ( !(src[bytes] & (1 << bits)) )
        {
            *dst = 0x0000;  /* background: alpha bit 0, follows u32BgAlpha */
        }
        else
        {
            *dst = 0xffff;  /* foreground: white, alpha bit 1, follows u32FgAlpha */
        }
        dst++;
    }
}
Untested, just to give the idea.
Here data[0] is the glyph for the character "0" and buffer is the output buffer for the region, in ARGB1555 (0xffff is white with the alpha bit set; it still has to work together with the region's u32FgAlpha/u32BgAlpha).
stride belongs to the output buffer, which means several characters can be placed in the same buffer (adjusting each one's start offset, of course).
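Putting the advice in this thread together, here is a self-contained variant of the conversion (the function name and the MSB-first bit order are my assumptions, not the forum poster's exact code):

```c
#include <assert.h>
#include <stdint.h>

/* Expand a 1-bit glyph into an ARGB1555 buffer: set bits become white
 * foreground (0xFFFF, alpha bit 1), clear bits become 0x0000 (alpha
 * bit 0) so the channel's u32BgAlpha can hide them.  MSB-first bit
 * order is assumed; the dst stride is in pixels. */
static void glyph_to_argb1555(const uint8_t *glyph, int width, int height,
                              int bytes_per_row, uint16_t *dst, int stride)
{
    int x, y;
    for (y = 0; y < height; y++) {
        const uint8_t *row = glyph + y * bytes_per_row;
        for (x = 0; x < width; x++)
            dst[y * stride + x] = (row[x / 8] & (0x80 >> (x % 8))) ? 0xFFFF : 0x0000;
    }
}
```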
Modify the bitmap data directly
Search · stCanvasInfo.u32VirtAddr
stBitmap.pData = (HI_VOID*)stCanvasInfo.u32VirtAddr; // bitmap data
stSize.u32Width = stCanvasInfo.stSize.u32Width; // bitmap width
stSize.u32Height = stCanvasInfo.stSize.u32Height; // bitmap height
s32Ret = SAMPLE_RGN_UpdateCanvas("sys_time.bmp", &stBitmap, HI_FALSE, 0, &stSize, stCanvasInfo.u32Stride,stRgnAttrSet.unAttr.stOverlayEx.enPixelFmt);
I am stuck at this step: is there a way to skip saving a bitmap file and hand the buffer obtained from SDL straight to the drawing call? That would save the save-then-load round trip.
At the moment I generate a BMP with SDL and feed it to the Hi3516A VPSS channel, which gives the effect shown below, but two problems remain and I hope someone can take a look:
1. How do I make the BMP's background transparent? The solid background color looks bad.
2. How do I display two lines of text, i.e. another line of information under the current time? Is there a way to have the generated BMP carry two or more lines of data?
For the first problem, use the overlay's alpha; of course the BMP has to be processed to carry alpha.
The second problem is just compositing several BMPs into one.
Why make it so complicated? I just use freetype2 to draw the Chinese characters and letters directly, time display included; no SDL needed at all.
When a pixel's alpha bit is 1, the chip blends it using u32FgAlpha;
when the alpha bit is 0, the chip blends it using u32BgAlpha.
0 means fully transparent; 128 means opaque.
If every pixel of your BMP carries the same alpha, they can only change together; give the pixels you want transparent a different alpha from the ones you want to keep.
Note: this is about the BMP data itself, not the RGN settings.
Example:
suppose your BMP has two pixels (16-bit, with the RGN set to ARGB1555) and the buffer is [0x0000, 0x0123]: the first point is black, the second is colored. To make the black point transparent, change its value to
0x8000 and set u32FgAlpha = 0, u32BgAlpha = 128.
The colored point then stays while the black point becomes transparent, replaced by the video underneath.
As for merging BMPs: if the formats match, it is just memory copying, very simple.
Allocate one big buffer, generate the two BMPs with SDL, then copy them into the big buffer, minding the start address, height, width and stride.
Hand the big buffer to the HiSilicon side as the OSD and you get multiple lines.
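The copy-into-a-big-buffer step can be sketched as a stride-aware blit; nothing here is HiSilicon-specific and the function name is mine:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Copy a 16-bit image into a larger canvas at (x, y), row by row.
 * Both strides are in pixels; the caller guarantees the rectangle fits. */
static void blit16(uint16_t *canvas, int canvas_stride,
                   const uint16_t *src, int src_w, int src_h, int src_stride,
                   int x, int y)
{
    int row;
    for (row = 0; row < src_h; row++)
        memcpy(canvas + (y + row) * canvas_stride + x,
               src + row * src_stride,
               (size_t)src_w * sizeof(uint16_t));
}
```

Calling it twice with two SDL-rendered buffers at different y offsets gives the multi-line OSD described above.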
s32Ret = SAMPLE_RGN_UpdateCanvas("sys_time.bmp", &stBitmap, HI_FALSE, 0, &stSize, stCanvasInfo.u32Stride, stRgnAttrSet.unAttr.stOverlayEx.enPixelFmt);
Note the third and fourth parameters: they control transparency. Set the third to HI_TRUE and the fourth to the value of the BMP background pixels that should become transparent (you can find that value with a printf).
You can also just test it: set the fourth parameter to 0x8000 (guessed from the code and the posted OSD screenshot); the text in the OSD (the black part; hoping its value really is 0x8000...) should turn transparent.
Refresh the canvas with HI_MPI_RGN_UpdateCanvas every 500 ms or every second and you are done; there is no need to call HI_MPI_RGN_DetachFromChn and HI_MPI_RGN_Destroy each time.
SOC_CHECK(HI_MPI_RGN_GetCanvasInfo(Handle, &stCanvasInfo));
if(NULL == overlay || NULL == overlay->canvas){
SOC_CHECK(HI_MPI_RGN_UpdateCanvas(Handle));
handle_num ++ ;
pthread_mutex_unlock(&_sdk_enc.attr.overlayex_mutex);
continue ;
}else{
memcpy((void *)stCanvasInfo.u32VirtAddr, overlay->canvas->pixels,
overlay->canvas->width * overlay->canvas->height * sizeof(uint16_t));
stCanvasInfo.stSize.u32Height = overlay->canvas->height;
stCanvasInfo.stSize.u32Width = overlay->canvas->width;
}
void DrawPixel (BMP *bmp, int x, int y, int color) {
int bpp;
Uint8 *p;
// Clip against the surface's clip rect
if(
x < bmp->clip_rect.x ||
x >= bmp->clip_rect.x + bmp->clip_rect.w ||
y < bmp->clip_rect.y ||
y >= bmp->clip_rect.y + bmp->clip_rect.h
)
return;
bpp = bmp->format->BytesPerPixel;
// Here p is the address to the pixel we want to set
p = (Uint8 *)bmp->pixels + y * bmp->pitch + x * bpp;
if (bpp==2) { *(Uint16 *)p = color; return; }
if (bpp==4) *(Uint32 *)p = color;
}// DrawPixel()
void DrawHline (BMP *bmp, int x1, int y, int x2, int color) {
int i; // counter
Uint8 *p; // pixel
if ( y < bmp->clip_rect.y || y > bmp->clip_rect.y + bmp->clip_rect.h-1)
return;
// Set clipping
if (x1 < bmp->clip_rect.x) x1 = bmp->clip_rect.x;
if (x2 > bmp->clip_rect.x + bmp->clip_rect.w-1) x2 = bmp->clip_rect.x + bmp->clip_rect.w-1;
int bpp = bmp->format->BytesPerPixel;
//Here p is the address to the pixel we want to set
p = (Uint8 *)bmp->pixels + y * bmp->pitch + x1 * bpp;
switch (bpp) {
case 2:
for (i = x1; i <= x2; i++) {
*(Uint16 *)p = color; // Set color
p += bpp; // Increment
}
break;
case 4:
for (i = x1; i <= x2; i++) {
*(Uint32 *)p = color; // Set color
p += bpp; // Increment
}
break;
}
}// hline ();
void DrawVline (BMP *bmp, int x, int y1, int y2, int color) {
const int bpp = bmp->format->BytesPerPixel;
int i;
Uint8 *p; // pixel
if ( x < bmp->clip_rect.x || x > bmp->clip_rect.x + bmp->clip_rect.w-1 )
return;
// Set clipping
if ( y1 < bmp->clip_rect.y ) y1 = bmp->clip_rect.y;
if ( y2 > bmp->clip_rect.y + bmp->clip_rect.h-1 ) y2 = bmp->clip_rect.y + bmp->clip_rect.h-1;
//Here p is the address to the pixel we want to set
p = (Uint8 *)bmp->pixels + y1 * bmp->pitch + x * bpp;
switch (bpp) {
case 2:
for (i = y1; i <= y2; i++) {
*(Uint16 *)p = color; // Set color
p += bmp->pitch; // Increment
}
break;
case 4:
for (i = y1; i <= y2; i++) {
*(Uint32 *)p = color; // Set color
p += bmp->pitch; // Increment
}
break;
}
}// vline ();
void DrawRect (BMP *bmp, int x, int y, int w, int h, int color) {
DrawHline(bmp, x, y, x+w, color);
DrawHline(bmp, x, y+h, x+w, color);
DrawVline(bmp, x, y, y+h, color);
DrawVline(bmp, x+w, y, y+h, color);
}
void DrawRectR (BMP *bmp, int x, int y, int w, int h, int color) {
DrawHline(bmp, x+2, y, x+w-3, color);
DrawHline(bmp, x+2, y+h-1, x+w-3, color);
DrawVline(bmp, x, y+2, y+h-3, color);
DrawVline(bmp, x+w-1, y+2, y+h-3, color);
DrawPixel(bmp, x+1, y+1, color);
DrawPixel(bmp, x+w-2, y+1, color);
DrawPixel(bmp, x+1, y+h-2, color);
DrawPixel(bmp, x+w-2, y+h-2, color);
}
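For reference, the same outline can be drawn without the SDL-style surface struct; a minimal sketch over a bare 16-bit buffer (stride in pixels, w/h counted in pixels, no clipping — the name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Draw a one-pixel rectangle outline into a 16-bit pixel buffer. */
static void rect16(uint16_t *buf, int stride, int x, int y,
                   int w, int h, uint16_t color)
{
    int i;
    for (i = 0; i < w; i++) {
        buf[y * stride + x + i] = color;             /* top edge */
        buf[(y + h - 1) * stride + x + i] = color;   /* bottom edge */
    }
    for (i = 0; i < h; i++) {
        buf[(y + i) * stride + x] = color;           /* left edge */
        buf[(y + i) * stride + x + w - 1] = color;   /* right edge */
    }
}
```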
/*
void draw_rectr (BMP *bmp, int x1, int y1, int x2, int y2, int color) {
short ponto[] = {
x1+1, y1+1,
x2-1, y1+1,
x2-1, y2-1,
x1+1, y2-1
};
draw_hline( bmp, x1+2, y1, x2-2, color ); // top
draw_vline( bmp, x2, y1+2, y2-2, color ); // right >
draw_hline( bmp, x1+2, y2, x2-2, color ); // bottom
draw_vline( bmp, x1, y1+2, y2-2, color ); // left <
draw_point (bmp, 8, ponto, color);
}
*/
#define FB_BK_COLOR 0xfc00//0x7c00
int GetOSDLayerInfo(int Handle,int * layer_width, int * layer_height,int * enPixelFmt)
{
int iRet = 0;
RGN_CANVAS_INFO_S stCanvasInfo;
iRet = HI_MPI_RGN_GetCanvasInfo(Handle, &stCanvasInfo);
if(HI_SUCCESS != iRet)
{
printf("HI_MPI_RGN_GetCanvasInfo failed! s32Ret: 0x%x.\n", iRet);
return -1;
}
/* UpdateCanvas releases the canvas obtained by GetCanvasInfo */
iRet = HI_MPI_RGN_UpdateCanvas(Handle);
if(HI_SUCCESS != iRet)
{
printf("HI_MPI_RGN_UpdateCanvas failed! s32Ret: 0x%x.\n", iRet);
return -1;
}
*layer_width = stCanvasInfo.stSize.u32Width;
*layer_height = stCanvasInfo.stSize.u32Height;
*enPixelFmt = stCanvasInfo.enPixelFmt;
return 1;
}
int OSDLayerDrawClear(int Handle) {
HI_U32 i, count;
HI_U16 *screen_buf;
int iRet; RGN_CANVAS_INFO_S stCanvasInfo;
iRet = HI_MPI_RGN_GetCanvasInfo(Handle, &stCanvasInfo);
if(HI_SUCCESS != iRet)
{
printf("HI_MPI_RGN_GetCanvasInfo failed! s32Ret: 0x%x.\n", iRet);
return -1;
}
/* memset() replicates a single byte, so it cannot write the 16-bit
   pattern FB_BK_COLOR (0xfc00); fill pixel by pixel instead */
screen_buf = (HI_U16 *)stCanvasInfo.u32VirtAddr;
count = stCanvasInfo.u32Stride / 2 * stCanvasInfo.stSize.u32Height;
for (i = 0; i < count; i++)
    screen_buf[i] = FB_BK_COLOR;
iRet = HI_MPI_RGN_UpdateCanvas(Handle);
if (HI_SUCCESS != iRet) {
DPRINTK("HI_MPI_RGN_UpdateCanvas fail! s32Ret: 0x%x.\n", iRet);
return -1;
}
return 1;
}
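A note on clearing: memset() replicates a single byte, so it cannot write a two-byte color such as 0xfc00. A standalone 16-bit fill (plain C, no MPP dependency, name is mine):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Fill count 16-bit pixels with value -- what memset cannot do for
 * patterns whose two bytes differ. */
static void fill16(uint16_t *dst, size_t count, uint16_t value)
{
    size_t i;
    for (i = 0; i < count; i++)
        dst[i] = value;
}
```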
int OSDLayerDrawPicUseCoordinate(int Handle,int start_x,int start_y,int pic_x,int pic_y,int pic_show_w,int pic_show_h,int pic_width,int pic_height,char * pic_data)
{
int x,y;
short * screen_buf;
int iRet;
RGN_CANVAS_INFO_S stCanvasInfo;
iRet = HI_MPI_RGN_GetCanvasInfo(Handle, &stCanvasInfo);
if(HI_SUCCESS != iRet)
{
printf("HI_MPI_RGN_GetCanvasInfo failed! s32Ret: 0x%x.\n", iRet);
return -1;
}
if( (start_x + pic_show_w) > stCanvasInfo.stSize.u32Width ||
(start_y + pic_show_h) > stCanvasInfo.stSize.u32Height )
{
DPRINTK("[%d] screen(%d,%d,%d,%d) > (%d,%d) error!\n",Handle,pic_x,pic_y,
pic_show_w, pic_show_h,stCanvasInfo.stSize.u32Width,stCanvasInfo.stSize.u32Height );
return -1;
}
{
short * pic_buf;
int pic_line = 0;
screen_buf = (short *)stCanvasInfo.u32VirtAddr;
pic_buf = (short *)pic_data;
pic_line = pic_y; //printf("first pixel is %x\n",pic_buf[0]);
for( y = start_y; y < start_y + pic_show_h; y++ )
{
memcpy(&screen_buf[y* stCanvasInfo.u32Stride /2 + start_x],
&pic_buf[pic_line*pic_width + pic_x],pic_show_w*2);
pic_line++;
}
}
iRet = HI_MPI_RGN_UpdateCanvas(Handle);
if (HI_SUCCESS != iRet) {
DPRINTK("HI_MPI_RGN_UpdateCanvas fail! s32Ret: 0x%x.\n", iRet);
return -1;
}
return 1;
}
int MSA_OSDLayerDrawPic(MSA_HANDLE hHandle, MSA_DRAW_OSD_PIC_ST stDrawOsd)
{
int iRet = 0;
int iRetNo = MSA_SUCCESS;
MSA_CHANNEL_INFO_ST * pChan = (MSA_CHANNEL_INFO_ST *)hHandle;
if( gMsaCtrl.isInit == 0 )
return MSA_ERR_NOT_INIT;
MSA_OpLock();
if( hHandle == NULL )
{
iRetNo = MSA_ERR_NULL_PTR;
goto err;
}
if( pChan->iOsdIsClear == 0 )
{
OSDLayerDrawClear(pChan->iOsdLayerChan);
pChan->iOsdIsClear = 1;
}
iRet = OSDLayerDrawPicUseCoordinate(pChan->iOsdLayerChan,stDrawOsd.screen_x,stDrawOsd.screen_y,\
stDrawOsd.pic_show_offset_x,stDrawOsd.pic_show_offset_y,stDrawOsd.pic_show_w,stDrawOsd.pic_show_h,\
stDrawOsd.pic_width,stDrawOsd.pic_height,stDrawOsd.pic_data_ptr);
if (iRet < 0)
{
DPRINTK("OSDLayerDrawPicUseCoordinate failed! s32Ret: 0x%x.\n", iRet);
iRetNo = MSA_FAILED;
goto err;
}
MSA_OpUnLock();
return iRetNo;
err:
MSA_OpUnLock();
return iRetNo;
}
HI_S32 RegionCtr::SAMPLE_RGN_SetOverlayPosToVpss(RGN_HANDLE Handle,VPSS_GRP VpssGrp,POINT_S &Point)
{
MPP_CHN_S stChn;
HI_S32 s32Ret = HI_SUCCESS;
RGN_CHN_ATTR_S stChnAttr;
stChn.enModId = HI_ID_VPSS;
stChn.s32DevId = VpssGrp;
stChn.s32ChnId = 0;
s32Ret = HI_MPI_RGN_GetDisplayAttr(Handle,&stChn,&stChnAttr);
if(s32Ret != HI_SUCCESS){
qDebug("HI_MPI_RGN_GetDisplayAttr failed with %#x!",s32Ret);
return s32Ret;
}
// qDebug()<<"back setRgnPoint:"<
stChnAttr.unChnAttr.stOverlayChn.stPoint = Point;  /* write back the new position */
s32Ret = HI_MPI_RGN_SetDisplayAttr(Handle,&stChn,&stChnAttr);
if(s32Ret != HI_SUCCESS){
qDebug("HI_MPI_RGN_SetDisplayAttr failed with %#x!",s32Ret);
}
return s32Ret;
}
static inline uint16_t overlay_pixel_argb4444(stSDK_ENC_VIDEO_OVERLAY_PIXEL pixel)
{
return ((pixel.alpha>>4)<<12)|((pixel.red>>4)<<8)|((pixel.green>>4)<<4)|((pixel.blue>>4)<<0);
}
static inline stSDK_ENC_VIDEO_OVERLAY_PIXEL overlay_pixel_argb8888(uint16_t pixel)
{
stSDK_ENC_VIDEO_OVERLAY_PIXEL pixel_8888;
pixel_8888.alpha = ((pixel>>12)<<4) & 0xff;
pixel_8888.red = ((pixel>>8)<<4) & 0xff;
pixel_8888.green = ((pixel>>4)<<4) & 0xff;
pixel_8888.blue = ((pixel>>0)<<4) & 0xff;
return pixel_8888;
}
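A standalone roundtrip of the two converters above, with a local stand-in for the SDK's pixel struct (assumed to carry 8-bit a/r/g/b fields); each channel keeps only its top 4 bits, so a roundtrip loses the low nibble:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint8_t alpha, red, green, blue; } px8888_t; /* stand-in struct */

static uint16_t px_to_4444(px8888_t p)
{
    return (uint16_t)(((p.alpha >> 4) << 12) | ((p.red >> 4) << 8) |
                      ((p.green >> 4) << 4) | (p.blue >> 4));
}

static px8888_t px_from_4444(uint16_t p)
{
    px8888_t out;
    out.alpha = (uint8_t)(((p >> 12) & 0xF) << 4);
    out.red   = (uint8_t)(((p >> 8) & 0xF) << 4);
    out.green = (uint8_t)(((p >> 4) & 0xF) << 4);
    out.blue  = (uint8_t)((p & 0xF) << 4);
    return out;
}
```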
static int overlay_canvas_put_pixel(LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas, int x, int y, stSDK_ENC_VIDEO_OVERLAY_PIXEL pixel)
{
if(canvas){
if(x < canvas->width && y < canvas->height){
if(NULL != canvas->pixels){
uint16_t *const pixels = (uint16_t*)canvas->pixels;
*(pixels + y * canvas->width + x) = overlay_pixel_argb4444(pixel);
return 0;
}
}
}
return -1;
}
static int overlay_canvas_get_pixel(LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas, int x, int y, stSDK_ENC_VIDEO_OVERLAY_PIXEL* ret_pixel)
{
if(canvas){
if(x < canvas->width && y < canvas->height){
if(NULL != canvas->pixels){
uint16_t *const pixels = (uint16_t*)canvas->pixels;
if(ret_pixel){
*ret_pixel = overlay_pixel_argb8888(*(pixels + y * canvas->width + x));
return 0;
}
}
}
}
return -1;
}
static bool overlay_canvas_match_pixel(LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas, stSDK_ENC_VIDEO_OVERLAY_PIXEL pixel1, stSDK_ENC_VIDEO_OVERLAY_PIXEL pixel2)
{
return overlay_pixel_argb4444(pixel1) == overlay_pixel_argb4444(pixel2);
}
static int overlay_canvas_put_rect(LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas, int x, int y, size_t width, size_t height,stSDK_ENC_VIDEO_OVERLAY_PIXEL pixel)
{
if(canvas){
if(x < canvas->width && y < canvas->height){
if(NULL != canvas->pixels){
int i, ii;
uint16_t *const pixels = (uint16_t*)(canvas->pixels);
uint16_t const pixel_4444 = overlay_pixel_argb4444(pixel);
if(x + width >= canvas->width){
width = canvas->width - x;
}
if(y + height >= canvas->height){
height = canvas->height - y;
}
for(i = 0; i < height; ++i){
uint16_t* pixel_pos = pixels + (y + i) * canvas->width + x; /* offset by rect origin */
if(0 == i || height - 1 == i){
// put one line
for(ii = 0; ii < width; ++ii){
*pixel_pos++ = pixel_4444;
}
}else{
// put 2 dots
pixel_pos[0] = pixel_pos[width - 1] = pixel_4444;
}
}
return 0;
}
}
}
return -1;
}
static int overlay_canvas_fill_rect(LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas, int x, int y, size_t width, size_t height, stSDK_ENC_VIDEO_OVERLAY_PIXEL pixel)
{
if(canvas){
if(!width){
width = canvas->width;
}
if(!height){
height = canvas->height;
}
if(x < canvas->width && y < canvas->height){
if(NULL != canvas->pixels){
int i, ii;
uint16_t *const pixels = (uint16_t*)(canvas->pixels);
uint16_t const pixel_4444 = overlay_pixel_argb4444(pixel);
if(x + width >= canvas->width){
width = canvas->width - x;
}
if(y + height >= canvas->height){
height = canvas->height - y;
}
for(i = 0; i < height; ++i){
uint16_t* pixel_pos = pixels + (y + i) * canvas->width + x; /* offset by rect origin */
for(ii = 0; ii < width; ++ii){
*pixel_pos++ = pixel_4444;
}
}
return 0;
}
}
}
return -1;
}
static int overlay_canvas_erase_rect(LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas, int x, int y, size_t width, size_t height)
{
stSDK_ENC_VIDEO_OVERLAY_PIXEL erase_pixel;
erase_pixel.alpha = 0;
erase_pixel.red = 0;
erase_pixel.green = 0;
erase_pixel.blue = 0;
if(NULL == canvas){
return -1;
}
return canvas->fill_rect(canvas, x, y, width, height, erase_pixel);
}
static LP_SDK_ENC_VIDEO_OVERLAY_CANVAS enc_create_overlay_canvas(size_t width, size_t height)
{
int i = 0;
if(width > 0 && height > 0){
LP_SDK_ENC_VIDEO_OVERLAY_CANVAS const canvas_stock = _sdk_enc.attr.canvas_stock;
for(i = 0; i < HI_VENC_OVERLAY_CANVAS_STOCK_REF; ++i){
LP_SDK_ENC_VIDEO_OVERLAY_CANVAS const canvas = canvas_stock + i;
if(!canvas->pixels){
// this slot has not been allocated yet
canvas->width = SDK_ALIGNED_BIG_ENDIAN(width, 2); // aligned to 2 pixel
canvas->height = SDK_ALIGNED_BIG_ENDIAN(height, 2);
// hisilicon use argb444 format
canvas->pixel_format.rmask = 0x0f00;
canvas->pixel_format.gmask = 0x00f0;
canvas->pixel_format.bmask = 0x000f;
canvas->pixel_format.amask = 0xf000;
// frame buffer
// canvas->pixels = calloc(canvas->width * canvas->height * sizeof(uint16_t), 1);
HI_MPI_SYS_MmzAlloc(&canvas->phy_addr, (void**)(&canvas->pixels),
NULL, NULL,canvas->width * canvas->height * sizeof(uint16_t));
// interfaces
canvas->put_pixel = overlay_canvas_put_pixel;
canvas->get_pixel = overlay_canvas_get_pixel;
canvas->match_pixel = overlay_canvas_match_pixel;
canvas->put_rect = overlay_canvas_put_rect;
canvas->fill_rect = overlay_canvas_fill_rect;
canvas->erase_rect = overlay_canvas_erase_rect;
return canvas;
}
}
}
return NULL;
}
static LP_SDK_ENC_VIDEO_OVERLAY_CANVAS enc_load_overlay_canvas(const char *bmp24_path)
{
int i = 0, ii = 0;
int ret = 0;
typedef struct BIT_MAP_FILE_HEADER {
char type[2]; // "BM" (0x4d42)
uint32_t file_size;
uint32_t reserved_zero;
uint32_t off_bits; // data area offset to the file set (unit. byte)
uint32_t info_size;
uint32_t width;
uint32_t height;
uint16_t planes; // 0 - 1
uint16_t bit_count; // 0 - 1
uint32_t compression; // 0 - 1
uint32_t size_image; // 0 - 1
uint32_t xpels_per_meter;
uint32_t ypels_per_meter;
uint32_t clr_used;
uint32_t clr_important;
}__attribute__((packed)) BIT_MAP_FILE_HEADER_t; //
FILE *bmp_fid = NULL;
bmp_fid = fopen(bmp24_path, "rb");
if(NULL != bmp_fid){
BIT_MAP_FILE_HEADER_t bmp_hdr;
ret = fread(&bmp_hdr, 1, sizeof(bmp_hdr), bmp_fid);
if(sizeof(bmp_hdr) == ret){
if('B' == bmp_hdr.type[0]
&& 'M' == bmp_hdr.type[1]
&& 24 == bmp_hdr.bit_count){
int const bmp_width = bmp_hdr.width;
int const bmp_height = bmp_hdr.height;
char *canvas_cache = calloc(bmp_hdr.size_image, 1);
LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas = NULL;
stSDK_ENC_VIDEO_OVERLAY_PIXEL canvas_pixel;
SOC_INFO("IMAGE %dx%d size=%d offset=%d info=%d", bmp_width, bmp_height, bmp_hdr.size_image, bmp_hdr.off_bits, bmp_hdr.info_size);
// load image to buf
if(0 == fseek(bmp_fid, bmp_hdr.off_bits, SEEK_SET)){
ret = fread(canvas_cache, 1, bmp_hdr.size_image, bmp_fid);
}
fclose(bmp_fid);
bmp_fid = NULL;
// load to canvas
//canvas_pixel.argb8888 = 0xffffffff;
canvas = sdk_enc->create_overlay_canvas(bmp_width, bmp_height);
for(i = 0; i < bmp_height; ++i){
// BMP rows are stored bottom-up, each padded to a 4-byte boundary
char *const line_offset = canvas_cache + SDK_ALIGNED_BIG_ENDIAN(3 * bmp_width, 4) * (bmp_height - 1 - i);
for(ii = 0; ii < bmp_width; ++ii){
char *const column_offset = line_offset + 3 * ii;
// a 24-bit BMP stores each pixel in B, G, R order
canvas_pixel.alpha = 0xff;
canvas_pixel.blue  = (uint8_t)column_offset[0];
canvas_pixel.green = (uint8_t)column_offset[1];
canvas_pixel.red   = (uint8_t)column_offset[2];
canvas->put_pixel(canvas, ii, i, canvas_pixel);
}
}
//canvas->fill_rect(canvas, 0, 0, bmp_width, bmp_height, canvas_pixel);
// free the canvas cache
free(canvas_cache);
canvas_cache = NULL;
return canvas;
}
}
fclose(bmp_fid);
bmp_fid = NULL;
}
return NULL;
}
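Two small helpers capture the BMP row geometry the loader has to deal with (24-bit rows padded to 4 bytes, stored bottom-up); a sketch with names of my own:

```c
#include <assert.h>

/* Padded byte stride of one 24-bit BMP row. */
static int bmp24_stride(int width) { return (3 * width + 3) & ~3; }

/* File row index holding image row `row` (BMP rows are bottom-up). */
static int bmp24_file_row(int row, int height) { return height - 1 - row; }
```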
static void enc_release_overlay_canvas(LP_SDK_ENC_VIDEO_OVERLAY_CANVAS canvas)
{
if(canvas){
canvas->width = 0;
canvas->height = 0;
if(canvas->pixels){
// free(canvas->pixels);
SOC_CHECK(HI_MPI_SYS_MmzFree(canvas->phy_addr,canvas->pixels));
canvas->pixels = NULL; // very important
}
// because the canvas comes from the static stock,
// the struct itself does not need to be freed
}
}
static lpSDK_ENC_VIDEO_OVERLAY_ATTR enc_lookup_overlay_byname(int vin, int stream, const char* name)
{
int i = 0;
if(vin < HI_VENC_CH_BACKLOG_REF
&& stream < HI_VENC_STREAM_BACKLOG_REF){
lpSDK_ENC_VIDEO_OVERLAY_ATTR_SET const overlay_set = &_sdk_enc.attr.video_overlay_set[vin][stream];
// check name override
for(i = 0; i < HI_VENC_OVERLAY_BACKLOG_REF; ++i){
lpSDK_ENC_VIDEO_OVERLAY_ATTR const overlay = &overlay_set->attr[i];
//SOC_DEBUG("Looking up \"%s\"/\"%s\"", name, overlay->name);
if(overlay->canvas && 0 == strcmp(overlay->name, name)){
// what's my target
return overlay;
}
}
}
return NULL;
}
HI_S32 enc_rgn_add_reverse_color_task(TDE_HANDLE handle,
TDE2_SURFACE_S *pstForeGround, TDE2_RECT_S *pstForeGroundRect,
TDE2_SURFACE_S *pstBackGround, TDE2_RECT_S *pstBackGroundRect)
{
HI_S32 s32Ret;
TDE2_OPT_S stOpt = {0};
HI_ASSERT(NULL != pstForeGround);
HI_ASSERT(NULL != pstForeGroundRect);
HI_ASSERT(NULL != pstBackGround);
HI_ASSERT(NULL != pstBackGroundRect);
stOpt.enAluCmd = TDE2_ALUCMD_ROP;
stOpt.enRopCode_Alpha = TDE2_ROP_COPYPEN;
stOpt.enRopCode_Color = TDE2_ROP_NOT;
s32Ret = HI_TDE2_Bitblit(handle, pstBackGround, pstBackGroundRect, pstForeGround,
pstForeGroundRect, pstBackGround, pstBackGroundRect, &stOpt);
if (HI_SUCCESS != s32Ret)
{
printf("HI_TDE2_Bitblit fail! s32Ret: 0x%x.\n", s32Ret);
return s32Ret;
}
return HI_SUCCESS;
}
HI_S32 enc_rgn_reverse_osd_color_tde(TDE2_SURFACE_S *pstSrcSurface, TDE2_SURFACE_S *pstDstSurface,
const VPSS_REGION_INFO_S *pstRgnInfo)
{
HI_S32 i;
HI_S32 s32Ret;
TDE_HANDLE handle;
TDE2_RECT_S stRect;
HI_ASSERT(NULL != pstSrcSurface);
HI_ASSERT(NULL != pstDstSurface);
HI_ASSERT(NULL != pstRgnInfo);
s32Ret = HI_TDE2_Open();
if (HI_SUCCESS != s32Ret)
{
printf("HI_TDE2_Open fail! s32Ret: 0x%x.\n", s32Ret);
return s32Ret;
}
handle = HI_TDE2_BeginJob();
if (handle < 0)
{
printf("HI_TDE2_BeginJob fail!\n");
return HI_FAILURE;
}
stRect.s32Xpos = 0;
stRect.s32Ypos = 0;
stRect.u32Width = pstSrcSurface->u32Width;
stRect.u32Height = pstSrcSurface->u32Height;
s32Ret = HI_TDE2_QuickCopy(handle, pstSrcSurface, &stRect, pstDstSurface, &stRect);
if (HI_SUCCESS != s32Ret)
{
printf("HI_TDE2_QuickCopy fail! s32Ret: 0x%x.\n", s32Ret);
HI_TDE2_CancelJob(handle);
return s32Ret;
}
for (i = 0; i < pstRgnInfo->u32RegionNum; ++i)
{
stRect.s32Xpos = pstRgnInfo->pstRegion[i].s32X;
stRect.s32Ypos = pstRgnInfo->pstRegion[i].s32Y;
stRect.u32Width = pstRgnInfo->pstRegion[i].u32Width;
stRect.u32Height = pstRgnInfo->pstRegion[i].u32Height;
s32Ret = enc_rgn_add_reverse_color_task(handle, pstSrcSurface, &stRect, pstDstSurface, &stRect);
if (HI_SUCCESS != s32Ret)
{
printf("enc_rgn_add_reverse_color_task fail! s32Ret: 0x%x.\n", s32Ret);
HI_TDE2_CancelJob(handle);
return s32Ret;
}
}
s32Ret = HI_TDE2_EndJob(handle, HI_FALSE, HI_FALSE, 5);
if (HI_SUCCESS != s32Ret)
{
printf("HI_TDE2_EndJob fail! s32Ret: 0x%x.\n", s32Ret);
HI_TDE2_CancelJob(handle);
return s32Ret;
}
s32Ret = HI_TDE2_WaitForDone(handle);
if (HI_SUCCESS != s32Ret)
{
printf("HI_TDE2_WaitForDone fail! s32Ret: 0x%x.\n", s32Ret);
return s32Ret;
}
return HI_SUCCESS;
}
HI_S32 enc_rgn_conv_osd_cavas_to_tde_surface(TDE2_SURFACE_S *pstSurface, const RGN_CANVAS_INFO_S *pstCanvasInfo)
{
HI_ASSERT((NULL != pstSurface) && (NULL != pstCanvasInfo));
switch (pstCanvasInfo->enPixelFmt)
{
case PIXEL_FORMAT_RGB_4444:
{
pstSurface->enColorFmt = TDE2_COLOR_FMT_ARGB4444;
break ;
}
case PIXEL_FORMAT_RGB_1555:
{
pstSurface->enColorFmt = TDE2_COLOR_FMT_ARGB1555;
break ;
}
case PIXEL_FORMAT_RGB_8888:
{
pstSurface->enColorFmt = TDE2_COLOR_FMT_ARGB8888;
break ;
}
default :
{
printf("[Func]:%s [Line]:%d [Info]:invalid Osd pixel format(%d)\n",
__FUNCTION__, __LINE__, pstCanvasInfo->enPixelFmt);
return HI_FAILURE;
}
}
pstSurface->bAlphaExt1555 = HI_FALSE;
pstSurface->bAlphaMax255 = HI_TRUE;
pstSurface->u32PhyAddr = pstCanvasInfo->u32PhyAddr;
pstSurface->u32Width = pstCanvasInfo->stSize.u32Width;
pstSurface->u32Height = pstCanvasInfo->stSize.u32Height;
pstSurface->u32Stride = pstCanvasInfo->u32Stride;
return HI_SUCCESS;
}
HI_VOID *enc_rgn_vpss_osd_reverse_thread()
{
pthread_detach(pthread_self());
usleep(1000*1000);
int handle_num = 0;
while (HI_FALSE == bExitOverlayLoop){
while((handle_num < _sdk_enc.attr.overlay_handle_num) && (HI_FALSE == bExitOverlayRelease) ){
if(NULL == _sdk_enc.attr.overlay[handle_num] ){
handle_num ++ ;
continue ;
}else{
if(0 == strcmp("clock",_sdk_enc.attr.overlay[handle_num]->name)){
handle_num ++ ;
continue;
}
}
RGN_ATTR_S stRgnAttrSet;
RGN_CHN_ATTR_S stChnAttr;
HI_S32 i,VpssChn,VpssChn_sub;
HI_S32 k = 0, j = 0;
RGN_HANDLE Handle;
RGN_HANDLE Handle_sub = 0;
HI_U32 u32OsdRectCnt;
SIZE_S stSize;
RGN_OSD_REVERSE_INFO_S stOsdReverseInfo;
RECT_S astOsdLumaRect[64];
RECT_S astOsdRevRect[OSD_REVERSE_RGN_MAXCNT];
HI_U32 au32LumaData[OSD_REVERSE_RGN_MAXCNT];
HI_S32 s32Ret = HI_SUCCESS;
TDE2_SURFACE_S stRgnOrignSurface = {0};
TDE2_SURFACE_S stRgnSurface = {0};
RGN_CANVAS_INFO_S stCanvasInfo;
VPSS_REGION_INFO_S stReverseRgnInfo;
MPP_CHN_S stChn;
MPP_CHN_S stMppChn = {0};
RGN_CHN_ATTR_S stOsdChnAttr = {0};
lpSDK_ENC_VIDEO_OVERLAY_ATTR overlay;
int width_max;
VPSS_CHN_MODE_S stVpssMode;
VPSS_CROP_INFO_S stCropInfo;
int stream_width;
HI_U32 per_width;
memset(&stOsdReverseInfo, 0, sizeof(stOsdReverseInfo));
Handle = _sdk_enc.attr.overlay_handle[handle_num];
VpssChn = _sdk_enc.attr.vpss_chn[handle_num];
u32OsdRectCnt = 16;//32
stOsdReverseInfo.Handle = Handle;
stOsdReverseInfo.VpssGrp = 0;
stOsdReverseInfo.VpssChn = 2;//VpssChn;
stOsdReverseInfo.u8PerPixelLumaThrd = 128;
stOsdReverseInfo.stLumaRgnInfo.u32RegionNum = u32OsdRectCnt;
stOsdReverseInfo.stLumaRgnInfo.pstRegion = astOsdLumaRect;
if(1 == VpssChn){//main stream
VpssChn_sub = VpssChn + 1;
Handle_sub = _sdk_enc.attr.overlay_handle[handle_num + 1];
}else if (2 == VpssChn){//sub stream
VpssChn_sub = VpssChn;
Handle_sub = Handle;
}
if(0 == Handle_sub){
return NULL; /* pointer-returning thread function */
}
pthread_mutex_lock(&_sdk_enc.attr.overlayex_mutex);
SOC_CHECK(HI_MPI_RGN_GetAttr(Handle_sub, &stRgnAttrSet));
stSize.u32Width = stRgnAttrSet.unAttr.stOverlay.stSize.u32Width;
stSize.u32Height = stRgnAttrSet.unAttr.stOverlay.stSize.u32Height;
stChn.enModId = HI_ID_VENC;
stChn.s32DevId = 0;
stChn.s32ChnId = _hi3518_venc_ch_map[VpssChn_sub]; // 1
SOC_CHECK(HI_MPI_RGN_GetDisplayAttr(Handle_sub, &stChn, &stChnAttr));
SOC_CHECK(HI_MPI_VPSS_GetChnCrop(0,VpssChn_sub,&stCropInfo));
SOC_CHECK(HI_MPI_VPSS_GetChnMode(0,VpssChn_sub,&stVpssMode));
if(stCropInfo.bEnable == 1 ){
stream_width = stCropInfo.stCropRect.u32Width;
}else{
stream_width = stVpssMode.u32Width ;
}
per_width = stSize.u32Height;
u32OsdRectCnt = stSize.u32Width / per_width;
stOsdReverseInfo.stLumaRgnInfo.u32RegionNum = u32OsdRectCnt;
for (i=0;i < u32OsdRectCnt; i++)
{
width_max = (per_width * (i+1)) + stChnAttr.unChnAttr.stOverlayChn.stPoint.s32X;
if( width_max > stream_width){
stOsdReverseInfo.stLumaRgnInfo.u32RegionNum = i;
break;
}
astOsdLumaRect[i].s32X = (per_width * i) + stChnAttr.unChnAttr.stOverlayChn.stPoint.s32X;
astOsdLumaRect[i].s32Y = stChnAttr.unChnAttr.stOverlayChn.stPoint.s32Y;
astOsdLumaRect[i].u32Width = per_width;
astOsdLumaRect[i].u32Height = stSize.u32Height;
}
overlay = _sdk_enc.attr.overlay[handle_num];
SOC_CHECK(HI_MPI_RGN_GetCanvasInfo(Handle, &stCanvasInfo));
if(NULL == overlay || NULL == overlay->canvas){
SOC_CHECK(HI_MPI_RGN_UpdateCanvas(Handle));
handle_num ++ ;
pthread_mutex_unlock(&_sdk_enc.attr.overlayex_mutex);
continue ;
}else{
memcpy((void *)stCanvasInfo.u32VirtAddr, overlay->canvas->pixels,
overlay->canvas->width * overlay->canvas->height * sizeof(uint16_t));
stCanvasInfo.stSize.u32Height = overlay->canvas->height;
stCanvasInfo.stSize.u32Width = overlay->canvas->width;
}
SOC_CHECK(enc_rgn_conv_osd_cavas_to_tde_surface(&stRgnSurface, &stCanvasInfo));
stRgnOrignSurface.enColorFmt = TDE2_COLOR_FMT_ARGB4444;
stRgnOrignSurface.bAlphaExt1555 = HI_FALSE;
stRgnOrignSurface.bAlphaMax255 = HI_TRUE;
stRgnOrignSurface.u32PhyAddr = overlay->canvas->phy_addr;
stRgnOrignSurface.u32Width = stCanvasInfo.stSize.u32Width;
stRgnOrignSurface.u32Height = stCanvasInfo.stSize.u32Height;
stRgnOrignSurface.u32Stride = stCanvasInfo.u32Stride;
/* 3.get the display attribute of OSD attached to vpss*/
stMppChn.enModId = HI_ID_VENC;
stMppChn.s32DevId = 0;
stMppChn.s32ChnId = _hi3518_venc_ch_map[VpssChn];//1;
s32Ret = HI_MPI_RGN_GetDisplayAttr(Handle, &stMppChn, &stOsdChnAttr);
stReverseRgnInfo.pstRegion = (RECT_S *)astOsdRevRect;
HI_U32 interval_size;
/* 4.get the sum of luma of a region specified by user*/
s32Ret = HI_MPI_VPSS_GetRegionLuma(stOsdReverseInfo.VpssGrp, stOsdReverseInfo.VpssChn, &(stOsdReverseInfo.stLumaRgnInfo), au32LumaData, -1);
if (HI_SUCCESS != s32Ret)
{
printf("[Func]:%s [Line]:%d [Info]:HI_MPI_VPSS_GetRegionLuma VpssGrp=%d failed, s32Ret: 0x%x,overlay name:%s.\n",
__FUNCTION__, __LINE__, stOsdReverseInfo.VpssGrp, s32Ret,_sdk_enc.attr.overlay[handle_num]->name);
}
/* 5.decide which region to be reverse color according to the sum of the region*/
for (k = 0, j = 0; k < stOsdReverseInfo.stLumaRgnInfo.u32RegionNum; ++k)
{
if (au32LumaData[k] > (stOsdReverseInfo.u8PerPixelLumaThrd *
stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Width *
stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Height))
{
/* 6.get the regions to be reverse color */
if(1 == VpssChn){
SOC_CHECK(HI_MPI_RGN_GetAttr(Handle, &stRgnAttrSet));
stSize.u32Width = stRgnAttrSet.unAttr.stOverlay.stSize.u32Width;
stSize.u32Height = stRgnAttrSet.unAttr.stOverlay.stSize.u32Height;
interval_size = stSize.u32Height ;
stReverseRgnInfo.pstRegion[j].s32X = interval_size *k;
stReverseRgnInfo.pstRegion[j].s32Y = 0;
stReverseRgnInfo.pstRegion[j].u32Width = interval_size;
stReverseRgnInfo.pstRegion[j].u32Height = stSize.u32Height;
// printf("main s32X=%d s32Y=%d u32Width=%d u32Height=%d\n",stReverseRgnInfo.pstRegion[j].s32X,
// stReverseRgnInfo.pstRegion[j].s32Y,stReverseRgnInfo.pstRegion[j].u32Width,
// stReverseRgnInfo.pstRegion[j].u32Height);
++j;
}else if (2 == VpssChn){
stReverseRgnInfo.pstRegion[j].s32X = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].s32X
- stOsdChnAttr.unChnAttr.stOverlayChn.stPoint.s32X;
stReverseRgnInfo.pstRegion[j].s32Y = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].s32Y
- stOsdChnAttr.unChnAttr.stOverlayChn.stPoint.s32Y;
stReverseRgnInfo.pstRegion[j].u32Width = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Width;
stReverseRgnInfo.pstRegion[j].u32Height = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Height;
// printf("sub s32X=%d s32Y=%d u32Width=%d u32Height=%d\n",stReverseRgnInfo.pstRegion[j].s32X,
// stReverseRgnInfo.pstRegion[j].s32Y,stReverseRgnInfo.pstRegion[j].u32Width,
// stReverseRgnInfo.pstRegion[j].u32Height);
++j;
}
}
}
stReverseRgnInfo.u32RegionNum = j;
if(NULL == overlay || NULL == overlay->canvas){
SOC_CHECK(HI_MPI_RGN_UpdateCanvas(Handle));
handle_num ++ ;
pthread_mutex_unlock(&_sdk_enc.attr.overlayex_mutex);
continue ;
}else{
/* 8.reverse color */
#if 0
enc_rgn_reverse_osd_color_tde(&stRgnOrignSurface, &stRgnSurface, &stReverseRgnInfo);
#endif
}
// 9.update OSD
SOC_CHECK(HI_MPI_RGN_UpdateCanvas(Handle));
pthread_mutex_unlock(&_sdk_enc.attr.overlayex_mutex);
handle_num++;
}
handle_num = 0;
usleep(500*1000);
}
pthread_exit(NULL);
return 0;
}
static int enc_rgn_vpss_osd_reverse(int handle_num_id)
{
int handle_num = handle_num_id;
RGN_ATTR_S stRgnAttrSet;
RGN_CHN_ATTR_S stChnAttr;
HI_S32 i,VpssChn,VpssChn_sub;
HI_S32 k = 0, j = 0;
RGN_HANDLE Handle;
RGN_HANDLE Handle_sub = 0; /* stays 0 if VpssChn is neither main nor sub stream */
HI_U32 u32OsdRectCnt;
SIZE_S stSize;
RGN_OSD_REVERSE_INFO_S stOsdReverseInfo;
RECT_S astOsdLumaRect[64];
RECT_S astOsdRevRect[OSD_REVERSE_RGN_MAXCNT];
HI_U32 au32LumaData[OSD_REVERSE_RGN_MAXCNT];
HI_S32 s32Ret = HI_SUCCESS;
TDE2_SURFACE_S stRgnOrignSurface = {0};
TDE2_SURFACE_S stRgnSurface = {0};
RGN_CANVAS_INFO_S stCanvasInfo;
VPSS_REGION_INFO_S stReverseRgnInfo;
MPP_CHN_S stChn;
MPP_CHN_S stMppChn = {0};
RGN_CHN_ATTR_S stOsdChnAttr = {0};
lpSDK_ENC_VIDEO_OVERLAY_ATTR overlay;
int width_max;
VPSS_CHN_MODE_S stVpssMode;
VPSS_CROP_INFO_S stCropInfo;
int stream_width;
HI_U32 per_width;
if(NULL == _sdk_enc.attr.overlay[handle_num]){
return -1 ;
}else{
overlay = _sdk_enc.attr.overlay[handle_num];
}
memset(&stOsdReverseInfo, 0, sizeof(stOsdReverseInfo));
Handle = _sdk_enc.attr.overlay_handle[handle_num];
VpssChn = _sdk_enc.attr.vpss_chn[handle_num];
u32OsdRectCnt = 16;//32;
stOsdReverseInfo.Handle = Handle;
stOsdReverseInfo.VpssGrp = 0;
stOsdReverseInfo.VpssChn = 2;
stOsdReverseInfo.u8PerPixelLumaThrd = 128;
stOsdReverseInfo.stLumaRgnInfo.u32RegionNum = u32OsdRectCnt;
stOsdReverseInfo.stLumaRgnInfo.pstRegion = astOsdLumaRect;
if(1 == VpssChn){//main stream
VpssChn_sub = VpssChn + 1;
Handle_sub = _sdk_enc.attr.overlay_handle[handle_num + 1];
}else if (2 == VpssChn){//sub stream
VpssChn_sub = VpssChn;
Handle_sub = Handle;
}
if(0 == Handle_sub){
return -1;
}
pthread_mutex_lock(&_sdk_enc.attr.overlayex_mutex);
SOC_CHECK(HI_MPI_RGN_GetAttr(Handle_sub, &stRgnAttrSet));
stSize.u32Width = stRgnAttrSet.unAttr.stOverlay.stSize.u32Width;
stSize.u32Height = stRgnAttrSet.unAttr.stOverlay.stSize.u32Height;
stChn.enModId = HI_ID_VENC;
stChn.s32DevId = 0;
stChn.s32ChnId = _hi3518_venc_ch_map[VpssChn_sub]; // 1
SOC_CHECK(HI_MPI_RGN_GetDisplayAttr(Handle_sub, &stChn, &stChnAttr));
SOC_CHECK(HI_MPI_VPSS_GetChnCrop(0,VpssChn_sub,&stCropInfo));
SOC_CHECK(HI_MPI_VPSS_GetChnMode(0,VpssChn_sub,&stVpssMode));
if(stCropInfo.bEnable == 1){
stream_width = stCropInfo.stCropRect.u32Width;
}else{
stream_width = stVpssMode.u32Width;
}
per_width = stSize.u32Height;
u32OsdRectCnt = stSize.u32Width / per_width;
stOsdReverseInfo.stLumaRgnInfo.u32RegionNum = u32OsdRectCnt;
for (i=0;i < u32OsdRectCnt; i++)
{
width_max = (per_width * (i+1)) + stChnAttr.unChnAttr.stOverlayChn.stPoint.s32X;
if( width_max > stream_width){
stOsdReverseInfo.stLumaRgnInfo.u32RegionNum = i;
break;
}
astOsdLumaRect[i].s32X = (per_width * i) + stChnAttr.unChnAttr.stOverlayChn.stPoint.s32X;
astOsdLumaRect[i].s32Y = stChnAttr.unChnAttr.stOverlayChn.stPoint.s32Y;
astOsdLumaRect[i].u32Width = per_width;
astOsdLumaRect[i].u32Height = stSize.u32Height;
// printf("[%d] s32X=%d ,s32Y=%d ,u32Width=%d,u32Height=%d \n",i,astOsdLumaRect[i].s32X,astOsdLumaRect[i].s32Y,
// astOsdLumaRect[i].u32Width,astOsdLumaRect[i].u32Height);
}
SOC_CHECK(HI_MPI_RGN_GetCanvasInfo(Handle, &stCanvasInfo));
if(NULL == overlay || NULL == overlay->canvas){
SOC_CHECK(HI_MPI_RGN_UpdateCanvas(Handle));
pthread_mutex_unlock(&_sdk_enc.attr.overlayex_mutex);
return -1 ;
}else{
/* u32VirtAddr is an integer field; cast it back to a pointer before copying */
memcpy((void *)(uintptr_t)stCanvasInfo.u32VirtAddr, overlay->canvas->pixels,
overlay->canvas->width * overlay->canvas->height * sizeof(uint16_t));
stCanvasInfo.stSize.u32Height = overlay->canvas->height;
stCanvasInfo.stSize.u32Width = overlay->canvas->width;
}
SOC_CHECK(enc_rgn_conv_osd_cavas_to_tde_surface(&stRgnSurface, &stCanvasInfo));
stRgnOrignSurface.enColorFmt = TDE2_COLOR_FMT_ARGB4444;
stRgnOrignSurface.bAlphaExt1555 = HI_FALSE;
stRgnOrignSurface.bAlphaMax255 = HI_TRUE;
stRgnOrignSurface.u32PhyAddr = overlay->canvas->phy_addr;
stRgnOrignSurface.u32Width = stCanvasInfo.stSize.u32Width;
stRgnOrignSurface.u32Height = stCanvasInfo.stSize.u32Height;
stRgnOrignSurface.u32Stride = stCanvasInfo.u32Stride;
/* 3.get the display attribute of OSD attached to vpss*/
stMppChn.enModId = HI_ID_VENC;
stMppChn.s32DevId = 0;
stMppChn.s32ChnId = _hi3518_venc_ch_map[VpssChn];//1;
s32Ret = HI_MPI_RGN_GetDisplayAttr(Handle, &stMppChn, &stOsdChnAttr);
stReverseRgnInfo.pstRegion = (RECT_S *)astOsdRevRect;
HI_U32 interval_size;
/* 4.get the sum of luma of a region specified by user*/
s32Ret = HI_MPI_VPSS_GetRegionLuma(stOsdReverseInfo.VpssGrp, stOsdReverseInfo.VpssChn, &(stOsdReverseInfo.stLumaRgnInfo), au32LumaData, -1);
if (HI_SUCCESS != s32Ret)
{
printf("[Func]:%s [Line]:%d [Info]:HI_MPI_VPSS_GetRegionLuma VpssGrp=%d failed, s32Ret: 0x%x,overlay name:%s.\n",
__FUNCTION__, __LINE__, stOsdReverseInfo.VpssGrp, s32Ret,_sdk_enc.attr.overlay[handle_num]->name);
}
/* 5. Decide which regions to reverse-color, based on each region's luma sum */
for (k = 0, j = 0; k < stOsdReverseInfo.stLumaRgnInfo.u32RegionNum; ++k)
{
if (au32LumaData[k] > (stOsdReverseInfo.u8PerPixelLumaThrd *
stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Width *
stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Height))
{
/* 6. Collect the regions whose color will be reversed */
if(1 == VpssChn){
SOC_CHECK(HI_MPI_RGN_GetAttr(Handle, &stRgnAttrSet));
stSize.u32Width = stRgnAttrSet.unAttr.stOverlay.stSize.u32Width;
stSize.u32Height = stRgnAttrSet.unAttr.stOverlay.stSize.u32Height;
interval_size = stSize.u32Height;
stReverseRgnInfo.pstRegion[j].s32X = interval_size *k;
stReverseRgnInfo.pstRegion[j].s32Y = 0;
stReverseRgnInfo.pstRegion[j].u32Width = interval_size;
stReverseRgnInfo.pstRegion[j].u32Height = stSize.u32Height;
// printf("main s32X=%d s32Y=%d u32Width=%d u32Height=%d\n",stReverseRgnInfo.pstRegion[j].s32X,
// stReverseRgnInfo.pstRegion[j].s32Y,stReverseRgnInfo.pstRegion[j].u32Width,
// stReverseRgnInfo.pstRegion[j].u32Height);
++j;
}else if (2 == VpssChn){
stReverseRgnInfo.pstRegion[j].s32X = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].s32X
- stOsdChnAttr.unChnAttr.stOverlayChn.stPoint.s32X;
stReverseRgnInfo.pstRegion[j].s32Y = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].s32Y
- stOsdChnAttr.unChnAttr.stOverlayChn.stPoint.s32Y;
stReverseRgnInfo.pstRegion[j].u32Width = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Width;
stReverseRgnInfo.pstRegion[j].u32Height = stOsdReverseInfo.stLumaRgnInfo.pstRegion[k].u32Height;
// printf("sub s32X=%d s32Y=%d u32Width=%d u32Height=%d\n",stReverseRgnInfo.pstRegion[j].s32X,
// stReverseRgnInfo.pstRegion[j].s32Y,stReverseRgnInfo.pstRegion[j].u32Width,
// stReverseRgnInfo.pstRegion[j].u32Height);
++j;
}
}
}
stReverseRgnInfo.u32RegionNum = j;
if(NULL == overlay || NULL == overlay->canvas){
SOC_CHECK(HI_MPI_RGN_UpdateCanvas(Handle));
pthread_mutex_unlock(&_sdk_enc.attr.overlayex_mutex);
return -1 ;
}else{
/* 8.reverse color */
#if 0
enc_rgn_reverse_osd_color_tde(&stRgnOrignSurface, &stRgnSurface, &stReverseRgnInfo);
#endif
}
// 9.update OSD
SOC_CHECK(HI_MPI_RGN_UpdateCanvas(Handle));
pthread_mutex_unlock(&_sdk_enc.attr.overlayex_mutex);
return 0;
}
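/* The step-5 threshold test above compares each region's luma sum, as returned by
 * HI_MPI_VPSS_GetRegionLuma, against u8PerPixelLumaThrd scaled by the region area.
 * A minimal, self-contained sketch of that predicate follows; the function name and
 * the overflow-safe widening are illustrative, not SDK symbols. */

```c
#include <stdbool.h>
#include <stdint.h>

/* A region is "bright" (and should be reverse-colored) when its summed luma
 * exceeds the per-pixel threshold averaged over every pixel of the region. */
static bool osd_region_is_bright(uint32_t luma_sum, uint8_t per_pixel_thrd,
                                 uint32_t width, uint32_t height)
{
    /* widen before multiplying so large regions do not overflow 32 bits */
    uint64_t threshold = (uint64_t)per_pixel_thrd * width * height;
    return (uint64_t)luma_sum > threshold;
}
```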
static int enc_create_overlay(int vin, int stream, const char* overlay_name,
float x, float y, LP_SDK_ENC_VIDEO_OVERLAY_CANVAS const canvas)
{
int i = 0;
static pthread_t stOsdReverseThread = 0; /* pthread_t is not a pointer type */
if(NULL != canvas
&& vin < HI_VENC_CH_BACKLOG_REF
&& stream < HI_VENC_STREAM_BACKLOG_REF ){
int canvas_x = 0, canvas_y = 0;
size_t canvas_width = 0, canvas_height = 0;
LP_SDK_ENC_STREAM_H264_ATTR const streamH264Attr = &_sdk_enc.attr.h264_attr[vin][stream];
lpSDK_ENC_VIDEO_OVERLAY_ATTR_SET const overlay_set = &_sdk_enc.attr.video_overlay_set[vin][stream];
// check name override
if(NULL != enc_lookup_overlay_byname(vin, stream, overlay_name)){
SOC_DEBUG("Overlay name %s override", overlay_name);
return -1;
}
//printf("rect: %f/%f %d/%d\n", x, y, streamH264Attr->width, streamH264Attr->height);
VPSS_CHN_MODE_S stVpssMode;
VPSS_CROP_INFO_S stCropInfo;
HI_S32 VpssChn = _hi3518_vpss_ch_map[__HI_VENC_CH(vin, stream)];
SOC_CHECK(HI_MPI_VPSS_GetChnCrop(0,VpssChn,&stCropInfo));
SOC_CHECK(HI_MPI_VPSS_GetChnMode(vin,VpssChn,&stVpssMode));
canvas_width = canvas->width;
canvas_height = canvas->height;
// width /height
if(stCropInfo.bEnable == 1){
canvas_x = (typeof(canvas_x))(x * (float)(streamH264Attr->width) * stCropInfo.stCropRect.u32Width / stVpssMode.u32Width);
canvas_y = (typeof(canvas_y))(y * (float)(streamH264Attr->height) * stCropInfo.stCropRect.u32Height / stVpssMode.u32Height);
if((canvas_x + canvas->width) > streamH264Attr->width){
canvas_x = streamH264Attr->width - canvas->width;
}
if((canvas_y + canvas_height) > streamH264Attr->height){
canvas_y = streamH264Attr->height - canvas->height;
}
}else {
canvas_x = (typeof(canvas_x))(x * (float)(streamH264Attr->width));
canvas_y = (typeof(canvas_y))(y * (float)(streamH264Attr->height));
if((canvas_x + canvas->width) > streamH264Attr->width){
if(stream == 0){
canvas_x = canvas_x - 100;
}else if(stream == 1){
canvas_x = canvas_x - 60;
}
}
if((canvas_y + canvas_height) > streamH264Attr->height){
canvas_y = streamH264Attr->height - canvas->height;
}
}
// alignment
canvas_x = SDK_ALIGNED_BIG_ENDIAN(canvas_x, 4);
canvas_y = SDK_ALIGNED_BIG_ENDIAN(canvas_y, 4);
canvas_width = SDK_ALIGNED_BIG_ENDIAN(canvas_width, 2);
canvas_height = SDK_ALIGNED_BIG_ENDIAN(canvas_height, 2);
if(stCropInfo.bEnable == 1){
if((canvas_x + canvas_width > stCropInfo.stCropRect.u32Width) && (canvas_x > 4)){
canvas_x = canvas_x -4;
}
if((canvas_y + canvas_height > stCropInfo.stCropRect.u32Height) && (canvas_y > 4)){
canvas_y = canvas_y - 4;
}
}else{
if((canvas_x + canvas_width > stVpssMode.u32Width) && (canvas_x > 4)){
canvas_x = canvas_x -4;
}
if((canvas_y + canvas_height > stVpssMode.u32Height) && (canvas_y > 4)){
canvas_y = canvas_y - 4;
}
}
if(canvas_x < 0){
canvas_x = 0;
}
if(canvas_y < 0){
canvas_y = 0;
}
for(i = 0; i < HI_VENC_OVERLAY_BACKLOG_REF; ++i){
lpSDK_ENC_VIDEO_OVERLAY_ATTR const overlay = &overlay_set->attr[i];
if(!overlay->canvas){
overlay->canvas = canvas;
snprintf(overlay->name, sizeof(overlay->name), "%s", overlay_name);
overlay->x = canvas_x;
overlay->y = canvas_y;
overlay->width = canvas_width;
overlay->height = canvas_height;
//printf("rect: %d/%d %d/%d\n", overlay->x, overlay->y, overlay->width, overlay->height);
if(1){//if(overlay->region_handle >= 0){
RGN_HANDLE const region_vpss_handle = overlay->region_handle;
RGN_ATTR_S region_attr;
RGN_CHN_ATTR_S region_ch_attr;
MPP_CHN_S mppChannelVENC;
// attach to the VENC channel
MPP_CHN_S stVpssChn;
HI_S32 VpssChn;
int handle_num;
VpssChn = _hi3518_vpss_ch_map[__HI_VENC_CH(vin, stream)];
memset(&region_attr, 0, sizeof(region_attr));
region_attr.enType = OVERLAY_RGN;
region_attr.unAttr.stOverlay.enPixelFmt = PIXEL_FORMAT_RGB_4444;
region_attr.unAttr.stOverlay.stSize.u32Width = overlay->width;
region_attr.unAttr.stOverlay.stSize.u32Height = overlay->height;
region_attr.unAttr.stOverlay.u32BgColor = 0;
SOC_CHECK(HI_MPI_RGN_Create(region_vpss_handle, &region_attr));
memset(&mppChannelVENC, 0, sizeof(mppChannelVENC));
mppChannelVENC.enModId = HI_ID_VENC;
mppChannelVENC.s32DevId = 0;
mppChannelVENC.s32ChnId = _hi3518_venc_ch_map[VpssChn];//VpssChn;
memset(&region_ch_attr, 0, sizeof(region_ch_attr));
region_ch_attr.bShow = HI_TRUE;
region_ch_attr.enType = OVERLAY_RGN;
region_ch_attr.unChnAttr.stOverlayChn.stPoint.s32X = overlay->x; //0
region_ch_attr.unChnAttr.stOverlayChn.stPoint.s32Y = overlay->y; //680
region_ch_attr.unChnAttr.stOverlayChn.u32BgAlpha = 0;
region_ch_attr.unChnAttr.stOverlayChn.u32FgAlpha = 64;
region_ch_attr.unChnAttr.stOverlayChn.u32Layer = 0;
region_ch_attr.unChnAttr.stOverlayChn.stQpInfo.bAbsQp = HI_FALSE;
region_ch_attr.unChnAttr.stOverlayChn.stQpInfo.s32Qp = 0;
// region_ch_attr.unChnAttr.stOverlayChn.stQpInfo.bQpDisable = HI_FALSE;
region_ch_attr.unChnAttr.stOverlayChn.stInvertColor.bInvColEn = HI_FALSE;
SOC_CHECK(HI_MPI_RGN_AttachToChn(region_vpss_handle, &mppChannelVENC, &region_ch_attr));
}
return 0;
}
}
}
return -1;
}
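/* The alignment and clamping in enc_create_overlay above rely on the project's
 * SDK_ALIGNED_BIG_ENDIAN macro. Assuming align-down-to-a-power-of-two semantics,
 * a generic sketch looks like this; ALIGN_DOWN and overlay_clamp are hypothetical
 * names, not SDK symbols. */

```c
#include <stddef.h>

/* Align v down to the nearest multiple of a (a must be a power of two).
 * Hypothetical stand-in for SDK_ALIGNED_BIG_ENDIAN, assuming align-down. */
#define ALIGN_DOWN(v, a) ((v) & ~((size_t)(a) - 1u))

/* Clamp an overlay rectangle so it stays fully inside the stream frame. */
static void overlay_clamp(int *x, int *y, size_t w, size_t h,
                          size_t frame_w, size_t frame_h)
{
    if (*x < 0) *x = 0;
    if (*y < 0) *y = 0;
    if ((size_t)*x + w > frame_w) *x = (int)(frame_w - w);
    if ((size_t)*y + h > frame_h) *y = (int)(frame_h - h);
}
```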
static int enc_release_overlay(int vin, int stream, const char* overlay_name)
{
if(vin < HI_VENC_CH_BACKLOG_REF
&& stream < HI_VENC_STREAM_BACKLOG_REF ){
lpSDK_ENC_VIDEO_OVERLAY_ATTR const overlay = enc_lookup_overlay_byname(vin, stream, overlay_name);
if(NULL != overlay){
overlay->canvas = NULL;
// SOC_CHECK(HI_MPI_RGN_DetachFrmChn(overlay->region_handle, &mppChannelVPSS));
SOC_CHECK(HI_MPI_RGN_Destroy(overlay->region_handle));
return 0;
}
}
return -1;
}
static LP_SDK_ENC_VIDEO_OVERLAY_CANVAS enc_get_overlay_canvas(int vin, int stream, const char* overlay_name)
{
lpSDK_ENC_VIDEO_OVERLAY_ATTR overlay = enc_lookup_overlay_byname(vin, stream, overlay_name);
if(overlay){
return overlay->canvas;
}
return NULL;
}
static int enc_show_overlay(int vin, int stream, const char* overlayName, bool showFlag)
{
if(vin < HI_VENC_CH_BACKLOG_REF
&& stream < HI_VENC_STREAM_BACKLOG_REF){
lpSDK_ENC_VIDEO_OVERLAY_ATTR const overlay = enc_lookup_overlay_byname(vin, stream, overlayName);
if(NULL != overlay){
RGN_HANDLE const regionHandle = overlay->region_handle;
MPP_CHN_S mppChannel;
RGN_CHN_ATTR_S regionChannelAttr;
/*
mppChannel.enModId = HI_ID_VPSS;
mppChannel.s32DevId = 0;
mppChannel.s32ChnId = _hi3518_vpss_ch_map[__HI_VENC_CH(vin, stream)];
*/
mppChannel.enModId = HI_ID_VENC;
mppChannel.s32DevId = 0;
mppChannel.s32ChnId = __HI_VENC_CH(vin, stream);
SOC_CHECK(HI_MPI_RGN_GetDisplayAttr(regionHandle, &mppChannel, &regionChannelAttr));
if(0 != showFlag){
regionChannelAttr.bShow = HI_TRUE;
}else{
regionChannelAttr.bShow = HI_FALSE;
}
//SOC_NOTICE("region_ch_attr.bShow = %x/%x", showFlag, regionChannelAttr.bShow);
SOC_CHECK(HI_MPI_RGN_SetDisplayAttr(regionHandle, &mppChannel, &regionChannelAttr));
return 0;
}
}
return -1;
}
static int enc_update_overlay(int vin, int stream, const char* overlay_name)
{
if(vin < HI_VENC_CH_BACKLOG_REF
&& stream < HI_VENC_STREAM_BACKLOG_REF){
lpSDK_ENC_VIDEO_OVERLAY_ATTR const overlay = enc_lookup_overlay_byname(vin, stream, overlay_name);
if(NULL != overlay){
RGN_HANDLE const region_handle = overlay->region_handle;
BITMAP_S overlay_bitmap;
overlay_bitmap.enPixelFormat = PIXEL_FORMAT_RGB_4444;
overlay_bitmap.u32Width = overlay->canvas->width;
overlay_bitmap.u32Height = overlay->canvas->height;
overlay_bitmap.pData = overlay->canvas->pixels;
SOC_CHECK(HI_MPI_RGN_SetBitMap(region_handle, &overlay_bitmap));
return 0;
}
}
return -1;
}
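/* The canvas handed to HI_MPI_RGN_SetBitMap above uses PIXEL_FORMAT_RGB_4444,
 * i.e. 16-bit pixels with 4 bits per channel. A sketch of packing one pixel,
 * assuming the common A15..12 / R11..8 / G7..4 / B3..0 layout -- verify against
 * the chip's pixel-format documentation before relying on it. */

```c
#include <stdint.h>

/* Pack 4-bit A/R/G/B channels into one ARGB4444 pixel.
 * Bit layout assumed: A in bits 15..12, R 11..8, G 7..4, B 3..0. */
static uint16_t argb4444_pack(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((a & 0xFu) << 12) | ((r & 0xFu) << 8) |
                      ((g & 0xFu) << 4) | (b & 0xFu));
}
```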
static int hi3518e_enc_eptz_ctrl(int vin, int stream, int cmd, int param)
{
return 0;
}
static int hi3518e_enc_usr_mode(int vin, int stream, int fix_mode, int show_mode)
{
return 0;
}
static int hi3518_mpi_init()
{
HI_S32 s32Ret;
VB_CONF_S struVbConf;
MPP_SYS_CONF_S struSysConf;
HI_MPI_SYS_Exit();
HI_MPI_VB_Exit();
memset(&struVbConf, 0, sizeof(VB_CONF_S));
s32Ret = HI_MPI_VB_SetConf(&struVbConf);
if (HI_SUCCESS != s32Ret)
{
printf("HI_MPI_VB_SetConf fail,Error(%#x)\n", s32Ret);
return s32Ret;
}
s32Ret = HI_MPI_VB_Init();
if (HI_SUCCESS != s32Ret)
{
printf("HI_MPI_VB_Init fail,Error(%#x)\n", s32Ret);
return s32Ret;
}
s32Ret = HI_MPI_SYS_Init();
if (HI_SUCCESS != s32Ret)
{
printf("HI_MPI_SYS_Init fail,Error(%#x)\n", s32Ret);
(HI_VOID)HI_MPI_VB_Exit();
return s32Ret;
}
return HI_SUCCESS;
}
/**************************************************************************************
** Description: memory mapping - allocate an MMZ block of the given length and map it
**              into user space
** Returns: virtual address of the mapping, NULL on failure
***************************************************************************************/
static void *hi3518e_enc_sdk_mmap(void *param)
{
umount2("/media/custom", MNT_DETACH);// like 'umount -l'
VB_BLK VbBlk;
HI_U8* pVirAddr;
HI_U32 u32PhyAddr;
VB_POOL VbPool;
//APP_OVERLAY_destroy();
SDK_ENC_destroy();
/*
SDK_ISP_destroy();
SDK_destroy_vin();
sdk_audio->release_ain_ch(0);
sdk_audio->destroy_ain();
sdk_audio->release_ain_ch(0);
sdk_audio->destroy_ain();
SDK_destroy_audio();
SDK_destroy_sys();
*/
//OVERLAY_destroy();
ssize_t length = *((ssize_t *)param);
if(HI_SUCCESS != hi3518_mpi_init())
{
printf("MPI_Init err\n");
return NULL;
}
/* create a video buffer pool*/
VbPool = HI_MPI_VB_CreatePool(length,1,"anonymous");
if ( VB_INVALID_POOLID == VbPool )
{
printf("create vb err\n");
return NULL;
}
VbBlk = HI_MPI_VB_GetBlock(VbPool, length, "anonymous");
if (VB_INVALID_HANDLE == VbBlk)
{
printf("HI_MPI_VB_GetBlock err! size:%zd\n", length);
return NULL;
}
u32PhyAddr = HI_MPI_VB_Handle2PhysAddr(VbBlk);
if (0 == u32PhyAddr)
{
printf("HI_MPI_VB_Handle2PhysAddr error\n");
return NULL;
}
pVirAddr = HI_MPI_SYS_Mmap(u32PhyAddr, length);
if(NULL == pVirAddr)
{
printf("HI_MPI_SYS_Mmap error\n");
return NULL;
}
// return the mapped virtual address
return pVirAddr;
}
int SDK_unmmap(void *virmem, int length)
{
return HI_MPI_SYS_Munmap(virmem, length);
}
static int hi3518e_update_overlay_by_text(int vin, int stream, const char* text)
{
return -1;
}
static void *enc_resolution_reboot(void *arg)
{
pthread_detach(pthread_self());
sleep(3);
SOC_INFO("MAIN_RESOLUTION changed,restart!");
#ifdef F5
#else
exit(0);
#endif
return NULL;
}
static int hi3518e_enc_resolution(int width, int height)//main resolution
{
uint32_t sensor_width_max,sensor_height_max;
HI_SDK_ISP_get_sensor_resolution(&sensor_width_max, &sensor_height_max);
if(width >= 720 && width <= sensor_width_max &&
height >= 576 && height <= sensor_height_max){
FILE *fid0 = fopen(MAIN_RESOLUTION, "rb");
char buf[16] = "";
char tmp[5] = "";
int width_old = 0, height_old = 0;
pthread_t pid;
bool flag_restart;
if(NULL != fid0){
fread(buf, 1, sizeof(buf) - 1, fid0); /* buf is zero-initialized, so it stays NUL-terminated */
fclose(fid0);
fid0 = NULL;
sscanf(buf, "%d %4s %d", &width_old, tmp, &height_old); /* %4s keeps the separator within tmp[5] */
}else{
SOC_INFO("Open %s failed", MAIN_RESOLUTION);
return -1;
}
if(width_old * height_old >= 1920*1080 || width * height >= 1920*1080){
flag_restart = true;
}else{
flag_restart = false;
}
FILE *fid = fopen(MAIN_RESOLUTION, "w+b");
memset(buf,0,sizeof(buf));
snprintf(buf, sizeof(buf), "%d x %d", width, height);
if(NULL != fid){
fwrite(buf,1,strlen(buf),fid);
fclose(fid);
fid = NULL;
SOC_INFO("cur resolution: %dx%d ; new resolution: %dx%d \n",width_old,height_old,width,height);
if(flag_restart){
pthread_create(&pid,NULL,enc_resolution_reboot,NULL);
}
return 0;
}else{
SOC_INFO("Open %s failed", MAIN_RESOLUTION);
}
return -1;
}else{
SOC_INFO("Resolution size exceeds limit!!");
return -1;
}
}
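/* MAIN_RESOLUTION is persisted as a "WIDTH x HEIGHT" string and read back with
 * sscanf, as in hi3518e_enc_resolution above. A self-contained round-trip of that
 * record format; resolution_roundtrip is an illustrative helper, not an SDK call. */

```c
#include <stdio.h>
#include <string.h>

/* Serialize and re-parse the "WIDTH x HEIGHT" record used for MAIN_RESOLUTION.
 * Returns 0 on success, -1 if the string does not match the expected format. */
static int resolution_roundtrip(int width, int height, int *out_w, int *out_h)
{
    char buf[16] = "";
    char sep[5] = "";
    snprintf(buf, sizeof(buf), "%d x %d", width, height);
    if (sscanf(buf, "%d %4s %d", out_w, sep, out_h) != 3)
        return -1;
    return strcmp(sep, "x") == 0 ? 0 : -1;
}
```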
static stSDK_ENC_HI3521 _sdk_enc =
{
// init the interfaces
.api = {
// h264 stream
.create_stream_h264 = enc_create_stream_h264,
.release_stream_h264 = enc_release_stream_h264,
.enable_stream_h264 = enc_enable_stream_h264,
.set_stream_h264 = enc_set_stream_h264,
.get_stream_h264 = enc_get_stream_h264,
.request_stream_h264_keyframe = enc_request_stream_h264_keyframe,
//h265 stream
.create_stream_h265 = enc_create_stream_h265,
.release_stream_h265 = enc_release_stream_h265,
.enable_stream_h265 = enc_enable_stream_h265,
.set_stream_h265 = enc_set_stream_h265,
.get_stream_h265 = enc_get_stream_h265,
.request_stream_h265_keyframe = enc_request_stream_h265_keyframe,
.create_stream_g711a = enc_create_stream_g711a,
.create_audio_stream = enc_create_audio_stream,
.release_stream_g711a = enc_release_stream_g711a,
// snapshot a picture
.creat_snapshot_chn = enc_creat_snapshot_chn,
.release_snapshot_chn = enc_release_snapshot_chn,
.snapshot = enc_snapshot,
// overlay
.create_overlay_canvas = enc_create_overlay_canvas,
.load_overlay_canvas = enc_load_overlay_canvas,
.release_overlay_canvas = enc_release_overlay_canvas,
.create_overlay = enc_create_overlay,
.release_overlay = enc_release_overlay,
.get_overlay_canvas = enc_get_overlay_canvas,
.show_overlay = enc_show_overlay,
.update_overlay = enc_update_overlay,
// encode start / stop
.start = enc_start,
.stop = enc_stop,
//fish eye
.eptz_ctrl = hi3518e_enc_eptz_ctrl,
.enc_mode = hi3518e_enc_usr_mode,
//upgrade
.upgrade_env_prepare = hi3518e_enc_sdk_mmap,
.update_overlay_bytext = hi3518e_update_overlay_by_text,
.enc_resolution = hi3518e_enc_resolution,
},
};
int SDK_ENC_init()
{
int i = 0, ii = 0, iii = 0;
// init is allowed only while the 'sdk_enc' pointer is NULL
if(NULL == sdk_enc){
// set handler pointer
sdk_enc = (lpSDK_ENC_API)(&_sdk_enc);
// clear the buffering callback
sdk_enc->do_buffer_request = NULL;
sdk_enc->do_buffer_append = NULL;
sdk_enc->do_buffer_commit = NULL;
// init the internal attribute values
// clear the stream attributes
// clear the frame counter
for(i = 0; i < HI_VENC_CH_BACKLOG_REF; ++i){
for(ii = 0; ii < HI_VENC_STREAM_BACKLOG_REF; ++ii){
LP_SDK_ENC_STREAM_H264_ATTR const streamH264Attr = &_sdk_enc.attr.h264_attr[i][ii];
LP_SDK_ENC_STREAM_H265_ATTR const streamH265Attr = &_sdk_enc.attr.h265_attr[i][ii];
uint8_t *const frame_ref_counter = &_sdk_enc.attr.frame_ref_counter[i][ii];
STREAM_H264_CLEAR(streamH264Attr);
STREAM_H265_CLEAR(streamH265Attr);
*frame_ref_counter = 0;
}
}
// init the overlay set handles
for(i = 0; i < HI_VENC_CH_BACKLOG_REF; ++i){
for(ii = 0; ii < HI_VENC_STREAM_BACKLOG_REF; ++ii){
lpSDK_ENC_VIDEO_OVERLAY_ATTR_SET const overlay_set = &_sdk_enc.attr.video_overlay_set[i][ii];
for(iii = 0; iii < HI_VENC_OVERLAY_BACKLOG_REF; ++iii){
lpSDK_ENC_VIDEO_OVERLAY_ATTR const overlay = &overlay_set->attr[iii];
overlay->canvas = NULL;
memset(overlay->name, 0, sizeof(overlay->name));
overlay->x = 0;
overlay->y = 0;
overlay->width = 0;
overlay->height = 0;
// very important: pre-allocate the region handle number
overlay->region_handle = HI_VENC_OVERLAY_HANDLE_OFFSET;
overlay->region_handle += i * HI_VENC_STREAM_BACKLOG_REF * HI_VENC_OVERLAY_BACKLOG_REF;
overlay->region_handle += ii * HI_VENC_OVERLAY_BACKLOG_REF;
overlay->region_handle += iii;
}
}
}
// _sdk_enc.attr.overlay_handle_num = 0;
// init the snapshot mutex
pthread_mutex_init(&_sdk_enc.attr.snapshot_mutex, NULL);
sdk_enc->creat_snapshot_chn(0,VENC_MAX_CHN_NUM - 2,-1,-1);
sdk_enc->creat_snapshot_chn(0,VENC_MAX_CHN_NUM - 1,640,360);
// start
//sdk_enc->start();
// success to init
return 0;
}
return -1;
}
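/* The pre-allocated region handle in SDK_ENC_init above is a flat index over
 * (vin, stream, overlay) plus a base offset. The same computation as a pure
 * function, with the HI_VENC_* macro values passed in as parameters purely
 * for illustration. */

```c
/* Flatten (vin, stream, overlay) into a unique region handle.
 * base, streams_per_vin and overlays_per_stream mirror
 * HI_VENC_OVERLAY_HANDLE_OFFSET, HI_VENC_STREAM_BACKLOG_REF and
 * HI_VENC_OVERLAY_BACKLOG_REF respectively. */
static int overlay_region_handle(int base, int streams_per_vin,
                                 int overlays_per_stream,
                                 int vin, int stream, int overlay)
{
    return base
         + vin * streams_per_vin * overlays_per_stream
         + stream * overlays_per_stream
         + overlay;
}
```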
int SDK_ENC_wdr_destroy()
{
if(sdk_enc){
int i = 0, ii = 0;
// release the video encode
for(i = 0; i < HI_VENC_CH_BACKLOG_REF; ++i){
for(ii = 0; ii < HI_VENC_STREAM_BACKLOG_REF; ++ii){ // destroy the sub stream first
switch(_sdk_enc.attr.enType[i][ii]){
default:
case kSDK_ENC_BUF_DATA_H264:
sdk_enc->release_stream_h264(i, ii);
break;
case kSDK_ENC_BUF_DATA_H265:
sdk_enc->release_stream_h265(i, ii);
break;
}
}
}
return 0;
}
return -1;
}
int SDK_ENC_vpss_destroy()
{
MPP_CHN_S stSrcChn;
MPP_CHN_S stDestChn;
HI_U32 VpssGrp = 0;
HI_U32 VpssChn;
// vpss chn 0,1,2,3
for(VpssChn = 0; VpssChn <=3; VpssChn++){
HI_MPI_VPSS_DisableChn(VpssGrp,VpssChn);
usleep(500);
}
stSrcChn.enModId = HI_ID_VIU;
stSrcChn.s32DevId = 0;
stSrcChn.s32ChnId = 0;
stDestChn.enModId = HI_ID_VPSS;
stDestChn.s32DevId = VpssGrp;
stDestChn.s32ChnId = 0;
HI_MPI_SYS_UnBind(&stSrcChn, &stDestChn);
HI_MPI_VPSS_StopGrp(VpssGrp);
HI_MPI_VPSS_DestroyGrp(VpssGrp);
return 0;
}
int SDK_ENC_destroy()
{
SDK_destroy_vin();
if(sdk_enc){
int i = 0, ii = 0;
// bExitOverlayLoop = HI_TRUE;
// destroy the snapshot mutex
pthread_mutex_destroy(&_sdk_enc.attr.snapshot_mutex);
// stop encode firstly
sdk_enc->stop();
// release the snapshot channels
sdk_enc->release_snapshot_chn(0,VENC_MAX_CHN_NUM - 2); //main JPEG
sdk_enc->release_snapshot_chn(0,VENC_MAX_CHN_NUM - 1); //sub JPEG
// release the canvas stock
for(i = 0; i < HI_VENC_OVERLAY_CANVAS_STOCK_REF; ++i){
sdk_enc->release_overlay_canvas(_sdk_enc.attr.canvas_stock + i);
}
// release the audio encode
for(i = 0; i < HI_AENC_CH_BACKLOG_REF; ++i){
sdk_enc->release_stream_g711a(i);
}
// release the video encode
for(i = 0; i < HI_VENC_CH_BACKLOG_REF; ++i){
for(ii = 0; ii < HI_VENC_STREAM_BACKLOG_REF; ++ii){
switch(_sdk_enc.attr.enType[i][ii]){
default:
case kSDK_ENC_BUF_DATA_H264:
sdk_enc->release_stream_h264(i, ii);
break;
case kSDK_ENC_BUF_DATA_H265:
sdk_enc->release_stream_h265(i, ii);
break;
}
}
}
// clear handler pointer
sdk_enc = NULL;
// success to destroy
}
//destroy vpss
SDK_ENC_vpss_destroy();
// SDK_destroy_vin();
if(sdk_audio){
sdk_audio->release_ain_ch(0);
}
SDK_destroy_audio();
SENSOR_destroy();
usleep(1000);
SDK_destroy_sys();
return 0;
}
int SDK_ENC_create_stream(int vin, int stream, LP_SDK_ENC_STREAM_ATTR stream_attr)
{
if(sdk_enc){
int ret;
_sdk_enc.attr.enType[vin][stream] = stream_attr->enType;
switch(stream_attr->enType){
default:
case kSDK_ENC_BUF_DATA_H264:
ret = sdk_enc->create_stream_h264(vin, stream,&stream_attr->H264_attr);
break;
case kSDK_ENC_BUF_DATA_H265:
ret = sdk_enc->create_stream_h265(vin, stream,&stream_attr->H265_attr);
break;
}
return ret;
}else{
return -1;
}
}
int SDK_ENC_release_stream(int vin, int stream)
{
if(sdk_enc){
int ret;
switch(_sdk_enc.attr.enType[vin][stream]){
default:
case kSDK_ENC_BUF_DATA_H264:
ret = sdk_enc->release_stream_h264(vin,stream);
break;
case kSDK_ENC_BUF_DATA_H265:
ret = sdk_enc->release_stream_h265(vin,stream);
break;
}
return ret;
}else{
return -1;
}
}
int SDK_ENC_set_stream(int vin, int stream,LP_SDK_ENC_STREAM_ATTR stream_attr)
{
if(sdk_enc){
int ret;
switch(stream_attr->enType){
default:
case kSDK_ENC_BUF_DATA_H264:
ret = sdk_enc->set_stream_h264(vin, stream,&stream_attr->H264_attr);
break;
case kSDK_ENC_BUF_DATA_H265:
ret = sdk_enc->set_stream_h265(vin, stream,&stream_attr->H265_attr);
break;
}
return ret;
}else{
return -1;
}
}
int SDK_ENC_get_stream(int vin, int stream, LP_SDK_ENC_STREAM_ATTR stream_attr)
{
if(sdk_enc){
int ret;
switch(_sdk_enc.attr.enType[vin][stream]){
default:
case kSDK_ENC_BUF_DATA_H264:
ret = sdk_enc->get_stream_h264(vin, stream,&stream_attr->H264_attr);
stream_attr->enType = kSDK_ENC_BUF_DATA_H264;
break;
case kSDK_ENC_BUF_DATA_H265:
ret = sdk_enc->get_stream_h265(vin, stream,&stream_attr->H265_attr);
stream_attr->enType = kSDK_ENC_BUF_DATA_H265;
break;
}
return ret;
}else{
return -1;
}
}
int SDK_ENC_enable_stream(int vin, int stream, bool flag)
{
if(sdk_enc){
int ret;
switch(_sdk_enc.attr.enType[vin][stream]){
default:
case kSDK_ENC_BUF_DATA_H264:
ret = sdk_enc->enable_stream_h264(vin, stream, flag);
break;
case kSDK_ENC_BUF_DATA_H265:
ret = sdk_enc->enable_stream_h265(vin, stream, flag);
break;
}
return ret;
}else{
return -1;
}
}
int SDK_ENC_request_stream_keyframe(int vin, int stream)
{
if(sdk_enc){
switch(_sdk_enc.attr.enType[vin][stream]){
default:
case kSDK_ENC_BUF_DATA_H264:
sdk_enc->request_stream_h264_keyframe(vin, stream);
break;
case kSDK_ENC_BUF_DATA_H265:
sdk_enc->request_stream_h265_keyframe(vin, stream);
break;
}
return 0;
}else{
return -1;
}
}
PAYLOAD_TYPE_E SDK_ENC_request_venc_type(int vin, int stream)
{
VENC_CHN_ATTR_S venc_ch_attr;
enSDK_ENC_BUF_DATA_TYPE enc_type;
int const venc_ch = __HI_VENC_CH(vin, stream);
SOC_CHECK(HI_MPI_VENC_GetChnAttr(venc_ch, &venc_ch_attr));
switch(venc_ch_attr.stVeAttr.enType){
default:
case PT_H264:
enc_type = kSDK_ENC_BUF_DATA_H264;
break;
case PT_H265:
enc_type = kSDK_ENC_BUF_DATA_H265;
break;
}
return enc_type;
}
int SDK_ENC_get_enc_pts(int vin, int stream, unsigned long long *encPts)
{
if(encPts){
*encPts = _sdk_enc.attr.u64encPTS[vin][stream];
return 0;
}
return -1;
}