There has never been much material online about the encoding part, so I've organized what I found and laid out a path that I've verified works.
Environment for this article: Xcode 4.2, SDK 5.0
Build target: device, armv7
I. Building the x264 library
First download the x264 source from http://www.videolan.org/developers/x264.html and unpack it.
Open a shell, cd into the x264 directory, and run:
CC=/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc ./configure --host=arm-apple-darwin --sysroot=/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.0.sdk --extra-cflags='-arch armv7' --extra-ldflags='-arch armv7 -L/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.0.sdk/usr/lib/system' --enable-pic --enable-shared --enable-static
Then run make and make install.
After the build, the generated files are installed under the default prefix, /usr/local.
II. Building the ffmpeg library
1. Download gas-preprocessor.pl from https://github.com/yuvi/gas-preprocessor and place it in /usr/sbin.
2. Download the source version you need from the official site: http://ffmpeg.org/download.html
3. In a terminal, cd into the source directory. The configure invocation is:
./configure --enable-libx264 --enable-gpl --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/lib --disable-doc --disable-ffmpeg --disable-ffplay --disable-ffserver --disable-avfilter --disable-debug --disable-decoders --enable-cross-compile --disable-encoders --disable-armv5te --enable-decoder=h264 --enable-encoder=libx264 --enable-pic --cc=/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc --as='gas-preprocessor/gas-preprocessor.pl /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc' --extra-ldflags='-arch armv7 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.0.sdk' --sysroot=/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.0.sdk --target-os=darwin --arch=arm --cpu=cortex-a8 --extra-cflags='-arch armv7' --disable-asm
Then run make and make install.
A few things to watch when building:
--extra-cflags=-I/usr/local/include and --extra-ldflags=-L/usr/local/lib point at your x264 install so the build can find it. If you built x264 as above, the default /usr/local prefix is already correct.
Don't forget --enable-libx264 and --enable-encoder=libx264: the first links in x264, the second enables its encoder. Because the command also passes --disable-encoders, leaving these two out means your program will not be able to find the x264 encoder.
Once everything is built, find the generated library files and add them to your project; don't forget to import the headers as well.
III. Encoding
1. Get the image buffer in the video-capture callback:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
Remember that before using this buffer you must lock it with
CVPixelBufferLockBaseAddress(pixelBuffer, 0); and unlock it when you're done with CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
2. Next comes the ffmpeg encoding. There is plenty of material online for this, which I adapted:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// access the captured BGRA data
int width = (int)CVPixelBufferGetWidth(pixelBuffer);
int height = (int)CVPixelBufferGetHeight(pixelBuffer);
unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
// register the codecs (ideally do this once at app startup, not per frame)
avcodec_register_all();
av_register_all();
AVFrame *pFrame = avcodec_alloc_frame();
pFrame->quality = 0;
AVFrame *outpic = avcodec_alloc_frame();
// wrap the captured pixels in an AVPicture
avpicture_fill((AVPicture *)pFrame, rawPixelBase, PIX_FMT_BGR32, width, height);
AVCodec *codec;
AVCodecContext *c = NULL;
int ret, outbuf_size;
uint8_t *outbuf;
printf("Video encoding\n");
/* find the H.264 encoder (provided by libx264) */
codec = avcodec_find_encoder(CODEC_ID_H264); // or avcodec_find_encoder_by_name("libx264")
if (!codec) {
    fprintf(stderr, "codec not found\n");
    exit(1);
}
c = avcodec_alloc_context3(codec);
/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 192;  // encode size; you could also use the capture width/height
c->height = 144;
/* frames per second */
c->time_base = (AVRational){1, 25};
c->gop_size = 10; /* emit one intra frame every ten frames */
c->max_b_frames = 1;
c->pix_fmt = PIX_FMT_YUV420P;
c->thread_count = 1;
// rate/quality knobs I experimented with:
// c->me_range = 16; c->max_qdiff = 4; c->qmin = 10; c->qmax = 51; c->qcompress = 0.6f;
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
    fprintf(stderr, "could not open codec\n");
    exit(1);
}
/* buffer for the encoded bitstream */
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
AVPacket avpkt;
/* buffer for the YUV420P image */
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
uint8_t *outbuffer = (uint8_t *)av_malloc(nbytes);
avpicture_fill((AVPicture *)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
/* BGRA -> YUV420P converter, scaling from the capture size to the encode
   size; create it once, outside the loop, and free it afterwards */
struct SwsContext *fooContext = sws_getContext(width, height,
                                               PIX_FMT_BGR32,
                                               c->width, c->height,
                                               PIX_FMT_YUV420P,
                                               SWS_POINT, NULL, NULL, NULL);
/* CoreVideo buffers are top-down: point data[0] at the last row and negate
   linesize[0] so sws_scale reads the image the right way up. Do this once,
   not once per loop iteration, or every second frame comes out flipped.
   (BGR32 is packed, so only plane 0 exists.) */
pFrame->data[0] += pFrame->linesize[0] * (height - 1);
pFrame->linesize[0] *= -1;
fflush(stdout);
for (int i = 0; i < 15; ++i) {
    /* convert to YUV420P */
    int xx = sws_scale(fooContext, (const uint8_t **)pFrame->data, pFrame->linesize,
                       0, height, outpic->data, outpic->linesize);
    NSLog(@"sws_scale wrote %d rows", xx);
    /* encode the image */
    int got_packet_ptr = 0;
    av_init_packet(&avpkt);
    avpkt.size = outbuf_size;
    avpkt.data = outbuf;
    ret = avcodec_encode_video2(c, &avpkt, outpic, &got_packet_ptr);
    /* avcodec_encode_video2 returns 0 on success; got_packet_ptr stays 0
       while the encoder is still buffering input frames */
    if (ret == 0 && got_packet_ptr) {
        printf("encoded frame (size=%5d)\n", avpkt.size);
        fwrite(avpkt.data, 1, avpkt.size, fp); /* fp: a FILE* opened elsewhere */
    }
}
sws_freeContext(fooContext);
free(outbuf);
av_free(outbuffer);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
avcodec_close(c);
av_free(c);
av_free(pFrame);
av_free(outpic);
I actually hit a few problems at this stage.
At first, codec = avcodec_find_encoder(CODEC_ID_H264) could not find the encoder; building ffmpeg the way I described above fixed that.
The ffmpeg sample code floating around online is also somewhat out of date. I have corrected part of it here; if anything still doesn't match, compare against doc/APIchanges in the ffmpeg source tree for the new interfaces, and look in the source for usage examples.
When encoding, at first I got no data at all: avpkt was always empty. After some digging I wrapped the encode call in a loop, for (int i = 0; i < 15; ++i) { ... }, and data started coming out. This is most likely encoder latency: with B-frames and lookahead enabled, the encoder buffers several input frames before it emits its first packet.
That is as far as my investigation went: at this point my manager asked me to use x264 directly, so I stopped debugging. At least the code produces H.264 data, though the output appears to be black and white, so you will need to look into the parameters yourselves. One reader said she didn't need the loop; I haven't had time to verify that. If you work out the correct encoding code while exploring, please post it in this thread.