OpenCV is a computer vision library originally developed by Intel; with it we can easily detect faces, for example. In this post I'll describe how to use it with the iOS SDK, including the build scripts and a demo application. Here are screenshots from the demo application.
Updated the project to support OpenCV 2.2.0, iOS SDK 4.3, and Xcode 4 (updated 04/17/2011).
All source code and resources are open, and you can get them from my GitHub repository. It includes pre-compiled OpenCV libraries and headers so that you can easily start testing. If you already have the git command, just clone the whole repository from GitHub. If not, grab it as a zip or tar from the download link on GitHub and extract it.
% git clone git://github.com/niw/iphone_opencv_test.git
After getting the source code, open OpenCVTest.xcodeproj with Xcode, then build it. You will get a demo application for both the iPhone Simulator and an iPhone device.
You can also build the OpenCV library from source by cross-compiling with gcc. I added some support scripts to make this easy. The important point is that the iOS SDK doesn't support dynamically linked libraries like a ".framework"; we have to build OpenCV as a static library and link it into the application statically.
Building OpenCV requires CMake. You can easily install it using Homebrew or MacPorts.
# Using Homebrew
% brew install cmake

# Using MacPorts
% sudo port install cmake -gui
If you've already installed a recent Java update, you may be asked to install javadeveloper_10.6_10m3261.dmg. This is weird, but CMake needs jni.h, which was removed by the recent Java update; you can download it from here for Mac OS X 10.6, which may require an Apple Developer Connection subscription. Yes, Apple is now going to deprecate Java on Mac OS X (updated 10/30/2010).
Get the source code from SourceForge. I tested with OpenCV-2.2.0.tar.bz2.
Extract the downloaded archive at the top of the demo project directory.
% tar xjvf OpenCV-2.2.0.tar.bz2
Apply the patch for the iOS SDK.
% cd OpenCV-2.2.0
% patch -p1 < ../OpenCV-2.2.0.patch
Follow the next steps to build the OpenCV static library for the simulator. All files are installed into the opencv_simulator directory. When running the make command, you had better pass the -j option with a number matching the number of CPU cores you have; without -j, it takes a long time.
% cd ..    # Back to the top of the demo project directory.
% mkdir build_simulator
% cd build_simulator
% ../opencv_cmake.sh Simulator ../OpenCV-2.2.0
% make -j 4
% make install
Follow the next steps to build the OpenCV static library for the device. All files are installed into the opencv_device directory.
% cd ..    # Back to the top of the demo project directory.
% mkdir build_device
% cd build_device
% ../opencv_cmake.sh Device ../OpenCV-2.2.0
% make -j 4
% make install
The build support script opencv_cmake.sh has some options for building OpenCV with the iOS SDK. Try the --help option to see all of them.
OpenCV uses the IplImage structure for processing, while the iOS SDK uses the UIImage object to display images on the screen. This means we need a converter between UIImage and IplImage. Thankfully, we can write one with the iOS SDK APIs.
Creating an IplImage from a UIImage looks like this.
// NOTE: you SHOULD call cvReleaseImage() on the return value when you are done with it.
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
  // Getting CGImage from UIImage
  CGImageRef imageRef = image.CGImage;

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  // Creating temporal IplImage for drawing
  IplImage *iplimage = cvCreateImage(
    cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4
  );
  // Creating CGContext for temporal IplImage
  CGContextRef contextRef = CGBitmapContextCreate(
    iplimage->imageData, iplimage->width, iplimage->height,
    iplimage->depth, iplimage->widthStep,
    colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault
  );
  // Drawing CGImage to CGContext
  CGContextDrawImage(
    contextRef,
    CGRectMake(0, 0, image.size.width, image.size.height),
    imageRef
  );
  CGContextRelease(contextRef);
  CGColorSpaceRelease(colorSpace);

  // Creating result IplImage
  IplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);
  cvCvtColor(iplimage, ret, CV_RGBA2BGR);
  cvReleaseImage(&iplimage);

  return ret;
}
Don't forget to release the IplImage with cvReleaseImage after you are done with it!
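For example, a caller might look like this. This is just a minimal sketch, not code from the demo project: imageView is an assumed UIImageView property, and the Gaussian blur is only a placeholder for whatever processing you actually want to do.

// Hypothetical caller: convert, process, then release the IplImage.
IplImage *img = [self CreateIplImageFromUIImage:imageView.image];

// Any OpenCV processing goes here; a Gaussian blur is used only as a placeholder.
cvSmooth(img, img, CV_GAUSSIAN, 9, 9, 0, 0);

// The helper allocates a new IplImage, so the caller has to release it.
cvReleaseImage(&img);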
And creating a UIImage from an IplImage looks like this.
// NOTE: You should convert the color order to RGB before passing an image to this function.
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  // Allocating the buffer for CGImage
  NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
  CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
  // Creating CGImage from chunk of IplImage
  CGImageRef imageRef = CGImageCreate(
    image->width, image->height,
    image->depth, image->depth * image->nChannels, image->widthStep,
    colorSpace, kCGImageAlphaNone|kCGBitmapByteOrderDefault,
    provider, NULL, false, kCGRenderingIntentDefault
  );
  // Getting UIImage from CGImage
  UIImage *ret = [UIImage imageWithCGImage:imageRef];
  CGImageRelease(imageRef);
  CGDataProviderRelease(provider);
  CGColorSpaceRelease(colorSpace);

  return ret;
}
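Because CreateIplImageFromUIImage returns a BGR image, remember to swap the channel order before handing a result back to UIImageFromIplImage. Here is a minimal round-trip sketch under the same assumption as above that imageView is a UIImageView you provide.

IplImage *bgrImage = [self CreateIplImageFromUIImage:imageView.image];

// ... do some OpenCV processing on bgrImage here ...

// UIImageFromIplImage expects RGB data, so convert the BGR result first.
IplImage *rgbImage = cvCreateImage(cvGetSize(bgrImage), IPL_DEPTH_8U, 3);
cvCvtColor(bgrImage, rgbImage, CV_BGR2RGB);

imageView.image = [self UIImageFromIplImage:rgbImage];

cvReleaseImage(&rgbImage);
cvReleaseImage(&bgrImage);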
OK, now you can enjoy OpenCV on the iPhone!
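Since face detection was mentioned at the beginning, here is a rough sketch of how the two helpers could be combined with the classic Haar cascade C API. Everything below is my own assumption, not code taken from the demo project: the method name detectFacesInImage: is made up, haarcascade_frontalface_alt.xml is a cascade file shipped with OpenCV that you would have to add to your app bundle yourself, and the cvHaarDetectObjects call follows the OpenCV 2.2 C signature as I understand it (including the trailing max_size argument).

- (UIImage *)detectFacesInImage:(UIImage *)source {
  // Load the Haar cascade from the app bundle (assumed to be added as a resource).
  NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt"
                                                   ofType:@"xml"];
  CvHaarClassifierCascade *cascade =
    (CvHaarClassifierCascade *)cvLoad([path UTF8String], NULL, NULL, NULL);
  CvMemStorage *storage = cvCreateMemStorage(0);

  IplImage *img = [self CreateIplImageFromUIImage:source];

  // The detector works on a single-channel image, so convert to grayscale first.
  IplImage *gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
  cvCvtColor(img, gray, CV_BGR2GRAY);

  // Run the detector; 1.1 scale factor, 2 neighbors and a 30x30 minimum size
  // are just reasonable starting values.
  CvSeq *faces = cvHaarDetectObjects(gray, cascade, storage, 1.1, 2,
                                     CV_HAAR_DO_CANNY_PRUNING,
                                     cvSize(30, 30), cvSize(0, 0));

  // Draw a rectangle around each detected face.
  for (int i = 0; i < faces->total; i++) {
    CvRect *r = (CvRect *)cvGetSeqElem(faces, i);
    cvRectangle(img, cvPoint(r->x, r->y),
                cvPoint(r->x + r->width, r->y + r->height),
                CV_RGB(255, 0, 0), 2, 8, 0);
  }

  // Convert back to RGB before building the UIImage, as noted above.
  IplImage *rgbImage = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
  cvCvtColor(img, rgbImage, CV_BGR2RGB);
  UIImage *result = [self UIImageFromIplImage:rgbImage];

  cvReleaseImage(&rgbImage);
  cvReleaseImage(&gray);
  cvReleaseImage(&img);
  cvReleaseMemStorage(&storage);
  cvReleaseHaarClassifierCascade(&cascade);

  return result;
}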
The demo application that you can download from my repository is already configured to use these libraries. If you want to use the OpenCV libraries in your own project, you need to add the following configurations to it. You can see these settings in the Xcode project of this demo application.
- Add libopencv_core.a etc. from the OpenCV lib directory, for either simulators or devices. Actually, Xcode doesn't care which one is for devices or simulators at this point, because the right one is selected by the library search path.
- Add Accelerate.framework, which is used internally by the OpenCV library.
- Add -lstdc++ and -lz to Other Linker Flags.
- Add the include directory to Header Search Paths for both simulators and devices.
- Add the lib directory to Library Search Paths for both simulators and devices.

If you would like to help this project, please feel free to donate via PayPal using the following form. Thank you for your donation!