Four parts of the OpenCV Tutorial have already been published. By publishing this roadmap I want to reveal the next steps I plan to take. But first, let's take a closer look at what has been done so far:
Part 1 – Introduction
In the first part of the tutorial we set up a base Xcode project for our application and defined a base interface for our samples.
Part 2 – Writing a base UI
In the second part we created a basic user interface for displaying registered samples using the master-detail paradigm, and added two samples: edge detection and contour detection.
Part 3 – Video and image processing
In this part the video and image processing logic was written. We created the necessary UI to present the results of image processing.
Part 4 – Correction of mistakes
The fourth part of the tutorial was dedicated to fixing annoying bugs with video orientation and improving application performance. An optimized color conversion algorithm was added to increase overall frame processing speed. A new feature was added too: users can now save processed images to the photo album.
Part 5 – Options
Options! Options! Options! In this part I will add adjustable options to our samples and create a UI to adjust them. Options will be supported for both image and video processing. From this sample you'll learn how to bind to C++ data types from Objective-C code and create specific user controls depending on the type of the bound variable. We will also use popover controllers to present options on iPad devices and a flip animation to show options on the iPhone.
In addition, the “Edge Detection” sample will now have four algorithms for finding edges. You’ll be able to toggle between them using the options view and see the difference.
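To give you a flavour of the idea, here is a minimal C++ sketch of an option-driven edge detector. The enum, the particular four algorithms and the threshold values are my assumptions for illustration, not the tutorial's actual code; the real sample binds the selected value to a UI control through the options mechanism described above.

```cpp
#include <opencv2/imgproc/imgproc.hpp>

// Hypothetical option: which edge detector to run. In the app this value
// would be bound to a UI control via the options view.
enum EdgeAlgorithm { Canny, Sobel, Laplacian, Scharr };

// Apply the selected detector to a grayscale frame.
void detectEdges(const cv::Mat& gray, cv::Mat& edges, EdgeAlgorithm algorithm)
{
    switch (algorithm)
    {
        case Canny:
            cv::Canny(gray, edges, 50, 150);         // illustrative thresholds
            break;
        case Sobel:
            cv::Sobel(gray, edges, CV_8U, 1, 1);
            break;
        case Laplacian:
            cv::Laplacian(gray, edges, CV_8U);
            break;
        case Scharr:
            cv::Scharr(gray, edges, CV_8U, 1, 0);
            break;
    }
}
```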
Part 6 – Social interaction
In this part I am going to add Twitter integration to give you the ability to post your processed images and share them on Twitter. The second feature I will add is video recording: you’ll be able to record a video from the processed frames and share it too.
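On iOS the recording will most likely go through AVFoundation, but as a library-agnostic sketch of the idea, here is how appending processed frames to a video could look with OpenCV's own cv::VideoWriter. The file name, codec, FPS and frame size are placeholders.

```cpp
#include <opencv2/highgui/highgui.hpp>

// Sketch: append processed frames to a video file. Path, codec, FPS and
// size are placeholders; the actual app may use AVAssetWriter instead.
cv::VideoWriter writer("processed.avi",
                       CV_FOURCC('M', 'J', 'P', 'G'),
                       30.0,                  // frames per second
                       cv::Size(640, 480));   // must match the frame size

void recordFrame(const cv::Mat& processedFrame)
{
    if (writer.isOpened())
        writer.write(processedFrame);  // expects a BGR frame of the size above
}
```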
Part 7 – New samples
There are a lot of image processing algorithms in this world! Edge detection is good, but it gets boring to see the same edges all the time. The new sample, “Video tracking”, should be a good starting point if you would like to learn how to find correspondences in a video stream. It demonstrates the following algorithms for image-to-image correspondence (a short KLT sketch follows the list):
- KLT tracking
- Block matching tracking
- Key point matching using template matching
- Key point matching using BRIEF, ORB and FREAK descriptors
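As a teaser for the first of these, here is a minimal sketch of KLT (pyramidal Lucas-Kanade) tracking between two consecutive grayscale frames. The parameter values are illustrative defaults, not the sample's final settings.

```cpp
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

// Track corners from the previous frame into the current one with KLT.
void trackKLT(const cv::Mat& prevGray, const cv::Mat& currGray)
{
    std::vector<cv::Point2f> prevPoints, currPoints;
    cv::goodFeaturesToTrack(prevGray, prevPoints, 200, 0.01, 10);

    std::vector<unsigned char> status;
    std::vector<float> error;
    cv::calcOpticalFlowPyrLK(prevGray, currGray,
                             prevPoints, currPoints,
                             status, error);

    // Keep only the points that were tracked successfully.
    std::vector<cv::Point2f> tracked;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i])
            tracked.push_back(currPoints[i]);
}
```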
Starting from this part I expect to add another new sample every month or so. Of course, your contributions are welcome!
Part 8 – Polishing UI
Let’s make our application look professional! We’ll add cool graphics, animations and stunning transitions, and follow the iOS Human Interface Guidelines. User experience is always a top priority.
Part 9 – Optimization
Using the Xcode profiler lets you identify bottlenecks in your code and optimize them to gain a better FPS. Since our optimization mainly targets the image processing routines, we will optimize our code using ARM assembly and ARM NEON to get the maximum out of the CPU.
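To show what NEON code looks like, here is a hedged sketch of a BGRA-to-grayscale conversion with NEON intrinsics, processing eight pixels per iteration. The fixed-point weights and channel layout are assumptions for illustration; the tutorial's actual optimized routine may differ.

```cpp
#include <arm_neon.h>
#include <stdint.h>

// Sketch: convert BGRA pixels to grayscale, 8 pixels per iteration.
// Fixed-point luma weights (x256): gray = (29*B + 150*G + 77*R) >> 8.
void bgraToGrayNeon(const uint8_t* src, uint8_t* dst, int pixelCount)
{
    const uint8x8_t wB = vdup_n_u8(29);
    const uint8x8_t wG = vdup_n_u8(150);
    const uint8x8_t wR = vdup_n_u8(77);

    int i = 0;
    for (; i <= pixelCount - 8; i += 8)
    {
        uint8x8x4_t bgra = vld4_u8(src + 4 * i);      // de-interleave B,G,R,A
        uint16x8_t acc = vmull_u8(bgra.val[0], wB);   // 29 * B
        acc = vmlal_u8(acc, bgra.val[1], wG);         // + 150 * G
        acc = vmlal_u8(acc, bgra.val[2], wR);         // + 77 * R
        vst1_u8(dst + i, vshrn_n_u16(acc, 8));        // >> 8, narrow, store
    }

    // Plain C fallback for the remaining pixels.
    for (; i < pixelCount; ++i)
        dst[i] = (29 * src[4*i] + 150 * src[4*i+1] + 77 * src[4*i+2]) >> 8;
}
```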
Part 10 – Release
In this part we finish version 1.0 of the OpenCV Tutorial and submit it to the App Store. I’ll cover the details of preparing the application for submission: profiling, memory leak detection and the other necessary steps you have to perform before you see your app in the App Store.