In this post, we will learn how to implement a simple video stabilizer using a technique called Point Feature Matching in the OpenCV library. We will discuss the algorithm and share the code (in Python and C++) to design a simple stabilizer using this method in OpenCV. This post's code is inspired by the work presented by Nghia Ho here and a post from my website.
Example of Low-frequency camera motion in video
Video stabilization refers to a family of methods used to reduce the effect of camera motion on the final video. The motion of the camera could be a translation (i.e. movement in the x, y, or z direction) or a rotation (yaw, pitch, roll).
The need for video stabilization spans many domains.
It is extremely important in consumer and professional videography. Therefore, many different mechanical, optical, and algorithmic solutions exist. Even in still image photography, stabilization can help take handheld pictures with long exposure times.
In medical diagnostic applications like endoscopy and colonoscopy, videos need to be stabilized to determine the exact location and width of the problem.
Similarly, in military applications, videos captured by aerial vehicles on a reconnaissance flight need to be stabilized for localization, navigation, target tracking, etc. The same applies to robotic applications.
Video stabilization approaches include mechanical, optical, and digital stabilization methods. These are discussed briefly below:

1. Mechanical video stabilization: motion sensed by gyroscopes and accelerometers is used to physically move the image sensor or the whole camera (e.g. with a gimbal) to compensate for shake.
2. Optical video stabilization: elements inside the lens are moved to compensate for camera motion, so the image stays steady on the sensor.
3. Digital video stabilization: no special hardware is needed; the camera motion is estimated and compensated for in software after capture, which is the approach covered in this post.
We will learn a fast and robust implementation of a digital video stabilization algorithm in this post. It is based on a two-dimensional motion model where we apply a Euclidean (a.k.a Similarity) transformation incorporating translation, rotation, and scaling.
As you can see in the image above, in a Euclidean motion model, a square in an image can transform to any other square with a different location, size or rotation. It is more restrictive than affine and homography transforms but is adequate for motion stabilization because the camera movement between successive frames of a video is usually small.
This method involves tracking a few feature points between two consecutive frames. The tracked features allow us to estimate the motion between frames and compensate for it.
The flowchart below shows the basic steps.
Block Diagram
Let’s go over the steps.
First, let’s complete the setup for reading the input video and writing the output video. The comments in the code explain every line.
Python
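A minimal sketch of this setup is shown below; the input and output file names (video.mp4, video_out.avi) and the MJPG codec are illustrative choices, not fixed by the method.

```python
import numpy as np
import cv2

# Read input video (illustrative file name)
cap = cv2.VideoCapture('video.mp4')

# Get frame count
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Get width and height of the video stream
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Get frames per second
fps = cap.get(cv2.CAP_PROP_FPS)

# Define the codec for the output video (MJPG chosen for portability)
fourcc = cv2.VideoWriter_fourcc(*'MJPG')

# Set up the output video writer
out = cv2.VideoWriter('video_out.avi', fourcc, fps, (w, h))
```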
For video stabilization, we need to capture two frames of a video, estimate motion between the frames, and finally correct the motion.
Python
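A sketch of this step, continuing from the setup above: read the first frame and convert it to grayscale so feature points can be detected in it.

```python
# Read the first frame
success, prev = cap.read()

# Convert frame to grayscale
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
```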
This is the most crucial part of the algorithm. We will iterate over all the frames, and find the motion between the current frame and the previous frame. It is not necessary to know the motion of each and every pixel. The Euclidean motion model requires that we know the motion of only 2 points in the two frames. However, in practice, it is a good idea to find the motion of 50-100 points, and then use them to robustly estimate the motion model.
3.1 Good Features to Track
The question now is: which points should we choose for tracking? Keep in mind that tracking algorithms use a small patch around a point to track it. Such tracking algorithms suffer from the aperture problem, as explained in the video below.
So, smooth regions are bad for tracking and textured regions with lots of corners are good. Fortunately, OpenCV has a fast feature detector that detects features that are ideal for tracking. It is called goodFeaturesToTrack (no kidding!).
3.2 Lucas-Kanade Optical Flow
Once we have found good features in the previous frame, we can track them in the next frame using an algorithm called Lucas-Kanade Optical Flow named after the inventors of the algorithm.
It is implemented using the function calcOpticalFlowPyrLK in OpenCV. In the name calcOpticalFlowPyrLK, LK stands for Lucas-Kanade, and Pyr stands for pyramid. An image pyramid in computer vision is used to process an image at different scales (resolutions).
calcOpticalFlowPyrLK may not be able to calculate the motion of all the points for a variety of reasons. For example, a feature point in the current frame could get occluded by another object in the next frame. Fortunately, as you will see in the code below, the status flag returned by calcOpticalFlowPyrLK can be used to filter out these points.
3.3 Estimate Motion
To recap, in step 3.1, we found good features to track in the previous frame. In step 3.2, we used optical flow to track the features. In other words, we found the location of the features in the current frame, and we already knew the location of the features in the previous frame. So we can use these two sets of points to find the rigid (Euclidean) transformation that maps the previous frame to the current frame. This is done using the function estimateRigidTransform (in recent versions of OpenCV this function has been deprecated, and estimateAffinePartial2D can be used instead).
Once we have estimated the motion, we can decompose it into x and y translation and rotation (angle). We store these values in an array so we can smooth them in a later step.
The code below goes over steps 3.1 to 3.3. Make sure to read the comments in the code to follow along.
Python
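A sketch of the main loop covering steps 3.1 to 3.3 is shown below. The feature-detector parameters (maxCorners=200, qualityLevel=0.01, minDistance=30, blockSize=3) are illustrative values, and estimateAffinePartial2D is used in place of the deprecated estimateRigidTransform.

```python
# Pre-allocate storage for the per-frame motion: (dx, dy, da) for each frame pair
transforms = np.zeros((n_frames - 1, 3), np.float32)

for i in range(n_frames - 2):
    # 3.1 Detect feature points that are good to track in the previous frame
    prev_pts = cv2.goodFeaturesToTrack(prev_gray,
                                       maxCorners=200,
                                       qualityLevel=0.01,
                                       minDistance=30,
                                       blockSize=3)

    # Read the next frame
    success, curr = cap.read()
    if not success:
        break
    curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

    # 3.2 Track the feature points into the current frame using Lucas-Kanade optical flow
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)

    # Keep only the points that were tracked successfully
    idx = np.where(status == 1)[0]
    prev_pts = prev_pts[idx]
    curr_pts = curr_pts[idx]

    # 3.3 Estimate the Euclidean (similarity) transform between the two point sets
    m, _ = cv2.estimateAffinePartial2D(prev_pts, curr_pts)
    if m is None:
        # Fall back to the identity transform if estimation fails
        m = np.eye(2, 3, dtype=np.float32)

    # Decompose the transform into translation and rotation angle
    dx = m[0, 2]
    dy = m[1, 2]
    da = np.arctan2(m[1, 0], m[0, 0])

    # Store the motion for this frame pair
    transforms[i] = [dx, dy, da]

    # The current frame becomes the previous frame for the next iteration
    prev_gray = curr_gray
```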
C++
In the C++ implementation, we first define a few classes that will help us store the estimated motion vectors. The TransformParam class stores the motion information (dx: motion in x, dy: motion in y, and da: change in angle) and provides a method getTransform to convert this motion into a transformation matrix.
We then loop over the frames and perform steps 3.1 to 3.3, just as in the Python version.
In the previous step, we estimated the motion between the frames and stored the motion parameters in an array. We now need to find the trajectory of motion by cumulatively adding the differential motion estimated in the previous step.
Step 4.1 : Calculate trajectory
In this step, we will add up the motion between the frames to calculate the trajectory. Our ultimate goal is to smooth out this trajectory.
Python
In Python, it is easily achieved using cumsum (cumulative sum) in numpy.
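A sketch, assuming the per-frame motions are stored in the transforms array from the previous step:

```python
# Compute the trajectory as the cumulative sum of the per-frame motion (x, y, angle)
trajectory = np.cumsum(transforms, axis=0)
```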
C++
In C++, we define a class called Trajectory to store the cumulative sum of the transformation parameters.
We also define a function cumsum that takes in a vector of TransformParam objects and returns the trajectory by performing a cumulative sum of the differential motion parameters dx, dy, and da (angle).
Step 4.2 : Calculate smooth trajectory
In the previous step, we calculated the trajectory of motion. So we have three curves that show how the motion (x, y, and angle) changes over time.
In this step, we will show how to smooth these three curves.
The easiest way to smooth any curve is to use a moving average filter. As the name suggests, a moving average filter replaces the value of a function at a point by the average of its neighbors inside a window. Let's look at an example.
Let's say we have stored a curve in an array c, so the points on the curve are c[0], …, c[n-1]. Let f be the smooth curve we obtain by filtering c with a moving average filter of width 5.
The k-th element of this curve is calculated using

f[k] = ( c[k-2] + c[k-1] + c[k] + c[k+1] + c[k+2] ) / 5
As you can see, the values of the smooth curve are the values of the noisy curve averaged over a small window. The figure below shows an example of the noisy curve on the left, smoothed using a box filter of size 5 on the right.
Python
In the Python implementation, we define a moving average filter that takes in any curve (i.e. a 1-D array of numbers) as input and returns the smoothed version of the curve.
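A sketch of such a filter; the function name movingAverage and its radius parameter (half the window size) are illustrative:

```python
def movingAverage(curve, radius):
    window_size = 2 * radius + 1
    # Define the box filter
    f = np.ones(window_size) / window_size
    # Pad the boundaries of the curve so the output has the same length
    curve_pad = np.pad(curve, (radius, radius), 'edge')
    # Apply convolution
    curve_smoothed = np.convolve(curve_pad, f, mode='same')
    # Remove the padding
    curve_smoothed = curve_smoothed[radius:-radius]
    return curve_smoothed
```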
We also define a function that takes in the trajectory and performs smoothing on the three components.
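A sketch of this function; the SMOOTHING_RADIUS value of 50 is an illustrative choice (a larger radius gives a smoother result at the cost of more cropping):

```python
SMOOTHING_RADIUS = 50  # illustrative value

def smooth(trajectory):
    smoothed_trajectory = np.copy(trajectory)
    # Filter the x, y and angle curves independently
    for i in range(3):
        smoothed_trajectory[:, i] = movingAverage(trajectory[:, i], radius=SMOOTHING_RADIUS)
    return smoothed_trajectory
```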
And, here is the final usage.
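The usage might look like this:

```python
# Smooth the cumulative trajectory computed in step 4.1
smoothed_trajectory = smooth(trajectory)
```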
C++
In the C++ version, we define a function called smooth that calculates the smoothed moving average trajectory.
And we use it in the main function.
Step 4.3 : Calculate smooth transforms
So far, we have obtained a smooth trajectory. In this step, we will use the smooth trajectory to obtain smooth transforms that can be applied to the frames of the video to stabilize it.
This is done by finding the difference between the smooth trajectory and the original trajectory and adding this difference back to the original transforms.
Python
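A sketch of this step, using the arrays defined earlier:

```python
# Difference between the smoothed trajectory and the original trajectory
difference = smoothed_trajectory - trajectory

# Add the difference back to the original per-frame transforms
transforms_smooth = transforms + difference
```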
We are almost done. All we need to do now is to loop over the frames and apply the transforms we just calculated.
If we have a motion specified as (dx, dy, da), the corresponding 2x3 transformation matrix is given by

[ cos(da)  -sin(da)  dx ]
[ sin(da)   cos(da)  dy ]
Read the comments in the code to follow along.
Python
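A sketch of this final loop, reusing the variables defined in the earlier steps (fixBorder is defined in Step 5.1 below):

```python
# Reset the video stream to the first frame
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)

# Write the stabilized frames
for i in range(n_frames - 2):
    # Read the next frame
    success, frame = cap.read()
    if not success:
        break

    # Look up the smoothed motion for this frame
    dx, dy, da = transforms_smooth[i]

    # Build the 2x3 transformation matrix from (dx, dy, da)
    m = np.zeros((2, 3), np.float32)
    m[0, 0] = np.cos(da)
    m[0, 1] = -np.sin(da)
    m[1, 0] = np.sin(da)
    m[1, 1] = np.cos(da)
    m[0, 2] = dx
    m[1, 2] = dy

    # Warp the frame according to the smoothed transform
    frame_stabilized = cv2.warpAffine(frame, m, (w, h))

    # Crop away black border artifacts (see Step 5.1 below)
    frame_stabilized = fixBorder(frame_stabilized)

    # Write the stabilized frame to the output video
    out.write(frame_stabilized)

# Release the input and output streams
cap.release()
out.release()
```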
Step 5.1 : Fix border artifacts
When we stabilize a video, we may see some black boundary artifacts. This is expected because to stabilize the video, a frame may have to shrink in size.
We can mitigate the problem by scaling the video about its center by a small amount (e.g. 4%).
The function fixBorder below shows the implementation. We use getRotationMatrix2D because it scales and rotates the image without moving the center of the image. All we need to do is call this function with 0 rotation and scale 1.04 ( i.e. 4% upscale).
Python
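A sketch of what this function might look like:

```python
def fixBorder(frame):
    s = frame.shape
    # Scale the frame by 4% about its center (rotation angle 0, scale 1.04)
    T = cv2.getRotationMatrix2D((s[1] / 2, s[0] / 2), 0, 1.04)
    frame = cv2.warpAffine(frame, T, (s[1], s[0]))
    return frame
```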
Left: Input video. Right: Stabilized video.
The result of the stabilization code we have shared is shown above. Our objective was to reduce the motion significantly, but not to eliminate it completely.
We leave it to the reader to think of a modification of the code that will eliminate motion between frames completely. What could be the side effects if you try to eliminate all camera motion?
The current method works only on a fixed-length video, not on a real-time feed. It would have to be modified heavily to attain real-time performance, which is out of scope for this post, but it is achievable; more information can be found here.
Pros
The method is simple and fast, has a low memory footprint, and copes well with low-frequency motion (slower vibrations).
Cons
It performs poorly against high-frequency perturbations and heavy motion blur, and it does not correct rolling shutter distortion.
If you liked this article and would like to download the code (C++ and Python) and example images used in this post, please subscribe to our newsletter. You will also receive a free Computer Vision Resource Guide. In our newsletter, we share OpenCV tutorials and examples written in C++/Python, and Computer Vision and Machine Learning algorithms and news.