OpenCV Implementation of Moving Object Tracking with the KLT Optical Flow Method

I have recently been studying KLT optical flow for tracking moving objects. Below is a simplified implementation of tracking a moving object in a video, derived from the sample code in Learning OpenCV 3 and the OpenCV 3 Cookbook.

Introduction

Optical flow is the instantaneous velocity of the pixel motion that a moving object in space produces on the observer's imaging plane. It uses the temporal variation of pixels in an image sequence and the correlation between adjacent frames to establish correspondences between the previous frame and the current frame, and from these correspondences computes the motion of objects between adjacent frames. In general, optical flow arises from the motion of foreground objects in the scene, the motion of the camera, or both.

Principle

For a detailed explanation of the principle, see these two blog posts; the core idea is also sketched briefly after the list:

  • https://www.cnblogs.com/mthoutai/p/7150625.html
  • https://www.cnblogs.com/moondark/archive/2012/05/12/2497391.html
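
As a brief sketch of the underlying idea (the standard Lucas-Kanade derivation, not tied to either post): assuming the brightness of a tracked point stays constant between frames and its displacement (u, v) is small, a first-order Taylor expansion of I(x+u, y+v, t+1) = I(x, y, t) gives the optical flow constraint equation

$$I_x u + I_y v + I_t = 0$$

A single pixel provides only one equation for two unknowns (the aperture problem), so Lucas-Kanade additionally assumes the flow is constant inside a small window W and solves the resulting over-determined system by least squares:

$$\begin{bmatrix} u \\ v \end{bmatrix} = \left( \sum_{p \in W} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \right)^{-1} \left( -\sum_{p \in W} \begin{bmatrix} I_x I_t \\ I_y I_t \end{bmatrix} \right)$$

The KLT tracker applies this to a set of corner points (where the 2x2 matrix above is well conditioned) on an image pyramid, which is exactly what goodFeaturesToTrack and calcOpticalFlowPyrLK do in the code below.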

Code

  • Learning OpenCV 3 implementation
    The source code given in Learning OpenCV 3 only demonstrates tracking between two still images, but it covers the complete KLT tracking pipeline (a minimal sketch of this two-image case is given after the full listing below). The source code and test images can be downloaded from the official repository: https://github.com/oreillymedia/Learning-OpenCV-3_examples
    Result produced by the official sample code:
    [Figure: KLT tracking result from the Learning OpenCV 3 sample code]
  • Simplified implementation based on the OpenCV 3 Cookbook
    The source code given in the OpenCV 3 Cookbook is written to support an entire chapter, so its overall framework is fairly tightly coupled and somewhat hard to follow. Here it has been simplified and the KLT tracking part extracted on its own, for reference:
#include <iostream>
#include <vector>
#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
using namespace std;
using namespace cv;
int main(int argc, char* argv[]) {

	cv::Mat output;
	cv::Mat gray;			// current gray-level image
	cv::Mat gray_prev;		// previous gray-level image
	std::vector<cv::Point2f> points[2]; // tracked features from 0->1
	std::vector<cv::Point2f> initial;   // initial position of tracked points

	std::vector<uchar> status; // status of tracked features
	std::vector<float> err;    // error in tracking

	cv::VideoCapture capture("bike.avi");
	if (!capture.isOpened())
	{
		std::cout << "Failed to open bike.avi" << std::endl;
		return -1;
	}

	Mat frame;

	while (true)
	{
		capture >> frame;
		if (frame.empty())   // stop when the video ends
			break;

		// convert to gray-level image
		cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
		frame.copyTo(output);

		// 1. detect the points
		std::vector<cv::Point2f> features;  // detected features
		int max_count = 500;	  // maximum number of features to detect
		double qlevel = 0.01;    // quality level for feature detection
		double minDist = 10.0;   // minimum distance between two feature points

		if (points[0].size() <= 10)
		{
			cv::goodFeaturesToTrack(gray, // the image 
				features,   // the output detected features
				max_count,  // the maximum number of features 
				qlevel,     // quality level
				minDist);   // min distance between two features

			// add the detected features to the currently tracked features
			points[0].insert(points[0].end(), features.begin(), features.end());
			initial.insert(initial.end(), features.begin(), features.end());
		}
		// for first image of the sequence
		if (gray_prev.empty())
			gray.copyTo(gray_prev);

		// 2. track features
		cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.01);
		cv::calcOpticalFlowPyrLK(gray_prev, gray, // 2 consecutive images
								 points[0], // input point positions in the first image
								 points[1], // output point positions in the second image
								 status,    // tracking success
								 err,       // tracking error
								 cv::Size(31, 31), // search window size at each pyramid level
								 3,                // maximum pyramid level
								 criteria);        // termination criteria of the iterative search

		// 3. loop over the tracked points to reject the undesirables
		int k = 0;
		for (size_t i = 0; i < points[1].size(); i++) {
			// keep the point only if it was tracked successfully and moved by more than 2 pixels
			if (status[i] &&
				(std::abs(points[0][i].x - points[1][i].x) +
				 std::abs(points[0][i].y - points[1][i].y)) > 2)
			{
				// keep this point in vector
				initial[k] = initial[i];
				points[1][k++] = points[1][i];
			}
		}
		// eliminate unsuccessful points
		points[1].resize(k);
		initial.resize(k);

		// 4. draw all tracked points
		cv::RNG rng;  // default seed each frame, so point i keeps the same colours across frames
		for (size_t i = 0; i < points[1].size(); i++) {
			// draw the trajectory line and the current point position
			cv::line(output, initial[i], points[1][i],
					 cv::Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2, 8, 0);
			cv::circle(output, points[1][i], 3,
					   cv::Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), -1);
		}
		
		// 5. current points and image become previous ones
		std::swap(points[1], points[0]);
		cv::swap(gray_prev, gray);

		cv::imshow("video_processing", output);
		if (cv::waitKey(80) >= 0)
		{
			break;
		}

	}

	cv::waitKey();
	std::cin.get();
	return 0;
}
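
For comparison, below is a minimal sketch of the two-image case covered by the Learning OpenCV 3 example mentioned above: detect corners in the first frame, track them into the second with pyramidal Lucas-Kanade, and draw the displacements. The file names frame0.png and frame1.png are placeholders, not the images shipped with the book.

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

int main() {
	// load two consecutive frames as gray-level images (file names are placeholders)
	cv::Mat imgA = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
	cv::Mat imgB = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
	if (imgA.empty() || imgB.empty())
		return -1;

	// 1. detect good features (corners) to track in the first image
	std::vector<cv::Point2f> cornersA, cornersB;
	cv::goodFeaturesToTrack(imgA, cornersA, 500, 0.01, 10.0);

	// 2. track them into the second image with pyramidal Lucas-Kanade
	std::vector<uchar> status;
	std::vector<float> err;
	cv::calcOpticalFlowPyrLK(imgA, imgB, cornersA, cornersB, status, err,
							 cv::Size(31, 31), 3);

	// 3. draw the displacement of every successfully tracked point
	cv::Mat output;
	cv::cvtColor(imgB, output, cv::COLOR_GRAY2BGR);
	for (size_t i = 0; i < cornersA.size(); i++) {
		if (!status[i])
			continue;
		cv::line(output, cornersA[i], cornersB[i], cv::Scalar(0, 255, 0), 2);
		cv::circle(output, cornersB[i], 3, cv::Scalar(0, 0, 255), -1);
	}

	cv::imshow("two-image KLT", output);
	cv::waitKey();
	return 0;
}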

The code above and the original project files can also be downloaded here: OpenCV source implementation of KLT optical flow target tracking

Results

Final result of running the code: https://v.qq.com/x/page/q1351lzflgq.html
