I have recently been studying the KLT optical flow method for tracking moving targets. Below is my implementation of tracking moving objects in a video, simplified from the relevant sample code in Learning OpenCV 3 and the OpenCV 3 Cookbook.
Optical flow is the instantaneous velocity of the pixel motion that a moving object in space produces on the imaging plane. It is a method that uses the temporal change of pixels in an image sequence, together with the correlation between adjacent frames, to find the correspondence between the previous frame and the current frame and thereby compute the motion of objects between frames. In general, optical flow is caused by the motion of foreground objects in the scene, by the motion of the camera, or by both.
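Before diving into the code, here is a minimal sketch of the math behind the Lucas-Kanade (KLT) method, in the standard textbook notation (I is the image intensity, (u, v) the flow at a pixel); this is the generic derivation, not something specific to the code below. The starting point is brightness constancy, i.e. a pixel is assumed to keep its intensity as it moves:

$$I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t) = I(x, y, t)$$

A first-order Taylor expansion turns this into the optical flow constraint equation, a single equation in the two unknowns (u, v):

$$I_x u + I_y v + I_t = 0$$

Lucas-Kanade resolves the ambiguity by assuming the flow is constant over a small window W around each feature and solving the resulting over-determined system in the least-squares sense:

$$\begin{bmatrix} \sum_W I_x^2 & \sum_W I_x I_y \\ \sum_W I_x I_y & \sum_W I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} \sum_W I_x I_t \\ \sum_W I_y I_t \end{bmatrix}$$

This 2x2 system is only well conditioned where the matrix on the left has two large eigenvalues, i.e. at corner-like points, which is exactly what goodFeaturesToTrack (the Shi-Tomasi detector) selects in the code below.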
For a more detailed explanation of the theory, see these two blog posts:
#include <iostream>
#include <vector>
#include <cmath>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/tracking.hpp>
using namespace std;
using namespace cv;
int main(int argc, char* argv[]) {
    cv::Mat output;
    cv::Mat gray;                        // current gray-level image
    cv::Mat gray_prev;                   // previous gray-level image
    std::vector<cv::Point2f> points[2];  // tracked features from 0->1
    std::vector<cv::Point2f> initial;    // initial position of tracked points
    std::vector<uchar> status;           // status of tracked features
    std::vector<float> err;              // error in tracking
    cv::VideoCapture capture("bike.avi");
    if (!capture.isOpened())
    {
        return 0;
    }
    Mat frame;
    while (1)
    {
        capture >> frame;
        if (frame.empty())  // stop when the video ends
            break;
        // convert to gray-level image
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        frame.copyTo(output);
        // 1. detect the points
        std::vector<cv::Point2f> features;  // detected features
        int max_count = 500;                // maximum number of features to detect
        double qlevel = 0.01;               // quality level for feature detection
        double minDist = 10.0;              // minimum distance between two feature points
        if (points[0].size() <= 10)
        {
            cv::goodFeaturesToTrack(gray,   // the image
                features,                   // the output detected features
                max_count,                  // the maximum number of features
                qlevel,                     // quality level
                minDist);                   // min distance between two features
            // add the detected features to the currently tracked features
            points[0].insert(points[0].end(), features.begin(), features.end());
            initial.insert(initial.end(), features.begin(), features.end());
        }
        // for first image of the sequence
        if (gray_prev.empty())
            gray.copyTo(gray_prev);
        // 2. track features
        TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 0.01);
        int flags = 0;
        cv::calcOpticalFlowPyrLK(gray_prev, gray,  // 2 consecutive images
            points[0],     // input point positions in the first image
            points[1],     // output point positions in the second image
            status,        // tracking success
            err,           // tracking error
            Size(31, 31),  // search window size at each pyramid level
            3,             // maximal pyramid level
            criteria, flags);
        // 3. loop over the tracked points to reject the undesirables
        int k = 0;
        for (int i = 0; i < points[1].size(); i++) {
            // keep the point only if it was tracked successfully and has
            // moved by more than 2 pixels (stationary points are dropped)
            if (status[i] &&
                (std::abs(points[0][i].x - points[1][i].x) +
                 std::abs(points[0][i].y - points[1][i].y)) > 2)
            {
                // keep this point in vector
                initial[k] = initial[i];
                points[1][k++] = points[1][i];
            }
        }
        // eliminate unsuccessful points
        points[1].resize(k);
        initial.resize(k);
// 4. draw all tracked points
RNG rng;
for (int i = 0; i < points[1].size(); i++) {
// draw line and circle
cv::line(output, initial[i], points[1][i], cv::Scalar(rng.uniform(0,255), rng.uniform(0, 255), rng.uniform(0, 255)),2,8,0);
cv::circle(output, points[1][i], 3, cv::Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), -1);
}
        // 5. current points and image become previous ones
        std::swap(points[1], points[0]);
        cv::swap(gray_prev, gray);
        cv::imshow("video_processing", output);
        if (cv::waitKey(80) >= 0)
        {
            break;
        }
    }
    cv::waitKey();
    std::cin.get();
    return 0;
}
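The rejection test in step 3 keeps a feature only when calcOpticalFlowPyrLK reports success and the point has drifted by more than 2 pixels between the two frames, so stationary background corners are filtered out and only moving points get drawn. If you prefer the class-based structure used in the books, that test can be pulled out into a small predicate; a minimal sketch (the helper name and signature are my own, not taken from the books):

#include <cmath>
#include <opencv2/core/core.hpp>

// Hypothetical helper isolating the acceptance test from step 3 above:
// a point survives only if LK tracked it successfully and it moved more
// than 2 pixels (city-block distance) between the two frames.
static bool acceptTrackedPoint(uchar tracked, const cv::Point2f& prev, const cv::Point2f& curr)
{
    return tracked != 0 &&
           (std::abs(prev.x - curr.x) + std::abs(prev.y - curr.y)) > 2;
}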
The code above and the original source file can also be downloaded here: OpenCV source implementation of KLT optical flow target tracking
Final result of a run: https://v.qq.com/x/page/q1351lzflgq.html