Optical Flow Tracking

Contents

  • 1. Sparse optical flow: the Lucas-Kanade method
    • Principle
    • Example program
  • 2. Dense optical flow: the Farneback algorithm

1. Sparse Optical Flow: The Lucas-Kanade Method

Principle

Assumptions

  1. Brightness constancy. A pixel belonging to a moving object keeps the same appearance from frame to frame. For grayscale images, this means a tracked pixel's intensity does not change as it is followed across frames.
  2. Temporal persistence, or "small motion". The image motion changes slowly over time. In practice this means the time step is small relative to the motion in the image, so the target's displacement between consecutive frames is small.
  3. Spatial coherence. Neighboring points on the same surface in a scene have similar motion, and their projections onto the image plane also lie close together.
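Assumptions 1 and 2 together yield the classic optical flow constraint equation. This derivation is standard background and is not spelled out in the original text:

```latex
% Brightness constancy: intensity is preserved along the motion
I(x, y, t) = I(x + dx,\ y + dy,\ t + dt)

% First-order Taylor expansion (valid under the small-motion assumption):
I_x\, dx + I_y\, dy + I_t\, dt = 0
\quad\Longrightarrow\quad
I_x u + I_y v + I_t = 0,
\qquad u = \frac{dx}{dt},\quad v = \frac{dy}{dt}
```

Assumption 3 (spatial coherence) is what lets Lucas-Kanade stack this single equation over a whole window of pixels and solve the resulting over-determined system for (u, v) by least squares.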
void calcOpticalFlowPyrLK(InputArray prevImg,
                          InputArray nextImg,
                          InputArray prevPts,
                          InputOutputArray nextPts,
                          OutputArray status,
                          OutputArray err,
                          Size winSize = Size(21, 21),
                          int maxLevel = 3,
                          TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 0.01),
                          int flags = 0,
                          double minEigThreshold = 1e-4
);

Parameters:

  • prevImg: first 8-bit input image, or a pyramid built from it.
  • nextImg: second input image or pyramid, of the same size and type as prevImg.
  • prevPts: input vector of 2D points for which the flow is computed; point coordinates must be single-precision floating point.
  • nextPts: output vector of 2D points containing the calculated new positions of the input features in the second image.
  • status: output status vector; each element is set to 1 if the flow for the corresponding feature was found, 0 otherwise.
  • err: output vector of errors, one per feature.
  • winSize: size of the search window at each pyramid level.
  • maxLevel: 0-based maximal pyramid level number.
  • criteria: termination criteria of the iterative search algorithm (iteration count and/or epsilon).
  • flags: operation flags, e.g. OPTFLOW_USE_INITIAL_FLOW or OPTFLOW_LK_GET_MIN_EIGENVALS.
  • minEigThreshold: minimum eigenvalue of the 2x2 normal matrix of the optical flow equations; features with a smaller value are filtered out.

Example program

int main(int argc, char* argv[])
{
	// Variable declaration and initialization
	// Iterate until the user hits the Esc key
	while(true)
	{
		// Capture the current frame
		cap >> frame;

		// Check if the frame is empty
		if(frame.empty())
			break;
	
		//Resize the frame
		resize(frame, frame, Size(), scalingFactor, scalingFactor, INTER_AREA);
		
		// Copy the input frame
		frame.copyTo(image);

		// Convert the image to grayscale
		cvtColor(image, curGrayImage, COLOR_BGR2GRAY);

		// Check if there are points to track
		if(!trackingPoints[0].empty())
		{
			// Status vector to indicate whether the flow for the corresponding features has been found
			vector<uchar> statusVector;

			// Error vector to indicate the error for the corresponding feature
			vector<float> errorVector;

			// Check if previous image is empty
			if(prevGrayImage.empty())
			{
				curGrayImage.copyTo(prevGrayImage);
			}

			// Calculate the optical flow using the Lucas-Kanade algorithm
			calcOpticalFlowPyrLK(prevGrayImage, curGrayImage, trackingPoints[0], trackingPoints[1], statusVector, errorVector, windowSize, 3, terminationCriteria, 0, 0.001);

			int count = 0;

			// Minimum distance between any two tracking points
			int minDist = 7;
			for(int i = 0; i < trackingPoints[1].size(); i++)
			{
				if(pointTrackingFlag)
				{
					// If the new point is within 'minDist' distance from an existing point, it will not be tracked
					if(norm(currentPoint - trackingPoints[1][i]) <= minDist)
					{
						pointTrackingFlag = false;
						continue;
					}
				}
				// Check if the status vector is good
				if(!statusVector[i])
					continue;
				trackingPoints[1][count ++] = trackingPoints[1][i];

				// Draw a filled circle for each of the tracking points
				int radius = 8;
				int thickness = 2;
				int lineType = 8;
				circle(image, trackingPoints[1][i], radius, Scalar(0, 255, 0), thickness, lineType);
			}
			trackingPoints[1].resize(count);
		}

		// Refining the location of the feature points
		if(pointTrackingFlag && trackingPoints[1].size() < maxNumPoints)
		{
			vector<Point2f> tempPoints;
			tempPoints.push_back(currentPoint);

			// Function to refine the location of the corners to subpixel accuracy
			// Here 'corner' refers to an image patch of size 'windowSize' and not the actual image pixel
			cornerSubPix(curGrayImage, tempPoints, windowSize, Size(-1, -1), terminationCriteria);
			trackingPoints[1].push_back(tempPoints[0]);
			pointTrackingFlag = false;
		}

		// Display the image with the tracking points
		imshow(windowName, image);

		// Check if the user pressed the Esc key
		char ch = waitKey(10);
		if(ch == 27)
			break;

		// Swap the 'points' vectors to update 'previous' to 'current'
		std::swap(trackingPoints[1], trackingPoints[0]);

		// Swap the images to update previous image to current image
		cv::swap(prevGrayImage, curGrayImage);
	}
	return 0;
}

2. Dense Optical Flow: The Farneback Algorithm
