OpenVINS Code Walkthrough (3) - Frontend Tracking (Track)

Article by anson2004110 on Zhihu: https://zhuanlan.zhihu.com/p/97287380

 

Table of Contents

0. Tracker initialization in VioManager.cpp

1. TrackBase

1.1 Member variables

1.2 undistort_point: undistortion for the pinhole and fisheye models

1.2.1 undistort_point_brown (pinhole model)

1.2.2 undistort_point_fisheye (fisheye model)

1.3 set_calibration: updating camera parameters and re-computing features' undistorted 2D coordinates

2. TrackKLT: public TrackBase

2.1 Initialization

2.2 feed_stereo: stereo tracking

2.3 perform_detection_stereo: detecting new features to replenish the tracked set

2.4 perform_matching: pyramidal optical-flow tracking between two frames plus RANSAC outlier rejection


Main text

0. Tracker initialization in VioManager.cpp

OpenVINS provides three trackers: TrackKLT (optical-flow tracking), TrackDescriptor (descriptor-matching tracking), and TrackAruco (ArUco marker tracking). This article only analyzes the optical-flow tracking pipeline; the descriptor-matching approach is similar.

// Lets make a feature extractor

if(use_klt) {

trackFEATS = new TrackKLT(num_pts,state->options().max_aruco_features,fast_threshold,grid_x,grid_y,min_px_dist);

trackFEATS->set_calibration(camera_calib, camera_fisheye);

} else {

trackFEATS = new TrackDescriptor(num_pts,state->options().max_aruco_features,fast_threshold,grid_x,grid_y,knn_ratio);

trackFEATS->set_calibration(camera_calib, camera_fisheye);

}

// Initialize our aruco tag extractor

if(use_aruco) {

trackARUCO = new TrackAruco(state->options().max_aruco_features,do_downsizing);

trackARUCO->set_calibration(camera_calib, camera_fisheye);

}

1. TrackBase (base class)

1.1 Member variables

/// Database with all our current features

FeatureDatabase *database;

/// If we are a fisheye model or not

std::map<size_t, bool> camera_fisheye;

/// Camera intrinsics in OpenCV format

std::map<size_t, cv::Matx33d> camera_k_OPENCV;

/// Camera distortion in OpenCV format

std::map<size_t, cv::Vec4d> camera_d_OPENCV;

/// Number of features we should try to track frame to frame

int num_features;

/// Mutexs for our last set of image storage (img_last, pts_last, and ids_last)

std::vector<std::mutex> mtx_feeds;

/// Last set of images (use map so all trackers render in the same order)

std::map<size_t, cv::Mat> img_last;

/// Last set of tracked points

std::unordered_map<size_t, std::vector<cv::KeyPoint>> pts_last;

/// Set of IDs of each current feature in the database

std::unordered_map<size_t, std::vector<size_t>> ids_last;

/// Master ID for this tracker (atomic to allow for multi-threading)

std::atomic<size_t> currid;

1.2 undistort_point: undistortion for the pinhole and fisheye models

/**

* @brief Main function that will undistort/normalize a point.

* @param pt_in uv 2x1 point that we will undistort

* @param cam_id id of which camera this point is in

* @return undistorted 2x1 point

* Given a uv point, this will undistort it based on the camera matrices.

* This will call on the model needed, depending on what type of camera it is!

* So if we have fisheye for camera_1 is true, we will undistort with the fisheye model.

* In Kalibr's terms, the non-fisheye is `pinhole-radtan` while the fisheye is the `pinhole-equi` model.

*/

cv::Point2f undistort_point(cv::Point2f pt_in, size_t cam_id) {

// Determine what camera parameters we should use

cv::Matx33d camK = this->camera_k_OPENCV.at(cam_id);

cv::Vec4d camD = this->camera_d_OPENCV.at(cam_id);

// Call on the fisheye if we should!

if (this->camera_fisheye.at(cam_id)) {

return undistort_point_fisheye(pt_in, camK, camD);

}

return undistort_point_brown(pt_in, camK, camD);

}
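As a small usage sketch (the pixel value below is made up for illustration, and trackFEATS is assumed to be a TrackBase-derived tracker as in section 0): undistort_point returns the normalized image coordinate on the z = 1 plane, not a corrected pixel location.

// Hypothetical usage example.
cv::Point2f pt_raw(325.4f, 243.1f);                    // raw (distorted) pixel in camera 0
cv::Point2f pt_norm = trackFEATS->undistort_point(pt_raw, 0);
// pt_norm is approximately ((u - cx)/fx, (v - cy)/fy) with the distortion removed;
// multiply by K again (u' = fx*x + cx, v' = fy*y + cy) if an undistorted pixel is needed.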

1.2.1 undistort_point_brown (pinhole model):

/**

* @brief Undistort function RADTAN/BROWN.

* Given a uv point, this will undistort it based on the camera matrices.

* To equate this to Kalibr's models, this is what you would use for `pinhole-radtan`.

*/

cv::Point2f undistort_point_brown(cv::Point2f pt_in, cv::Matx33d &camK, cv::Vec4d &camD) {

// Convert to opencv format

cv::Mat mat(1, 2, CV_32F);

mat.at<float>(0, 0) = pt_in.x;

mat.at<float>(0, 1) = pt_in.y;

mat = mat.reshape(2); // Nx1, 2-channel

// Undistort it!

cv::undistortPoints(mat, mat, camK, camD);

// Construct our return vector

cv::Point2f pt_out;

mat = mat.reshape(1); // Nx2, 1-channel

pt_out.x = mat.at<float>(0, 0);

pt_out.y = mat.at<float>(0, 1);

return pt_out;

}
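For reference, the radtan (Brown-Conrady) model that cv::undistortPoints inverts here maps a normalized point (x, y), with r^2 = x^2 + y^2 and d = [k1, k2, p1, p2], to:

x_distorted = x*(1 + k1*r^2 + k2*r^4) + 2*p1*x*y + p2*(r^2 + 2*x^2)
y_distorted = y*(1 + k1*r^2 + k2*r^4) + p1*(r^2 + 2*y^2) + 2*p2*x*y

cv::undistortPoints numerically inverts this forward model, so the returned point is the undistorted normalized coordinate.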

1.2.2 undistort_point_fisheye (fisheye model):

/**

* @brief Undistort function FISHEYE/EQUIDISTANT.

* Given a uv point, this will undistort it based on the camera matrices.

* To equate this to Kalibr's models, this is what you would use for `pinhole-equi`.

*/

cv::Point2f undistort_point_fisheye(cv::Point2f pt_in, cv::Matx33d &camK, cv::Vec4d &camD) {

// Convert point to opencv format

cv::Mat mat(1, 2, CV_32F);

mat.at<float>(0, 0) = pt_in.x;

mat.at<float>(0, 1) = pt_in.y;

mat = mat.reshape(2); // Nx1, 2-channel

// Undistort it!

cv::fisheye::undistortPoints(mat, mat, camK, camD);

// Construct our return vector

cv::Point2f pt_out;

mat = mat.reshape(1); // Nx2, 1-channel

pt_out.x = mat.at<float>(0, 0);

pt_out.y = mat.at<float>(0, 1);

return pt_out;

}
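Similarly, the fisheye/equidistant model inverted by cv::fisheye::undistortPoints distorts by incidence angle rather than radius: with r = sqrt(x^2 + y^2), theta = atan(r) and d = [k1, k2, k3, k4],

theta_d = theta*(1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
x_distorted = (theta_d / r)*x,   y_distorted = (theta_d / r)*y

which is Kalibr's pinhole-equi model.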

1.3 set_calibration: updating camera parameters and re-computing features' undistorted 2D coordinates

/**

* @brief Given the camera intrinsic values, this will set what we should normalize points with.

* This will also update the feature database with corrected normalized values.

* Normally this would only be needed if we are optimizing our camera parameters, and thus should re-normalize.

* @param camera_calib Calibration parameters for all cameras [fx,fy,cx,cy,d1,d2,d3,d4]

* @param camera_fisheye Map of camera_id => bool if we should do radtan or fisheye distortion model

* @param correct_active If we should re-undistort active features in our database

*/

void set_calibration(std::map<size_t, Eigen::VectorXd> camera_calib,

std::map<size_t, bool> camera_fisheye, bool correct_active=false) {

 

Step 1: on the first call, construct the internal containers.

if(mtx_feeds.empty() || camera_k_OPENCV.empty() || camera_d_OPENCV.empty() || this->camera_fisheye.empty()) {

// Create our mutex array based on the number of cameras we have

// See https://stackoverflow.com/a/24170141/7718197

std::vector<std::mutex> list(camera_calib.size());

mtx_feeds.swap(list); // swap the buffers; this effectively initializes the mutex array

// Overwrite our fisheye calibration

this->camera_fisheye = camera_fisheye;

// Convert values to the OpenCV format

for (auto const &cam : camera_calib) {

// Assert we are of size eight

assert(cam.second.rows()==8);

// Camera matrix

cv::Matx33d tempK;

tempK(0, 0) = cam.second(0);

tempK(0, 1) = 0;

tempK(0, 2) = cam.second(2);

tempK(1, 0) = 0;

tempK(1, 1) = cam.second(1);

tempK(1, 2) = cam.second(3);

tempK(2, 0) = 0;

tempK(2, 1) = 0;

tempK(2, 2) = 1;

camera_k_OPENCV.insert({cam.first, tempK}); // intrinsic matrix

// Distortion parameters

cv::Vec4d tempD;

tempD(0) = cam.second(4);

tempD(1) = cam.second(5);

tempD(2) = cam.second(6);

tempD(3) = cam.second(7);

camera_d_OPENCV.insert({cam.first, tempD}); // distortion coefficients

}

return;

}

If this is not the first call, the following is executed instead:

// Convert values to the OpenCV format

for (auto const &cam : camera_calib) {

// Lock this image feed

std::unique_lock<std::mutex> lck(mtx_feeds.at(cam.first)); // lock this feed to prevent concurrent modification or use

// Fisheye value

this->camera_fisheye.at(cam.first) = camera_fisheye.at(cam.first);

// Assert we are of size eight

assert(cam.second.rows()==8);

// Camera matrix

cv::Matx33d tempK;

tempK(0, 0) = cam.second(0);

tempK(0, 1) = 0;

tempK(0, 2) = cam.second(2);

tempK(1, 0) = 0;

tempK(1, 1) = cam.second(1);

tempK(1, 2) = cam.second(3);

tempK(2, 0) = 0;

tempK(2, 1) = 0;

tempK(2, 2) = 1;

camera_k_OPENCV.at(cam.first) = tempK;

// Distortion parameters

cv::Vec4d tempD;

tempD(0) = cam.second(4);

tempD(1) = cam.second(5);

tempD(2) = cam.second(6);

tempD(3) = cam.second(7);

camera_d_OPENCV.at(cam.first) = tempD;

}

Step 2: if the flag correct_active == true, recompute the undistorted 2D coordinates of the image features (this is only invoked after an EKF update, once the online-calibrated camera intrinsics have been adjusted).

// If we are calibrating camera intrinsics our normalize coordinates will be stale

// This is because we appended them to the database with the current best guess *at that timestep*

// Thus here since we have a change in calibration, re-normalize all the features we have

if(correct_active) {

// Get all features in this database

std::unordered_map<size_t, Feature*> features_idlookup = database->get_internal_data();

// Loop through and correct each one

for(const auto& pair_feat : features_idlookup) {

// Get our feature

Feature *feat = pair_feat.second;

// Loop through each camera for this feature

for (auto const& meas_pair : feat->timestamps) {

size_t camid = meas_pair.first;

std::unique_lock<std::mutex> lck(mtx_feeds.at(camid)); // lock this feed

for(size_t m=0; m<feat->uvs.at(camid).size(); m++) {

cv::Point2f pt(feat->uvs.at(camid).at(m)(0), feat->uvs.at(camid).at(m)(1));

cv::Point2f pt_n = undistort_point(pt,camid);

feat->uvs_norm.at(camid).at(m)(0) = pt_n.x;

feat->uvs_norm.at(camid).at(m)(1) = pt_n.y;

}

}

}

}

 

2. TrackKLT: public TrackBase

The tracker typically maintains 400-500 tracked points.

/**

* @brief KLT tracking of features.

* This is the implementation of a KLT visual frontend for tracking sparse features.

* We can track either monocular cameras across time (temporally) along with

* stereo cameras which we also track across time (temporally) but track from left to right

* to find the stereo correspondence information also.

* This uses the [calcOpticalFlowPyrLK](https://github.com/opencv/opencv/blob/master/modules/video/src/lkpyramid.cpp) OpenCV function to do the KLT tracking.

*/

2.1 Initialization

/**

* @brief Public constructor with configuration variables

* @param numfeats number of features we want to track (i.e. track 200 points from frame to frame)

* @param numaruco the max id of the arucotags, so we ensure that we start our non-aruco features above this value

* @param fast_threshold FAST detection threshold

* @param gridx size of grid in the x-direction / u-direction

* @param gridy size of grid in the y-direction / v-direction

* @param minpxdist features need to be at least this many pixels away from each other

*/

explicit TrackKLT(int numfeats, int numaruco, int fast_threshold, int gridx, int gridy, int minpxdist) : TrackBase(numfeats, numaruco), threshold(fast_threshold), grid_x(gridx), grid_y(gridy), min_px_dist(minpxdist) {}
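A small construction sketch, using the parameter values quoted elsewhere in this article (FAST threshold 10, a 10x8 detection grid, min_px_dist = 10); num_pts and max_aruco_features below are placeholders, since in OpenVINS they come from the launch/config options:

// Hypothetical values for illustration only.
int num_pts = 400;              // features tracked frame to frame
int max_aruco_features = 1024;  // non-ArUco feature ids start above this value
TrackBase *tracker = new TrackKLT(num_pts, max_aruco_features, 10, 10, 8, 10);
tracker->set_calibration(camera_calib, camera_fisheye); // maps as described in section 1.3
// then, for every synchronized stereo pair:
tracker->feed_stereo(timestamp, img_left, img_right, 0, 1);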

 

2.2 feed_stereo: stereo tracking

void TrackKLT::feed_stereo(double timestamp, cv::Mat &img_leftin, cv::Mat &img_rightin, size_t cam_id_left, size_t cam_id_right) {

Step 1: histogram equalization (image preprocessing).

// Histogram equalize

cv::Mat img_left, img_right;

boost::thread t_lhe = boost::thread(cv::equalizeHist, boost::cref(img_leftin), boost::ref(img_left));

boost::thread t_rhe = boost::thread(cv::equalizeHist, boost::cref(img_rightin), boost::ref(img_right));

t_lhe.join();

t_rhe.join();

Step 2: build the image pyramids (pyr_levels = 3).

// Extract image pyramids (boost seems to require us to put all the arguments even if there are defaults....)

std::vector<cv::Mat> imgpyr_left, imgpyr_right;

boost::thread t_lp = boost::thread(cv::buildOpticalFlowPyramid, boost::cref(img_left),

boost::ref(imgpyr_left), boost::ref(win_size), boost::ref(pyr_levels), false,

cv::BORDER_REFLECT_101, cv::BORDER_CONSTANT, true);

boost::thread t_rp = boost::thread(cv::buildOpticalFlowPyramid, boost::cref(img_right),

boost::ref(imgpyr_right), boost::ref(win_size), boost::ref(pyr_levels),

false, cv::BORDER_REFLECT_101, cv::BORDER_CONSTANT, true);

t_lp.join();

t_rp.join();


Step 3: on the first call, detect features in the left camera, track them into the right camera, and return; on subsequent calls, go on to frame-to-frame tracking below.

// If we didn't have any successful tracks last time, just extract this time

// This also handles, the tracking initalization on the first call to this extractor

if(pts_last[cam_id_left].empty() || pts_last[cam_id_right].empty()) {

// Track into the new image

perform_detection_stereo(imgpyr_left, imgpyr_right, pts_last[cam_id_left], pts_last[cam_id_right], ids_last[cam_id_left], ids_last[cam_id_right]);

// Save the current image and pyramid

img_last[cam_id_left] = img_left.clone();

img_last[cam_id_right] = img_right.clone();

img_pyramid_last[cam_id_left] = imgpyr_left;

img_pyramid_last[cam_id_right] = imgpyr_right;

return;

}

Step 4: for the left and right cameras separately, track features from the previous frame into the current frame with optical flow and reject each camera's bad matches with RANSAC.

// Our return success masks, and predicted new features

std::vector<uchar> mask_ll, mask_rr;

std::vector<cv::KeyPoint> pts_left_new = pts_last[cam_id_left];

std::vector<cv::KeyPoint> pts_right_new = pts_last[cam_id_right];

// Lets track temporally

boost::thread t_ll = boost::thread(&TrackKLT::perform_matching, this, boost::cref(img_pyramid_last[cam_id_left]), boost::cref(imgpyr_left), boost::ref(pts_last[cam_id_left]), boost::ref(pts_left_new), cam_id_left, cam_id_left, boost::ref(mask_ll));

boost::thread t_rr = boost::thread(&TrackKLT::perform_matching, this, boost::cref(img_pyramid_last[cam_id_right]), boost::cref(imgpyr_right),boost::ref(pts_last[cam_id_right]), boost::ref(pts_right_new), cam_id_right, cam_id_right, boost::ref(mask_rr));

Note the returned masks mask_ll and mask_rr.

// Wait till both threads finish

t_ll.join();

t_rr.join();

 

Step 5: then run optical flow from the left to the right camera and again reject bad matches with RANSAC.

// left to right matching

std::vector<uchar> mask_lr;

perform_matching(imgpyr_left, imgpyr_right, pts_left_new, pts_right_new, cam_id_left, cam_id_right, mask_lr);

The outlier-rejection passes of steps 4 and 5 are illustrated in the figure below:

[Figure 1: temporal (left-left, right-right) and stereo (left-right) matching with RANSAC rejection]

Step 6: if tracking fails, start over. (That is, for MSCKF it does not matter if features are not tracked continuously under fast motion or in low-texture scenes: the lost points still take part in the update of the IMU kinematic equations, and the points newly added in the next frame are tracked afresh; only the IMU estimation error becomes somewhat larger.)

 

// If any of our masks are empty, that means we didn't have enough to do ransac, so just return

if(mask_ll.empty() || mask_rr.empty() || mask_lr.empty()) {

// (body omitted in this excerpt: the current images/pyramids are saved, the last

// tracked points and ids are cleared, and the function returns early so tracking restarts)

}

Step 7: keep the points that tracked well.

// Our "good" tracks that survive all three masks (declared just above this loop in the full function)

std::vector<cv::KeyPoint> good_left, good_right;

std::vector<size_t> good_ids_left, good_ids_right;

// Loop through all left points

for(size_t i=0; i<pts_left_new.size(); i++) {

// Ensure we do not have any bad KLT tracks (i.e., points are negative)

if(pts_left_new[i].pt.x < 0 || pts_left_new[i].pt.y < 0 || pts_right_new[i].pt.x < 0 || pts_right_new[i].pt.y < 0)

continue;

// If it is a good track, and also tracked from left to right

if(mask_ll[i] && mask_rr[i] && mask_lr[i]) {

good_left.push_back(pts_left_new[i]);

good_right.push_back(pts_right_new[i]);

good_ids_left.push_back(ids_last[cam_id_left][i]);

good_ids_right.push_back(ids_last[cam_id_right][i]);

}

}

Step 8: update the feature coordinates in the database, both distorted and undistorted, together with the timestamp.

// Update our feature database, with theses new observations

for(size_t i=0; i<good_left.size(); i++) {

// Assert that our IDs are the same (i.e., stereo )

assert(good_ids_left.at(i)==good_ids_right.at(i));

// Try to undistort the point

cv::Point2f npt_l = undistort_point(good_left.at(i).pt, cam_id_left);

cv::Point2f npt_r = undistort_point(good_right.at(i).pt, cam_id_right);

// Append to the database

database->update_feature(good_ids_left.at(i), timestamp, cam_id_left,

good_left.at(i).pt.x, good_left.at(i).pt.y,

npt_l.x, npt_l.y);

database->update_feature(good_ids_left.at(i), timestamp, cam_id_right,

good_right.at(i).pt.x, good_right.at(i).pt.y,

npt_r.x, npt_r.y);

}

FeatureDatabase.h defines a features_idlookup:

/// Our lookup array that allow use to query based on ID

std::unordered_map<size_t, Feature*> features_idlookup;

It stores each feature's unique ID together with that feature's coordinates and timestamps in every frame where it was tracked; it maintains the whole lifetime of every feature.

For update_feature, refer to its definition in the Feature/FeatureDatabase files.
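As a rough sketch (simplified, not the verbatim OpenVINS implementation; locking and exact field types omitted), update_feature looks the id up in features_idlookup, creates the Feature if it is new, and appends the measurement:

// Simplified sketch of FeatureDatabase::update_feature.
void update_feature(size_t id, double timestamp, size_t cam_id,
                    float u, float v, float u_n, float v_n) {
    if(features_idlookup.find(id) == features_idlookup.end()) {
        Feature *feat = new Feature();
        feat->featid = id;
        features_idlookup.insert({id, feat});
    }
    Feature *feat = features_idlookup.at(id);
    feat->timestamps[cam_id].push_back(timestamp);                 // when it was seen
    feat->uvs[cam_id].push_back(Eigen::Vector2f(u, v));            // raw pixel
    feat->uvs_norm[cam_id].push_back(Eigen::Vector2f(u_n, v_n));   // normalized coordinate
}

So a feature tracked for N frames in two cameras accumulates N entries per camera, which is exactly the observation history the MSCKF update later consumes.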

Step 9: save the last-frame information for tracking the next frame.

// Move forward in time

img_last[cam_id_left] = img_left.clone();

img_last[cam_id_right] = img_right.clone();

img_pyramid_last[cam_id_left] = imgpyr_left;

img_pyramid_last[cam_id_right] = imgpyr_right;

pts_last[cam_id_left] = good_left;

pts_last[cam_id_right] = good_right;

ids_last[cam_id_left] = good_ids_left;

ids_last[cam_id_right] = good_ids_right;

 

2.3 perform_detection_stereo: detecting new features to replenish the tracked set

Corners are first extracted in the left camera, and optical flow is then used to find the matching feature locations in the right camera, so the function returns matched left-right feature pairs. The premise is that the stereo baseline is not too large; otherwise the optical-flow window-size parameter needs to be adjusted.

/**

* @brief Detects new features in the current stereo pair

* @param img0pyr left image we will detect features on (first level of pyramid)

* @param img1pyr right image we will detect features on (first level of pyramid)

* @param pts0 left vector of currently extracted keypoints

* @param pts1 right vector of currently extracted keypoints

* @param ids0 left vector of feature ids for each currently extracted keypoint

* @param ids1 right vector of feature ids for each currently extracted keypoint

*

* This does the same logic as the perform_detection_monocular() function, but we also enforce stereo constraints.

* So we detect features in the left image, and then KLT track them onto the right image.

* If we have valid tracks, then we have both the keypoint on the left and its matching point in the right image.

* Will try to always have the "max_features" being tracked through KLT at each timestep.

*/

void perform_detection_stereo(const std::vector<cv::Mat> &img0pyr, const std::vector<cv::Mat> &img1pyr, std::vector<cv::KeyPoint> &pts0, std::vector<cv::KeyPoint> &pts1, std::vector<size_t> &ids0, std::vector<size_t> &ids1);


Step 1: divide the image into a grid and keep at most one feature per grid cell.

// Create a 2D occupancy grid for this current image

// Note that we scale this down, so that each grid point is equal to a set of pixels

// This means that we will reject points that are less than min_px_dist pixels away from existing features

Eigen::MatrixXi grid_2d_current = Eigen::MatrixXi::Zero((int)(img0pyr.at(0).cols / min_px_dist) + 10, (int)(img0pyr.at(0).rows / min_px_dist) + 10); // min_px_dist is 10 pixels on the EuRoC dataset; at 640x480 resolution the image has 64x48 grid cells, with at most one feature per cell

// Iterators over the current points and their ids (declared just above this loop in the full function)

auto it0 = pts0.begin();

auto it1 = pts1.begin();

auto it2 = ids0.begin();

auto it3 = ids1.begin();

while(it0 != pts0.end()) {

// Get current left keypoint

cv::KeyPoint kpt = *it0;

// Check if this keypoint is near another point

if(grid_2d_current((int)(kpt.pt.x/min_px_dist),(int)(kpt.pt.y/min_px_dist)) == 1) { // the cell is already occupied, so remove this feature

it0 = pts0.erase(it0);

it1 = pts1.erase(it1);

it2 = ids0.erase(it2);

it3 = ids1.erase(it3);

continue;

}

// Else we are good, move forward to the next point

grid_2d_current((int)(kpt.pt.x/min_px_dist),(int)(kpt.pt.y/min_px_dist)) = 1; // mark the cell as occupied

it0++;

it1++;

it2++;

it3++;

}

Step 2: extract FAST features only on the first pyramid level, keeping the total at no more than num_features (usually 400-500 points).

// First compute how many more features we need to extract from this image

int num_featsneeded = num_features - (int)pts0.size();

// If we don't need any features, just return

if(num_featsneeded < 1)

return;

// Extract our features (use fast with griding)

std::vector<cv::KeyPoint> pts0_ext;

Grider_FAST::perform_griding(img0pyr.at(0), pts0_ext, num_featsneeded, grid_x, grid_y, threshold, true); // typically grid_x = 10, grid_y = 8, threshold = 10 for the FAST grid
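Grider_FAST::perform_griding is, in essence, FAST run per grid cell so the detections are spread over the whole image; a simplified sketch of the idea follows (the real implementation differs in details such as parallelization over cells):

// Conceptual sketch, not the verbatim Grider_FAST code.
// Needs <opencv2/opencv.hpp>, <algorithm>, <cmath>.
void perform_griding_sketch(const cv::Mat &img, std::vector<cv::KeyPoint> &pts,
                            int num_features, int grid_x, int grid_y, int threshold) {
    int size_x = img.cols / grid_x;
    int size_y = img.rows / grid_y;
    int num_per_cell = (int)std::ceil((double)num_features / (grid_x * grid_y));
    for(int r = 0; r < grid_y; r++) {
        for(int c = 0; c < grid_x; c++) {
            // Run FAST inside this cell only
            cv::Rect roi(c * size_x, r * size_y, size_x, size_y);
            std::vector<cv::KeyPoint> cell_pts;
            cv::FAST(img(roi), cell_pts, threshold, true);
            // Keep the strongest responses, shifted back into full-image coordinates
            std::sort(cell_pts.begin(), cell_pts.end(),
                      [](const cv::KeyPoint &a, const cv::KeyPoint &b) { return a.response > b.response; });
            for(int i = 0; i < (int)cell_pts.size() && i < num_per_cell; i++) {
                cv::KeyPoint kpt = cell_pts.at(i);
                kpt.pt.x += (float)roi.x;
                kpt.pt.y += (float)roi.y;
                pts.push_back(kpt);
            }
        }
    }
}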

Step 3: from the newly detected features, add only those falling in grid cells that are not yet occupied.

// Now, reject features that are close a current feature

std::vector<cv::KeyPoint> kpts0_new;

std::vector<cv::Point2f> pts0_new;

for(auto& kpt : pts0_ext) {

// See if there is a point at this location

if(grid_2d((int)(kpt.pt.x/min_px_dist),(int)(kpt.pt.y/min_px_dist)) == 1)

continue;

// Else lets add it!

kpts0_new.push_back(kpt);

pts0_new.push_back(kpt.pt);

grid_2d((int)(kpt.pt.x/min_px_dist),(int)(kpt.pt.y/min_px_dist)) = 1;

}

Step 4: pyramidal optical-flow tracking from the left image to the right image.

// Now do KLT tracking to get the valid projections

// Note: we have a pretty big window size here since our projection might be bad

// Note: but this might cause failure in cases of repeated textures (eg. checkerboard)

std::vector<uchar> mask;

std::vector<float> error;

cv::TermCriteria term_crit = cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 15, 0.01);

// Initial guess for the right image: start from the left-image locations

// (declared just above this call in the full function; this assumes a small stereo baseline)

std::vector<cv::KeyPoint> kpts1_new = kpts0_new;

std::vector<cv::Point2f> pts1_new = pts0_new;

cv::calcOpticalFlowPyrLK(img0pyr, img1pyr, pts0_new, pts1_new, mask, error, win_size, pyr_levels, term_crit, cv::OPTFLOW_USE_INITIAL_FLOW);

// parameter win_size = cv::Size(15, 15)

Step 5: copy over the successfully tracked points.

// Loop through and record only ones that are valid

for(size_t i=0; i<pts0_new.size(); i++) {

if(mask[i] == 1) {

// update the uv coordinates

kpts0_new.at(i).pt = pts0_new.at(i);

kpts1_new.at(i).pt = pts1_new.at(i);

// append the new uv coordinate

pts0.push_back(kpts0_new.at(i));

pts1.push_back(kpts1_new.at(i));

// move id forward and append this new point

size_t temp = ++currid; // the feature's unique ID, incremented monotonically

ids0.push_back(temp);

ids1.push_back(temp);

}

}

2.4 perform_matching: pyramidal optical-flow tracking between two frames plus RANSAC outlier rejection

/**

* @brief KLT track between two images, and do RANSAC afterwards

* @param img0pyr starting image pyramid

* @param img1pyr image pyramid we want to track too

* @param pts0 starting points

* @param pts1 points we have tracked

* @param id0 id of the first camera

* @param id1 id of the second camera

* @param mask_out what points had valid tracks

* This will track features from the first image into the second image.

* The two point vectors will be of equal size, but the mask_out variable will specify which points are good or bad.

* If the second vector is non-empty, it will be used as an initial guess of where the keypoints are in the second image.

*/

void perform_matching(const std::vector<cv::Mat> &img0pyr, const std::vector<cv::Mat> &img1pyr, std::vector<cv::KeyPoint> &pts0, std::vector<cv::KeyPoint> &pts1, size_t id0, size_t id1, std::vector<uchar> &mask_out);

 

Step 1: track the features' new locations with optical flow. (In the full function, the keypoint arguments kpts0/kpts1 are first copied into cv::Point2f vectors pts0/pts1; the excerpt below operates on those.)

// Now do KLT tracking to get the valid new points

std::vector<uchar> mask_klt;

std::vector<float> error;

cv::TermCriteria term_crit = cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 15, 0.01);

cv::calcOpticalFlowPyrLK(img0pyr, img1pyr, pts0, pts1, mask_klt, error, win_size, pyr_levels, term_crit, cv::OPTFLOW_USE_INITIAL_FLOW);

 

Step 2: undistort the feature points.

// Normalize these points, so we can then do ransac

// We don't want to do ransac on distorted image uvs since the mapping is nonlinear

std::vector<cv::Point2f> pts0_n, pts1_n;

for(size_t i=0; i<pts0.size(); i++) {

pts0_n.push_back(undistort_point(pts0.at(i),id0));

pts1_n.push_back(undistort_point(pts1.at(i),id1));

}

Step 3: reject wrongly tracked points with fundamental-matrix RANSAC.

// Do RANSAC outlier rejection (note since we normalized the max pixel error is now in the normalized cords)

std::vector<uchar> mask_rsc;

double max_focallength_img0 = std::max(camera_k_OPENCV.at(id0)(0,0),camera_k_OPENCV.at(id0)(1,1));

double max_focallength_img1 = std::max(camera_k_OPENCV.at(id1)(0,0),camera_k_OPENCV.at(id1)(1,1));

double max_focallength = std::max(max_focallength_img0,max_focallength_img1);

cv::findFundamentalMat(pts0_n, pts1_n, cv::FM_RANSAC, 1/max_focallength, 0.999, mask_rsc);
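Because the points passed to cv::findFundamentalMat are in normalized coordinates (focal length 1), the pixel-level RANSAC threshold also has to be divided by the focal length. For example, with a focal length around 460 pixels, the threshold 1/460 ≈ 0.0022 in normalized units corresponds to roughly one pixel of epipolar error in the image.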

Step 4: combine the optical-flow and RANSAC results; points that pass both checks get mask = 1.

// Loop through and record only ones that are valid

for(size_t i=0; i<pts0.size(); i++) {

auto mask = (uchar)((i < mask_klt.size() && mask_klt[i] && i < mask_rsc.size() && mask_rsc[i])? 1 : 0);

mask_out.push_back(mask);

}

Step 5: write the feature positions found by optical flow in the current frame back into the keypoints.

// Copy back the updated positions

for(size_t i=0; i<pts0.size(); i++) {

kpts0.at(i).pt = pts0.at(i);

kpts1.at(i).pt = pts1.at(i);

}

Note that the undistorted coordinates pts0_n, pts1_n are not returned here; the function still returns the distorted coordinates pts0, pts1 that passed the optical-flow and RANSAC checks. The undistorted coordinates are added to the database later, in step 8 of section 2.2.
