VINS-Course Code Walkthrough [Handwritten VIO, Chapter 7 Homework] (1): The main Function

Preface

Writing up a code walkthrough helps me review the material, and it also keeps me accountable; corrections and feedback are welcome.
VINS-Course is the code for the "handwritten VIO from scratch" course. Since the codebase is fairly large, we will walk through it module by module. This first post covers building and running the project and the overall structure of the main function. (To keep the listings compact, I removed some comments and cout statements where it does not affect readability.)

Reading VINS-Course/CMakeLists.txt

Two library targets are built: camera_model and MyVio.
Two executables are built: run_euroc and testCurveFitting.
We focus on run_euroc.

Building the code

mkdir vins_course
cd vins_course
git clone https://github.com/HeYijia/VINS-Course
cd VINS-Course
mkdir build
cd build
cmake ..
make -j4

Running

1. CurveFitting Example to Verify Our Solver.

cd build
../bin/testCurveFitting 

2. VINS-Mono on the EuRoC Dataset

cd build
../bin/run_euroc /home/dataset/EuRoC/MH-05/mav0/ ../config/

Structure of the main function

main() defines three threads and one system variable.
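For orientation, the shape of main() can be sketched as follows. This is a structural sketch only: System is reduced to a stub, the two publisher functions are empty stand-ins, and RunMainSkeleton is a hypothetical name; the real code lives in test_run_euroc.cpp.

```cpp
#include <atomic>
#include <memory>
#include <string>
#include <thread>

// Stub for the real System class, just enough to show the thread layout.
struct System {
    explicit System(const std::string& /*config_path*/) {}
    void ProcessBackEnd() { backend_ran = true; }  // back-end solver loop
    std::atomic<bool> backend_ran{false};
};

std::shared_ptr<System> pSystem;

// Stand-ins for the two publisher threads; the real ones stream EuRoC files.
void PubImuData() {}
void PubImageData() {}

// Mirrors the structure of main(): one System instance, three worker threads.
int RunMainSkeleton(const std::string& /*data_path*/, const std::string& config_path) {
    pSystem = std::make_shared<System>(config_path);

    std::thread thd_BackEnd(&System::ProcessBackEnd, pSystem);  // VIO back end
    std::thread thd_PubImuData(PubImuData);                     // feeds IMU data
    std::thread thd_PubImageData(PubImageData);                 // feeds images

    thd_PubImuData.join();
    thd_PubImageData.join();
    thd_BackEnd.join();
    return 3;  // number of worker threads launched
}
```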

1. The pSystem variable

main() creates a variable pSystem. The System constructor reads the camera parameters, IMU parameters, camera-IMU extrinsics and other settings from the config file ../config/euroc_config.yaml; builds a camera model m_camera from those parameters; and sets the estimator's parameters. The annotated code:

System::System(string sConfig_file_) : bStart_backend(true)  // sConfig_file_ = "../config/"
{
    string sConfig_file = sConfig_file_ + "euroc_config.yaml"; // ../config/euroc_config.yaml
    readParameters(sConfig_file);                        // read camera and IMU parameters from the yaml file
    trackerData[0].readIntrinsicParameter(sConfig_file); // build the camera model m_camera from euroc_config.yaml

    estimator.setParameter();                            // set the camera-IMU extrinsics
    ofs_pose.open("./pose_output.txt", fstream::app | fstream::out);
}

2. The thd_PubImuData thread

This thread reads the IMU data file, packs each line into (timestamp, angular velocity, acceleration), and hands it to System::PubImuData:

void PubImuData()
{
	string sImu_data_file = sConfig_path + "MH_05_imu0.txt";
	ifstream fsImu;
	fsImu.open(sImu_data_file.c_str());

	std::string sImu_line;
	double dStampNSec = 0.0;
	Vector3d vAcc;
	Vector3d vGyr;
	while (std::getline(fsImu, sImu_line) && !sImu_line.empty()) // read imu data
	{
		std::istringstream ssImuData(sImu_line);
		ssImuData >> dStampNSec >> vGyr.x() >> vGyr.y() >> vGyr.z() >> vAcc.x() >> vAcc.y() >> vAcc.z();
		pSystem->PubImuData(dStampNSec / 1e9, vGyr, vAcc);
		usleep(5000*nDelayTimes);
	}
	fsImu.close();
}

System::PubImuData

It checks the IMU data for out-of-order timestamps, pushes the message into imu_buf, and wakes the consumer through the condition variable. The annotated code:

// check ordering, push into imu_buf, and wake the consumer via the condition variable
void System::PubImuData(double dStampSec, const Eigen::Vector3d &vGyr, const Eigen::Vector3d &vAcc)
{
    shared_ptr<IMU_MSG> imu_msg(new IMU_MSG());
	imu_msg->header = dStampSec;         // timestamp
	imu_msg->linear_acceleration = vAcc; // acceleration
	imu_msg->angular_velocity = vGyr;    // angular velocity

    if (dStampSec <= last_imu_t) // last_imu_t is the timestamp of the previous IMU message
    {
        cerr << "imu message in disorder!" << endl;
        return;
    }
    last_imu_t = dStampSec;

    m_buf.lock();
    imu_buf.push(imu_msg); // enqueue the message
    m_buf.unlock();
    con.notify_one();      // wake the consumer
}
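The consumer of this queue is the back-end thread, which blocks on the same condition variable (in getMeasurements). The handshake can be sketched with a simplified payload; m_buf, con and imu_buf mirror the System members above, but the real queue holds IMU_MSG pointers rather than bare timestamps, and the real consumer also pairs IMU readings with image frames.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m_buf;
std::condition_variable con;
std::queue<double> imu_buf;  // simplified: a timestamp instead of IMU_MSG

// Producer side, as in System::PubImuData above.
void Publish(double stamp) {
    {
        std::lock_guard<std::mutex> lk(m_buf);
        imu_buf.push(stamp);
    }
    con.notify_one();  // wake the back-end thread
}

// Consumer side: block until the producer has pushed at least one message.
double WaitAndPop() {
    std::unique_lock<std::mutex> lk(m_buf);
    con.wait(lk, [] { return !imu_buf.empty(); });
    double stamp = imu_buf.front();
    imu_buf.pop();
    return stamp;
}
```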

3. The thd_PubImageData thread

It reads MH_05_cam0.txt, packs each entry into (timestamp, image), and hands it to System::PubImageData:

void PubImageData()
{
	string sImage_file = sConfig_path + "MH_05_cam0.txt";
	ifstream fsImage;
	fsImage.open(sImage_file.c_str());


	std::string sImage_line;
	double dStampNSec;
	string sImgFileName;
	
	while (std::getline(fsImage, sImage_line) && !sImage_line.empty())
	{
		std::istringstream ssImgData(sImage_line);
		ssImgData >> dStampNSec >> sImgFileName;
		string imagePath = sData_path + "cam0/data/" + sImgFileName;

		Mat img = imread(imagePath.c_str(), 0);

		pSystem->PubImageData(dStampNSec / 1e9, img);

		usleep(50000*nDelayTimes);
	}
	fsImage.close();
}

System::PubImageData

The function is long, so we go through it block by block.
First, the first two image frames are skipped:

if (!init_feature) // the first frame carries no optical-flow information
    {
        cout << "1 PubImageData skip the first detected feature, which doesn't contain optical flow speed" << endl;
        init_feature = 1;
        return;
    }

    if (first_image_flag) // second frame
    {
        cout << "2 PubImageData first_image_flag" << endl;
        first_image_flag = false;
        first_image_time = dStampSec;
        last_image_time = dStampSec;
        return;
    }

Next, the image stream is checked for discontinuities; if the stream jumps forward by more than a second or goes backwards in time, the feature tracker is reset:

    // check the camera stream for discontinuities
    if (dStampSec - last_image_time > 1.0 || dStampSec < last_image_time) // last_image_time is the timestamp of the previous frame
    {
        cerr << "3 PubImageData image discontinue! reset the feature tracker!" << endl;
        first_image_flag = true;
        last_image_time = 0;
        pub_count = 1;
        return;
    }

Next, the publishing frequency is throttled to FREQ:

    last_image_time = dStampSec;
    // throttle the publish rate to FREQ
    if (round(1.0 * pub_count / (dStampSec - first_image_time)) <= FREQ)
    {
        PUB_THIS_FRAME = true; // publish flag
        // reset the frequency control
        if (abs(1.0 * pub_count / (dStampSec - first_image_time) - FREQ) < 0.01 * FREQ)
        {
            first_image_time = dStampSec;
            pub_count = 0;
        }
    }
    else
    {
        PUB_THIS_FRAME = false;
    }
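To see how the throttle behaves, the condition can be isolated into a small helper (FREQ = 10 Hz is an assumed value here; the real one is read from the config file). Publishing continues while the average rate since first_image_time is at most FREQ; the reset branch above restarts the window periodically so the average tracks recent behavior rather than the whole run.

```cpp
#include <cmath>

const int FREQ = 10;  // assumed target publish rate in Hz

// True when the average publish rate over the current window is <= FREQ.
bool ShouldPublish(int pub_count, double stamp, double first_image_time) {
    return std::round(1.0 * pub_count / (stamp - first_image_time)) <= FREQ;
}
```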

Now the key call, readImage. trackerData is an array of FeatureTracker instances and a private member of System.

trackerData[0].readImage(img, dStampSec); // the key call

Let's look inside this function. Note the naming convention: prev_img is the frame before last (already processed), cur_img is the previous frame, and forw_img is the frame currently being processed. The feature points extracted on these images are prev_pts, cur_pts and forw_pts respectively.
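The way these three buffers rotate at the end of readImage can be sketched as below; cv::Mat is replaced by std::string so the sketch stays dependency-free, and only the buffer shuffling is shown, not the tracking that happens in between.

```cpp
#include <string>

// Illustrates the naming convention: forw_img is the frame being processed,
// cur_img the previous frame, prev_img the one before that.
struct FrameBuffers {
    std::string prev_img, cur_img, forw_img;

    void ReadImage(const std::string& img) {
        forw_img = img;      // incoming frame
        // ... optical-flow tracking and corner extraction happen here ...
        prev_img = cur_img;  // rotate the buffers for the next call
        cur_img = forw_img;
    }
};
```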

(1) Histogram equalization to balance the contrast

if (EQUALIZE) // adaptive histogram equalization (CLAHE)
    {
        cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
        TicToc t_c;
        clahe->apply(_img, img);
    }
    else
        img = _img;

(2) Corner extraction

The logic: when the third frame arrives, corners are extracted for the first time. From then on, each frame tracks the corners of its predecessor, and new corners are extracted only when the number of tracked corners falls below the maximum.

if (PUB_THIS_FRAME)
    {
        rejectWithF();
        TicToc t_m;
        setMask();

        TicToc t_t;
        int n_max_cnt = MAX_CNT - static_cast<int>(forw_pts.size());
        if (n_max_cnt > 0)
        {
            if(mask.empty())
                cout << "mask is empty " << endl;
            if (mask.type() != CV_8UC1)
                cout << "mask type wrong " << endl;
            if (mask.size() != forw_img.size())
                cout << "wrong size " << endl;
            cv::goodFeaturesToTrack(forw_img, n_pts, MAX_CNT - forw_pts.size(), 0.01, MIN_DIST, mask);
        }
        else
            n_pts.clear();

When the third frame arrives, rejectWithF performs no outlier rejection, since there are no tracked corners yet; on subsequent frames it runs, because optical flow produces many outliers and fundamental-matrix-based rejection makes the system more robust.
OpenCV's goodFeaturesToTrack then extracts up to MAX_CNT - forw_pts.size() new corners on forw_img and stores them in n_pts.
Afterwards, addPoints() appends the new corners to forw_pts, giving each new feature an id of -1 and a track count of 1:

void FeatureTracker::addPoints()
{
    for (auto &p : n_pts)
    {
        forw_pts.push_back(p);
        ids.push_back(-1);
        track_cnt.push_back(1);
    }
}

(3) Undistortion

Undistortion is handled by undistortedPoints():

void FeatureTracker::undistortedPoints()
{
    cur_un_pts.clear();
    cur_un_pts_map.clear();
    //cv::undistortPoints(cur_pts, un_pts, K, cv::Mat());
    for (unsigned int i = 0; i < cur_pts.size(); i++)
    {
        Eigen::Vector2d a(cur_pts[i].x, cur_pts[i].y);
        Eigen::Vector3d b;
        m_camera->liftProjective(a, b);
        cur_un_pts.push_back(cv::Point2f(b.x() / b.z(), b.y() / b.z()));
        cur_un_pts_map.insert(make_pair(ids[i], cv::Point2f(b.x() / b.z(), b.y() / b.z())));
        //printf("cur pts id %d %f %f", ids[i], cur_un_pts[i].x, cur_un_pts[i].y);
    }
    // calculate point velocities
    if (!prev_un_pts_map.empty())
    {
        double dt = cur_time - prev_time;
        pts_velocity.clear();
        for (unsigned int i = 0; i < cur_un_pts.size(); i++)
        {
            if (ids[i] != -1)
            {
                std::map<int, cv::Point2f>::iterator it;
                it = prev_un_pts_map.find(ids[i]);
                if (it != prev_un_pts_map.end())
                {
                    double v_x = (cur_un_pts[i].x - it->second.x) / dt;
                    double v_y = (cur_un_pts[i].y - it->second.y) / dt;
                    pts_velocity.push_back(cv::Point2f(v_x, v_y));
                }
                else
                    pts_velocity.push_back(cv::Point2f(0, 0));
            }
            else
            {
                pts_velocity.push_back(cv::Point2f(0, 0));
            }
        }
    }
    else
    {
        for (unsigned int i = 0; i < cur_pts.size(); i++)
        {
            pts_velocity.push_back(cv::Point2f(0, 0));
        }
    }
    prev_un_pts_map = cur_un_pts_map;
}

(4) Optical-flow tracking and bookkeeping

OpenCV's calcOpticalFlowPyrLK tracks the feature points cur_pts from cur_img into forw_img, storing the results in forw_pts; status[i] is 1 if point i was tracked successfully, 0 otherwise. Points that failed or landed outside the image border are then removed and the bookkeeping vectors are updated. The annotated code:

if (cur_pts.size() > 0)
    {
        TicToc t_o;
        vector<uchar> status;
        vector<float> err;
        // LK optical flow: track cur_pts from cur_img into forw_img; results go to forw_pts, status[i] is 1 on success, 0 otherwise
        cv::calcOpticalFlowPyrLK(cur_img, forw_img, cur_pts, forw_pts, status, err, cv::Size(21, 21), 3);
        // mark successfully tracked points that fell outside the image border as failed
        for (int i = 0; i < int(forw_pts.size()); i++)
            if (status[i] && !inBorder(forw_pts[i]))
                status[i] = 0;
        // drop all entries whose tracking failed
        reduceVector(prev_pts, status);
        reduceVector(cur_pts, status);
        reduceVector(forw_pts, status);
        reduceVector(ids, status);
        reduceVector(cur_un_pts, status);
        reduceVector(track_cnt, status);
    }
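reduceVector is used throughout but never shown above. Consistent with its usage, it compacts a vector in place, keeping element i only when status[i] is non-zero; a sketch of such an implementation:

```cpp
#include <vector>

// In-place compaction: keep v[i] only where status[i] is non-zero.
template <typename T>
void reduceVector(std::vector<T>& v, const std::vector<unsigned char>& status) {
    int j = 0;
    for (int i = 0; i < static_cast<int>(v.size()); i++)
        if (status[i])
            v[j++] = v[i];
    v.resize(j);
}
```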

(5) Increment the track count of surviving corners

 for (auto &n : track_cnt)
        n++;

(6) Outlier rejection with fundamental-matrix RANSAC

void FeatureTracker::rejectWithF()
{
    if (forw_pts.size() >= 8)
    {
        TicToc t_f;
        vector<cv::Point2f> un_cur_pts(cur_pts.size()), un_forw_pts(forw_pts.size());
        for (unsigned int i = 0; i < cur_pts.size(); i++)
        {
            Eigen::Vector3d tmp_p;
            m_camera->liftProjective(Eigen::Vector2d(cur_pts[i].x, cur_pts[i].y), tmp_p);
            tmp_p.x() = FOCAL_LENGTH * tmp_p.x() / tmp_p.z() + COL / 2.0;
            tmp_p.y() = FOCAL_LENGTH * tmp_p.y() / tmp_p.z() + ROW / 2.0;
            un_cur_pts[i] = cv::Point2f(tmp_p.x(), tmp_p.y());

            m_camera->liftProjective(Eigen::Vector2d(forw_pts[i].x, forw_pts[i].y), tmp_p);
            tmp_p.x() = FOCAL_LENGTH * tmp_p.x() / tmp_p.z() + COL / 2.0;
            tmp_p.y() = FOCAL_LENGTH * tmp_p.y() / tmp_p.z() + ROW / 2.0;
            un_forw_pts[i] = cv::Point2f(tmp_p.x(), tmp_p.y());
        }

        vector<uchar> status;
        // estimate the fundamental matrix with RANSAC
        cv::findFundamentalMat(un_cur_pts, un_forw_pts, cv::FM_RANSAC, F_THRESHOLD, 0.99, status);
        int size_a = cur_pts.size();
        reduceVector(prev_pts, status);
        reduceVector(cur_pts, status);
        reduceVector(forw_pts, status);
        reduceVector(cur_un_pts, status);
        reduceVector(ids, status);
        reduceVector(track_cnt, status);
    }
}

(7) Setting the mask

The mask prevents corners from clustering, a form of non-maximum suppression: around every existing corner, a circle of radius MIN_DIST is blocked out so that no new corner is extracted there. The code:

void FeatureTracker::setMask()
{
    if(FISHEYE)
        mask = fisheye_mask.clone();
    else
        mask = cv::Mat(ROW, COL, CV_8UC1, cv::Scalar(255));
    

    // prefer to keep features that are tracked for long time
    vector<pair<int, pair<cv::Point2f, int>>> cnt_pts_id;

    for (unsigned int i = 0; i < forw_pts.size(); i++)
        cnt_pts_id.push_back(make_pair(track_cnt[i], make_pair(forw_pts[i], ids[i])));

    sort(cnt_pts_id.begin(), cnt_pts_id.end(), [](const pair<int, pair<cv::Point2f, int>> &a, const pair<int, pair<cv::Point2f, int>> &b)
         {
            return a.first > b.first;
         });

    forw_pts.clear();
    ids.clear();
    track_cnt.clear();

    for (auto &it : cnt_pts_id)
    {
        if (mask.at<uchar>(it.second.first) == 255)
        {
            forw_pts.push_back(it.second.first);
            ids.push_back(it.second.second);
            track_cnt.push_back(it.first);
            cv::circle(mask, it.second.first, MIN_DIST, 0, -1);
        }
    }
}

If you have followed along this far, you can already complete the chapter-7 homework of He Yijia's handwritten VIO course using these data-input interfaces: simply generate the simulated image and IMU data and feed it into this system.
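A minimal sketch of such a feeder loop, with System replaced by a dummy that only records timestamps (the 200 Hz rate and the constant gyro/accel values are placeholders, not the homework simulator's actual output):

```cpp
#include <vector>

struct Vec3 { double x, y, z; };  // stand-in for Eigen::Vector3d

// Dummy System: records IMU timestamps instead of running the estimator.
struct DummySystem {
    std::vector<double> imu_stamps;
    void PubImuData(double t, const Vec3& /*gyr*/, const Vec3& /*acc*/) {
        imu_stamps.push_back(t);
    }
};

// Publish n_samples simulated IMU readings at an assumed 200 Hz, the same
// way thd_PubImuData feeds the EuRoC text data into the system.
int FeedSimulatedImu(DummySystem& sys, int n_samples) {
    const double dt = 1.0 / 200.0;  // assumed simulator IMU period
    for (int i = 0; i < n_samples; ++i) {
        Vec3 gyr{0.0, 0.0, 0.1};   // placeholder angular velocity
        Vec3 acc{0.0, 0.0, 9.81};  // placeholder acceleration (gravity only)
        sys.PubImuData(i * dt, gyr, acc);
    }
    return n_samples;
}
```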

4. The main thread thd_BackEnd

This post is already quite long, and the back-end thread has a lot of ground to cover, so I will write it up in a separate post.
