VINS-Mono Simplified Code Walkthrough Without ROS Dependency (Part 1)

Part one of a detailed walkthrough of the simplified, ROS-free VINS-Mono code (code provided by the 深蓝学院 VIO course).

  • Code framework
    • Main function
    • Data publishing functions
    • Back-end optimization function

Code Framework

The code covered here is the VIO code provided by the 深蓝学院 course. It is a simplified version of HKUST's VINS-Mono that keeps the main initialization, front-end, and back-end optimization modules, and the whole system has no ROS dependency, which makes it very helpful for studying the VINS system in depth. Below I walk through the main parts of the code; corrections are welcome wherever my understanding is wrong. First, the overall system flowchart:
[Figure 1: overall system flowchart]

Main Function

The test entry point for the EuRoC dataset is run_euroc.cpp. It first creates the back-end thread:

// create the back-end optimization thread
std::thread thd_BackEnd(&System::ProcessBackEnd, pSystem);

Then it creates the IMU publishing, image publishing, and drawing threads in turn:

// data-publishing and drawing threads
std::thread thd_PubImuData(PubImuData);
std::thread thd_PubImageData(PubImageData);
std::thread thd_Draw(&System::Draw, pSystem);
thd_PubImuData.join();   // main blocks here until the IMU publisher finishes
thd_PubImageData.join(); // then until the image publisher finishes
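For context, a minimal sketch of how run_euroc.cpp might wire these threads together. The System constructor signature, the header name, and the path globals are assumptions based on the snippets in this post; the real file reads the dataset and config paths from argv:

#include <memory>
#include <string>
#include <thread>
#include "System.h"  // assumed header for the System class

std::string sData_path   = "/path/to/MH_05/mav0/";  // assumed dataset root
std::string sConfig_path = "../config/";            // assumed config folder
std::shared_ptr<System> pSystem;

void PubImuData();    // defined below
void PubImageData();  // defined below

int main(int argc, char **argv)
{
    pSystem.reset(new System(sConfig_path));  // assumed: constructor takes the config path

    std::thread thd_BackEnd(&System::ProcessBackEnd, pSystem);  // consumer thread
    std::thread thd_PubImuData(PubImuData);                     // IMU producer
    std::thread thd_PubImageData(PubImageData);                 // image producer
    std::thread thd_Draw(&System::Draw, pSystem);               // visualization

    thd_PubImuData.join();    // main blocks until all IMU data has been published
    thd_PubImageData.join();  // then until all images have been published
    thd_BackEnd.detach();     // assumed: back end and viewer loop indefinitely, so detach
    thd_Draw.detach();
    return 0;
}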

Data Publishing Functions

  1. IMU data publishing
void PubImuData()
{
  string sImu_data_file = sConfig_path + "MH_05_imu0.txt";
  cout << "1 PubImuData start sImu_data_filea: " << sImu_data_file << endl;
  ifstream fsImu;
  fsImu.open(sImu_data_file.c_str());
  if (!fsImu.is_open())
  {
  	cerr << "Failed to open imu file! " << sImu_data_file << endl;
  	return;
  }
  std::string sImu_line;
  double dStampNSec = 0.0;
  Vector3d vAcc;
  Vector3d vGyr;
  while (std::getline(fsImu, sImu_line) && !sImu_line.empty()) // read imu data
  {
  	std::istringstream ssImuData(sImu_line);
  	ssImuData >> dStampNSec >> vGyr.x() >> vGyr.y() >> vGyr.z() >> vAcc.x() >> vAcc.y() >> vAcc.z();
  	// cout << "Imu t: " << fixed << dStampNSec << " gyr: " << vGyr.transpose() << " acc: " << vAcc.transpose() << endl;
  	pSystem->PubImuData(dStampNSec / 1e9, vGyr, vAcc);
  	usleep(5000*nDelayTimes);
  }
  fsImu.close();
}

The code is tested on the MH-05 dataset. Unlike subscribing to ROS messages, the data here is read from a file: each line contains a timestamp, a 3D angular velocity, and a 3D linear acceleration. The parsed data is stored into imu_buf:

shared_ptr<IMU_MSG> imu_msg(new IMU_MSG());
imu_msg->header = dStampSec;
imu_msg->linear_acceleration = vAcc;
imu_msg->angular_velocity = vGyr;
// ...
imu_buf.push(imu_msg);
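For reference, a sketch of the full System::PubImuData consistent with the snippet above. The last_imu_t ordering check and the m_buf / con synchronization members are assumptions inferred from how the buffers are consumed later in ProcessBackEnd:

void System::PubImuData(double dStampSec, const Eigen::Vector3d &vGyr,
                        const Eigen::Vector3d &vAcc)
{
    shared_ptr<IMU_MSG> imu_msg(new IMU_MSG());
    imu_msg->header = dStampSec;
    imu_msg->linear_acceleration = vAcc;
    imu_msg->angular_velocity = vGyr;

    if (dStampSec <= last_imu_t)  // assumed: reject out-of-order measurements
    {
        cerr << "imu message in disorder!" << endl;
        return;
    }
    last_imu_t = dStampSec;

    m_buf.lock();          // same mutex that guards feature_buf below
    imu_buf.push(imu_msg);
    m_buf.unlock();
    con.notify_one();      // wake up the back-end thread
}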
  2. Image data publishing
void PubImageData()
{
	string sImage_file = sConfig_path + "MH_05_cam0.txt";

	cout << "1 PubImageData start sImage_file: " << sImage_file << endl;

	ifstream fsImage;
	fsImage.open(sImage_file.c_str());
	if (!fsImage.is_open())
	{
		cerr << "Failed to open image file! " << sImage_file << endl;
		return;
	}

	std::string sImage_line;
	double dStampNSec;
	string sImgFileName;
	
	// cv::namedWindow("SOURCE IMAGE", CV_WINDOW_AUTOSIZE);
	while (std::getline(fsImage, sImage_line) && !sImage_line.empty())
	{
		std::istringstream ssImuData(sImage_line);
		ssImuData >> dStampNSec >> sImgFileName;
		// cout << "Image t : " << fixed << dStampNSec << " Name: " << sImgFileName << endl;
		string imagePath = sData_path + "cam0/data/" + sImgFileName;

		Mat img = imread(imagePath.c_str(), 0);
		if (img.empty())
		{
			cerr << "image is empty! path: " << imagePath << endl;
			return;
		}
		pSystem->PubImageData(dStampNSec / 1e9, img);
		// cv::imshow("SOURCE IMAGE", img);
		// cv::waitKey(0);
		usleep(50000*nDelayTimes);
	}
	fsImage.close();
}

This reads the MH_05_cam0.txt file from the config folder, loads the corresponding images one by one, and publishes them. The key here is the publishing function:

void System::PubImageData(double dStampSec, Mat &img)

First, init_feature is checked; if it is not yet set, it is set to 1 and the function returns, skipping the first detected feature frame (which carries no optical-flow velocity):

if (!init_feature)
    {
        cout << "1 PubImageData skip the first detected feature, which doesn't contain optical flow speed" << endl;
        init_feature = 1;
        return;
    }

Next, first_image_flag is checked; if this is the first image, the flag is cleared and both first_image_time and last_image_time are set to the current timestamp:

if (first_image_flag)
    {
        cout << "2 PubImageData first_image_flag" << endl;
        first_image_flag = false;
        first_image_time = dStampSec;
        last_image_time = dStampSec;
        return;
    }

Then an unstable image stream is detected: if the current timestamp is more than 1 s later than last_image_time, or earlier than last_image_time, the stream is considered discontinuous; the first-frame flag is reset to true, last_image_time is set to 0, and the publish count is reset to 1:

if (dStampSec - last_image_time > 1.0 || dStampSec < last_image_time)
    {
        cerr << "3 PubImageData image discontinue! reset the feature tracker!" << endl;
        first_image_flag = true;
        last_image_time = 0;
        pub_count = 1;
        return;
    }
    last_image_time = dStampSec;

Next comes the control of the image publishing frequency, needed here because the data is read from files (ORB-SLAM does the same).
If the number of published images divided by the elapsed time is at most FREQ, the current frame is published; otherwise it is skipped. In addition, if the achieved rate is within 1% of FREQ, the frequency statistics restart from the current frame, which keeps the published rate at or below FREQ without letting the long-run average hide short-term bursts.

if (round(1.0 * pub_count / (dStampSec - first_image_time)) <= FREQ)
    {
        PUB_THIS_FRAME = true;
        // reset the frequency control
        if (abs(1.0 * pub_count / (dStampSec - first_image_time) - FREQ) < 0.01 * FREQ)
        {
            first_image_time = dStampSec;
            pub_count = 0;
        }
    }
    else
    {
        PUB_THIS_FRAME = false;
    }
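As a worked example, suppose FREQ = 10 Hz and pub_count = 50 frames have been published over an elapsed 5.2 s. The achieved rate is 50 / 5.2 ≈ 9.6 Hz, and round(9.6) = 10 ≤ 10, so PUB_THIS_FRAME is set to true; since |9.6 - 10| = 0.4 is larger than 0.01 × 10 = 0.1, the counters are not reset yet. Only when the long-run rate comes within 0.1 Hz of FREQ are first_image_time and pub_count restarted.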

Next comes:

trackerData[0].readImage(img, dStampSec);

This is implemented in the FeatureTracker class. It first applies cv::CLAHE-based image enhancement. If forw_img is empty (first call), the current image is copied into prev_img, cur_img, and forw_img; otherwise it is copied into forw_img only. Here forw_img is the newest incoming frame, cur_img is the previous frame, and prev_img the one before that; the rolling update between them is shown at the end of readImage below. Then forw_pts is cleared.
If cur_pts is non-empty, the pixel-level optical flow between cur_img and forw_img is computed.
If the current frame is to be published, it first executes:

rejectWithF(); // reject outliers using the fundamental-matrix check
setMask();     // for fisheye cameras, filter with a mask; count each feature's track length, keep the longest-tracked features first, and apply non-maximum suppression around them

Then new features are extracted; the number requested is the maximum feature count minus the number of features already being tracked:

cv::goodFeaturesToTrack(forw_img, n_pts, MAX_CNT - forw_pts.size(), 0.01, MIN_DIST, mask);
// ...
addPoints(); // append the newly extracted points to forw_pts, ids, and track_cnt

Finally comes the rolling reassignment. Note that the two frames used for the optical-flow computation above are cur_img and forw_img:

prev_img = cur_img;
prev_pts = cur_pts;
prev_un_pts = cur_un_pts;
cur_img = forw_img;
cur_pts = forw_pts;
undistortedPoints();
prev_time = cur_time;

That is everything readImage does.
Back in PubImageData, the ids of the tracked features are then updated, and if this frame is to be published, all of its feature points are saved into feature_buf:

if (PUB_THIS_FRAME)
    {
        pub_count++;
        shared_ptr<IMG_MSG> feature_points(new IMG_MSG());
        feature_points->header = dStampSec;
        vector<set<int>> hash_ids(NUM_OF_CAM);
        for (int i = 0; i < NUM_OF_CAM; i++)
        {
            auto &un_pts = trackerData[i].cur_un_pts;
            auto &cur_pts = trackerData[i].cur_pts;
            auto &ids = trackerData[i].ids;
            auto &pts_velocity = trackerData[i].pts_velocity;
            for (unsigned int j = 0; j < ids.size(); j++)
            {
                if (trackerData[i].track_cnt[j] > 1)
                {
                    int p_id = ids[j];
                    hash_ids[i].insert(p_id);
                    double x = un_pts[j].x;
                    double y = un_pts[j].y;
                    double z = 1;
                    feature_points->points.push_back(Vector3d(x, y, z));
                    feature_points->id_of_point.push_back(p_id * NUM_OF_CAM + i);
                    feature_points->u_of_point.push_back(cur_pts[j].x);
                    feature_points->v_of_point.push_back(cur_pts[j].y);
                    feature_points->velocity_x_of_point.push_back(pts_velocity[j].x);
                    feature_points->velocity_y_of_point.push_back(pts_velocity[j].y);
                }
            }
            //}
            // skip the first image, since there is no optical-flow speed on the first image
            if (!init_pub)
            {
                cout << "4 PubImage init_pub skip the first image!" << endl;
                init_pub = 1;
            }
            else
            {
                m_buf.lock();
                feature_buf.push(feature_points);
                // cout << "5 PubImage t : " << fixed << feature_points->header
                //     << " feature_buf size: " << feature_buf.size() << endl;
                m_buf.unlock();
                con.notify_one();
            }
        }
    }

	cv::Mat show_img;
	cv::cvtColor(img, show_img, CV_GRAY2RGB);
	if (SHOW_TRACK)
	{
		for (unsigned int j = 0; j < trackerData[0].cur_pts.size(); j++)
        {
			double len = min(1.0, 1.0 * trackerData[0].track_cnt[j] / WINDOW_SIZE);
			cv::circle(show_img, trackerData[0].cur_pts[j], 2, cv::Scalar(255 * (1 - len), 0, 255 * len), 2);
		}

        cv::namedWindow("IMAGE", CV_WINDOW_AUTOSIZE);
		cv::imshow("IMAGE", show_img);
        cv::waitKey(1);
	}

Note that when the tracked points are drawn, each point's color encodes how long it has been tracked: len grows with track_cnt, so long-tracked points are drawn red and freshly detected points blue.
The two threads above read and publish the IMU and image data.

Back-End Optimization Function

Now for the back-end optimization function:

System::ProcessBackEnd();

It first fetches the measurements, i.e., image features paired with the IMU readings between them; the buffers are accessed under a mutex, with a condition variable to wait for data:

vector<pair<vector<ImuConstPtr>, ImgConstPtr>> measurements;

unique_lock<mutex> lk(m_buf);
con.wait(lk, [&] {
    return (measurements = getMeasurements()).size() != 0;
});
if (measurements.size() > 1)
{
    cout << "1 getMeasurements size: " << measurements.size()
         << " imu sizes: " << measurements[0].first.size()
         << " feature_buf size: " << feature_buf.size()
         << " imu_buf size: " << imu_buf.size() << endl;
}
lk.unlock();
m_estimator.lock();
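getMeasurements() pairs each image with all IMU samples up to (and one sample past) the image timestamp. A sketch closely following the VINS-Mono pairing logic; estimator.td is the estimated camera-IMU time offset (zero if not estimated):

vector<pair<vector<ImuConstPtr>, ImgConstPtr>> System::getMeasurements()
{
    vector<pair<vector<ImuConstPtr>, ImgConstPtr>> measurements;
    while (true)
    {
        if (imu_buf.empty() || feature_buf.empty())
            return measurements;

        // not enough IMU data yet: wait for samples beyond the image time
        if (imu_buf.back()->header <= feature_buf.front()->header + estimator.td)
            return measurements;

        // image older than all buffered IMU data: drop it
        if (imu_buf.front()->header >= feature_buf.front()->header + estimator.td)
        {
            feature_buf.pop();
            continue;
        }

        ImgConstPtr img_msg = feature_buf.front();
        feature_buf.pop();

        vector<ImuConstPtr> IMUs;
        while (imu_buf.front()->header < img_msg->header + estimator.td)
        {
            IMUs.emplace_back(imu_buf.front());
            imu_buf.pop();
        }
        // keep one sample past the image time for interpolation (not popped)
        IMUs.emplace_back(imu_buf.front());
        measurements.emplace_back(IMUs, img_msg);
    }
    return measurements;
}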

Then, for each measurement and each of its IMU readings: if the IMU timestamp t is less than the image timestamp, current_time is set to t, the angular velocity and linear acceleration are taken directly, and it executes:

estimator.processIMU(dt, Vector3d(dx, dy, dz), Vector3d(rx, ry, rz));

If t is greater than the image timestamp img_t, it first computes the time gaps dt_1 from current_time to img_t and dt_2 from img_t to t, sets current_time to img_t, and linearly interpolates an IMU sample at img_t for the subsequent processing. Note that the smaller dt_2 (i.e., the closer t is to img_t), the larger the weight w2 given to the latest IMU measurement. The code:

double dt_1 = img_t - current_time;
double dt_2 = t - img_t;
current_time = img_t;
assert(dt_1 >= 0);
assert(dt_2 >= 0);
assert(dt_1 + dt_2 > 0);
double w1 = dt_2 / (dt_1 + dt_2);
double w2 = dt_1 / (dt_1 + dt_2);
dx = w1 * dx + w2 * imu_msg->linear_acceleration.x();
dy = w1 * dy + w2 * imu_msg->linear_acceleration.y();
dz = w1 * dz + w2 * imu_msg->linear_acceleration.z();
rx = w1 * rx + w2 * imu_msg->angular_velocity.x();
ry = w1 * ry + w2 * imu_msg->angular_velocity.y();
rz = w1 * rz + w2 * imu_msg->angular_velocity.z();
estimator.processIMU(dt_1, Vector3d(dx, dy, dz), Vector3d(rx, ry, rz));

Then, for the image data, the normalized point coordinates, pixel coordinates, and point velocities are read out, and it executes:

estimator.processImage(image, img_msg->header);

Next, the function that processes the IMU data. It uses the midpoint rule for the (pre-)integration; the corresponding formulas are

$$
\begin{aligned}
\mathbf{p}_{wb_{k+1}} &= \mathbf{p}_{wb_k}+\mathbf{v}_k^w\Delta t+\frac{1}{2}\mathbf{a}\Delta t^2 \\
\mathbf{v}_{k+1}^w &= \mathbf{v}_k^w+\mathbf{a}\Delta t \\
\mathbf{q}_{wb_{k+1}} &= \mathbf{q}_{wb_k}\otimes \begin{bmatrix} 1 \\ \frac{1}{2}\mathbf{w}\,\delta t \end{bmatrix} \\
\mathbf{a} &= \frac{1}{2}\left[\mathbf{q}_{wb_k}\left(\mathbf{a}^{b_k}-\mathbf{b}_k^a\right)+\mathbf{q}_{wb_{k+1}}\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\right]-\mathbf{g}^w \\
\mathbf{w} &= \frac{1}{2}\left[\left(\mathbf{w}^{b_k}-\mathbf{b}_k^g\right)+\left(\mathbf{w}^{b_{k+1}}-\mathbf{b}_k^g\right)\right]
\end{aligned}
$$

void Estimator::processIMU(double dt, const Vector3d &linear_acceleration, const Vector3d &angular_velocity)
{
    if (!first_imu)
    {
        first_imu = true;
        acc_0 = linear_acceleration;
        gyr_0 = angular_velocity;
    }

    if (!pre_integrations[frame_count])
    {
        pre_integrations[frame_count] = new IntegrationBase{acc_0, gyr_0, Bas[frame_count], Bgs[frame_count]};
    }
    if (frame_count != 0)
    {
        pre_integrations[frame_count]->push_back(dt, linear_acceleration, angular_velocity);
        //if(solver_flag != NON_LINEAR)
        tmp_pre_integration->push_back(dt, linear_acceleration, angular_velocity);

        dt_buf[frame_count].push_back(dt);
        linear_acceleration_buf[frame_count].push_back(linear_acceleration);
        angular_velocity_buf[frame_count].push_back(angular_velocity);

        int j = frame_count;
        // midpoint propagation of Rs, Ps, Vs, matching the formulas above
        Vector3d un_acc_0 = Rs[j] * (acc_0 - Bas[j]) - g;
        Vector3d un_gyr = 0.5 * (gyr_0 + angular_velocity) - Bgs[j];
        Rs[j] *= Utility::deltaQ(un_gyr * dt).toRotationMatrix();
        Vector3d un_acc_1 = Rs[j] * (linear_acceleration - Bas[j]) - g;
        Vector3d un_acc = 0.5 * (un_acc_0 + un_acc_1);
        Ps[j] += dt * Vs[j] + 0.5 * dt * dt * un_acc;
        Vs[j] += dt * un_acc;
    }
    acc_0 = linear_acceleration;
    gyr_0 = angular_velocity;
}

Next, the function that processes the image data. If initialization succeeds, one round of nonlinear optimization is run; if it fails, only the sliding-window operation is performed. When solver_flag == NON_LINEAR, the nonlinear optimization is executed via solveOdometry(); then failure detection runs, and if the system has failed the estimator is reset; finally the window is slid via slideWindow(). The code:

void Estimator::processImage(const map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> &image, double header)
{
    //ROS_DEBUG("new image coming ------------------------------------------");
    // cout << "Adding feature points: " << image.size()<
    if (f_manager.addFeatureCheckParallax(frame_count, image, td))
        marginalization_flag = MARGIN_OLD;
    else
        marginalization_flag = MARGIN_SECOND_NEW;

    //ROS_DEBUG("this frame is--------------------%s", marginalization_flag ? "reject" : "accept");
    //ROS_DEBUG("%s", marginalization_flag ? "Non-keyframe" : "Keyframe");
    //ROS_DEBUG("Solving %d", frame_count);
    // cout << "number of feature: " << f_manager.getFeatureCount()<
    Headers[frame_count] = header;

    ImageFrame imageframe(image, header);
    imageframe.pre_integration = tmp_pre_integration;
    all_image_frame.insert(make_pair(header, imageframe));
    tmp_pre_integration = new IntegrationBase{acc_0, gyr_0, Bas[frame_count], Bgs[frame_count]};

    if (ESTIMATE_EXTRINSIC == 2)
    {
        cout << "calibrating extrinsic param, rotation movement is needed" << endl;
        if (frame_count != 0)
        {
            vector<pair<Vector3d, Vector3d>> corres = f_manager.getCorresponding(frame_count - 1, frame_count);
            Matrix3d calib_ric;
            if (initial_ex_rotation.CalibrationExRotation(corres, pre_integrations[frame_count]->delta_q, calib_ric))
            {
                // ROS_WARN("initial extrinsic rotation calib success");
                // ROS_WARN_STREAM("initial extrinsic rotation: " << endl
                                                            //    << calib_ric);
                ric[0] = calib_ric;
                RIC[0] = calib_ric;
                ESTIMATE_EXTRINSIC = 1;
            }
        }
    }

    if (solver_flag == INITIAL)
    {
        if (frame_count == WINDOW_SIZE)
        {
            bool result = false;
            if (ESTIMATE_EXTRINSIC != 2 && (header - initial_timestamp) > 0.1)
            {
                // cout << "1 initialStructure" << endl;
                result = initialStructure();
                initial_timestamp = header;
            }
            if (result)
            {
                solver_flag = NON_LINEAR;
                solveOdometry();
                slideWindow();
                f_manager.removeFailures();
                cout << "Initialization finish!" << endl;
                last_R = Rs[WINDOW_SIZE];
                last_P = Ps[WINDOW_SIZE];
                last_R0 = Rs[0];
                last_P0 = Ps[0];
            }
            else
                slideWindow();
        }
        else
            frame_count++;
    }
    else
    {
        TicToc t_solve;
        solveOdometry();
        //ROS_DEBUG("solver costs: %fms", t_solve.toc());

        if (failureDetection())
        {
            // ROS_WARN("failure detection!");
            failure_occur = 1;
            clearState();
            setParameter();
            // ROS_WARN("system reboot!");
            return;
        }

        TicToc t_margin;
        slideWindow();
        f_manager.removeFailures();
        //ROS_DEBUG("marginalization costs: %fms", t_margin.toc());
        // prepare output of VINS
        key_poses.clear();
        for (int i = 0; i <= WINDOW_SIZE; i++)
            key_poses.push_back(Ps[i]);

        last_R = Rs[WINDOW_SIZE];
        last_P = Ps[WINDOW_SIZE];
        last_R0 = Rs[0];
        last_P0 = Ps[0];
    }
}

This function contains the initialization stage. It first checks whether the solver is still in the INITIAL state; if so, it checks whether frame_count has reached the preset window size, and then whether conditions allow an initialization attempt. Initialization is carried out by:

result = initialStructure();

Inside initialStructure, the IMU observability is checked first: by computing the standard deviation of the per-frame average linear acceleration, it judges whether the IMU has enough motion excitation for initialization.

{
        map<double, ImageFrame>::iterator frame_it;
        Vector3d sum_g;
        for (frame_it = all_image_frame.begin(), frame_it++; frame_it != all_image_frame.end(); frame_it++)
        {
            double dt = frame_it->second.pre_integration->sum_dt;
            Vector3d tmp_g = frame_it->second.pre_integration->delta_v / dt;
            sum_g += tmp_g;
        }
        Vector3d aver_g;
        aver_g = sum_g * 1.0 / ((int)all_image_frame.size() - 1);
        double var = 0;
        for (frame_it = all_image_frame.begin(), frame_it++; frame_it != all_image_frame.end(); frame_it++)
        {
            double dt = frame_it->second.pre_integration->sum_dt;
            Vector3d tmp_g = frame_it->second.pre_integration->delta_v / dt;
            var += (tmp_g - aver_g).transpose() * (tmp_g - aver_g);
            //cout << "frame g " << tmp_g.transpose() << endl;
        }
        var = sqrt(var / ((int)all_image_frame.size() - 1));
        //ROS_WARN("IMU variation %f!", var);
        if (var < 0.25)
        {
            // ROS_INFO("IMU excitation not enouth!");
            //return false;
        }
    }

Next the global SfM runs. First, every feature in f_manager is saved into the vector<SFMFeature> sfm_f, as shown below.
SFMFeature holds the information of a single feature point:

struct SFMFeature
{
    bool state; // whether this point has been triangulated
    int id;
    vector<pair<int,Vector2d>> observation; // ids and image coordinates of all frames observing this feature
    double position[3]; // 3D coordinates
    double depth;       // depth
};

    Quaterniond Q[frame_count + 1];
    Vector3d T[frame_count + 1];
    map<int, Vector3d> sfm_tracked_points;
    vector<SFMFeature> sfm_f;
    for (auto &it_per_id : f_manager.feature)
    {
        int imu_j = it_per_id.start_frame - 1;
        SFMFeature tmp_feature;
        tmp_feature.state = false;
        tmp_feature.id = it_per_id.feature_id;
        for (auto &it_per_frame : it_per_id.feature_per_frame)
        {
            imu_j++;
            Vector3d pts_j = it_per_frame.point;
            tmp_feature.observation.push_back(make_pair(imu_j, Eigen::Vector2d{pts_j.x(), pts_j.y()}));
        }
        sfm_f.push_back(tmp_feature);
    }

relativePose is then called to decide whether some frame in the window has enough translation relative to the current frame to compute a relative pose. For each frame in the window it calls getCorresponding to collect feature correspondences:

vector<pair<Vector3d, Vector3d>> FeatureManager::getCorresponding(int frame_count_l, int frame_count_r)
{
    vector<pair<Vector3d, Vector3d>> corres;
    for (auto &it : feature)
    {
        if (it.start_frame <= frame_count_l && it.endFrame() >= frame_count_r)
        {
            Vector3d a = Vector3d::Zero(), b = Vector3d::Zero();
            int idx_l = frame_count_l - it.start_frame;
            int idx_r = frame_count_r - it.start_frame;

            a = it.feature_per_frame[idx_l].point;

            b = it.feature_per_frame[idx_r].point;
            
            corres.push_back(make_pair(a, b));
        }
    }
    return corres;
}

For every tracked feature whose observation span covers [i, WINDOW_SIZE], its coordinates in frame i are stored in a and its coordinates in frame WINDOW_SIZE in b, and the pair is returned as a correspondence.
Next, if more than 20 correspondences are found, the average displacement (parallax) of the matched points is computed; if it exceeds 30/460 (30 pixels normalized by a nominal 460-pixel focal length) and a relative rotation and translation can be solved, the frame index is kept and true is returned, meaning the relative pose can be computed. This frame l is the first frame in the sliding window whose average parallax against the current frame is large enough; it serves as the reference frame for the global SfM below, and the resulting R, t is the coordinate transform from the current frame to frame l.
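For concreteness, a sketch of relativePose closely following the VINS-Mono implementation. Here m_estimator is the Estimator's MotionEstimator member (distinct from the System mutex of the same name above), and solveRelativeRT wraps the essential-matrix solver:

bool Estimator::relativePose(Matrix3d &relative_R, Vector3d &relative_T, int &l)
{
    // search the window for the first frame with enough parallax to the newest frame
    for (int i = 0; i < WINDOW_SIZE; i++)
    {
        vector<pair<Vector3d, Vector3d>> corres = f_manager.getCorresponding(i, WINDOW_SIZE);
        if (corres.size() > 20)
        {
            double sum_parallax = 0;
            for (auto &c : corres)
            {
                Vector2d pts_0(c.first(0), c.first(1));
                Vector2d pts_1(c.second(0), c.second(1));
                sum_parallax += (pts_0 - pts_1).norm();
            }
            double average_parallax = sum_parallax / (int)corres.size();
            // 30 pixels at a nominal focal length of 460
            if (average_parallax * 460 > 30 &&
                m_estimator.solveRelativeRT(corres, relative_R, relative_T))
            {
                l = i;
                return true;
            }
        }
    }
    return false;
}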
Then sfm.construct() is called to perform the reconstruction; the corresponding code is below. If the SfM fails, the marginalization flag is set to MARGIN_OLD and false is returned, i.e., initialization fails. Solving this SfM problem over every frame in the window yields the rotation quaternions Q and translation vectors T of all frames relative to the reference frame, plus the feature-point coordinates sfm_tracked_points.

Matrix3d relative_R;
Vector3d relative_T;
int l;
    if (!relativePose(relative_R, relative_T, l))
    {
        cout << "Not enough features or parallax; Move device around" << endl;
        return false;
    }
    GlobalSFM sfm;
    if (!sfm.construct(frame_count + 1, Q, T, l,
                       relative_R, relative_T,
                       sfm_f, sfm_tracked_points))
    {
        cout << "global SFM failed!" << endl;
        marginalization_flag = MARGIN_OLD;
        return false;
    }

Next, for all image frames, including those not in the sliding window, an initial R and T are provided and then refined with solvePnP:

    map<double, ImageFrame>::iterator frame_it;
    map<int, Vector3d>::iterator it;
    frame_it = all_image_frame.begin();
    for (int i = 0; frame_it != all_image_frame.end(); frame_it++)
    {
        cv::Mat r, rvec, t, D, tmp_r;
        if((frame_it->first) == Headers[i].stamp.toSec())
        {
            frame_it->second.is_key_frame = true;
            frame_it->second.R = Q[i].toRotationMatrix() * RIC[0].transpose();
            frame_it->second.T = T[i];
            i++;
            continue;
        }
        if((frame_it->first) > Headers[i].stamp.toSec())
            i++;
 
        //note: Q and T here are the image-frame poses, not the coordinate-transform matrices used when solving PnP; the two are inverses of each other
        Matrix3d R_inital = (Q[i].inverse()).toRotationMatrix();
        Vector3d P_inital = - R_inital * T[i];
        cv::eigen2cv(R_inital, tmp_r);
        //Rodrigues formula: convert the rotation matrix into a rotation vector
        cv::Rodrigues(tmp_r, rvec);
        cv::eigen2cv(P_inital, t);

        frame_it->second.is_key_frame = false;
        //collect the vectors of 3D points and image coordinates needed by solvePnP
        vector<cv::Point3f> pts_3_vector;
        vector<cv::Point2f> pts_2_vector;
        for (auto &id_pts : frame_it->second.points)
        {
            int feature_id = id_pts.first;
            for (auto &i_p : id_pts.second)
            {
                it = sfm_tracked_points.find(feature_id);
                if(it != sfm_tracked_points.end())
                {
                    Vector3d world_pts = it->second;
                    cv::Point3f pts_3(world_pts(0), world_pts(1), world_pts(2));
                    pts_3_vector.push_back(pts_3);
                    Vector2d img_pts = i_p.second.head<2>();
                    cv::Point2f pts_2(img_pts(0), img_pts(1));
                    pts_2_vector.push_back(pts_2);
                }
            }
        }
        //ensure the number of feature points is greater than 5
        cv::Mat K = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);     
        if(pts_3_vector.size() < 6)
        {
            cout << "pts_3_vector size " << pts_3_vector.size() << endl;
            ROS_DEBUG("Not enough points for solve pnp !");
            return false;
        }
        /**
         * bool cv::solvePnP(          solve the PnP problem
         *   InputArray objectPoints,  3D coordinates of the feature points
         *   InputArray imagePoints,   corresponding image coordinates
         *   InputArray cameraMatrix,  camera intrinsic matrix
         *   InputArray distCoeffs,    distortion coefficients
         *   OutputArray rvec,         rotation vector
         *   OutputArray tvec,         translation vector
         *   bool useExtrinsicGuess = false,  use the provided initial estimate if true
         *   int flags = SOLVEPNP_ITERATIVE   LM optimization
         * )
         */
        if (! cv::solvePnP(pts_3_vector, pts_2_vector, K, D, rvec, t, 1))
        {
            ROS_DEBUG("solve pnp fail!");
            return false;
        }
        cv::Rodrigues(rvec, r);
        MatrixXd R_pnp,tmp_R_pnp;
        cv::cv2eigen(r, tmp_R_pnp);
        //likewise convert the coordinate transform back into a frame pose, expressed in the IMU frame
        R_pnp = tmp_R_pnp.transpose();
        MatrixXd T_pnp;
        cv::cv2eigen(t, T_pnp);
        T_pnp = R_pnp * (-T_pnp);
        frame_it->second.R = R_pnp * RIC[0].transpose();
        frame_it->second.T = T_pnp;
    }
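The inversion at the end of the loop converts the PnP result (a world-to-camera transform) into the camera pose and then into the IMU (body) pose. As a short consistency check of the code above, with R_bc = RIC[0] being the camera-to-IMU extrinsic rotation:

$$
\mathbf{R}_{wc}=\mathbf{R}_{cw}^{T},\qquad
\mathbf{t}_{wc}=-\mathbf{R}_{cw}^{T}\,\mathbf{t}_{cw},\qquad
\mathbf{R}_{wb}=\mathbf{R}_{wc}\,\mathbf{R}_{bc}^{T}
$$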

Then the visual-inertial joint initialization is performed:

if (visualInitialAlign())
        return true;
    else
    {
        cout << "misalign visual structure with IMU" << endl;
        return false;
    }

Now for the function bool sfm.construct() in detail.
Here frame_num = frame_count + 1 = 11, and frame_num - 1 denotes the current frame.
1. Frame l is taken as the reference coordinate frame; from relative_R, relative_T of the current frame with respect to frame l, the current frame's pose in the reference frame is obtained. Afterwards Pose[i] denotes the transform [R|T] from frame l to frame i.

	q[l].w() = 1;
	q[l].x() = 0;
	q[l].y() = 0;
	q[l].z() = 0;
	T[l].setZero();
	q[frame_num - 1] = q[l] * Quaterniond(relative_R);
	T[frame_num - 1] = relative_T;

2. First triangulate the landmarks between frame l (the reference frame) and frame frame_num - 1 (the current frame).
3. Solve PnP for the transform R_initial, P_initial from the reference frame to each frame starting at l + 1, store it in Pose,
and triangulate each of these frames against the current frame:

for (int i = l; i < frame_num - 1 ; i++)
	{
		if (i > l)
		{
			Matrix3d R_initial = c_Rotation[i - 1];
			Vector3d P_initial = c_Translation[i - 1];
			if(!solveFrameByPnP(R_initial, P_initial, i, sfm_f))
				return false;
			c_Rotation[i] = R_initial;
			c_Translation[i] = P_initial;
			c_Quat[i] = c_Rotation[i];
			Pose[i].block<3, 3>(0, 0) = c_Rotation[i];
			Pose[i].block<3, 1>(0, 3) = c_Translation[i];
		}
		triangulateTwoFrames(i, Pose[i], frame_num - 1, Pose[frame_num - 1], sfm_f);
	}

4. Triangulate frame l against each frame from l + 1 to frame_num - 2 again:

for (int i = l + 1; i < frame_num - 1; i++)
		triangulateTwoFrames(l, Pose[l], i, Pose[i], sfm_f);

5. Solve PnP for the transforms from the reference frame to frames l - 1 down to 0, and triangulate each against frame l:

for (int i = l - 1; i >= 0; i--)
	{
		Matrix3d R_initial = c_Rotation[i + 1];
		Vector3d P_initial = c_Translation[i + 1];
		if(!solveFrameByPnP(R_initial, P_initial, i, sfm_f))
			return false;
		c_Rotation[i] = R_initial;
		c_Translation[i] = P_initial;
		c_Quat[i] = c_Rotation[i];
		Pose[i].block<3, 3>(0, 0) = c_Rotation[i];
		Pose[i].block<3, 1>(0, 3) = c_Translation[i];
		triangulateTwoFrames(i, Pose[i], l, Pose[l], sfm_f);
	}

6. Triangulate the remaining feature points that have not yet been recovered. At this point the poses of all frames in the sliding window and the 3D coordinates of the feature points have been obtained:

for (int j = 0; j < feature_num; j++)
	{
		if (sfm_f[j].state == true)
			continue;
		if ((int)sfm_f[j].observation.size() >= 2)
		{
			Vector2d point0, point1;
			int frame_0 = sfm_f[j].observation[0].first;
			point0 = sfm_f[j].observation[0].second;
			int frame_1 = sfm_f[j].observation.back().first;
			point1 = sfm_f[j].observation.back().second;
			Vector3d point_3d;
			triangulatePoint(Pose[frame_0], Pose[frame_1], point0, point1, point_3d);
			sfm_f[j].state = true;
			sfm_f[j].position[0] = point_3d(0);
			sfm_f[j].position[1] = point_3d(1);
			sfm_f[j].position[2] = point_3d(2);
		}		
	}
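The triangulatePoint call above is a standard linear (DLT) triangulation; a sketch closely following the VINS-Mono implementation:

void GlobalSFM::triangulatePoint(Eigen::Matrix<double, 3, 4> &Pose0,
                                 Eigen::Matrix<double, 3, 4> &Pose1,
                                 Vector2d &point0, Vector2d &point1,
                                 Vector3d &point_3d)
{
    // each observation contributes two rows: x * P.row(2) - P.row(0), etc.
    Matrix4d design_matrix = Matrix4d::Zero();
    design_matrix.row(0) = point0[0] * Pose0.row(2) - Pose0.row(0);
    design_matrix.row(1) = point0[1] * Pose0.row(2) - Pose0.row(1);
    design_matrix.row(2) = point1[0] * Pose1.row(2) - Pose1.row(0);
    design_matrix.row(3) = point1[1] * Pose1.row(2) - Pose1.row(1);

    // the homogeneous solution is the singular vector of the smallest singular value
    Vector4d triangulated_point =
        design_matrix.jacobiSvd(Eigen::ComputeFullV).matrixV().rightCols<1>();
    point_3d(0) = triangulated_point(0) / triangulated_point(3);
    point_3d(1) = triangulated_point(1) / triangulated_point(3);
    point_3d(2) = triangulated_point(2) / triangulated_point(3);
}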

7. Run a global BA with Ceres. The code is omitted in the original post; a sketch of its structure follows.
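A sketch of the Ceres setup for this full BA, following the structure of the VINS-Mono version. ReprojectionError3D is its auto-diff reprojection cost; c_rotation / c_translation are double arrays mirroring c_Quat / c_Translation; the gauge freedom is fixed by holding frame l's pose and the translations of frames l and frame_num - 1 constant:

ceres::Problem problem;
ceres::LocalParameterization *local_parameterization = new ceres::QuaternionParameterization();

for (int i = 0; i < frame_num; i++)
{
    // copy the PnP results into raw arrays that Ceres can optimize in place
    c_translation[i][0] = c_Translation[i].x();
    c_translation[i][1] = c_Translation[i].y();
    c_translation[i][2] = c_Translation[i].z();
    c_rotation[i][0] = c_Quat[i].w();
    c_rotation[i][1] = c_Quat[i].x();
    c_rotation[i][2] = c_Quat[i].y();
    c_rotation[i][3] = c_Quat[i].z();
    problem.AddParameterBlock(c_rotation[i], 4, local_parameterization);
    problem.AddParameterBlock(c_translation[i], 3);
    if (i == l)
        problem.SetParameterBlockConstant(c_rotation[i]);    // fix the gauge (rotation)
    if (i == l || i == frame_num - 1)
        problem.SetParameterBlockConstant(c_translation[i]); // fix the gauge (translation/scale)
}

for (int i = 0; i < feature_num; i++)
{
    if (!sfm_f[i].state) // only triangulated points enter the BA
        continue;
    for (auto &obs : sfm_f[i].observation)
    {
        ceres::CostFunction *cost_function =
            ReprojectionError3D::Create(obs.second.x(), obs.second.y());
        problem.AddResidualBlock(cost_function, NULL,
                                 c_rotation[obs.first], c_translation[obs.first],
                                 sfm_f[i].position);
    }
}

ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_SCHUR;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);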

8. What the BA yields are the transforms from frame l's coordinate system to each frame; they must be converted into each frame's pose expressed in frame l's coordinate system:

for (int i = 0; i < frame_num; i++)
	{
		q[i].w() = c_rotation[i][0]; 
		q[i].x() = c_rotation[i][1]; 
		q[i].y() = c_rotation[i][2]; 
		q[i].z() = c_rotation[i][3]; 
		q[i] = q[i].inverse();
	}
	for (int i = 0; i < frame_num; i++)
	{
		T[i] = -1 * (q[i] * Vector3d(c_translation[i][0], c_translation[i][1], c_translation[i][2]));
	}
	for (int i = 0; i < (int)sfm_f.size(); i++)
	{
		if(sfm_f[i].state)
			sfm_tracked_points[sfm_f[i].id] = Vector3d(sfm_f[i].position[0], sfm_f[i].position[1], sfm_f[i].position[2]);
	}

The content above mainly draws on this blog post: https://www.twblogs.net/a/5ca3fe9cbd9eee5b1a06f4ef/zh-cn
The visual-inertial joint initialization, bool Estimator::visualInitialAlign(), will be covered in the next post.
