The previous post covered the initialization part of VINS-Mono. This post walks through its nonlinear optimization part, combining code and formulas. Parts of this article reference https://blog.csdn.net/u012871872/article/details/78128087
1. Entry point
The entry point of the nonlinear optimization is
estimator.processImage(image, img_msg->header);
Inside, it first checks whether the system has been initialized; here we go directly to the post-initialization (NON_LINEAR) branch. The code is as follows:
TicToc t_solve;
solveOdometry();
//ROS_DEBUG("solver costs: %fms", t_solve.toc());

if (failureDetection())
{
    // ROS_WARN("failure detection!");
    failure_occur = 1;
    clearState();
    setParameter();
    // ROS_WARN("system reboot!");
    return;
}

TicToc t_margin;
slideWindow();
f_manager.removeFailures();
//ROS_DEBUG("marginalization costs: %fms", t_margin.toc());

// prepare output of VINS
key_poses.clear();
for (int i = 0; i <= WINDOW_SIZE; i++)
    key_poses.push_back(Ps[i]);

last_R = Rs[WINDOW_SIZE];
last_P = Ps[WINDOW_SIZE];
last_R0 = Rs[0];
last_P0 = Ps[0];
2. The solveOdometry() function
It first checks whether frame_count is smaller than the window size and returns immediately if so. Otherwise, when the solver is in the NON_LINEAR state, it triangulates the features and then runs the back-end optimization.
void Estimator::solveOdometry()
{
    if (frame_count < WINDOW_SIZE)
        return;
    if (solver_flag == NON_LINEAR)
    {
        TicToc t_tri;
        f_manager.triangulate(Ps, tic, ric);
        //cout << "triangulation costs : " << t_tri.toc() << endl;
        backendOptimization();
    }
}
2.1 Triangulation
Important variables in the code: Ps holds the translation of the body (IMU) in the world frame $\mathbf{p}_b^w$, Rs the rotation of the body in the world frame $\mathbf{q}_b^w$, and Vs the velocity of the body in the world frame $\mathbf{v}_b^w$; para_Pose is equivalent to Ps and Rs, only stored in a different data type; ric and tic are the IMU-camera extrinsics $\mathbf{p}_c^b, \mathbf{q}_c^b$; estimator.f_manager.feature holds all the global feature points.
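For reference, the camera pose of a frame in the world frame is obtained by composing its IMU pose with the extrinsics; this is exactly what t0, R0 (and t1, R1) compute in the triangulation code below:

$$\mathbf{R}_c^w = \mathbf{R}_b^w \mathbf{R}_c^b, \qquad \mathbf{t}_c^w = \mathbf{p}_b^w + \mathbf{R}_b^w \mathbf{p}_c^b$$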
(1) Constructing the reference frame
For each feature, the pose $(\mathbf{R}_0, \mathbf{t}_0)$ of its starting frame (the first frame that observes it) is computed from the corresponding IMU pose and the extrinsics. Then, for every frame observing this feature, its global camera pose is computed and transformed into the $(\mathbf{R}_0, \mathbf{t}_0)$ frame, the corresponding projection matrix is constructed, and the matrix $\mathbf{A}$ is assembled and solved. The corresponding formula is
$$\lambda_k \mathbf{x}_k = \mathbf{P}_k \mathbf{y}, \qquad \mathbf{P}_k = [\mathbf{R}_k, \mathbf{t}_k] \in \mathbb{R}^{3\times 4}$$
where $\mathbf{y} \in \mathbb{R}^4$ is the homogeneous coordinate of the feature, each observation is $\mathbf{x}_k = [u_k, v_k, 1]^T$, and $\mathbf{P}_k$ is the projection matrix from the reference frame to the camera frame of observation $k$.
The corresponding constraint equations are then
$$\begin{bmatrix} u_1\mathbf{P}_{1,3}^T - \mathbf{P}_{1,1}^T \\ v_1\mathbf{P}_{1,3}^T - \mathbf{P}_{1,2}^T \\ \vdots \\ u_n\mathbf{P}_{n,3}^T - \mathbf{P}_{n,1}^T \\ v_n\mathbf{P}_{n,3}^T - \mathbf{P}_{n,2}^T \end{bmatrix}\mathbf{y} = \mathbf{0} \;\Rightarrow\; \mathbf{A}\mathbf{y} = \mathbf{0}$$

where $\mathbf{P}_{k,i}^T$ denotes the $i$-th row of $\mathbf{P}_k$.
The corresponding code is
Eigen::MatrixXd svd_A(2 * it_per_id.feature_per_frame.size(), 4);
int svd_idx = 0;

// projection matrix of the reference frame itself: [I | 0]
Eigen::Matrix<double, 3, 4> P0;
// camera pose of the reference frame (first frame observing this feature) in the world frame
Eigen::Vector3d t0 = Ps[imu_i] + Rs[imu_i] * tic[0];
Eigen::Matrix3d R0 = Rs[imu_i] * ric[0];
P0.leftCols<3>() = Eigen::Matrix3d::Identity();
P0.rightCols<1>() = Eigen::Vector3d::Zero();

for (auto &it_per_frame : it_per_id.feature_per_frame)
{
    imu_j++;

    // camera pose of the observing frame in the world frame
    Eigen::Vector3d t1 = Ps[imu_j] + Rs[imu_j] * tic[0];
    Eigen::Matrix3d R1 = Rs[imu_j] * ric[0];
    // relative pose of the observing camera with respect to the reference frame
    Eigen::Vector3d t = R0.transpose() * (t1 - t0);
    Eigen::Matrix3d R = R0.transpose() * R1;
    // projection matrix from the reference frame into this camera frame
    Eigen::Matrix<double, 3, 4> P;
    P.leftCols<3>() = R.transpose();
    P.rightCols<1>() = -R.transpose() * t;
    Eigen::Vector3d f = it_per_frame.point.normalized();
    // two DLT rows per observation
    svd_A.row(svd_idx++) = f[0] * P.row(2) - f[2] * P.row(0);
    svd_A.row(svd_idx++) = f[1] * P.row(2) - f[2] * P.row(1);

    // note: this check is the last statement in the loop body, so the continue has no effect
    if (imu_i == imu_j)
        continue;
}
assert(svd_idx == svd_A.rows());
Then an SVD is applied to $\mathbf{A}$; the depth is obtained as the third element of the right singular vector corresponding to the smallest singular value divided by its fourth element, i.e.
Eigen::Vector4d svd_V = Eigen::JacobiSVD<Eigen::MatrixXd>(svd_A, Eigen::ComputeThinV).matrixV().rightCols<1>();
double svd_method = svd_V[2] / svd_V[3];
//it_per_id->estimated_depth = -b / A;
//it_per_id->estimated_depth = svd_V[2] / svd_V[3];
it_per_id.estimated_depth = svd_method;
//it_per_id->estimated_depth = INIT_DEPTH;
if (it_per_id.estimated_depth < 0.1)
{
    it_per_id.estimated_depth = INIT_DEPTH;
}
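To make the linear triangulation step concrete, here is a minimal, self-contained sketch of the same DLT construction for a feature observed in several frames. The function name and the plain std::vector interface are illustrative only, not part of VINS-Mono:

#include <Eigen/Dense>
#include <vector>

// Minimal DLT triangulation sketch (illustrative; not VINS-Mono code).
// Each observation k comes with a projection matrix P_k = [R_k | t_k] mapping the
// reference (first observing) frame into camera k, and a normalized image point (u_k, v_k, 1).
// Requires at least two observations; returns the depth in the reference camera frame.
double triangulateDepth(const std::vector<Eigen::Matrix<double, 3, 4>> &P,
                        const std::vector<Eigen::Vector3d> &obs)
{
    Eigen::MatrixXd A(2 * obs.size(), 4);
    for (size_t k = 0; k < obs.size(); ++k)
    {
        // Two DLT rows per observation: u*P_row3 - P_row1 and v*P_row3 - P_row2.
        A.row(2 * k)     = obs[k].x() * P[k].row(2) - P[k].row(0);
        A.row(2 * k + 1) = obs[k].y() * P[k].row(2) - P[k].row(1);
    }
    // Homogeneous solution y: right singular vector of the smallest singular value.
    Eigen::Vector4d y = Eigen::JacobiSVD<Eigen::MatrixXd>(A, Eigen::ComputeThinV)
                            .matrixV().rightCols<1>();
    // Depth in the reference frame: z component divided by the homogeneous scale.
    return y[2] / y[3];
}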
2.2 backendOptimization(): back-end optimization
After triangulation we have depth estimates for the feature points; the back-end optimization is then performed. It mainly consists of the following steps:
2.2.1 Converting the system states to solver data types
Ps and Rs are converted to para_Pose; Vs, Bas and Bgs to para_SpeedBias; tic and ric to para_Ex_Pose. The code is as follows:
void Estimator::vector2double()
{
    for (int i = 0; i <= WINDOW_SIZE; i++)
    {
        para_Pose[i][0] = Ps[i].x();
        para_Pose[i][1] = Ps[i].y();
        para_Pose[i][2] = Ps[i].z();
        Quaterniond q{Rs[i]};
        para_Pose[i][3] = q.x();
        para_Pose[i][4] = q.y();
        para_Pose[i][5] = q.z();
        para_Pose[i][6] = q.w();

        para_SpeedBias[i][0] = Vs[i].x();
        para_SpeedBias[i][1] = Vs[i].y();
        para_SpeedBias[i][2] = Vs[i].z();
        para_SpeedBias[i][3] = Bas[i].x();
        para_SpeedBias[i][4] = Bas[i].y();
        para_SpeedBias[i][5] = Bas[i].z();
        para_SpeedBias[i][6] = Bgs[i].x();
        para_SpeedBias[i][7] = Bgs[i].y();
        para_SpeedBias[i][8] = Bgs[i].z();
    }
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        para_Ex_Pose[i][0] = tic[i].x();
        para_Ex_Pose[i][1] = tic[i].y();
        para_Ex_Pose[i][2] = tic[i].z();
        Quaterniond q{ric[i]};
        para_Ex_Pose[i][3] = q.x();
        para_Ex_Pose[i][4] = q.y();
        para_Ex_Pose[i][5] = q.z();
        para_Ex_Pose[i][6] = q.w();
    }

    VectorXd dep = f_manager.getDepthVector();
    for (int i = 0; i < f_manager.getFeatureCount(); i++)
        para_Feature[i][0] = dep(i);
    if (ESTIMATE_TD)
        para_Td[0][0] = td;
}
2.2.2 Adding the residuals
These include the prior residual, the IMU measurement residuals and the camera measurement residuals. Note that the IMU terms and the camera terms are weighted against each other by their respective covariances: the IMU covariance is the pre-integration covariance (applied in IMUFactor::Evaluate, which also computes the Jacobians), while the camera measurement residual uses a fixed square-root information coefficient $\frac{f}{1.5}$.
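For reference, here is a sketch of how this square-root information is typically set up in the VINS-Mono code base (FOCAL_LENGTH is the virtual focal length, e.g. 460, and 1.5 can be read as the assumed pixel noise standard deviation; the exact variable names differ between versions):

// Sketch: square-root information of a visual measurement (2D reprojection error in
// normalized image coordinates). FOCAL_LENGTH / 1.5 converts an assumed 1.5-pixel
// measurement noise into normalized image coordinates.
Eigen::Matrix2d project_sqrt_info_ = FOCAL_LENGTH / 1.5 * Eigen::Matrix2d::Identity();
// Later used as the information matrix of each reprojection edge:
// edge->SetInformation(project_sqrt_info_.transpose() * project_sqrt_info_);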
The corresponding function is
problemSolve();
It mainly involves the following steps:
(1) Building the problem
backend::Problem problem(backend::Problem::ProblemType::SLAM_PROBLEM);
vector<shared_ptr<backend::VertexPose>> vertexCams_vec;
vector<shared_ptr<backend::VertexSpeedBias>> vertexVB_vec;
int pose_dim = 0;
(2) Adding the extrinsic parameter vertex
If the extrinsics are already well estimated (ESTIMATE_EXTRINSIC is 0), the vertex is set to fixed so that it stays constant during the optimization. The code is as follows:
shared_ptr<backend::VertexPose> vertexExt(new backend::VertexPose());
{
    Eigen::VectorXd pose(7);
    pose << para_Ex_Pose[0][0], para_Ex_Pose[0][1], para_Ex_Pose[0][2], para_Ex_Pose[0][3], para_Ex_Pose[0][4], para_Ex_Pose[0][5], para_Ex_Pose[0][6];
    vertexExt->SetParameters(pose);

    if (!ESTIMATE_EXTRINSIC)
    {
        //ROS_DEBUG("fix extinsic param");
        // TODO:: set Hessian prior to zero
        vertexExt->SetFixed();
    }
    else{
        //ROS_DEBUG("estimate extinsic param");
    }
    problem.AddVertex(vertexExt);
    pose_dim += vertexExt->LocalDimension();
}
(3) Adding the IMU pose vertices and the IMU speed/bias vertices
For each of the WINDOW_SIZE + 1 frames in the window, a pose vertex (7 parameters, 6 local DoF) and a speed/bias vertex (9 parameters) are added, and pose_dim accumulates their local dimensions. The code is as follows:
for (int i = 0; i < WINDOW_SIZE + 1; i++)
{
    shared_ptr<backend::VertexPose> vertexCam(new backend::VertexPose());
    Eigen::VectorXd pose(7);
    pose << para_Pose[i][0], para_Pose[i][1], para_Pose[i][2], para_Pose[i][3], para_Pose[i][4], para_Pose[i][5], para_Pose[i][6];
    vertexCam->SetParameters(pose);
    vertexCams_vec.push_back(vertexCam);
    problem.AddVertex(vertexCam);
    pose_dim += vertexCam->LocalDimension();

    shared_ptr<backend::VertexSpeedBias> vertexVB(new backend::VertexSpeedBias());
    Eigen::VectorXd vb(9);
    vb << para_SpeedBias[i][0], para_SpeedBias[i][1], para_SpeedBias[i][2],
          para_SpeedBias[i][3], para_SpeedBias[i][4], para_SpeedBias[i][5],
          para_SpeedBias[i][6], para_SpeedBias[i][7], para_SpeedBias[i][8];
    vertexVB->SetParameters(vb);
    vertexVB_vec.push_back(vertexVB);
    problem.AddVertex(vertexVB);
    pose_dim += vertexVB->LocalDimension();
}
(4) Adding the constraint edges corresponding to the IMU pre-integration
Consecutive frames i and j = i + 1 are connected by an EdgeImu built from the pre-integration between them; pre-integrations spanning more than 10 s are skipped.
for (int i = 0; i < WINDOW_SIZE; i++)
{
    int j = i + 1;
    if (pre_integrations[j]->sum_dt > 10.0)
        continue;

    std::shared_ptr<backend::EdgeImu> imuEdge(new backend::EdgeImu(pre_integrations[j]));
    std::vector<std::shared_ptr<backend::Vertex>> edge_vertex;
    edge_vertex.push_back(vertexCams_vec[i]);
    edge_vertex.push_back(vertexVB_vec[i]);
    edge_vertex.push_back(vertexCams_vec[j]);
    edge_vertex.push_back(vertexVB_vec[j]);
    imuEdge->SetVertex(edge_vertex);
    problem.AddEdge(imuEdge);
}
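For reference, the residual evaluated by this edge between frames $b_i$ and $b_j$ is the standard VINS-Mono pre-integration residual, with $\hat{\boldsymbol{\alpha}}_{b_j}^{b_i}, \hat{\boldsymbol{\beta}}_{b_j}^{b_i}, \hat{\boldsymbol{\gamma}}_{b_j}^{b_i}$ the pre-integrated position, velocity and rotation measurements:

$$\mathbf{r}_B = \begin{bmatrix} \mathbf{R}_w^{b_i}\left(\mathbf{p}_{b_j}^w - \mathbf{p}_{b_i}^w - \mathbf{v}_{b_i}^w \Delta t + \tfrac{1}{2}\mathbf{g}^w \Delta t^2\right) - \hat{\boldsymbol{\alpha}}_{b_j}^{b_i} \\ 2\left[(\hat{\boldsymbol{\gamma}}_{b_j}^{b_i})^{-1} \otimes (\mathbf{q}_{b_i}^w)^{-1} \otimes \mathbf{q}_{b_j}^w\right]_{xyz} \\ \mathbf{R}_w^{b_i}\left(\mathbf{v}_{b_j}^w - \mathbf{v}_{b_i}^w + \mathbf{g}^w \Delta t\right) - \hat{\boldsymbol{\beta}}_{b_j}^{b_i} \\ \mathbf{b}_{a_j} - \mathbf{b}_{a_i} \\ \mathbf{b}_{g_j} - \mathbf{b}_{g_i} \end{bmatrix}$$

Its information matrix is the inverse of the pre-integration covariance mentioned above.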
(5) Adding the visual factors
First, an inverse-depth vertex is added for each feature:
shared_ptr<backend::VertexInverseDepth> verterxPoint(new backend::VertexInverseDepth());
VecX inv_d(1);
inv_d << para_Feature[feature_index][0];
verterxPoint->SetParameters(inv_d);
problem.AddVertex(verterxPoint);
vertexPt_vec.push_back(verterxPoint);
Then, for each feature, all of its observations are traversed, and a constraint edge is built between the first observing frame and every other frame that observes the same feature. Each edge involves: the inverse depth, the global IMU poses of the two frames, the extrinsics, the information matrix, and a robust loss function (kernel).
// traverse all observations of this feature
for (auto &it_per_frame : it_per_id.feature_per_frame)
{
    imu_j++;
    if (imu_i == imu_j)
        continue;

    Vector3d pts_j = it_per_frame.point;
    std::shared_ptr<backend::EdgeReprojection> edge(new backend::EdgeReprojection(pts_i, pts_j));
    std::vector<std::shared_ptr<backend::Vertex>> edge_vertex;
    edge_vertex.push_back(verterxPoint);
    edge_vertex.push_back(vertexCams_vec[imu_i]);
    edge_vertex.push_back(vertexCams_vec[imu_j]);
    edge_vertex.push_back(vertexExt);

    edge->SetVertex(edge_vertex);
    edge->SetInformation(project_sqrt_info_.transpose() * project_sqrt_info_);
    edge->SetLossFunction(lossfunction);
    problem.AddEdge(edge);
}
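The reprojection residual evaluated by EdgeReprojection can be written as follows: the feature with inverse depth $\lambda$ and normalized observation $\bar{\mathbf{p}}^{c_i}$ (pts_i) in its first observing frame is back-projected, transformed through IMU frame $b_i$, the world frame and IMU frame $b_j$ into camera frame $c_j$, and compared with the observation $\bar{\mathbf{p}}^{c_j}$ (pts_j):

$$\mathbf{P}^{c_j} = (\mathbf{R}_c^b)^T\left\{(\mathbf{R}_{b_j}^w)^T\left[\mathbf{R}_{b_i}^w\left(\mathbf{R}_c^b \frac{\bar{\mathbf{p}}^{c_i}}{\lambda} + \mathbf{p}_c^b\right) + \mathbf{p}_{b_i}^w - \mathbf{p}_{b_j}^w\right] - \mathbf{p}_c^b\right\}, \qquad \mathbf{r}_C = \left(\frac{\mathbf{P}^{c_j}}{z^{c_j}}\right)_{xy} - \left(\bar{\mathbf{p}}^{c_j}\right)_{xy}$$

where $z^{c_j}$ is the third component of $\mathbf{P}^{c_j}$, and the information matrix is $(\frac{f}{1.5})^2 \mathbf{I}_2$ as set above.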
(6) Adding the prior
if (Hprior_.rows() > 0)
{
    // the prior on the extrinsics is set to zero. TODO:: this should be handled inside the solver
    // Hprior_.block(0,0,6,Hprior_.cols()).setZero();
    // Hprior_.block(0,0,Hprior_.rows(),6).setZero();

    problem.SetHessianPrior(Hprior_); // pass the prior to the problem
    problem.SetbPrior(bprior_);
    problem.SetErrPrior(errprior_);
    problem.SetJtPrior(Jprior_inv_);
    problem.ExtendHessiansPriorSize(15); // the prior still has the old dimension; extend it to hold the newest pose
}
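The prior Hessian and prior residual were built before the newest frame entered the window, so their dimension is 15 short of the current pose_dim (6 DoF for the newest pose plus 9 for its speed/bias). What ExtendHessiansPriorSize(15) amounts to is zero-padding them; a sketch of the intent, not of the exact implementation:

$$\mathbf{H}_{prior} \leftarrow \begin{bmatrix}\mathbf{H}_{prior} & \mathbf{0} \\ \mathbf{0} & \mathbf{0}\end{bmatrix}, \qquad \mathbf{b}_{prior} \leftarrow \begin{bmatrix}\mathbf{b}_{prior} \\ \mathbf{0}\end{bmatrix}$$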
(7) Solving the nonlinear optimization: the LM method
Once all vertices and edges have been added, the nonlinear optimization is solved (here with 10 iterations).
problem.Solve(10);
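Each LM iteration solves the damped normal equations assembled from all of the residuals above (prior, IMU and visual), with $\boldsymbol{\Sigma}^{-1}$ the corresponding information matrices and $\mu$ the damping factor, which is increased or decreased depending on whether a step reduces the cost:

$$\left(\mathbf{J}^T \boldsymbol{\Sigma}^{-1} \mathbf{J} + \mu\mathbf{I}\right)\Delta\mathbf{x} = -\mathbf{J}^T \boldsymbol{\Sigma}^{-1} \mathbf{r}$$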
(8) Updating the poses
After the solve finishes, the IMU poses in the window are updated, together with the speed and bias vectors:
for (int i = 0; i < WINDOW_SIZE + 1; i++)
{
    VecX p = vertexCams_vec[i]->Parameters();
    for (int j = 0; j < 7; ++j)
    {
        para_Pose[i][j] = p[j];
    }
    VecX vb = vertexVB_vec[i]->Parameters();
    for (int j = 0; j < 9; ++j)
    {
        para_SpeedBias[i][j] = vb[j];
    }
}
Then the inverse depth of each feature is updated:
for (int i = 0; i < vertexPt_vec.size(); ++i)
{
    VecX f = vertexPt_vec[i]->Parameters();
    para_Feature[i][0] = f[0];
}
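Note that these loops only write the optimization result back into the para_* double arrays; the conversion back into the Eigen states Ps, Rs, Vs, Bas, Bgs happens afterwards in double2vector(). A simplified sketch of that conversion (the real function additionally anchors the position and yaw of the first frame, which are unobservable, to their values before the optimization):

// Simplified sketch of double2vector(): copy the optimized para_* arrays back into the
// Eigen states. The first-frame position/yaw anchoring of the real function is omitted.
for (int i = 0; i <= WINDOW_SIZE; i++)
{
    Ps[i] = Eigen::Vector3d(para_Pose[i][0], para_Pose[i][1], para_Pose[i][2]);
    // para_Pose stores the quaternion as (x, y, z, w); Eigen's constructor takes (w, x, y, z).
    Rs[i] = Eigen::Quaterniond(para_Pose[i][6], para_Pose[i][3],
                               para_Pose[i][4], para_Pose[i][5]).normalized().toRotationMatrix();
    Vs[i]  = Eigen::Vector3d(para_SpeedBias[i][0], para_SpeedBias[i][1], para_SpeedBias[i][2]);
    Bas[i] = Eigen::Vector3d(para_SpeedBias[i][3], para_SpeedBias[i][4], para_SpeedBias[i][5]);
    Bgs[i] = Eigen::Vector3d(para_SpeedBias[i][6], para_SpeedBias[i][7], para_SpeedBias[i][8]);
}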
Note that the global coordinate frame here is the camera coordinate frame of the first frame.
(9) Marginalization
The marginalization option (MARGIN_OLD or MARGIN_SECOND_NEW) was already decided earlier, based on the parallax of the incoming frame:
if (f_manager.addFeatureCheckParallax(frame_count, image, td))
    marginalization_flag = MARGIN_OLD;
else
    marginalization_flag = MARGIN_SECOND_NEW;
If the flag is MARGIN_OLD, the oldest frame is marginalized via MargOldFrame(), and addr_shift records, for every parameter block that is kept, the address it will occupy after the window slides (each pose and speed/bias block moves forward by one slot, while the extrinsics and td stay in place). The corresponding code is
// maintain the marginalization
TicToc t_whole_marginalization;
if (marginalization_flag == MARGIN_OLD)
{
    vector2double();

    MargOldFrame();

    // addresses of the parameter blocks that remain in the prior
    std::unordered_map<long, double *> addr_shift;
    for (int i = 1; i <= WINDOW_SIZE; i++)
    {
        addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
        addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
    }
    for (int i = 0; i < NUM_OF_CAM; i++)
        addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];
    if (ESTIMATE_TD)
    {
        addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];
    }
}
If the flag is MARGIN_SECOND_NEW, the second-newest frame is marginalized via MargNewFrame() (only when a prior already exists); in addr_shift the newest frame's blocks move into the second-newest slot, and all other blocks stay where they are. The corresponding code is
if (Hprior_.rows() > 0)
{
    vector2double();

    MargNewFrame();

    std::unordered_map<long, double *> addr_shift;
    for (int i = 0; i <= WINDOW_SIZE; i++)
    {
        if (i == WINDOW_SIZE - 1)
            continue;
        else if (i == WINDOW_SIZE)
        {
            addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
            addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
        }
        else
        {
            addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i];
            addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i];
        }
    }
    for (int i = 0; i < NUM_OF_CAM; i++)
        addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];
    if (ESTIMATE_TD)
    {
        addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];
    }
}