Reference blogs:
https://www.cnblogs.com/feifanrensheng/p/10532918.html
https://blog.csdn.net/u012871872/article/details/78128087
https://blog.csdn.net/weixin_41394379/article/details/89975386
First, the illustrated example of the marginalization strategies is worth consulting.
VINS chooses between two marginalization strategies depending on whether the second-newest frame is a keyframe: by comparing the parallax between the second-newest and the third-newest frames, it decides whether to marginalize the second-newest frame or the oldest frame. In the code this breaks down as follows: when the second-newest frame is a keyframe (MARGIN_OLD), the oldest frame is marginalized, together with the landmarks it observes and the associated IMU data, and this information is converted into a prior that is added to the overall objective function.
1. Marginalizing the oldest frame
The corresponding function is
void Estimator::MargOldFrame()
This function first builds the problem: it adds the extrinsic-parameter vertex, the IMU-related vertices, and the visual vertices and constraint edges in turn, sets the prior information, then specifies the vertices to be marginalized and performs the marginalization:
std::vector<std::shared_ptr<backend::Vertex>> marg_vertex;
marg_vertex.push_back(vertexCams_vec[0]); // index 0 is the oldest frame
marg_vertex.push_back(vertexVB_vec[0]);
problem.Marginalize(marg_vertex, pose_dim);
2. Marginalizing the second-newest frame
In VINS, when the second-newest frame is not a keyframe, the strategy is MARGIN_SECOND_NEW.
The corresponding function is
void Estimator::MargNewFrame()
This function likewise builds the problem first: it adds the extrinsic-parameter vertex and the IMU-related vertices, sets the prior information, then specifies the vertices to be marginalized and performs the marginalization. Note that no visual constraint edges are added here: since the second-newest frame is not a keyframe, its visual measurements are simply dropped rather than folded into the prior (in VINS they are treated as nearly redundant with the newest frame), which also keeps the prior from densifying.
std::vector<std::shared_ptr<backend::Vertex>> marg_vertex;
marg_vertex.push_back(vertexCams_vec[WINDOW_SIZE - 1]); // note the index: the second-newest frame
marg_vertex.push_back(vertexVB_vec[WINDOW_SIZE - 1]);
problem.Marginalize(marg_vertex, pose_dim);
3. The marginalization function
First, we review the theory behind marginalization.
The sliding-window optimization in VINS preserves the constraints between frames that slide out of the window and frames that remain: it converts them, via marginalization, into a prior error term for the subsequent nonlinear optimization, so that no constraint information is lost. This post records the concrete scheme.
A tightly coupled VIO handles two kinds of errors, the IMU error and the image (reprojection) error. The parameter increments are solved iteratively with LM, GN, Dogleg, or similar methods, minimizing the nonlinear error to obtain the maximum-likelihood estimate.
The core of each iteration is solving the increment equation:
$$\mathbf{H}\,\delta\mathbf{x} = \mathbf{b}$$
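Here $\mathbf{H}$ and $\mathbf{b}$ are assembled from the residuals $\mathbf{e}_i$, Jacobians $\mathbf{J}_i$, and information matrices $\mathbf{W}_i$ of all factors. Schematically (sign conventions for $\mathbf{b}$ vary; the assembly loop later in this post uses the minus sign):

$$\mathbf{H}=\sum_i \mathbf{J}_i^T\mathbf{W}_i\mathbf{J}_i,\qquad \mathbf{b}=-\sum_i \mathbf{J}_i^T\mathbf{W}_i\mathbf{e}_i$$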
With a sliding window, the parameters of frames that slide out of the window are no longer optimized, yet they still constrain the data inside the window; dropping these frames outright would discard constraint information, so they are wrapped into a prior that enters the overall nonlinear problem as an additional error term. This step is marginalization. Suppose the states to be kept are $\mathbf{x}_1$ and the states to be marginalized are $\mathbf{x}_2$ (the derivation below eliminates $\mathbf{x}_2$). The increment equation above becomes:
$$\begin{bmatrix} \mathbf{H}_{11} & \mathbf{H}_{12} \\ \mathbf{H}_{21} & \mathbf{H}_{22} \end{bmatrix} \begin{bmatrix} \delta \mathbf{x}_1 \\ \delta \mathbf{x}_2 \end{bmatrix}=\begin{bmatrix} \mathbf{b}_1 \\ \mathbf{b}_2 \end{bmatrix}$$
Applying a Schur complement:
$$\begin{bmatrix} \mathbf{I} & -\mathbf{H}_{12}\mathbf{H}_{22}^{-1} \\ \mathbf{0} & \mathbf{I}\end{bmatrix}\begin{bmatrix} \mathbf{H}_{11} & \mathbf{H}_{12} \\ \mathbf{H}_{21} & \mathbf{H}_{22} \end{bmatrix} \begin{bmatrix} \delta \mathbf{x}_1 \\ \delta \mathbf{x}_2 \end{bmatrix}=\begin{bmatrix} \mathbf{I} & -\mathbf{H}_{12}\mathbf{H}_{22}^{-1} \\ \mathbf{0} & \mathbf{I}\end{bmatrix}\begin{bmatrix} \mathbf{b}_1 \\ \mathbf{b}_2 \end{bmatrix}$$

$$\begin{bmatrix} \mathbf{H}_{11}-\mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{H}_{21} & \mathbf{0} \\ \mathbf{H}_{21} & \mathbf{H}_{22} \end{bmatrix}\begin{bmatrix} \delta \mathbf{x}_1 \\ \delta \mathbf{x}_2 \end{bmatrix}=\begin{bmatrix} \mathbf{b}_1- \mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{b}_2\\ \mathbf{b}_2 \end{bmatrix}$$
The first block row gives
$$(\mathbf{H}_{11}-\mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{H}_{21})\,\delta \mathbf{x}_1=\mathbf{b}_1-\mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{b}_2$$
Through this operation, no constraint information is lost, and $\mathbf{x}_2$ never needs to be solved for.
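As a sanity check, here is a small self-contained Eigen example (my own sketch, not VINS code) confirming that the Schur-reduced system recovers exactly the $\delta\mathbf{x}_1$ of the full solve:

#include <Eigen/Dense>
#include <iostream>

int main() {
    // Small SPD system: x1 (2-dim) is kept, x2 (2-dim) is marginalized.
    Eigen::Matrix4d A = Eigen::Matrix4d::Random();
    Eigen::Matrix4d H = A.transpose() * A + Eigen::Matrix4d::Identity();
    Eigen::Vector4d b = Eigen::Vector4d::Random();

    Eigen::Matrix2d H11 = H.block<2,2>(0,0), H12 = H.block<2,2>(0,2);
    Eigen::Matrix2d H21 = H.block<2,2>(2,0), H22 = H.block<2,2>(2,2);
    Eigen::Vector2d b1 = b.head<2>(),        b2 = b.tail<2>();

    // Full solve, keep only the x1 part.
    Eigen::Vector4d dx_full = H.ldlt().solve(b);

    // Schur-reduced solve: (H11 - H12 H22^{-1} H21) dx1 = b1 - H12 H22^{-1} b2
    Eigen::Matrix2d H22_inv = H22.inverse();
    Eigen::Vector2d dx1 = (H11 - H12 * H22_inv * H21)
                              .ldlt()
                              .solve(b1 - H12 * H22_inv * b2);

    std::cout << (dx_full.head<2>() - dx1).norm() << std::endl;  // ~1e-15
    return 0;
}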
The reduced system above is the increment equation to be solved after marginalization. From it we now recover the concrete form of the prior error $\mathbf{e}_p$ that marginalization encapsulates.
As the iterations proceed, the state is continually updated. At marginalization time, however, the values of the marginalized states are frozen: the Jacobians used in the Schur complement are the ones obtained by Taylor expansion at that frozen value (the linearization point $\mathbf{x}_0$), and they must stay fixed from then on. In VINS, the parameter values at the linearization point, $\mathbf{x}_0$, are saved as keep_block_data. This is the First Estimate Jacobian (FEJ) idea; for more detail see the discussions of sliding-window optimization, marginalization, the Schur complement, FEJ, and the fill-in problem. In the equation above, however, since $\mathbf{b}=\mathbf{J}^{T}\mathbf{e}$ and $\mathbf{e}$ changes as the state is updated, $\mathbf{b}$ must change with it.
The state update:
$$\mathbf{x}'=\mathrm{Exp}(\delta \mathbf{x}) \otimes \mathbf{x}_0$$
and the corresponding update of $\mathbf{b}$ can be written, to first order, as
$$\mathbf{b} =\mathbf{b}_0+\left.\frac{\partial \mathbf{b}}{\partial \mathbf{x}}\right|_{\mathbf{x}_0}\delta \mathbf{x}=\mathbf{b}_0+\mathbf{J}^{T}\left.\frac{\partial \mathbf{e}}{\partial \mathbf{x}}\right|_{\mathbf{x}_0}\delta \mathbf{x}=\mathbf{b}_0+\mathbf{H}\,\delta \mathbf{x}$$
We thus have
$$\begin{bmatrix} \mathbf{b}_1 \\ \mathbf{b}_2\end{bmatrix}=\begin{bmatrix} \mathbf{b}_{1,0} \\ \mathbf{b}_{2,0}\end{bmatrix}+\begin{bmatrix} \mathbf{H}_{11} & \mathbf{H}_{12} \\ \mathbf{H}_{21} & \mathbf{H}_{22} \end{bmatrix} \begin{bmatrix} \delta \mathbf{x}_1 \\ \delta \mathbf{x}_2 \end{bmatrix},\qquad \mathbf{b}^*=\mathbf{b}_1- \mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{b}_2$$
so that, substituting,
$$\begin{aligned}\mathbf{b}^*&=(\mathbf{b}_{1,0}+\mathbf{H}_{11}\delta \mathbf{x}_1+\mathbf{H}_{12}\delta \mathbf{x}_2)- \mathbf{H}_{12}\mathbf{H}_{22}^{-1}(\mathbf{b}_{2,0}+\mathbf{H}_{21}\delta \mathbf{x}_1+\mathbf{H}_{22}\delta \mathbf{x}_2) \\ &=\mathbf{b}_{1,0}-\mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{b}_{2,0}+(\mathbf{H}_{11}-\mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{H}_{21})\,\delta\mathbf{x}_1 \\ &= \mathbf{b}_0^*+\mathbf{H}_0^*\,\delta \mathbf{x}_1 \\ &=\mathbf{J}_l^T\left((\mathbf{J}_l^T)^+\mathbf{b}_0^*+\mathbf{J}_l\,\delta\mathbf{x}_1\right)=\mathbf{J}_l^T\mathbf{e}_p\end{aligned}$$
Therefore, the equivalent prior error after marginalization is $\mathbf{e}_p=(\mathbf{J}_l^T)^+\mathbf{b}_0^*+\mathbf{J}_l\,\delta\mathbf{x}_1$.
The decomposition of $\mathbf{H}$ used to obtain $\mathbf{J}$ is:
$$\mathbf{H}=\mathbf{U}\mathbf{S}\mathbf{U}^T=\mathbf{J}^T\mathbf{J}$$
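A minimal Eigen check of this decomposition (my own sketch; the real code later in this post uses the same SelfAdjointEigenSolver, and the cwiseMax clamp mirrors its eps thresholding):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // Build a symmetric positive semi-definite H.
    Eigen::Matrix3d A = Eigen::Matrix3d::Random();
    Eigen::Matrix3d H = A.transpose() * A;

    // H = U S U^T  =>  J = S^{1/2} U^T, so that J^T J = U S U^T = H.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(H);
    Eigen::Matrix3d J = es.eigenvalues().cwiseMax(0.0).cwiseSqrt().asDiagonal()
                        * es.eigenvectors().transpose();

    std::cout << (J.transpose() * J - H).norm() << std::endl;  // ~1e-15
    return 0;
}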
For the overall procedure, refer again to the marginalization illustration.
The code is walked through below.
bool Problem::Marginalize(const std::vector<std::shared_ptr<Vertex> > margVertexs, int pose_dim)
First, find the edges that need to be marginalized:
SetOrdering();
/// find the edges to be marginalized; margVertexs[0] is the frame, and its edges contain the pre-integration
std::vector<shared_ptr<Edge>> marg_edges = GetConnectedEdges(margVertexs[0]);
The landmarks associated with marg_edges are re-ordered: each landmark's ordering id is set to pose_dim + marg_landmark_size (the offset accumulated so far), where pose_dim is the total dimension after all vertices have been added:
std::unordered_map<int, shared_ptr<Vertex>> margLandmark;
int marg_landmark_size = 0;
// std::cout << "\n marg edge 1st id: "<< marg_edges.front()->Id() << " end id: "<Id()<
for (size_t i = 0; i < marg_edges.size(); ++i) {
// std::cout << "marg edge id: "<< marg_edges[i]->Id() <
auto verticies = marg_edges[i]->Verticies();
for (auto iter : verticies) {
if (IsLandmarkVertex(iter) && margLandmark.find(iter->Id()) == margLandmark.end()) {
iter->SetOrderingId(pose_dim + marg_landmark_size);
margLandmark.insert(make_pair(iter->Id(), iter));
marg_landmark_size += iter->LocalDimension();
}
}
}
Construct the $\mathbf{H}$ matrix and the $\mathbf{b}$ vector:
int cols = pose_dim + marg_landmark_size;
/// build the error H matrix: H = H_marg + H_pp_prior
MatXX H_marg(MatXX::Zero(cols, cols));
VecX b_marg(VecX::Zero(cols));
Next, for each edge to be marginalized, compute its residual and Jacobians, then accumulate $\mathbf{J}_i^T\mathbf{W}\mathbf{J}_j$ over all pairs of vertices to build $\mathbf{H}$, and accumulate $-\mathbf{J}_i^T\mathbf{W}\mathbf{e}$ to build $\mathbf{b}$:
int ii = 0;
for (auto edge: marg_edges) {
edge->ComputeResidual();
edge->ComputeJacobians();
auto jacobians = edge->Jacobians();
auto verticies = edge->Verticies();
ii++;
assert(jacobians.size() == verticies.size());
for (size_t i = 0; i < verticies.size(); ++i) {
auto v_i = verticies[i];
auto jacobian_i = jacobians[i];
ulong index_i = v_i->OrderingId();
ulong dim_i = v_i->LocalDimension();
double drho;
MatXX robustInfo(edge->Information().rows(),edge->Information().cols());
edge->RobustInfo(drho,robustInfo);
for (size_t j = i; j < verticies.size(); ++j) {
auto v_j = verticies[j];
auto jacobian_j = jacobians[j];
ulong index_j = v_j->OrderingId();
ulong dim_j = v_j->LocalDimension();
MatXX hessian = jacobian_i.transpose() * robustInfo * jacobian_j;
assert(hessian.rows() == v_i->LocalDimension() && hessian.cols() == v_j->LocalDimension());
// accumulate all the information blocks
H_marg.block(index_i, index_j, dim_i, dim_j) += hessian;
if (j != i) {
// symmetric lower-triangular block
H_marg.block(index_j, index_i, dim_j, dim_i) += hessian.transpose();
}
}
b_marg.segment(index_i, dim_i) -= drho * jacobian_i.transpose() * edge->Information() * edge->Residual();
}
}
This assembles a system of the form
$$\begin{bmatrix} \mathbf{H}_{11} & \mathbf{H}_{12} \\ \mathbf{H}_{21} & \mathbf{H}_{22} \end{bmatrix} \begin{bmatrix} \delta \mathbf{x}_1 \\ \delta \mathbf{x}_2 \end{bmatrix}=\begin{bmatrix} \mathbf{b}_1 \\ \mathbf{b}_2 \end{bmatrix}$$
int reserve_size = pose_dim; // dimension of the states to keep
int marg_size = marg_landmark_size;
MatXX Hmm = H_marg.block(reserve_size, reserve_size, marg_size, marg_size);
MatXX Hpm = H_marg.block(0, reserve_size, reserve_size, marg_size);
MatXX Hmp = H_marg.block(reserve_size, 0, marg_size, reserve_size);
VecX bpp = b_marg.segment(0, reserve_size);
VecX bmm = b_marg.segment(reserve_size, marg_size);
Then comes the Schur complement step:
// Hmm is block-diagonal, so it can be inverted block by block; for inverse-depth landmarks each block is 1-D and its inverse is just a reciprocal, which allows a speed-up
MatXX Hmm_inv(MatXX::Zero(marg_size, marg_size));
// TODO:: use openMP
for (auto iter: margLandmark) {
int idx = iter.second->OrderingId() - reserve_size;
int size = iter.second->LocalDimension();
Hmm_inv.block(idx, idx, size, size) = Hmm.block(idx, idx, size, size).inverse();
}
MatXX tempH = Hpm * Hmm_inv;
MatXX Hpp = H_marg.block(0, 0, reserve_size, reserve_size) - tempH * Hmp;
bpp = bpp - tempH * bmm;
H_marg = Hpp;
b_marg = bpp;
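As the comment above notes, when every marginalized landmark is a 1-D inverse-depth parameter, the block inverse degenerates to plain reciprocals. A hypothetical fast path, reusing Hmm, Hmm_inv, and marg_size from the snippet above (my sketch, not in the original code):

// Fast path (sketch): valid only when all marginalized landmarks are
// 1-D inverse-depth parameters, so Hmm is strictly diagonal.
for (int i = 0; i < marg_size; ++i)
    Hmm_inv(i, i) = 1.0 / Hmm(i, i);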
The steps above marginalize the associated visual landmarks; next, the frame and its speed/bias states are marginalized.
The vertices to be marginalized first have to be moved to the bottom-right corner of H_marg.
/// marg frame and speedbias
int marg_dim = 0;
// move the vertices with larger indices first
for (int k = margVertexs.size() -1 ; k >= 0; --k)
{
int idx = margVertexs[k]->OrderingId();
int dim = margVertexs[k]->LocalDimension();
marg_dim += dim;
// move the marg pose block to the bottom-right of H_marg
// move rows [idx, idx+dim) to the bottom of the matrix
Eigen::MatrixXd temp_rows = H_marg.block(idx, 0, dim, reserve_size);
Eigen::MatrixXd temp_botRows = H_marg.block(idx + dim, 0, reserve_size - idx - dim, reserve_size);
H_marg.block(idx, 0, reserve_size - idx - dim, reserve_size) = temp_botRows;
H_marg.block(reserve_size - dim, 0, dim, reserve_size) = temp_rows;
// move cols [idx, idx+dim) to the rightmost part of the matrix
Eigen::MatrixXd temp_cols = H_marg.block(0, idx, reserve_size, dim);
Eigen::MatrixXd temp_rightCols = H_marg.block(0, idx + dim, reserve_size, reserve_size - idx - dim);
H_marg.block(0, idx, reserve_size, reserve_size - idx - dim) = temp_rightCols;
H_marg.block(0, reserve_size - dim, reserve_size, dim) = temp_cols;
Eigen::VectorXd temp_b = b_marg.segment(idx, dim);
Eigen::VectorXd temp_btail = b_marg.segment(idx + dim, reserve_size - idx - dim);
b_marg.segment(idx, reserve_size - idx - dim) = temp_btail;
b_marg.segment(reserve_size - dim, dim) = temp_b;
}
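These block moves amount to a symmetric permutation $\mathbf{P}\mathbf{H}\mathbf{P}^T$ of the system, which does not change its solution. A small self-contained check with Eigen's PermutationMatrix (my own sketch, not VINS code):

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix4d A = Eigen::Matrix4d::Random();
    Eigen::Matrix4d H = A.transpose() * A + Eigen::Matrix4d::Identity();
    Eigen::Vector4d b = Eigen::Vector4d::Random();

    Eigen::PermutationMatrix<4> P;
    P.indices() << 0, 3, 1, 2;  // move variable 1 to the end

    // Permute H and b consistently, solve, then undo the permutation.
    Eigen::Matrix4d Hp = P * H * P.transpose();
    Eigen::Vector4d bp = P * b;
    Eigen::Vector4d x  = H.ldlt().solve(b);
    Eigen::Vector4d xp = Hp.ldlt().solve(bp);

    std::cout << (x - P.transpose() * xp).norm() << std::endl;  // ~1e-15
    return 0;
}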
What follows is another Schur complement; its result is the prior.
double eps = 1e-8;
int m2 = marg_dim;                // dimension of the states to marginalize
int n2 = reserve_size - marg_dim; // dimension of the states to keep
Eigen::MatrixXd Amm = 0.5 * (H_marg.block(n2, n2, m2, m2) + H_marg.block(n2, n2, m2, m2).transpose());
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes(Amm);
Eigen::MatrixXd Amm_inv = saes.eigenvectors() * Eigen::VectorXd(
(saes.eigenvalues().array() > eps).select(saes.eigenvalues().array().inverse(), 0)).asDiagonal() *
saes.eigenvectors().transpose();
Eigen::VectorXd bmm2 = b_marg.segment(n2, m2);
Eigen::MatrixXd Arm = H_marg.block(0, n2, n2, m2);
Eigen::MatrixXd Amr = H_marg.block(n2, 0, m2, n2);
Eigen::MatrixXd Arr = H_marg.block(0, 0, n2, n2);
Eigen::VectorXd brr = b_marg.segment(0, n2);
Eigen::MatrixXd tempB = Arm * Amm_inv;
H_prior_ = Arr - tempB * Amr;
b_prior_ = brr - tempB * bmm2;
Then, in order to use this prior in a ceres-style optimizer, $\mathbf{H}$ and $\mathbf{b}$ must be converted into Jacobian/residual form:
$$\mathbf{H}=\mathbf{U}\mathbf{S}\mathbf{U}^T=\mathbf{J}^T\mathbf{J}\ \Rightarrow\ \mathbf{J}=\mathbf{S}^{\frac{1}{2}}\mathbf{U}^T,\qquad \mathbf{b}=\mathbf{J}^T\mathbf{e}\ \Rightarrow\ \mathbf{e}=(\mathbf{J}^T)^{-1}\mathbf{b}$$
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes2(H_prior_);
Eigen::VectorXd S = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array(), 0));
Eigen::VectorXd S_inv = Eigen::VectorXd(
(saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array().inverse(), 0));
Eigen::VectorXd S_sqrt = S.cwiseSqrt();
Eigen::VectorXd S_inv_sqrt = S_inv.cwiseSqrt();
Jt_prior_inv_ = S_inv_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
err_prior_ = -Jt_prior_inv_ * b_prior_;
MatXX J = S_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
H_prior_ = J.transpose() * J;
MatXX tmp_h = MatXX( (H_prior_.array().abs() > 1e-9).select(H_prior_.array(),0) );
H_prior_ = tmp_h;
Finally, remove the marginalized vertices and their edges:
// remove vertex and remove edge
for (size_t k = 0; k < margVertexs.size(); ++k) {
RemoveVertex(margVertexs[k]);
}
for (auto landmarkVertex: margLandmark) {
RemoveVertex(landmarkVertex.second);
}
After marginalization, the remaining variables must be maintained, chiefly by updating their indices (remapping their parameter addresses):
TicToc t_whole_marginalization;
if (marginalization_flag == MARGIN_OLD)
{
vector2double();
MargOldFrame();
// addresses of the parameters kept in the prior
std::unordered_map<long, double *> addr_shift;
for (int i = 1; i <= WINDOW_SIZE; i++)
{
addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
}
for (int i = 0; i < NUM_OF_CAM; i++)
addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];
if (ESTIMATE_TD)
{
addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];
}
}
else
{
if (Hprior_.rows() > 0)
{
vector2double();
MargNewFrame();
std::unordered_map<long, double *> addr_shift;
for (int i = 0; i <= WINDOW_SIZE; i++)
{
if (i == WINDOW_SIZE - 1)
continue;
else if (i == WINDOW_SIZE)
{
addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
}
else
{
addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i];
addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i];
}
}
for (int i = 0; i < NUM_OF_CAM; i++)
addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];
if (ESTIMATE_TD)
{
addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];
}
}
}
This completes the walkthrough of the function
void Estimator::backendOptimization()
Next comes the sliding-window operation. Note that the $\mathbf{H}$ and $\mathbf{b}$ produced by marginalization serve as the prior for the subsequent optimization; adding the new measurement information to this prior yields the new information matrix.
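Schematically (my notation), the system solved in the next window optimization is

$$\mathbf{H}=\mathbf{H}_{\text{prior}}+\sum_i \mathbf{J}_i^T\mathbf{W}_i\mathbf{J}_i,\qquad \mathbf{b}=\mathbf{b}_{\text{prior}}-\sum_i \mathbf{J}_i^T\mathbf{W}_i\mathbf{e}_i$$

with the sums running over the new IMU and visual measurements.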
The prior residual is updated inside the problem.solve() function; the code is:
void Problem::UpdateStates() {
// update vertex
for (auto vertex: verticies_) {
vertex.second->BackUpParameters(); // back up the last estimate
ulong idx = vertex.second->OrderingId();
ulong dim = vertex.second->LocalDimension();
VecX delta = delta_x_.segment(idx, dim);
vertex.second->Plus(delta);
}
// update prior
if (err_prior_.rows() > 0) {
// BACK UP b_prior_
b_prior_backup_ = b_prior_;
err_prior_backup_ = err_prior_;
/// update with first order Taylor, b' = b + \frac{\delta b}{\delta x} * \delta x
/// \delta x = Computes the linearized deviation from the references (linearization points)
b_prior_ -= H_prior_ * delta_x_.head(ordering_poses_); // update the error_prior
err_prior_ = -Jt_prior_inv_ * b_prior_.head(ordering_poses_ - 15);
// std::cout << " : "<< b_prior_.norm()<<" " <
// std::cout << " delta_x_ ex: "<< delta_x_.head(6).norm() << std::endl;
}
}
Although the prior information matrix stays fixed, the variables keep being optimized as the iterations proceed, so the prior residual must track them; otherwise the solve may break down.
The change of the prior residual is approximated with a first-order Taylor expansion (the sign is opposite to the earlier expansion of $\mathbf{b}$ because the solver assembles $\mathbf{b}=-\mathbf{J}^T\mathbf{e}$, matching b_prior_ -= H_prior_ * delta_x_ above):
$$\mathbf{b}_p'=\mathbf{b}_p+\frac{\partial \mathbf{b}_p}{\partial \mathbf{x}_p}\delta \mathbf{x}_p=\mathbf{b}_p-\mathbf{H}\,\delta \mathbf{x}_p$$
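The backup of b_prior_ and err_prior_ made in UpdateStates() is the counterpart of this: if an iteration increases the cost and the step is rejected, the backed-up prior is restored. A sketch of that rollback (member and function names are assumed to follow the same codebase's conventions, in the spirit of Problem::RollbackStates):

void Problem::RollbackStates() {
    // restore every vertex to the estimate backed up before the update
    for (auto vertex : verticies_) {
        vertex.second->RollBackParameters();
    }
    // restore the prior backed up in UpdateStates()
    if (err_prior_.rows() > 0) {
        b_prior_ = b_prior_backup_;
        err_prior_ = err_prior_backup_;
    }
}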