Before explaining Gauss-Newton, let's first go over matrix differentiation; since the goal is always to find an extremum, we will need derivatives.
Let
$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},\qquad x=\begin{pmatrix} x_{1} \\ \vdots \\ x_{n} \end{pmatrix}$$
Then
$$Ax=\begin{bmatrix} a_{11}x_{1}+a_{12}x_{2}+\cdots+a_{1n}x_{n} \\ a_{21}x_{1}+a_{22}x_{2}+\cdots+a_{2n}x_{n} \\ \vdots \\ a_{n1}x_{1}+a_{n2}x_{2}+\cdots+a_{nn}x_{n} \end{bmatrix}_{n\times 1},\qquad \text{and let } f(x)=\begin{pmatrix} f_{1}(x) \\ f_{2}(x) \\ \vdots \\ f_{n}(x) \end{pmatrix}=Ax.$$
$$\frac{\partial f(x)}{\partial x^{T}}=\begin{bmatrix} \frac{\partial f_{1}(x)}{\partial x_{1}} & \frac{\partial f_{1}(x)}{\partial x_{2}} & \cdots & \frac{\partial f_{1}(x)}{\partial x_{n}} \\ \frac{\partial f_{2}(x)}{\partial x_{1}} & \frac{\partial f_{2}(x)}{\partial x_{2}} & \cdots & \frac{\partial f_{2}(x)}{\partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_{n}(x)}{\partial x_{1}} & \frac{\partial f_{n}(x)}{\partial x_{2}} & \cdots & \frac{\partial f_{n}(x)}{\partial x_{n}} \end{bmatrix}=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}=A$$
Therefore $\frac{\partial (Ax)}{\partial x^{T}}=A$; equivalently, in the denominator layout, $\frac{\partial (Ax)}{\partial x}=A^{T}$.
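As a quick sanity check, here is a small worked example (my own addition, not in the original) with $n=2$:
$$A=\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix},\qquad f(x)=Ax=\begin{pmatrix}x_{1}+2x_{2}\\ 3x_{1}+4x_{2}\end{pmatrix},\qquad \frac{\partial f(x)}{\partial x^{T}}=\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}=A.$$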
For a least-squares problem
$$x^{*}=\arg\min_{x}\frac{1}{2}\|f(x)\|^{2} \tag{①}$$
the idea of Gauss-Newton is to Taylor-expand $f(x)$ and keep only the first-order (linear) term:
$$f(x+\Delta x)\approx f(x)+f'(x)\Delta x=f(x)+J(x)\Delta x \tag{②}$$
where $J(x)$ denotes the Jacobian of $f$ with respect to $x$ (some references, e.g. 视觉SLAM十四讲, write this term as $J(x)^{T}\Delta x$ with $J$ defined as the transpose).
Substituting ② into ①:
$$\frac{1}{2}\left\|f(x)+J(x)\Delta x\right\|^{2}=\frac{1}{2}\left(f(x)^{T}f(x)+2f(x)^{T}J(x)\Delta x+\Delta x^{T}J(x)^{T}J(x)\Delta x\right)$$
Taking the derivative with respect to $\Delta x$ and setting it to zero yields the normal equations:
$$J(x)^{T}J(x)\,\Delta x=-J(x)^{T}f(x)$$
Notes:
1. The first term is independent of $\Delta x$, so its derivative is 0.
2. For the second term, $\frac{\partial}{\partial \Delta x}\left(f(x)^{T}J(x)\Delta x\right)=J(x)^{T}f(x)$, which follows from the matrix-derivative rule above, $\frac{\partial Ax}{\partial x}=A^{T}$, applied with $A=f(x)^{T}J(x)$.
3. For the third term, $\frac{\partial}{\partial \Delta x}\left(\Delta x^{T}J(x)^{T}J(x)\Delta x\right)=2J(x)^{T}J(x)\Delta x$. This uses the rule $\frac{\mathrm{d}\,x^{T}Ax}{\mathrm{d}x}=(A+A^{T})x$ together with the fact that $\left(J(x)^{T}J(x)\right)^{T}=J(x)^{T}J(x)$, i.e. $J(x)^{T}J(x)$ is symmetric.
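Since the original deferred this derivation, here is the short component-wise argument for the quadratic-form rule:
$$x^{T}Ax=\sum_{i,j}a_{ij}x_{i}x_{j},\qquad \frac{\partial}{\partial x_{k}}\sum_{i,j}a_{ij}x_{i}x_{j}=\sum_{j}a_{kj}x_{j}+\sum_{i}a_{ik}x_{i},$$
which, stacked over $k$, gives $\frac{\mathrm{d}\,x^{T}Ax}{\mathrm{d}x}=Ax+A^{T}x=(A+A^{T})x$; for symmetric $A$ (such as $J(x)^{T}J(x)$) this reduces to $2Ax$.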
Now fit a curve of the following form, where $w$ is Gaussian noise with $w\sim N(0,\sigma^{2})$:
$$y=\exp(ax^{2}+bx+c)+w$$
Given $N$ observations, construct the least-squares problem
$$\min_{a,b,c}\frac{1}{2}\sum_{i=1}^{N}\left\|y_{i}-\exp(ax_{i}^{2}+bx_{i}+c)\right\|^{2}$$
Define the error term $e_{i}=y_{i}-\exp(ax_{i}^{2}+bx_{i}+c)$, i.e. $f(x_{i})=e_{i}$.
The variables to estimate are $a,b,c$; note that the optimization variable here is not $x$ but the vector $[a,b,c]$. Since each error term is a scalar, its Jacobian is just the gradient:
$$\begin{cases} \frac{\partial e_{i}}{\partial a}=-x_{i}^{2}\exp(ax_{i}^{2}+bx_{i}+c) \\ \frac{\partial e_{i}}{\partial b}=-x_{i}\exp(ax_{i}^{2}+bx_{i}+c) \\ \frac{\partial e_{i}}{\partial c}=-\exp(ax_{i}^{2}+bx_{i}+c) \end{cases}$$
$$J_{i}=\begin{bmatrix} \frac{\partial e_{i}}{\partial a} & \frac{\partial e_{i}}{\partial b} & \frac{\partial e_{i}}{\partial c} \end{bmatrix}$$
Substituting into the Gauss-Newton equation:
$$\sum_{i}J_{i}^{T}J_{i}\,\Delta x=-\sum_{i}e_{i}J_{i}^{T}\quad\Rightarrow\quad H\,\Delta x=b,\qquad H=\sum_{i}J_{i}^{T}J_{i},\quad b=-\sum_{i}e_{i}J_{i}^{T}$$
With this linear system in hand, we solve once per iteration and update the estimate. Since $H$ is symmetric positive semi-definite, Eigen's LDLT decomposition can be used:
Δx = H.ldlt().solve(b)
//
// Created by lyr on 2022/1/12.
//
#include "iostream"
#include
#include
#include
int main() {
std::cout << "高斯牛顿曲线拟合" << "\n";
double ar = 1.0, br = 2.0, cr = 1.0; // ground-truth parameter values
double ae = 2.0, be = -1.0, ce = 5.0; // initial parameter estimates
int N = 100; // number of data points
double w_sigma = 1.0; // noise sigma (standard deviation)
double inv_sigma = 1.0 / w_sigma;
cv::RNG rng; // OpenCV random number generator
/** Noisy curve: y = exp(ar*x^2 + br*x + cr) + w, where w ~ N(0, sigma^2) **/
std::vector<double> x_data, y_data; // stores the noisy observations
for (int i = 0; i < N; i++) {
double x = i / 100.0;
x_data.push_back(x);
/** evaluate the true curve and add Gaussian noise; note cv::RNG::gaussian takes the standard deviation **/
y_data.push_back(exp(ar * x * x + br * x + cr) + rng.gaussian(w_sigma));
}
/*** start Gauss-Newton ***/
int iterations = 100; // maximum number of iterations
// cost of the current iteration and of the previous one,
// used to check that the error keeps decreasing; like a parabola opening upward, we look for an extremum, not necessarily the global minimum
double cost = 0, lastCost = 0;
for (int iter = 0; iter < iterations; iter++) {
// solve H * dx = b once per iteration
Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
Eigen::Vector3d b = Eigen::Vector3d::Zero();
cost = 0;
for (int i = 0; i < N; ++i) {
double xi = x_data[i];
double yi = y_data[i];
// error term for each data point (measured minus predicted):
// e_i = yi - exp(ae * xi * xi + be * xi + ce)
double error = yi - exp(ae * xi * xi + be * xi + ce);
// Jacobian of the error term w.r.t. (ae, be, ce)
Eigen::Vector3d J;
/* de_i/d(ae) = -xi^2 * exp(ae * xi * xi + be * xi + ce)
 * de_i/d(be) = -xi * exp(ae * xi * xi + be * xi + ce)
 * de_i/d(ce) = -exp(ae * xi * xi + be * xi + ce)
 */
J[0] = -xi * xi * exp(ae * xi * xi + be * xi + ce);
J[1] = -xi * exp(ae * xi * xi + be * xi + ce);
J[2] = -exp(ae * xi * xi + be * xi + ce);
H += inv_sigma * inv_sigma * J * J.transpose(); // J * Sigma^-1 * J^T
b += -inv_sigma * inv_sigma * error * J; // -Sigma^-1 * e * J
cost += error * error; // accumulate the total squared error
}
// solve the linear system H * dx = b with LDLT
Eigen::Vector3d dx = H.ldlt().solve(b);
if (std::isnan(dx[0]))
{
std::cout <<"result is nan!!!"<<"\n";
break;
}
// if the cost has increased, we have passed a local minimum; stop iterating
if (iter > 0 && cost >= lastCost)
{
std::cout << "cost: " << cost << " >= last cost: " << lastCost << ", break." << "\n";
break;
}
// update: x_{k+1} = x_k + dx
ae += dx[0];
be += dx[1];
ce += dx[2];
lastCost = cost;
std::cout <<"total cost: "<<cost <<",\t\tupdate: "<<dx.transpose() <<"\t\testimated params: "<<ae <<" , "<<be <<" , "<<ce <<"\n";
}
std::cout <<" estimated params: "<<ae <<" , "<<be <<" , "<<ce <<"\n";
}
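Before moving on to Ceres, it can be worth sanity-checking the analytic Jacobian against central finite differences. Below is a minimal, self-contained sketch of such a check; the helpers analytic_jacobian and numeric_jacobian are my own illustration, not part of the original program:

#include <cmath>
#include <cstdio>
#include <Eigen/Core>
// analytic Jacobian of e(a,b,c) = y - exp(a*x^2 + b*x + c) w.r.t. (a,b,c)
Eigen::Vector3d analytic_jacobian(double x, double a, double b, double c) {
    double e = std::exp(a * x * x + b * x + c);
    return Eigen::Vector3d(-x * x * e, -x * e, -e);
}
// central finite differences of the same error term
Eigen::Vector3d numeric_jacobian(double x, double y, double a, double b, double c) {
    const double h = 1e-6;
    auto err = [&](double pa, double pb, double pc) { return y - std::exp(pa * x * x + pb * x + pc); };
    return Eigen::Vector3d((err(a + h, b, c) - err(a - h, b, c)) / (2 * h),
                           (err(a, b + h, c) - err(a, b - h, c)) / (2 * h),
                           (err(a, b, c + h) - err(a, b, c - h)) / (2 * h));
}
int main() {
    double x = 0.5, y = 3.0, a = 2.0, b = -1.0, c = 5.0;
    Eigen::Vector3d diff = analytic_jacobian(x, a, b, c) - numeric_jacobian(x, y, a, b, c);
    std::printf("max |analytic - numeric| = %g\n", diff.cwiseAbs().maxCoeff()); // expected to be close to zero
    return 0;
}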
With Ceres the whole thing becomes even simpler: you only pass in the parameters to optimize, compute the residual, and compute the overall Jacobian, by re-implementing the Evaluate function.
//
// Created by lyr on 2022/1/12.
//
#include <iostream>
#include <vector>
#include <cmath>
#include <opencv2/core/core.hpp>
#include <ceres/ceres.h>
using namespace std;
class CURVE_FITTING_COST_FUNCTION : public ceres::SizedCostFunction<1, 3> { // residual dimension 1, parameter block dimension 3
public:
CURVE_FITTING_COST_FUNCTION(double x, double y) : x_(x), y_(y) {}
virtual bool Evaluate(double const *const *parameters,
double *residuals,
double **jacobians) const
{
// unpack the parameter block
double ae = parameters[0][0];
double be = parameters[0][1];
double ce = parameters[0][2];
// compute the residual
residuals[0] = y_ - exp(ae * x_ * x_ + be * x_ + ce);
// compute the Jacobian (derivative of the residual w.r.t. the parameters)
if (jacobians != NULL && jacobians[0] != NULL) {
jacobians[0][0] = -x_ * x_ * exp(ae * x_ * x_ + be * x_ + ce);
jacobians[0][1] = -x_ * exp(ae * x_ * x_ + be * x_ + ce);
jacobians[0][2] = -exp(ae * x_ * x_ + be * x_ + ce);
}
return true;
}
private:
const double x_;
const double y_;
};
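A note on the Evaluate interface (per Ceres's documented CostFunction contract): parameters[i] is the i-th parameter block, and jacobians[i], when requested, points to a row-major array of size num_residuals × block_size holding the derivatives of the residuals with respect to that block. With one residual and one 3-dimensional block, jacobians[0] is simply the 1×3 gradient filled in above; both jacobians and jacobians[0] may be NULL when the solver does not need derivatives, which is why the code checks them before writing.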
int main(int argc, char **argv) {
double ar = 1.0, br = 2.0, cr = 1.0; // ground-truth parameter values
double ae = 2.0, be = -1.0, ce = 5.0; // initial parameter estimates
int N = 100; // number of data points
double w_sigma = 1.0; // noise sigma
double inv_sigma = 1.0 / w_sigma;
cv::RNG rng; // OpenCV random number generator
ceres::Problem problem;
double para_t[3] = {ae, be, ce}; // the parameters to optimize
problem.AddParameterBlock(para_t, 3); // register the parameter block
vector<double> x_data, y_data;
for (int i = 0; i < N; i++) {
double x = i / 100.0;
x_data.push_back(x);
y_data.push_back(exp(ar * x * x + br * x + cr) + rng.gaussian(w_sigma)); // rng.gaussian takes the standard deviation
}
for(int i = 0 ; i<x_data.size();i++)
{
// build the cost function for each observation
ceres::CostFunction *cost_function = new CURVE_FITTING_COST_FUNCTION(x_data[i], y_data[i]);
problem.AddResidualBlock(cost_function,NULL, para_t);
}
ceres::Solver::Options options; // many options can be set here
options.linear_solver_type = ceres::DENSE_NORMAL_CHOLESKY; // how to solve the normal equations
options.minimizer_progress_to_stdout = true; // print progress to stdout
ceres::Solver::Summary summary; // optimization summary
ceres::Solve(options, &problem, &summary); // run the solver
// print the result
cout << summary.BriefReport() << endl;
cout << "estimated a,b,c = ";
for (auto a : para_t) cout << a << " ";
cout << endl;
return 0;
}
With automatic differentiation, you only need to express the residual (the error term) in terms of the optimization variables, and Ceres works out the Jacobian by itself: AutoDiffCostFunction evaluates the templated functor with its ceres::Jet type to propagate derivatives, which is why operator() must be a template.
#include <iostream>
#include <vector>
#include <cmath>
#include <chrono>
#include <opencv2/core/core.hpp>
#include <ceres/ceres.h>
using namespace std;
// computation model for the cost function
struct CURVE_FITTING_COST {
CURVE_FITTING_COST(double x, double y) : _x(x), _y(y) {}
// residual computation
template<typename T>
bool operator()(
const T *const abc, // model parameters, 3-dimensional
T *residual) const {
residual[0] = T(_y) - ceres::exp(abc[0] * T(_x) * T(_x) + abc[1] * T(_x) + abc[2]); // y-exp(ax^2+bx+c)
return true;
}
const double _x, _y; // the (x, y) data point
};
int main(int argc, char **argv) {
double ar = 1.0, br = 2.0, cr = 1.0; // ground-truth parameter values
double ae = 2.0, be = -1.0, ce = 5.0; // initial parameter estimates
int N = 100; // number of data points
double w_sigma = 1.0; // noise sigma
double inv_sigma = 1.0 / w_sigma;
cv::RNG rng; // OpenCV random number generator
vector<double> x_data, y_data;
for (int i = 0; i < N; i++) {
double x = i / 100.0;
x_data.push_back(x);
y_data.push_back(exp(ar * x * x + br * x + cr) + rng.gaussian(w_sigma)); // rng.gaussian takes the standard deviation
}
double abc[3] = {ae, be, ce};
// build the least-squares problem
ceres::Problem problem;
for (int i = 0; i < N; i++) {
problem.AddResidualBlock( // add an error term to the problem
// automatic differentiation; template parameters: functor type, residual (output) dimension, parameter (input) dimension; these must match the struct above
new ceres::AutoDiffCostFunction<CURVE_FITTING_COST, 1, 3>(
new CURVE_FITTING_COST(x_data[i], y_data[i])
),
nullptr, // robust kernel function; not used here, so null
abc // parameters to estimate
);
}
// configure the solver
ceres::Solver::Options options; // many options can be set here
options.linear_solver_type = ceres::DENSE_NORMAL_CHOLESKY; // how to solve the normal equations
options.minimizer_progress_to_stdout = true; // print progress to stdout
ceres::Solver::Summary summary; // optimization summary
chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
ceres::Solve(options, &problem, &summary); // run the solver
chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
cout << "solve time cost = " << time_used.count() << " seconds. " << endl;
// print the result
cout << summary.BriefReport() << endl;
cout << "estimated a,b,c = ";
for (auto a:abc) cout << a << " ";
cout << endl;
return 0;
}
g2o curve fitting works the same way as above: know the variables to optimize and define the residual properly; in this version the Jacobian is computed by hand.
//
// Created by lyr on 2022/1/12.
//
#include <iostream>
#include <vector>
#include <cmath>
#include <Eigen/Core>
#include <opencv2/core/core.hpp>
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
#include <g2o/solvers/dense/linear_solver_dense.h>
#include <g2o/stuff/misc.h>
/*** The curve vertex is not among g2o's built-in types, so we implement our own.
 * Defining a vertex generally means overriding the functions below;
 * read and write deal with file I/O and are not used here. ***/
// curve-model vertex; template parameters: dimension of the optimization variable and its data type (here we optimize ae, be, ce)
class CurveFittingVertex : public g2o::BaseVertex<3, Eigen::Vector3d> {
public:
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
CurveFittingVertex() {
setToOriginImpl();
};
// reset
virtual void setToOriginImpl() override {
_estimate << 0.0, 0.0, 0.0;
}
// update
virtual void oplusImpl(const double *update) override {
_estimate += Eigen::Vector3d(update); // accumulate the increment from each iteration
}
bool read(std::istream &is) override {
return true;
}
bool write(std::ostream &os) const override {
return true;
}
};
// the error terms are represented as edges; each observation involves a single vertex, so we inherit from a unary edge
class CurveFittingEdge :public g2o::BaseUnaryEdge<1,double ,CurveFittingVertex>{
public:
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
CurveFittingEdge(double x) : BaseUnaryEdge(),_x(x){}
/**** compute the error term ****/
virtual void computeError() override
{
const CurveFittingVertex *v = static_cast<const CurveFittingVertex *>(_vertices[0]);
const Eigen::Vector3d abc = v->estimate();
/** error = measurement - prediction **/
_error(0, 0) = _measurement - exp(abc[0] * _x * _x + abc[1] * _x + abc[2]);
}
/*** compute the Jacobian ***/
/*** if this function is not implemented, g2o falls back to numeric differentiation ***/
virtual void linearizeOplus() override
{
const CurveFittingVertex *v = static_cast<const CurveFittingVertex *>(_vertices[0]);
const Eigen::Vector3d abc = v->estimate();
_jacobianOplusXi[0] = -_x * _x * exp(abc[0]*_x*_x +abc[1]*_x+abc[2]);
_jacobianOplusXi[1] = -_x * exp(abc[0]*_x*_x +abc[1]*_x+abc[2]);
_jacobianOplusXi[2] = -exp(abc[0]*_x*_x +abc[1]*_x+abc[2]);
}
bool read(std::istream &is) override {
return true;
}
bool write(std::ostream &os) const override {
return true;
}
public:
double _x; // the input value x
};
int main() {
double ar = 1.0, br = 2.0, cr = 1.0; // ground-truth parameter values
double ae = 2.0, be = -1.0, ce = 5.0; // initial parameter estimates
int N = 100; // number of data points
double w_sigma = 1.0; // noise sigma (standard deviation)
double inv_sigma = 1.0 / w_sigma;
cv::RNG rng; // OpenCV random number generator
/** Noisy curve: y = exp(ar*x^2 + br*x + cr) + w, where w ~ N(0, sigma^2) **/
std::vector<double> x_data, y_data; // stores the noisy observations
for (int i = 0; i < N; i++) {
double x = i / 100.0;
x_data.push_back(x);
/** evaluate the true curve and add Gaussian noise; note cv::RNG::gaussian takes the standard deviation **/
y_data.push_back(exp(ar * x * x + br * x + cr) + rng.gaussian(w_sigma));
}
typedef g2o::BlockSolver<g2o::BlockSolverTraits<3, 1>> BlockSolverType; // optimization variable is 3-dimensional (ae, be, ce), error term is 1-dimensional
// type of the linear solver
typedef g2o::LinearSolverDense<BlockSolverType::PoseMatrixType> LinearSolverType;
// choose the optimization algorithm
auto solver = new g2o::OptimizationAlgorithmGaussNewton(g2o::make_unique<BlockSolverType>(g2o::make_unique<LinearSolverType>()));
// create the graph model and set the algorithm
g2o::SparseOptimizer optimizer;
optimizer.setVerbose(true);
optimizer.setAlgorithm(solver);
// add the vertex (the variables to optimize) to the graph
CurveFittingVertex *v = new CurveFittingVertex();
v->setEstimate(Eigen::Vector3d(ae, be, ce)); // set the initial estimate (initializes _estimate inside the vertex)
v->setId(0);
optimizer.addVertex(v); // add the vertex
// add the edges (error terms) to the graph
for(int i = 0 ;i<N;i++)
{
CurveFittingEdge *edge = new CurveFittingEdge(x_data[i]);
edge->setId(i);
edge->setVertex(0, v); // connect to the vertex (there is only one)
edge->setMeasurement(y_data[i]); // the observed value
// information matrix: inverse of the noise covariance (the confidence placed in this error term)
edge->setInformation(Eigen::Matrix<double, 1, 1>::Identity() * 1 / (w_sigma * w_sigma));
optimizer.addEdge(edge);
}
optimizer.initializeOptimization();
optimizer.optimize(10); // number of iterations
Eigen::Vector3d abc_estimate = v->estimate(); // the estimate is stored in the vertex; there is only one vertex here, so read its estimate back
std::cout <<" abc_estimate : "<<abc_estimate.transpose()<<"\n";
return 0;
}
If you do not compute the Jacobian yourself and want g2o to compute it (numerically), you simply do not implement linearizeOplus.
//
// Created by lyr on 2022/1/12.
//
#include <iostream>
#include <vector>
#include <cmath>
#include <Eigen/Core>
#include <opencv2/core/core.hpp>
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
#include <g2o/solvers/dense/linear_solver_dense.h>
#include <g2o/stuff/misc.h>
/*** The curve vertex is not among g2o's built-in types, so we implement our own.
 * Defining a vertex generally means overriding the functions below;
 * read and write deal with file I/O and are not used here. ***/
// curve-model vertex; template parameters: dimension of the optimization variable and its data type (here we optimize ae, be, ce)
class CurveFittingVertex : public g2o::BaseVertex<3, Eigen::Vector3d> {
public:
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
CurveFittingVertex() {
setToOriginImpl();
};
// reset
virtual void setToOriginImpl() override {
_estimate << 0.0, 0.0, 0.0;
}
// update
virtual void oplusImpl(const double *update) override {
_estimate += Eigen::Vector3d(update); // accumulate the increment from each iteration
}
bool read(std::istream &is) override {
return true;
}
bool write(std::ostream &os) const override {
return true;
}
};
// the error terms are represented as edges; each observation involves a single vertex, so we inherit from a unary edge
class CurveFittingEdge :public g2o::BaseUnaryEdge<1,double ,CurveFittingVertex>{
public:
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
CurveFittingEdge(double x) : BaseUnaryEdge(),_x(x){}
/**** compute the error term ****/
virtual void computeError() override
{
const CurveFittingVertex *v = static_cast<const CurveFittingVertex *>(_vertices[0]);
const Eigen::Vector3d abc = v->estimate();
/** error = measurement - prediction **/
_error(0, 0) = _measurement - exp(abc[0] * _x * _x + abc[1] * _x + abc[2]);
}
/*** Jacobian computation: deliberately left unimplemented here, ***/
/*** so g2o falls back to numeric differentiation ***/
// virtual void linearizeOplus() override
// {
//     const CurveFittingVertex *v = static_cast<const CurveFittingVertex *>(_vertices[0]);
//     const Eigen::Vector3d abc = v->estimate();
//     _jacobianOplusXi[0] = -_x * _x * exp(abc[0]*_x*_x + abc[1]*_x + abc[2]);
//     _jacobianOplusXi[1] = -_x * exp(abc[0]*_x*_x + abc[1]*_x + abc[2]);
//     _jacobianOplusXi[2] = -exp(abc[0]*_x*_x + abc[1]*_x + abc[2]);
// }
bool read(std::istream &is) override {
return true;
}
bool write(std::ostream &os) const override {
return true;
}
public:
double _x;
};
int main() {
double ar = 1.0, br = 2.0, cr = 1.0; // ground-truth parameter values
double ae = 2.0, be = -1.0, ce = 5.0; // initial parameter estimates
int N = 100; // number of data points
double w_sigma = 1.0; // noise sigma (standard deviation)
double inv_sigma = 1.0 / w_sigma;
cv::RNG rng; // OpenCV random number generator
/** Noisy curve: y = exp(ar*x^2 + br*x + cr) + w, where w ~ N(0, sigma^2) **/
std::vector<double> x_data, y_data; // stores the noisy observations
for (int i = 0; i < N; i++) {
double x = i / 100.0;
x_data.push_back(x);
/** evaluate the true curve and add Gaussian noise; note cv::RNG::gaussian takes the standard deviation **/
y_data.push_back(exp(ar * x * x + br * x + cr) + rng.gaussian(w_sigma));
}
typedef g2o::BlockSolver<g2o::BlockSolverTraits<3, 1>> BlockSolverType; // optimization variable is 3-dimensional (ae, be, ce), error term is 1-dimensional
// type of the linear solver
typedef g2o::LinearSolverDense<BlockSolverType::PoseMatrixType> LinearSolverType;
// choose the optimization algorithm
auto solver = new g2o::OptimizationAlgorithmGaussNewton(g2o::make_unique<BlockSolverType>(g2o::make_unique<LinearSolverType>()));
// create the graph model and set the algorithm
g2o::SparseOptimizer optimizer;
optimizer.setVerbose(true);
optimizer.setAlgorithm(solver);
// add the vertex (the variables to optimize) to the graph
CurveFittingVertex *v = new CurveFittingVertex();
v->setEstimate(Eigen::Vector3d(ae, be, ce)); // set the initial estimate (initializes _estimate inside the vertex)
v->setId(0);
optimizer.addVertex(v); // add the vertex
// add the edges (error terms) to the graph
for(int i = 0 ;i<N;i++)
{
CurveFittingEdge *edge = new CurveFittingEdge(x_data[i]);
edge->setId(i);
edge->setVertex(0, v); // connect to the vertex (there is only one)
edge->setMeasurement(y_data[i]); // the observed value
// information matrix: inverse of the noise covariance (the confidence placed in this error term)
edge->setInformation(Eigen::Matrix<double, 1, 1>::Identity() * 1 / (w_sigma * w_sigma));
optimizer.addEdge(edge);
}
optimizer.initializeOptimization();
optimizer.optimize(10); // number of iterations
Eigen::Vector3d abc_estimate = v->estimate(); // the estimate is stored in the vertex; there is only one vertex here, so read its estimate back
std::cout <<" abc_estimate : "<<abc_estimate.transpose()<<"\n";
return 0;
}
Reference [1]: 视觉SLAM十四讲 (14 Lectures on Visual SLAM)