Ceres Solver is Google's library for solving non-linear least squares problems, which take the form:
$$\min_x \frac{1}{2}\sum_i \rho_i\left(\left\|f_i(x_{i_1}, x_{i_2}, \ldots, x_{i_k})\right\|^2\right)$$

where $f_i(\cdot)$ is what Ceres calls a CostFunction and $\rho_i(\cdot)$ is a LossFunction.
The most important step is constructing the CostFunction. Depending on the differentiation method chosen, there are three ways to build one: automatic differentiation, numeric differentiation, and analytic (manual) differentiation.
// Functor for automatic differentiation
struct CostFunctor {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = T(10.0) - x[0];
    return true;
  }
};

// Build the CostFunction
CostFunction* cost_function =
    new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
// Functor for numeric differentiation
struct NumericDiffCostFunctor {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};

// Build the CostFunction
CostFunction* cost_function =
    new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
        new NumericDiffCostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
Google recommends AutoDiffCostFunction: the use of C++ templates makes automatic differentiation efficient, whereas numeric differentiation costs more function evaluations, is prone to numeric error, and tends to converge more slowly. The third option is to supply the derivative analytically:

class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
 public:
  virtual ~QuadraticCostFunction() {}
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const {
    const double x = parameters[0][0];
    residuals[0] = 10 - x;
    // Compute the Jacobian if asked for.
    if (jacobians != NULL && jacobians[0] != NULL) {
      jacobians[0][0] = -1;
    }
    return true;
  }
};
QuadraticCostFunction *quadratic_factor = new QuadraticCostFunction();
problem.AddResidualBlock(quadratic_factor, NULL, &x);
SizedCostFunction::Evaluate takes a parameters array as input, writes the residuals into the residuals array, and writes the Jacobians into the jacobians array. jacobians is optional: Evaluate checks whether it is non-null and, if so, fills it with the analytic derivative of the residual function.

A problem can also contain several residual blocks sharing parameters:

// struct
struct F1 {
  template <typename T>
  bool operator()(const T* const x1, const T* const x2, T* residual) const {
    residual[0] = x1[0] + 10.0 * x2[0];
    return true;
  }
};
struct F2 {
  template <typename T>
  bool operator()(const T* const x3, const T* const x4, T* residual) const {
    residual[0] = T(sqrt(5.0)) * (x3[0] - x4[0]);
    return true;
  }
};
struct F3 {
  template <typename T>
  bool operator()(const T* const x2, const T* const x3, T* residual) const {
    residual[0] = (x2[0] - 2.0 * x3[0]) * (x2[0] - 2.0 * x3[0]);
    return true;
  }
};
struct F4 {
  template <typename T>
  bool operator()(const T* const x1, const T* const x4, T* residual) const {
    residual[0] = T(sqrt(10.0)) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
    return true;
  }
};
// Add the residual blocks
problem.AddResidualBlock(
    new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), NULL, &x1, &x2);
problem.AddResidualBlock(
    new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), NULL, &x3, &x4);
problem.AddResidualBlock(
    new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), NULL, &x2, &x3);
problem.AddResidualBlock(
    new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), NULL, &x1, &x4);
Residual blocks are added with:
Problem::AddResidualBlock(cost_function, NULL /* or loss_function */, input_param1, input_param2, ...)
void Problem::AddParameterBlock(double *values, int size, LocalParameterization*local_parameterization)
void Problem::AddParameterBlock(double *values, int size)
This function tells the Problem which variables appear in the objective. Strictly speaking it is optional; Google's documentation says:

The user has the option of explicitly adding parameter blocks with AddParameterBlock. This causes additional correctness checking; however, AddResidualBlock implicitly adds the parameter blocks if they are not present, so calling AddParameterBlock explicitly is not required.

It is not useless, though: for example, to hold some variables fixed during optimization you call Problem::SetParameterBlockConstant(&x), where x is the variable to freeze, and such variables must already be part of the problem (adding them explicitly as parameter blocks guarantees this).
Solver::Options options;
options.minimizer_progress_to_stdout = true;
Solver::Summary summary;
Solve(options, &problem, &summary);
The steps above are required. What follows is optional, chosen per problem (mainly options used in bundle adjustment):
DEFINE_string(trust_region_strategy, "levenberg_marquardt",
"Options are: levenberg_marquardt, dogleg.");
DEFINE_string(dogleg, "traditional_dogleg", "Options are: traditional_dogleg,"
"subspace_dogleg.");
DEFINE_bool(inner_iterations, false, "Use inner iterations to non-linearly "
"refine each successful trust region step.");
DEFINE_string(blocks_for_inner_iterations, "automatic", "Options are: "
"automatic, cameras, points, cameras,points, points,cameras");
DEFINE_string(linear_solver, "sparse_schur", "Options are: "
"sparse_schur, dense_schur, iterative_schur, sparse_normal_cholesky, "
"dense_qr, dense_normal_cholesky and cgnr.");
DEFINE_bool(explicit_schur_complement, false, "If using ITERATIVE_SCHUR "
"then explicitly compute the Schur complement.");
DEFINE_string(preconditioner, "jacobi", "Options are: "
"identity, jacobi, schur_jacobi, cluster_jacobi, "
"cluster_tridiagonal.");
DEFINE_string(visibility_clustering, "canonical_views",
"single_linkage, canonical_views");
DEFINE_string(sparse_linear_algebra_library, "suite_sparse",
"Options are: suite_sparse and cx_sparse.");
DEFINE_string(dense_linear_algebra_library, "eigen",
"Options are: eigen and lapack.");
DEFINE_string(ordering, "automatic", "Options are: automatic, user.");
DEFINE_bool(use_quaternions, false, "If true, uses quaternions to represent "
"rotations. If false, angle axis is used.");
DEFINE_bool(use_local_parameterization, false, "For quaternions, use a local "
"parameterization.");
DEFINE_bool(robustify, false, "Use a robust loss function.");
DEFINE_double(eta, 1e-2, "Default value for eta. Eta determines the "
"accuracy of each linear solve of the truncated newton step. "
"Changing this parameter can affect solve performance.");
Bundle adjustment has an inherently sparse structure, which permits more efficient solution strategies. The SPARSE_SCHUR, DENSE_SCHUR, and ITERATIVE_SCHUR solvers in ceres-solver exploit exactly this sparsity. Setting Options::ordering_type = ceres::SCHUR lets Ceres determine the parameter-block ordering automatically; the ordering can also be set manually.
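A sketch of a manual elimination ordering for a Schur-type solver, assuming point_block and camera_block are hypothetical parameter arrays already added to the problem; the Schur-based solvers eliminate group 0 (the points) first:

```cpp
// Sketch: manual elimination ordering for a Schur-type linear solver.
// point_block and camera_block are hypothetical parameter arrays that
// have already been added to the problem via AddResidualBlock.
ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_SCHUR;

ceres::ParameterBlockOrdering* ordering = new ceres::ParameterBlockOrdering;
ordering->AddElementToGroup(point_block, 0);   // eliminated first
ordering->AddElementToGroup(camera_block, 1);  // forms the Schur complement
options.linear_solver_ordering.reset(ordering);
```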