Reference: https://mp.weixin.qq.com/s?__biz=MzI1NjkxOTMyNQ==&mid=2247485879&idx=1&sn=d921ab976ebc72c502eb5df1ef7ff548&chksm=ea1e1bc5dd6992d3220bd47c5cd951f38648c060d4f9dc9bfbe5eac1a3076f920fa3e360c038&scene=21#wechat_redirect
In post-processing computation, 2D-to-3D geometric computation, temporal computation, and multi-camera surround-view fusion are inseparable, and the algorithms of the individual modules must interoperate. This also requires the development teams to cooperate closely and work as one organic whole.
Below is "HM Object Tracking", shared by Apollo community developer 朱炎亮 (Zhu Yanliang) in Github-Apollo-Note. Thanks to him for the detailed annotations and clarifications of the tracking step.
The above is the official Apollo documentation's description of HM object tracking. The idea is fairly clear; the main tracking pipeline consists of:
(1) Preprocessing (lidar → local ENU coordinate transform, tracked-object construction, tracked-result collection)
(2) Kalman filtering to predict each object's current position and velocity (Kalman phase 1: Predict)
(3) Hungarian-algorithm matching to associate detected objects with tracked objects
(4) Kalman filtering to update each tracked object's position and velocity (Kalman phase 2: Update)
/// file in apollo/modules/perception/obstacle/onboard/lidar_process_subnode.cc
void LidarProcessSubnode::OnPointCloud(const sensor_msgs::PointCloud2& message) {
/// call hdmap to get ROI
...
/// call roi_filter
...
/// call segmentor
...
/// call object builder
...
/// call tracker
if (tracker_ != nullptr) {
TrackerOptions tracker_options;
tracker_options.velodyne_trans = velodyne_trans;
tracker_options.hdmap = hdmap;
tracker_options.hdmap_input = hdmap_input_;
if (!tracker_->Track(objects, timestamp_, tracker_options, &(out_sensor_objects->objects))) {
...
}
}
}
In this part there are three easily confused classes, Object, TrackedObject, and ObjectTrack. Here is a unified explanation of how they differ:
As we will see, the tracking pipeline first wraps the raw Object into a TrackedObject, creating the object to be tracked; the TrackedObject is in turn wrapped by an ObjectTrack, whose member functions carry out the actual tracking of the TrackedObject it holds.
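To make the layering concrete, here is a simplified sketch of the three levels. The field lists are abridged and several types (ObjectPtr, TrackedObjectPtr, BaseFilter) are placeholder stand-ins, not the exact Apollo definitions:

/// standalone sketch, not Apollo code
#include <Eigen/Dense>
#include <memory>

// Placeholder forward declarations; the real types live in Apollo's
// perception headers under obstacle/lidar/tracker/hm_tracker.
struct Object;
struct TrackedObject;
class BaseFilter;
using ObjectPtr = std::shared_ptr<Object>;
using TrackedObjectPtr = std::shared_ptr<TrackedObject>;

struct Object {                      // raw detection from segmentation + MinBox
  Eigen::Vector3d direction;         // bbox main direction
  // ... point cloud, polygon, classification, etc.
};

struct TrackedObject {               // Object + motion state (see table below)
  explicit TrackedObject(ObjectPtr obj) : object_ptr(obj) {}
  ObjectPtr object_ptr;              // the wrapped Object
  Eigen::Vector3f barycenter;        // anchor point for the Kalman filter
  Eigen::Vector3f velocity;          // estimated by the Kalman filter
  // ...
};

class ObjectTrack {                  // one track: a TrackedObject + its filter
 public:
  explicit ObjectTrack(TrackedObjectPtr obj) : current_object_(obj) {}
  Eigen::VectorXf Predict(const double& time_diff);    // KF Predict phase
  void UpdateWithObject(TrackedObjectPtr* new_object,
                        const double& time_diff);      // KF Update phase
 private:
  TrackedObjectPtr current_object_;  // the TrackedObject being tracked
  BaseFilter* filter_ = nullptr;     // e.g. a KalmanFilter
};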
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
bool HmObjectTracker::Track(const std::vector<ObjectPtr>& objects,
                            double timestamp, const TrackerOptions& options,
                            std::vector<ObjectPtr>* tracked_objects) {
// A. track setup
if (!valid_) {
valid_ = true;
return Initialize(objects, timestamp, options, tracked_objects);
}
Eigen::Matrix4d velo2world_pose = *(options.velodyne_trans); // lidar -> world pose of this frame
// B. preprocessing
// B.1 transform given pose to local one
TransformPoseGlobal2Local(&velo2world_pose);
// B.2 construct objects for tracking
std::vector<TrackedObjectPtr> transformed_objects;
ConstructTrackedObjects(objects, &transformed_objects, velo2world_pose, options);
...
}
bool HmObjectTracker::Initialize(const std::vector<ObjectPtr>& objects,
                                 const double& timestamp,
                                 const TrackerOptions& options,
                                 std::vector<ObjectPtr>* tracked_objects) {
Eigen::Matrix4d velo2world_pose = *(options.velodyne_trans);
// the offset is minus the very first pose's translation; it anchors the local ENU frame
global_to_local_offset_ = Eigen::Vector3d(-velo2world_pose(0, 3), -velo2world_pose(1, 3), -velo2world_pose(2, 3));
// B. preprocessing
// B.1 coordinate transformation
TransformPoseGlobal2Local(&velo2world_pose);
// B.2 construct tracked objects
std::vector<TrackedObjectPtr> transformed_objects;
ConstructTrackedObjects(objects, &transformed_objects, velo2world_pose, options);
// C. create tracks: at initialization every object is unassigned
std::vector<int> unassigned_objects(transformed_objects.size());
std::iota(unassigned_objects.begin(), unassigned_objects.end(), 0);
CreateNewTracks(transformed_objects, unassigned_objects);
time_stamp_ = timestamp;
// D. collect tracked results
CollectTrackedResults(tracked_objects);
return true;
}
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
void HmObjectTracker::TransformPoseGlobal2Local(Eigen::Matrix4d* pose) {
(*pose)(0, 3) += global_to_local_offset_(0);
(*pose)(1, 3) += global_to_local_offset_(1);
(*pose)(2, 3) += global_to_local_offset_(2);
}
From the TransformPoseGlobal2Local code above we obtain a velo2world_pose whose huge global translation has been cancelled by global_to_local_offset_: at initialization the translation becomes exactly zero so only the rotation remains, and in later frames the remaining translation is just the vehicle's displacement since the first frame. What is this matrix for? Simple: it is the transform from the lidar coordinate system to the lidar-local ENU coordinate system, which keeps coordinates numerically small instead of carrying huge UTM-style values.
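A toy numerical check of this behavior (hypothetical UTM-style numbers; only the translation handling is exercised):

/// standalone sketch, not Apollo code
#include <Eigen/Dense>
#include <iostream>

int main() {
  // hypothetical first lidar->world pose with a large global translation
  Eigen::Matrix4d velo2world = Eigen::Matrix4d::Identity();
  velo2world(0, 3) = 443123.5;   // easting  (m)
  velo2world(1, 3) = 4423456.8;  // northing (m)
  velo2world(2, 3) = 35.2;       // height   (m)
  // offset captured at initialization, as in Initialize() above
  Eigen::Vector3d global_to_local_offset(
      -velo2world(0, 3), -velo2world(1, 3), -velo2world(2, 3));
  // TransformPoseGlobal2Local: add the offset to the translation part
  for (int i = 0; i < 3; ++i) velo2world(i, 3) += global_to_local_offset(i);
  // first frame: translation is exactly zero, only the rotation remains
  std::cout << velo2world.col(3).transpose() << std::endl;  // prints "0 0 0 1"
  return 0;
}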
TrackedObject adds the following main fields on top of Object:

| Field | Notes |
| --- | --- |
| ObjectPtr object_ptr | pointer to the wrapped Object |
| Eigen::Vector3f barycenter | barycenter: the mean xyz of all the object's points |
| Eigen::Vector3f center | center: computed from the 4 bbox corner points plus the average height |
| Eigen::Vector3f velocity | velocity, predicted by the Kalman filter |
| Eigen::Matrix3f velocity_uncertainty | velocity uncertainty (covariance) |
| Eigen::Vector3f acceleration | acceleration |
| ObjectType type | object type: pedestrian, bicycle, vehicle, etc. |
| float association_score | – |
As the table shows, TrackedObject wraps Object, adding only a small amount of extra information such as velocity and acceleration.
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
void HmObjectTracker::ConstructTrackedObjects(
    const std::vector<ObjectPtr>& objects,
    std::vector<TrackedObjectPtr>* tracked_objects, const Eigen::Matrix4d& pose,
    const TrackerOptions& options) {
  int num_objects = objects.size();
  tracked_objects->clear();
  tracked_objects->resize(num_objects);
  for (int i = 0; i < num_objects; ++i) {
    ObjectPtr obj(new Object());
    obj->clone(*objects[i]);
    (*tracked_objects)[i].reset(new TrackedObject(obj));  // create new TrackedObject with object
    // Computing shape feature
    if (use_histogram_for_match_) {
      ComputeShapeFeatures(&((*tracked_objects)[i]));  // compute shape feature
    }
    // Transforming all tracked objects
    TransformTrackedObject(&((*tracked_objects)[i]), pose);  // transform coordinate from lidar frame to local ENU frame
    // Setting barycenter as anchor point of tracked objects
    Eigen::Vector3f anchor_point = (*tracked_objects)[i]->barycenter;
    (*tracked_objects)[i]->anchor_point = anchor_point;
    // Getting lane direction of tracked objects
    pcl_util::PointD query_pt;  // anchor point in global/world coordinates (undo the local-ENU offset) for the HD-map query
    query_pt.x = anchor_point(0) - global_to_local_offset_(0);
    query_pt.y = anchor_point(1) - global_to_local_offset_(1);
    query_pt.z = anchor_point(2) - global_to_local_offset_(2);
    Eigen::Vector3d lane_dir;
    if (!options.hdmap_input->GetNearestLaneDirection(query_pt, &lane_dir)) {  // query nearest lane direction from HD map
      lane_dir = (pose * Eigen::Vector4d(1, 0, 0, 0)).head(3);  // fall back to the lidar x-axis in local ENU
    }
    (*tracked_objects)[i]->lane_direction = lane_dir.cast<float>();
  }
}
ConstructTrackedObjects builds a tracked object from each detected object. The process is fairly simple; as the code shows, it clones the Object, optionally computes a shape histogram feature, transforms the object into the local ENU frame, sets the barycenter as the anchor point, and queries the nearest lane direction from the HD map.
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
void HmObjectTracker::CreateNewTracks(
const std::vector<TrackedObjectPtr>& new_objects,
const std::vector<int>& unassigned_objects) {
// Create new tracks for objects without matched tracks
for (size_t i = 0; i < unassigned_objects.size(); i++) {
int obj_id = unassigned_objects[i];
ObjectTrackPtr track(new ObjectTrack(new_objects[obj_id]));
object_tracks_.AddTrack(track);
}
}
Meanwhile, the CollectTrackedResults function saves the objects currently being tracked (in world-coordinate form) into the output vector. That code is simple, so it is not reproduced here.
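The gist, as a hedged stand-alone sketch with stand-in types (not the Apollo implementation), is that every active track contributes one output object shifted from the local ENU frame back to world coordinates:

/// standalone sketch, not Apollo code
#include <Eigen/Dense>
#include <memory>
#include <vector>

struct OutObject {                // stand-in for the perception Object
  Eigen::Vector3d center;
};

struct TrackState {               // stand-in for ObjectTrack's current state
  Eigen::Vector3d local_center;   // tracked state in the local ENU frame
};

void CollectTrackedResultsSketch(
    const std::vector<TrackState>& tracks,
    const Eigen::Vector3d& global_to_local_offset,
    std::vector<std::shared_ptr<OutObject>>* tracked_objects) {
  tracked_objects->clear();
  for (const auto& t : tracks) {
    auto obj = std::make_shared<OutObject>();
    // subtract the offset to go local ENU -> world: the offset was *added*
    // on the way in by TransformPoseGlobal2Local
    obj->center = t.local_center - global_to_local_offset;
    tracked_objects->push_back(obj);
  }
}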
First, a brief review of some basic Kalman filter formulas, to make what follows easier to understand.
The Kalman filter works in two phases, Predict and Update.
Iterating from $t = 1$, the recursion finally yields the optimal state $X_k$ and the optimal covariance matrix $P_k$ at any time $k$.
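Using the notation that appears later in this article ($X_{t,t-1}$, $P_{t,t-1}$ for the predicted state and covariance, $C_t$ for the measurement matrix, $R$ for the measurement noise, $Z_t$ for the observation, $H_t$ for the Kalman gain; $A_t$ and $Q$, the state transition and process noise, are standard symbols supplied here for completeness):

Predict:

$$X_{t,t-1} = A_t X_{t-1}, \qquad P_{t,t-1} = A_t P_{t-1} A_t^T + Q$$

Update:

$$H_t = P_{t,t-1} C_t^T \left[ C_t P_{t,t-1} C_t^T + R \right]^{-1}$$

$$X_t = X_{t,t-1} + H_t \left[ Z_t - C_t X_{t,t-1} \right], \qquad P_t = \left[ I - H_t C_t \right] P_{t,t-1}$$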
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
bool HmObjectTracker::Track(const std::vector<ObjectPtr>& objects,
double timestamp, const TrackerOptions& options,
std::vector<ObjectPtr>* tracked_objects) {
// A. track setup
...
// B. preprocessing
// B.1 transform given pose to local one
...
// B.2 construct objects for tracking
...
// C. prediction
std::vector<Eigen::VectorXf> tracks_predict;
ComputeTracksPredict(&tracks_predict, time_diff);
...
}
void HmObjectTracker::ComputeTracksPredict(
    std::vector<Eigen::VectorXf>* tracks_predict, const double& time_diff) {
  // Compute tracks' predicted states
  std::vector<ObjectTrackPtr>& tracks = object_tracks_.GetTracks();
  int no_track = static_cast<int>(tracks.size());
  tracks_predict->resize(no_track);
  for (int i = 0; i < no_track; ++i) {
    (*tracks_predict)[i] = tracks[i]->Predict(time_diff);  // predict every track in object_tracks_ (ObjectTrack class)
  }
}
As the code shows, this step simply calls the Predict function of every element in the object_tracks_ list for filtered tracking (object_tracks_ holds the Object → TrackedObject → ObjectTrack wrappers built in the previous stage). Next we dig deeper into this Predict function to see which phase of the Kalman filter it implements.
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/object_track.cc
Eigen::VectorXf ObjectTrack::Predict(const double& time_diff) {
// Get the predict of filter
Eigen::VectorXf filter_predict = filter_->Predict(time_diff);
// Get the predict of track
Eigen::VectorXf track_predict = filter_predict;
track_predict(0) = belief_anchor_point_(0) + belief_velocity_(0) * time_diff;
track_predict(1) = belief_anchor_point_(1) + belief_velocity_(1) * time_diff;
track_predict(2) = belief_anchor_point_(2) + belief_velocity_(2) * time_diff;
track_predict(3) = belief_velocity_(0);
track_predict(4) = belief_velocity_(1);
track_predict(5) = belief_velocity_(2);
return track_predict;
}
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/kalman_filter.cc
Eigen::VectorXf KalmanFilter::Predict(const double& time_diff) {
// Compute predict states
Eigen::VectorXf predicted_state;
predicted_state.resize(6);
predicted_state(0) = belief_anchor_point_(0) + belief_velocity_(0) * time_diff;
predicted_state(1) = belief_anchor_point_(1) + belief_velocity_(1) * time_diff;
predicted_state(2) = belief_anchor_point_(2) + belief_velocity_(2) * time_diff;
predicted_state(3) = belief_velocity_(0);
predicted_state(4) = belief_velocity_(1);
predicted_state(5) = belief_velocity_(2);
// Compute predicted covariance
Propagate(time_diff);
return predicted_state;
}
void KalmanFilter::Propagate(const double& time_diff) {
  // Only propagate tracked motion: inflate the velocity covariance by the
  // propagation (process) noise; other state blocks are not tracked here
  velocity_covariance_ += s_propagation_noise_ * time_diff * time_diff;
}
The two functions above make it clear that this stage is exactly the Predict phase of the Kalman filter. We can also see that the motion model is constant velocity: the anchor point is advanced by velocity × time_diff, the velocity is carried over unchanged, and Propagate only inflates the velocity covariance with the propagation noise.
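In the notation above, this Predict step is the constant-velocity special case (this reading is inferred from the code, not stated in the Apollo sources):

$$X = \begin{bmatrix} \text{anchor\_point} \\ \text{velocity} \end{bmatrix}, \qquad A_t = \begin{bmatrix} I_3 & \Delta t \, I_3 \\ 0 & I_3 \end{bmatrix}, \qquad X_{t,t-1} = A_t X_{t-1}$$

and only the velocity block of the covariance is propagated: velocity_covariance_ += s_propagation_noise_ · Δt².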
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
bool HmObjectTracker::Track(const std::vector<ObjectPtr>& objects,
double timestamp, const TrackerOptions& options,
std::vector<ObjectPtr>* tracked_objects) {
// A. track setup
...
// B. preprocessing
// B.1 transform given pose to local one
...
// B.2 construct objects for tracking
...
// C. prediction
...
// D. match objects to tracks
std::vector<std::pair<int, int>> assignments;  // (track index, object index)
std::vector<int> unassigned_objects;
std::vector<int> unassigned_tracks;
std::vector<ObjectTrackPtr>& tracks = object_tracks_.GetTracks();
if (matcher_ != nullptr) {
matcher_->Match(&transformed_objects, tracks, tracks_predict, &assignments, &unassigned_tracks, &unassigned_objects);
}
...
}
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hungarian_matcher.cc
void HungarianMatcher::Match(std::vector<TrackedObjectPtr>* objects,
const std::vector<ObjectTrackPtr>& tracks,
const std::vector<Eigen::VectorXf>& tracks_predict,
std::vector<std::pair<int, int>>* assignments,
std::vector<int>* unassigned_tracks,
std::vector<int>* unassigned_objects) {
// A. computing association matrix
Eigen::MatrixXf association_mat(tracks.size(), objects->size());
ComputeAssociateMatrix(tracks, tracks_predict, (*objects), &association_mat);
// B. computing connected components
std::vector<std::vector<int>> object_components;
std::vector<std::vector<int>> track_components;
ComputeConnectedComponents(association_mat, s_match_distance_maximum_,
&track_components, &object_components);
// C. matching each sub-graph
...
}
The main work of this stage is to match the objects detected by CNN segmentation + MinBox against the objects currently tracked by the ObjectTracks. Concretely, for every track/object pair five sub-distances are computed: location distance, direction distance, bbox size distance, point-number distance, and histogram distance.
These are then summed with weights 0.6, 0.2, 0.1, 0.1, 0.5 to obtain the association score.
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/track_object_distance.cc
float TrackObjectDistance::ComputeDistance(const ObjectTrackPtr& track,
const Eigen::VectorXf& track_predict,
const TrackedObjectPtr& new_object) {
// Compute distance for given track & object
float location_distance = ComputeLocationDistance(track, track_predict, new_object);
float direction_distance = ComputeDirectionDistance(track, track_predict, new_object);
float bbox_size_distance = ComputeBboxSizeDistance(track, new_object);
float point_num_distance = ComputePointNumDistance(track, new_object);
float histogram_distance = ComputeHistogramDistance(track, new_object);
float result_distance = s_location_distance_weight_ * location_distance + // s_location_distance_weight_ = 0.6
s_direction_distance_weight_ * direction_distance + // s_direction_distance_weight_ = 0.2
s_bbox_size_distance_weight_ * bbox_size_distance + // s_bbox_size_distance_weight_ = 0.1
s_point_num_distance_weight_ * point_num_distance + // s_point_num_distance_weight_ = 0.1
s_histogram_distance_weight_ * histogram_distance; // s_histogram_distance_weight_ = 0.5
return result_distance;
}
How each sub-term is computed is described in words below. For the excerpt that follows, assume the old_* quantities belong to the tracked object and the new_* quantities to the newly detected object:
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/track_object_distance.cc
float TrackObjectDistance::ComputeBboxSizeDistance(const ObjectTrackPtr& track, const TrackedObjectPtr& new_object) {
  // old_bbox_dir/new_bbox_dir and old_bbox_size/new_bbox_size are extracted
  // from track and new_object above (elided here)
  float size_distance = 0.0f;
  double dot_val_00 = fabs(old_bbox_dir(0) * new_bbox_dir(0) + old_bbox_dir(1) * new_bbox_dir(1));
  double dot_val_01 = fabs(old_bbox_dir(0) * new_bbox_dir(1) - old_bbox_dir(1) * new_bbox_dir(0));
  bool bbox_dir_close = dot_val_00 > dot_val_01;
  if (bbox_dir_close) {
    // directions roughly aligned: compare sizes axis by axis
    float diff_1 = fabs(old_bbox_size(0) - new_bbox_size(0)) / std::max(old_bbox_size(0), new_bbox_size(0));
    float diff_2 = fabs(old_bbox_size(1) - new_bbox_size(1)) / std::max(old_bbox_size(1), new_bbox_size(1));
    size_distance = std::min(diff_1, diff_2);
  } else {
    // directions roughly perpendicular: compare with the axes swapped
    float diff_1 = fabs(old_bbox_size(0) - new_bbox_size(1)) / std::max(old_bbox_size(0), new_bbox_size(1));
    float diff_2 = fabs(old_bbox_size(1) - new_bbox_size(0)) / std::max(old_bbox_size(1), new_bbox_size(0));
    size_distance = std::min(diff_1, diff_2);
  }
  return size_distance;
}
What do these two quantities mean? Briefly: from the way they are computed, dot_val_00 is the absolute cosine of the angle between the two bbox directions, and dot_val_01 is the absolute sine (the magnitude of the 2D cross product). dot_val_00 > dot_val_01 therefore means the angle between the boxes is below 45°: their orientations are close, so sizes are compared axis by axis; otherwise the boxes are roughly perpendicular and the axes are swapped before comparison.
void HungarianMatcher::ComputeConnectedComponents(
const Eigen::MatrixXf& association_mat, const float& connected_threshold,
std::vector<std::vector<int>>* track_components,
std::vector<std::vector<int>>* object_components) {
// Compute connected components within given threshold
int no_track = association_mat.rows();
int no_object = association_mat.cols();
std::vector<std::vector<int>> nb_graph;
nb_graph.resize(no_track + no_object);
for (int i = 0; i < no_track; i++) {
for (int j = 0; j < no_object; j++) {
if (association_mat(i, j) <= connected_threshold) {
nb_graph[i].push_back(no_track + j);
nb_graph[j + no_track].push_back(i);
}
}
}
std::vector<std::vector<int>> components;
ConnectedComponentAnalysis(nb_graph, &components); // sub_graph segment
...
}
The actual sub-graph partitioning is done in the ConnectedComponentAnalysis function (see the code for details); it is a fairly simple breadth-first search. In the resulting components 2D vector, each row holds the member node ids of one sub-graph.
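A self-contained sketch of such a BFS over the bipartite adjacency list (same node layout as nb_graph above: tracks first, then objects; not the exact Apollo implementation):

/// standalone sketch, not Apollo code
#include <queue>
#include <vector>

// Each row of *components collects the node ids of one connected sub-graph.
void ConnectedComponentAnalysisSketch(
    const std::vector<std::vector<int>>& graph,
    std::vector<std::vector<int>>* components) {
  int num_nodes = static_cast<int>(graph.size());
  std::vector<bool> visited(num_nodes, false);
  components->clear();
  for (int seed = 0; seed < num_nodes; ++seed) {
    if (visited[seed]) continue;   // already assigned to a component
    std::queue<int> bfs;
    std::vector<int> component;
    bfs.push(seed);
    visited[seed] = true;
    while (!bfs.empty()) {
      int node = bfs.front();
      bfs.pop();
      component.push_back(node);
      for (int nb : graph[node]) { // visit all neighbours of this node
        if (!visited[nb]) {
          visited[nb] = true;
          bfs.push(nb);
        }
      }
    }
    components->push_back(component);
  }
}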
The matching itself is the matrix form of the Hungarian algorithm, essentially as described on Wikipedia; see the Hungarian algorithm article there.
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
bool HmObjectTracker::Track(const std::vector<ObjectPtr>& objects,
double timestamp, const TrackerOptions& options,
std::vector<ObjectPtr>* tracked_objects) {
// A-D. setup, preprocessing, prediction and matching (see above)
...
// E. update tracks
// E.1 update tracks with associated objects
UpdateAssignedTracks(&tracks_predict, &transformed_objects, assignments, time_diff);
// E.2 update tracks without associated objects
UpdateUnassignedTracks(tracks_predict, unassigned_tracks, time_diff);
DeleteLostTracks();
// E.3 create new tracks for objects without associated tracks
CreateNewTracks(transformed_objects, unassigned_objects);
// F. collect tracked results
CollectTrackedResults(tracked_objects);
return true;
}
void HmObjectTracker::UpdateAssignedTracks(
std::vector<Eigen::VectorXf>* tracks_predict,
std::vector<TrackedObjectPtr>* new_objects,
const std::vector<std::pair<int, int>>& assignments, const double& time_diff) {
// Update assigned tracks
std::vector<ObjectTrackPtr>& tracks = object_tracks_.GetTracks();
for (size_t i = 0; i < assignments.size(); i++) {
int track_id = assignments[i].first;
int obj_id = assignments[i].second;
tracks[track_id]->UpdateWithObject(&(*new_objects)[obj_id], time_diff);
}
}
As the code above shows, the update is performed by two functions, ObjectTrack::UpdateWithObject and ObjectTrack::UpdateWithoutObject, which in turn invoke the Kalman filter to complete the filtering update. Next we briefly walk through ObjectTrack::UpdateWithObject.
/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/object_track.cc
void ObjectTrack::UpdateWithObject(TrackedObjectPtr* new_object, const double& time_diff) {
// A. update object track
// A.1 update filter
filter_->UpdateWithObject((*new_object), current_object_, time_diff);
filter_->GetState(&belief_anchor_point_, &belief_velocity_);
filter_->GetOnlineCovariance(&belief_velocity_uncertainty_);
(*new_object)->anchor_point = belief_anchor_point_;
(*new_object)->velocity = belief_velocity_;
(*new_object)->velocity_uncertainty = belief_velocity_uncertainty_;
belief_velocity_accelaration_ = ((*new_object)->velocity - current_object_->velocity) / time_diff;
// A.2 update track info
...
// B. smooth object track
// B.1 smooth velocity
SmoothTrackVelocity((*new_object), time_diff);
// B.2 smooth orientation
SmoothTrackOrientation();
}
The code also shows the update order: A.1 and A.2 update the KalmanFilter and TrackedObject state, and B then updates the ObjectTrack state. These updates must be performed in this order!
Step 1. Compute the update quality: ComputeUpdateQuality(new_object, old_object)
This step computes the update strength. Every Object and its associated TrackedObject carry an association score association_score, which measures how well the two match, so an update-strength parameter for the target is derived from it.
Association-based strength: update_quality_according_association_score = 1 - association_score / s_association_score_maximum_. By default s_association_score_maximum_ = 1; the stronger the association (the higher the similarity), the stronger the update.
Point-count-based strength: update_quality_according_point_num_change = 1 - |n1 - n2| / max(n1, n2). The smaller the change in point count, the stronger the update.
The final update strength is the smaller of the two values, as condensed in the sketch below.
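A small sketch of the two formulas above, with the point counts passed in directly (parameter names are illustrative):

/// standalone sketch, not Apollo code
#include <algorithm>
#include <cstdlib>

float ComputeUpdateQualitySketch(float association_score,
                                 int old_point_num, int new_point_num,
                                 float association_score_maximum = 1.0f) {
  // small association distance (high similarity) -> quality close to 1
  float quality_by_association =
      1.0f - association_score / association_score_maximum;
  // small change in point count -> quality close to 1
  float quality_by_point_num =
      1.0f - std::abs(old_point_num - new_point_num) /
                 static_cast<float>(std::max(old_point_num, new_point_num));
  // take the more conservative of the two
  return std::min(quality_by_association, quality_by_point_num);
}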
Step 2. Compute the measured velocity measured_velocity and acceleration measured_acceleration at the current time (these play the role of the observation $Z_t$ in the Kalman filter)
First compute the barycenter velocity: measured_anchor_point_velocity = [NewObject_barycenter(x,y,z) - OldObject_barycenter(x,y,z)] / time_diff, where time_diff is the time elapsed between the two computations. Very straightforward.
Next compute the bounding-box (center) velocity: measured_bbox_center_velocity = [NewObject_center(x,y,z) - OldObject_center(x,y,z)] / time_diff. The center here differs from the barycenter above: the barycenter is the mean of all the points, while the center is the center of the MinBox. Also note that if the computed center velocity points opposite to the barycenter velocity, it is treated as interference and the center velocity is temporarily set to 0.
Then compute the bounding-box corner velocity (a sketch of steps A-C follows after this step):
A. Compute a bbox from the NewObject's point cloud (note: this is not the MinBox) and find its center, then derive the 4 corner points. If the NewObject's direction is dir, first normalize it to dir_normal = dir/|dir| = (nx, ny, 0), then take the orthogonal direction ortho_dir = (-ny, nx, 0). With center point center, the top-left corner is center + dir * size[0] * 0.5 + ortho_dir * size[1] * 0.5; the other three corners follow from the same formula with the signs flipped.
B. Compute the velocities of the four bbox corners: bbox_corner_velocity = (new_bbox_corner - old_bbox_corner) / time_diff, the same form as the barycenter and center velocities above.
C. Project the four corner velocities onto the main direction and take the one with the smallest magnitude as the bounding-box corner velocity.
Finally, among the barycenter velocity, the center velocity, and the bbox corner velocity, select the one with the smallest velocity gain as the object's measured velocity, where gain = current velocity - previous optimal velocity.
The measured acceleration measured_acceleration is simpler: take the three most recent velocity measurements (v1,t1), (v2,t2), (v3,t3); then a = (v3 - v1) / (t2 + t3). Here (v2,t2) means the optimal velocity estimate at that step was v2, measured t2 after the previous one, so the time span across the three measurements is t2 + t3 and the velocity change is v3 - v1.
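A self-contained sketch of steps A-C above (it assumes dir and main_dir are already normalized and size holds the bbox extents along dir and its orthogonal; illustrative, not the Apollo code):

/// standalone sketch, not Apollo code
#include <Eigen/Dense>
#include <array>
#include <limits>

// Step A: the four bbox corners from center, direction and size.
std::array<Eigen::Vector3f, 4> BboxCorners(const Eigen::Vector3f& center,
                                           const Eigen::Vector3f& dir,
                                           const Eigen::Vector3f& size) {
  Eigen::Vector3f ortho_dir(-dir(1), dir(0), 0.0f);  // orthogonal direction
  return {center + dir * size(0) * 0.5f + ortho_dir * size(1) * 0.5f,
          center + dir * size(0) * 0.5f - ortho_dir * size(1) * 0.5f,
          center - dir * size(0) * 0.5f + ortho_dir * size(1) * 0.5f,
          center - dir * size(0) * 0.5f - ortho_dir * size(1) * 0.5f};
}

// Steps B+C: per-corner velocities projected on the main direction;
// keep the projection with the smallest magnitude.
Eigen::Vector3f MeasuredCornerVelocity(
    const std::array<Eigen::Vector3f, 4>& old_corners,
    const std::array<Eigen::Vector3f, 4>& new_corners,
    const Eigen::Vector3f& main_dir, float time_diff) {
  Eigen::Vector3f best = Eigen::Vector3f::Zero();
  float best_norm = std::numeric_limits<float>::max();
  for (int i = 0; i < 4; ++i) {
    Eigen::Vector3f v = (new_corners[i] - old_corners[i]) / time_diff;
    Eigen::Vector3f v_on_dir = v.dot(main_dir) * main_dir;  // projection
    if (v_on_dir.norm() < best_norm) {
      best_norm = v_on_dir.norm();
      best = v_on_dir;
    }
  }
  return best;
}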
Step 3. Estimate the optimal velocity and acceleration (the Kalman Update step)
First compute the Kalman gain $H_t = P_{t,t-1} C_t^T [C_t P_{t,t-1} C_t^T + R]^{-1}$. In the Apollo code:
// Compute kalman gain
Eigen::Matrix3d mat_c = Eigen::Matrix3d::Identity(); // C_t
Eigen::Matrix3d mat_q = s_measurement_noise_ * Eigen::Matrix3d::Identity(); // R_t
Eigen::Matrix3d mat_k = velocity_covariance_ * mat_c.transpose() * // H_t
(mat_c * velocity_covariance_ * mat_c.transpose() + mat_q).inverse();
As shown above, the code agrees with the formula we gave.
Then, from the predicted velocity $X_{t,t-1}$, the observation $Z_t$, and the Kalman gain $H_t$, the optimal velocity estimate at time $t$ is $X_t = X_{t,t-1} + H_t[Z_t - C_t X_{t,t-1}]$. The Apollo code computes the velocity gain, i.e. $X_t - X_{t,t-1}$:
// Compute posterior belief
Eigen::Vector3d measured_velocity_d = measured_velocity.cast<double>(); // Zv_t
Eigen::Vector3d priori_velocity = belief_velocity_ + belief_acceleration_gain_ * time_diff; // Xv_{t,t-1}
Eigen::Vector3d velocity_gain = mat_k * (measured_velocity_d - mat_c * priori_velocity); // Gain = Xv_t - Xv_{t,t-1}
The velocity gain is then clamped (smoothed), and the optimal velocity and acceleration at time t are stored:
// Breakdown
ComputeBreakdownThreshold();
if (velocity_gain.norm() > breakdown_threshold_) {
velocity_gain.normalize();
velocity_gain *= breakdown_threshold_;
}
belief_anchor_point_ = measured_anchor_point_d;
belief_velocity_ = priori_velocity + velocity_gain; // Xv_t = Xv_{t,t-1} + Gain
belief_acceleration_gain_ = velocity_gain / time_diff; // acceleration gain = velocity gain / time_diff
Finally the velocity is rectified and the predicted covariance $P_{t,t-1}$ is corrected into the optimal covariance of the current time, $P_t = [I - H_t C_t] P_{t,t-1}$; in this application $C_t \equiv I$.
// Adaptive
if (s_use_adaptive_) {
belief_velocity_ -= belief_acceleration_gain_ * time_diff;
belief_acceleration_gain_ *= update_quality_;
belief_velocity_ += belief_acceleration_gain_ * time_diff;
}
// Compute posterior covariance
velocity_covariance_ = (Eigen::Matrix3d::Identity() - mat_k * mat_c) * velocity_covariance_; // P_t
The acceleration update follows the same scheme as the velocity update above. For tracks that were not matched to any object, ObjectTrack::UpdateWithoutObject simply shifts the tracked object by the predicted displacement:
Eigen::Vector3f predicted_shift = predict_state.tail(3) * time_diff;
new_obj->anchor_point = current_object_->anchor_point + predicted_shift;
new_obj->barycenter = current_object_->barycenter + predicted_shift;
new_obj->center = current_object_->center + predicted_shift;
After both the matched and unmatched tracks have been updated, the next step is to delete the lost tracks: the whole track list is traversed, and tracks that have gone unmatched for too long (or been visible in too small a fraction of frames) are removed.
If an Object found no TrackedObject to match, a new track is created for it and added to the track list.
To conclude with a summary of HM object tracking: it tracks and matches the Objects produced by the CNN segmentation and MinBox bounding-box stages. The main flow is:
Compute the association matrix between Objects and TrackedObjects
Partition the graph into sub-graphs using the association matrix and a distance threshold
Run the Hungarian algorithm (Hungarian Match) on each sub-graph, obtaining the assignments together with the unassigned tracks and unassigned objects
Update the matched tracks with the Kalman filter (Update phase), and update the unmatched ones without an observation
Delete lost tracks and create new tracks for unmatched objects, then collect the tracked results