Multi-Sensor Fusion Algorithms: Radar-Vision Fusion


0: Hardware Selection

Camera selection: how to choose the field of view and focal length:

[Figure: camera field-of-view and focal-length selection chart]
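A relation worth keeping in mind when reading such charts (my own note, not from the original figure): for a sensor of width w and focal length f, the horizontal field of view follows the pinhole model,

    FOV_h = 2 · arctan( w / (2f) )

so a longer focal length narrows the field of view but extends the usable detection distance, and vice versa.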

Radar selection

[Figure: radar selection chart]

Lidar selection

[Figure: lidar selection chart]

1: Millimeter-Wave Radar and Camera Vision Fusion

Algorithm requirements:
image object detection
radar object detection
radar-camera calibration

Fusion result

For example:
[Figure: radar-camera fusion example]
The fused output can be used for traffic-incident analysis.

2: Lidar and Camera Vision Fusion

Algorithm requirements:
image object detection
lidar object detection
lidar-camera calibration

From lidar range measurements to per-pixel camera depth values:
Calibration and projection principle:
https://blog.csdn.net/tMb8Z9Vdm66wH68VX1/article/details/120499743

z_c is the depth of the 3D point in the camera coordinate frame. In matrix form, the projection is

    z_c · [u, v, 1]^T = K [R | t] · [X_w, Y_w, Z_w, 1]^T

where K is the 3x3 camera intrinsic matrix and [R | t] is the lidar-to-camera extrinsic transform: each lidar point is transformed into the camera frame, projected to pixel (u, v), and its z_c becomes that pixel's depth value.
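As a minimal sketch of this projection (my own illustration using Eigen; the function and variable names are not from the original post):

#include <Eigen/Dense>

// Project a lidar point into the image plane and recover its camera-frame
// depth z_c, given intrinsics K and lidar-to-camera extrinsics (R, t).
Eigen::Vector2d ProjectLidarPoint(const Eigen::Vector3d& p_lidar,
                                  const Eigen::Matrix3d& K,
                                  const Eigen::Matrix3d& R,
                                  const Eigen::Vector3d& t,
                                  double* z_c) {
  const Eigen::Vector3d p_cam = R * p_lidar + t;  // lidar frame -> camera frame
  *z_c = p_cam.z();                               // depth in the camera frame
  const Eigen::Vector3d uv1 = K * (p_cam / p_cam.z());  // perspective division
  return uv1.head<2>();                           // pixel coordinates (u, v)
}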

Result demonstration:

Left: image object detection and the fusion result
Right: lidar object detection

Image-based deep-learning object detection
Lidar-based deep-learning object detection

Fuse the matched targets, redraw the boxes for visualization, and attach vehicle speed information.

[Figures: lidar-camera fusion detection and tracking visualization]

3: Lidar and Millimeter-Wave Radar Fusion

Algorithm requirements:
lidar object detection
radar object detection
lidar-radar calibration
[Figure: lidar-radar fusion]

Prerequisite for fusion: tracking algorithms

4: Camera Tracking Algorithm

The Hungarian algorithm (also known as the Kuhn-Munkres, or KM, algorithm) is a method for solving the assignment problem. It rests on the following theorem:

If a constant is added to, or subtracted from, every element of a row or column of the cost matrix, an optimal assignment for the new cost matrix is also an optimal assignment for the original one.
Algorithm steps (assuming an NxN cost matrix; a worked example follows this list):

1: For each row of the matrix, subtract its minimum element.
2: For each column of the matrix, subtract its minimum element.
3: Cover all zeros in the matrix with the minimum number of horizontal or vertical lines.
4: If the number of lines equals N, an optimal assignment has been found and the algorithm stops; otherwise continue with step 5.
5: Find the smallest element not covered by any line, subtract it from every uncovered row, add it to every covered column, and return to step 3.
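A quick worked example (my own numbers, not from the original post) on a 3x3 cost matrix:

    [4 1 3]   row minima 1,0,2    [3 0 2]   column minima 1,0,0    [2 0 2]
    [2 0 5]   ----------------->  [2 0 5]   ------------------->   [1 0 5]
    [3 2 2]                       [1 0 0]                          [0 0 0]

All zeros can be covered with two lines (column 2 and row 3), so step 5 applies: the smallest uncovered element is 1; subtracting it from the uncovered rows 1-2 and adding it to the covered column 2 gives

    [1 0 1]
    [0 0 4]
    [0 1 0]

which needs three covering lines. The zero cells (1,2), (2,1), (3,3) form the optimal assignment, with cost 1 + 2 + 2 = 5 in the original matrix.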

Understanding the image tracking algorithm

Before tracking starts, all targets have already been detected and their feature models built.

  1. When the first frame arrives, initialize and create a new tracker for each detected target and assign it an id.
  2. For each subsequent frame, first obtain the state and covariance predictions that the Kalman filter produces from the boxes of previous frames. Compute the IOU between every tracker's predicted state and every detection box in the current frame, obtain the unique maximum-IOU matching with the Hungarian assignment algorithm (the data-association step), and then discard matched pairs whose IOU is below iou_threshold.
    https://blog.csdn.net/u011837761/article/details/52058703
  3. Use the detection boxes matched in the current frame to update the Kalman trackers: compute the Kalman gain, update the state and covariance, and output the updated state as this frame's tracking box. For targets in this frame that were not matched, initialize new trackers.

Here the Kalman tracker draws on the track's history, balancing the residual between historical boxes and the current frame's box for more stable track-id matching. A minimal sketch of the IOU cost construction is given below.
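The sketch below (my own illustration; the Box type and function names are assumptions, not from any particular tracker) shows the IOU cost construction described in step 2:

#include <algorithm>
#include <vector>

struct Box { float x1, y1, x2, y2; };  // axis-aligned box, corners (x1,y1)-(x2,y2)

float IOU(const Box& a, const Box& b) {
  const float ix = std::max(0.f, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
  const float iy = std::max(0.f, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
  const float inter = ix * iy;
  const float uni = (a.x2 - a.x1) * (a.y2 - a.y1) +
                    (b.x2 - b.x1) * (b.y2 - b.y1) - inter;
  return uni > 0.f ? inter / uni : 0.f;
}

// Cost matrix for the Hungarian solver: cost = 1 - IOU; after assignment,
// pairs whose IOU is below iou_threshold are discarded.
std::vector<std::vector<double>> BuildIouCost(const std::vector<Box>& predicted,
                                              const std::vector<Box>& detected) {
  std::vector<std::vector<double>> cost(predicted.size(),
                                        std::vector<double>(detected.size()));
  for (size_t i = 0; i < predicted.size(); ++i)
    for (size_t j = 0; j < detected.size(); ++j)
      cost[i][j] = 1.0 - IOU(predicted[i], detected[j]);
  return cost;
}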

In Apollo's camera tracker (OMT), the association weights and thresholds are exposed as protobuf parameters:

message WeightParam {
  optional float appearance = 1 [default = 0];
  optional float motion = 2 [default = 0];
  optional float shape = 3 [default = 0];
  optional float tracklet = 4 [default = 0];
  optional float overlap = 5 [default = 0];
}

message OmtParam {
  optional int32 img_capability = 1 [default = 7];
  optional int32 lost_age = 2 [default = 2];
  optional int32 reserve_age = 3 [default = 3];
  optional WeightParam weight_same_camera = 4;
  optional WeightParam weight_diff_camera = 5;
  optional float border = 9 [default = 30];
  optional float target_thresh = 10 [default = 0.65];
  optional bool correct_type = 11 [default = false];
  optional TargetParam target_param = 12;
  optional float min_init_height_ratio = 13 [default = 17];
  optional float target_combine_iou_threshold = 14 [default = 0.5];
  optional float fusion_target_thresh = 15 [default = 0.45];
  optional float image_displacement = 16 [default = 50];
  optional float abnormal_movement = 17 [default = 0.3];
  optional double same_ts_eps = 18 [default = 0.05];
  optional ReferenceParam reference = 19;
  optional string type_change_cost = 20;
}
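A tracker config would then populate these weights per camera pairing; the values below are illustrative placeholders, not Apollo's shipped defaults:

weight_same_camera {
  appearance: 0.45
  motion: 0.35
  shape: 0.1
  overlap: 0.1
}
weight_diff_camera {
  motion: 0.5
  shape: 0.2
  overlap: 0.3
}

Note that weight_diff_camera carries no appearance term, which matches the scoring code below: when the appearance score is unavailable (sa == 0), only motion, shape, and overlap contribute.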

If this tracker is adopted, the association matrix can be computed by combining the following scores:
  float sa = ScoreAppearance(targets_[i], objects[j]);
  float sm = ScoreMotion(targets_[i], objects[j]);
  float ss = ScoreShape(targets_[i], objects[j]);
  float so = ScoreOverlap(targets_[i], objects[j]);
  if (sa == 0) {
    hypo.score =
        omt_param_.weight_diff_camera().motion() * sm
            + omt_param_.weight_diff_camera().shape() * ss
            + omt_param_.weight_diff_camera().overlap() * so;
  } else {
    hypo.score = (omt_param_.weight_same_camera().appearance() * sa +
        omt_param_.weight_same_camera().motion() * sm +
        omt_param_.weight_same_camera().shape() * ss +
        omt_param_.weight_same_camera().overlap() * so);
  }

5: Lidar Tracking Algorithm

https://blog.csdn.net/lemonxiaoxiao/article/details/108645427

perception/lidar/lib/tracker/multi_lidar_fusion/mlf_track_object_matcher.cc

void MlfTrackObjectMatcher::Match(
    const MlfTrackObjectMatcherOptions &options,
    const std::vector<TrackedObjectPtr> &objects,
    const std::vector<MlfTrackDataPtr> &tracks,
    std::vector<std::pair<size_t, size_t>> *assignments,
    std::vector<size_t> *unassigned_tracks,
    std::vector<size_t> *unassigned_objects) {
  assignments->clear();
  unassigned_objects->clear();
  unassigned_tracks->clear();
  if (objects.empty() || tracks.empty()) {
    unassigned_objects->resize(objects.size());
    unassigned_tracks->resize(tracks.size());
    std::iota(unassigned_objects->begin(), unassigned_objects->end(), 0);
    std::iota(unassigned_tracks->begin(), unassigned_tracks->end(), 0);
    return;
  }

  BipartiteGraphMatcherOptions matcher_options;
  matcher_options.cost_thresh = max_match_distance_;
  matcher_options.bound_value = bound_value_;

  BaseBipartiteGraphMatcher *matcher =
      objects[0]->is_background ? background_matcher_ : foreground_matcher_;

  common::SecureMat<float> *association_mat = matcher->cost_matrix();

  association_mat->Resize(tracks.size(), objects.size());
  ComputeAssociateMatrix(tracks, objects, association_mat);
  matcher->Match(matcher_options, assignments, unassigned_tracks,
                 unassigned_objects);
  for (size_t i = 0; i < assignments->size(); ++i) {
    objects[assignments->at(i).second]->association_score =
        (*association_mat)(assignments->at(i).first,
                           assignments->at(i).second) /
        max_match_distance_;
  }
}
float MlfTrackObjectDistance::ComputeDistance(
    const TrackedObjectConstPtr& object,
    const MlfTrackDataConstPtr& track) const {
  bool is_background = object->is_background;
  const TrackedObjectConstPtr latest_object = track->GetLatestObject().second;
  std::string key = latest_object->sensor_info.name + object->sensor_info.name;
  const std::vector<float>* weights = nullptr;
  if (is_background) {
    auto iter = background_weight_table_.find(key);
    if (iter == background_weight_table_.end()) {
      weights = &kBackgroundDefaultWeight;
    } else {
      weights = &iter->second;
    }
  } else {
    auto iter = foreground_weight_table_.find(key);
    if (iter == foreground_weight_table_.end()) {
      weights = &kForegroundDefaultWeight;
    } else {
      weights = &iter->second;
    }
  }
  if (weights == nullptr || weights->size() < 7) {
    AERROR << "Invalid weights";
    return 1e+10f;
  }
  float distance = 0.f;
  float delta = 1e-10f;

  double current_time = object->object_ptr->latest_tracked_time;
  track->PredictState(current_time);

  double time_diff =
      track->age_ ? current_time - track->latest_visible_time_ : 0;
  if (weights->at(0) > delta) {
    distance +=
        weights->at(0) * LocationDistance(latest_object, track->predict_.state,
                                          object, time_diff);
  }
  if (weights->at(1) > delta) {
    distance +=
        weights->at(1) * DirectionDistance(latest_object, track->predict_.state,
                                           object, time_diff);
  }
  if (weights->at(2) > delta) {
    distance +=
        weights->at(2) * BboxSizeDistance(latest_object, track->predict_.state,
                                          object, time_diff);
  }
  if (weights->at(3) > delta) {
    distance +=
        weights->at(3) * PointNumDistance(latest_object, track->predict_.state,
                                          object, time_diff);
  }
  if (weights->at(4) > delta) {
    distance +=
        weights->at(4) * HistogramDistance(latest_object, track->predict_.state,
                                           object, time_diff);
  }
  if (weights->at(5) > delta) {
    distance += weights->at(5) * CentroidShiftDistance(latest_object,
                                                       track->predict_.state,
                                                       object, time_diff);
  }
  if (weights->at(6) > delta) {
    distance += weights->at(6) *
                BboxIouDistance(latest_object, track->predict_.state, object,
                                time_diff, background_object_match_threshold_);
  }
  // for foreground, calculate semantic map based distance
//  if (!is_background) {
//    distance += weights->at(7) * SemanticMapDistance(*track, object);
//  }

  return distance;
}
The individual distance terms used above are implemented in distance_collection.cc:

/******************************************************************************
 * Copyright 2018 The Apollo Authors. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *****************************************************************************/
#include <algorithm>
#include <vector>

#include "cyber/common/log.h"
#include "modules/common/math/line_segment2d.h"
#include "modules/common/math/vec2d.h"
#include "modules/perception/common/geometry/basic.h"
#include "modules/perception/common/geometry/common.h"
#include "modules/perception/lidar/lib/tracker/association/distance_collection.h"

float LocationDistance(const TrackedObjectConstPtr& last_object,
                       const Eigen::VectorXf& track_predict,
                       const TrackedObjectConstPtr& new_object,
                       const double time_diff) {
  // Compute location distance for last object and new object
  // range from 0 to positive infinity

  Eigen::Vector3f measured_anchor_point =
      (new_object->anchor_point).cast<float>();
  Eigen::Vector3f predicted_anchor_point = track_predict.head(3);
  Eigen::Vector3f measure_predict_diff =
      measured_anchor_point - predicted_anchor_point;

  float location_dist = static_cast<float>(sqrt(
      (measure_predict_diff.head(2).cwiseProduct(measure_predict_diff.head(2)))
          .sum()));

  /* NEED TO NOTICE: All the states would be collected mainly based on
   * states of tracked objects. Thus, update tracked object when you
   * update the state of track !!!!! */
  Eigen::Vector2d ref_dir = last_object->output_velocity.head(2);
  double speed = ref_dir.norm();
  ref_dir /= speed;

  /* Let location distance generated from a normal distribution with
   * symmetric variance. Modify its variance when speed greater than
   * a threshold. Penalize variance other than motion direction. */
  if (speed > 2) {
    Eigen::Vector2d ref_o_dir = Eigen::Vector2d(ref_dir[1], -ref_dir[0]);
    double dx = ref_dir(0) * measure_predict_diff(0) +
                ref_dir(1) * measure_predict_diff(1);
    double dy = ref_o_dir(0) * measure_predict_diff(0) +
                ref_o_dir(1) * measure_predict_diff(1);
    location_dist = static_cast<float>(sqrt(dx * dx * 0.5 + dy * dy * 2));
  }

  return location_dist;
}

float DirectionDistance(const TrackedObjectConstPtr& last_object,
                        const Eigen::VectorXf& track_predict,
                        const TrackedObjectConstPtr& new_object,
                        const double time_diff) {
  // Compute direction distance for last object and new object
  // range from 0 to 2

  Eigen::Vector3f old_anchor_point =
      (last_object->belief_anchor_point).cast<float>();
  Eigen::Vector3f new_anchor_point = (new_object->anchor_point).cast<float>();
  Eigen::Vector3f anchor_point_shift_dir = new_anchor_point - old_anchor_point;
  anchor_point_shift_dir[2] = 0.0;

  Eigen::Vector3f track_motion_dir = track_predict.head(6).tail(3);
  track_motion_dir[2] = 0.0;

  double cos_theta = 0.994;  // average cos
  if (!track_motion_dir.head(2).isZero() &&
      !anchor_point_shift_dir.head(2).isZero()) {
    // cos_theta = vector_cos_theta_2d_xy(track_motion_dir,
    //                                   anchor_point_shift_dir);
    cos_theta = common::CalculateCosTheta2DXY<float>(track_motion_dir,
                                                     anchor_point_shift_dir);
  }
  float direction_dist = static_cast<float>(-cos_theta) + 1.0f;

  return direction_dist;
}

float BboxSizeDistance(const TrackedObjectConstPtr& last_object,
                       const Eigen::VectorXf& track_predict,
                       const TrackedObjectConstPtr& new_object,
                       const double time_diff) {
  // Compute bbox size distance for last object and new object
  // range from 0 to 1

  Eigen::Vector3f old_bbox_dir = last_object->output_direction.cast<float>();
  Eigen::Vector3f new_bbox_dir = new_object->direction.cast<float>();
  Eigen::Vector3f old_bbox_size = last_object->output_size.cast<float>();
  Eigen::Vector3f new_bbox_size = new_object->size.cast<float>();

  float size_dist = 0.0f;
  double dot_val_00 = fabs(old_bbox_dir(0) * new_bbox_dir(0) +
                           old_bbox_dir(1) * new_bbox_dir(1));
  double dot_val_01 = fabs(old_bbox_dir(0) * new_bbox_dir(1) -
                           old_bbox_dir(1) * new_bbox_dir(0));
  float temp_val_0 = 0.0f;
  float temp_val_1 = 0.0f;
  if (dot_val_00 > dot_val_01) {
    temp_val_0 = static_cast<float>(fabs(old_bbox_size(0) - new_bbox_size(0))) /
                 std::max(old_bbox_size(0), new_bbox_size(0));
    temp_val_1 = static_cast<float>(fabs(old_bbox_size(1) - new_bbox_size(1))) /
                 std::max(old_bbox_size(1), new_bbox_size(1));
    size_dist = std::min(temp_val_0, temp_val_1);
  } else {
    temp_val_0 = static_cast<float>(fabs(old_bbox_size(0) - new_bbox_size(1))) /
                 std::max(old_bbox_size(0), new_bbox_size(1));
    temp_val_1 = static_cast<float>(fabs(old_bbox_size(1) - new_bbox_size(0))) /
                 std::max(old_bbox_size(1), new_bbox_size(0));
    size_dist = std::min(temp_val_0, temp_val_1);
  }

  return size_dist;
}

float PointNumDistance(const TrackedObjectConstPtr& last_object,
                       const Eigen::VectorXf& track_predict,
                       const TrackedObjectConstPtr& new_object,
                       const double time_diff) {
  // Compute point num distance for last object and new object
  // range from 0 to 1

  int old_point_number = static_cast<int>(
      (last_object->object_ptr->lidar_supplement).cloud_world.size());
  int new_point_number = static_cast<int>(
      (new_object->object_ptr->lidar_supplement).cloud_world.size());

  float point_num_dist =
      static_cast<float>(fabs(old_point_number - new_point_number)) * 1.0f /
      static_cast<float>(std::max(old_point_number, new_point_number));

  return point_num_dist;
}

float HistogramDistance(const TrackedObjectConstPtr& last_object,
                        const Eigen::VectorXf& track_predict,
                        const TrackedObjectConstPtr& new_object,
                        const double time_diff) {
  // Compute histogram distance for last object and new object
  // range from 0 to 3

  const std::vector<float>& old_object_shape_features =
      last_object->shape_features;
  const std::vector<float>& new_object_shape_features =
      new_object->shape_features;

  if (old_object_shape_features.size() != new_object_shape_features.size()) {
    AINFO << "sizes of compared features not matched. TrackObjectDistance";
    return 100;
  }

  float histogram_dist = 0.0f;
  for (size_t i = 0; i < old_object_shape_features.size(); ++i) {
    histogram_dist +=
        std::fabs(old_object_shape_features[i] - new_object_shape_features[i]);
  }

  return histogram_dist;
}

float CentroidShiftDistance(const TrackedObjectConstPtr& last_object,
                            const Eigen::VectorXf& track_predict,
                            const TrackedObjectConstPtr& new_object,
                            const double time_diff) {
  float dist = static_cast<float>(
      (last_object->barycenter.head(2) - new_object->barycenter.head(2))
          .norm());
  return dist;
}

float BboxIouDistance(const TrackedObjectConstPtr& last_object,
                      const Eigen::VectorXf& track_predict,
                      const TrackedObjectConstPtr& new_object,
                      const double time_diff, double match_threshold) {
  // Step 1: unify bbox direction, change the one with less pts,
  // for efficiency.
  Eigen::Vector3f old_dir = last_object->output_direction.cast<float>();
  Eigen::Vector3f old_size = last_object->output_size.cast<float>();
  Eigen::Vector3d old_center = last_object->output_center;
  Eigen::Vector3f new_dir = new_object->direction.cast<float>();
  Eigen::Vector3f new_size = new_object->size.cast<float>();
  Eigen::Vector3d new_center = new_object->center;
  old_dir.normalize();
  new_dir.normalize();
  // handle randomness
  old_size(0) = old_size(0) > 0.3f ? old_size(0) : 0.3f;
  old_size(1) = old_size(1) > 0.3f ? old_size(1) : 0.3f;
  new_size(0) = new_size(0) > 0.3f ? new_size(0) : 0.3f;
  new_size(1) = new_size(1) > 0.3f ? new_size(1) : 0.3f;
  int last_object_num_pts = static_cast<int>(
      (last_object->object_ptr->lidar_supplement).cloud_world.size());
  int cur_obj_num_pts = static_cast<int>(
      (new_object->object_ptr->lidar_supplement).cloud_world.size());
  bool change_cur_obj_bbox = last_object_num_pts > cur_obj_num_pts;
  float minimum_edge_length = 0.01f;
  base::PointDCloud& cloud =
      (new_object->object_ptr->lidar_supplement).cloud_world;
  if (change_cur_obj_bbox) {
    new_dir = old_dir;
    common::CalculateBBoxSizeCenter2DXY(cloud, new_dir, &new_size, &new_center,
                                        minimum_edge_length);
  } else {
    old_dir = new_dir;
    common::CalculateBBoxSizeCenter2DXY(cloud, old_dir, &old_size, &old_center,
                                        minimum_edge_length);
  }
  // Step 2: compute 2d iou bbox to bbox
  Eigen::Matrix2d trans_mat;
  trans_mat(0, 0) = (old_dir.cast<double>())(0);
  trans_mat(0, 1) = (old_dir.cast<double>())(1);
  trans_mat(1, 0) = -(old_dir.cast<double>())(1);
  trans_mat(1, 1) = (old_dir.cast<double>())(0);
  Eigen::Vector2d old_center_transformed_2d =
      static_cast<Eigen::Matrix<double, 2, 1, 0, 2, 1>>(trans_mat *
                                                        old_center.head(2));
  Eigen::Vector2d new_center_transformed_2d =
      static_cast<Eigen::Matrix<double, 2, 1, 0, 2, 1>>(trans_mat *
                                                        new_center.head(2));
  old_center(0) = old_center_transformed_2d(0);
  old_center(1) = old_center_transformed_2d(1);
  new_center(0) = new_center_transformed_2d(0);
  new_center(1) = new_center_transformed_2d(1);
  Eigen::Vector3d old_size_tmp = old_size.cast<double>();
  Eigen::Vector3d new_size_tmp = new_size.cast<double>();
  double iou = common::CalculateIou2DXY<double>(old_center, old_size_tmp,
                                                new_center, new_size_tmp);
  // Step 3: compute dist
  double dist = (1 - iou) * match_threshold;
  return static_cast<float>(dist);
}




6: Radar Tracking Algorithm

bool HMMatcher::Match(const std::vector<RadarTrackPtr> &radar_tracks,
                      const base::Frame &radar_frame,
                      const TrackObjectMatcherOptions &options,
                      std::vector<TrackObjectPair> *assignments,
                      std::vector<size_t> *unassigned_tracks,
                      std::vector<size_t> *unassigned_objects) {
  IDMatch(radar_tracks, radar_frame, assignments, unassigned_tracks,
          unassigned_objects);
  TrackObjectPropertyMatch(radar_tracks, radar_frame, assignments,
                           unassigned_tracks, unassigned_objects);
  return true;
}

bool HMMatcher::RefinedTrack(const base::ObjectPtr &track_object,
                             double track_timestamp,
                             const base::ObjectPtr &radar_object,
                             double radar_timestamp) {
  double dist = 0.5 * DistanceBetweenObs(track_object, track_timestamp,
                                         radar_object, radar_timestamp) +
                0.5 * DistanceBetweenObs(radar_object, radar_timestamp,
                                         track_object, track_timestamp);

  return dist < BaseMatcher::GetMaxMatchDistance();
}

void HMMatcher::TrackObjectPropertyMatch(
    const std::vector<RadarTrackPtr> &radar_tracks,
    const base::Frame &radar_frame, std::vector<TrackObjectPair> *assignments,
    std::vector<size_t> *unassigned_tracks,
    std::vector<size_t> *unassigned_objects) {
  if (unassigned_tracks->empty() || unassigned_objects->empty()) {
    return;
  }
  std::vector<std::vector<double>> association_mat(unassigned_tracks->size());
  for (size_t i = 0; i < association_mat.size(); ++i) {
    association_mat[i].resize(unassigned_objects->size(), 0);
  }
  ComputeAssociationMat(radar_tracks, radar_frame, *unassigned_tracks,
                        *unassigned_objects, &association_mat);

  // from perception-common
  common::SecureMat<double> *global_costs =
      hungarian_matcher_.mutable_global_costs();
  global_costs->Resize(unassigned_tracks->size(), unassigned_objects->size());
  for (size_t i = 0; i < unassigned_tracks->size(); ++i) {
    for (size_t j = 0; j < unassigned_objects->size(); ++j) {
      (*global_costs)(i, j) = association_mat[i][j];
    }
  }
  std::vector<TrackObjectPair> property_assignments;
  std::vector<size_t> property_unassigned_tracks;
  std::vector<size_t> property_unassigned_objects;
  hungarian_matcher_.Match(
      BaseMatcher::GetMaxMatchDistance(), BaseMatcher::GetBoundMatchDistance(),
      common::GatedHungarianMatcher<double>::OptimizeFlag::OPTMIN,
      &property_assignments, &property_unassigned_tracks,
      &property_unassigned_objects);

  for (size_t i = 0; i < property_assignments.size(); ++i) {
    size_t gt_idx = unassigned_tracks->at(property_assignments[i].first);
    size_t go_idx = unassigned_objects->at(property_assignments[i].second);
    assignments->push_back(std::pair<size_t, size_t>(gt_idx, go_idx));
  }
  std::vector<size_t> temp_unassigned_tracks;
  std::vector<size_t> temp_unassigned_objects;
  for (size_t i = 0; i < property_unassigned_tracks.size(); ++i) {
    size_t gt_idx = unassigned_tracks->at(property_unassigned_tracks[i]);
    temp_unassigned_tracks.push_back(gt_idx);
  }
  for (size_t i = 0; i < property_unassigned_objects.size(); ++i) {
    size_t go_idx = unassigned_objects->at(property_unassigned_objects[i]);
    temp_unassigned_objects.push_back(go_idx);
  }
  *unassigned_tracks = temp_unassigned_tracks;
  *unassigned_objects = temp_unassigned_objects;
}
void HMMatcher::ComputeAssociationMat(
    const std::vector<RadarTrackPtr> &radar_tracks,
    const base::Frame &radar_frame,
    const std::vector<size_t> &unassigned_tracks,
    const std::vector<size_t> &unassigned_objects,
    std::vector<std::vector<double>> *association_mat) {
  double frame_timestamp = radar_frame.timestamp;
  for (size_t i = 0; i < unassigned_tracks.size(); ++i) {
    for (size_t j = 0; j < unassigned_objects.size(); ++j) {
      const base::ObjectPtr &track_object =
          radar_tracks[unassigned_tracks[i]]->GetObs();
      const base::ObjectPtr &frame_object =
          radar_frame.objects[unassigned_objects[j]];
      double track_timestamp =
          radar_tracks[unassigned_tracks[i]]->GetTimestamp();
      double distance_forward = DistanceBetweenObs(
          track_object, track_timestamp, frame_object, frame_timestamp);
      double distance_backward = DistanceBetweenObs(
          frame_object, frame_timestamp, track_object, track_timestamp);
      association_mat->at(i).at(j) =
          0.5 * distance_forward + 0.5 * distance_backward;
    }
  }
}
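// Note (added): DistanceBetweenObs propagates obs1's center forward by its
// velocity over the time gap and measures the 2D distance to obs2's center;
// ComputeAssociationMat and RefinedTrack average the forward and backward
// versions so the metric is symmetric in time.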
double HMMatcher::DistanceBetweenObs(const base::ObjectPtr &obs1,
                                     double timestamp1,
                                     const base::ObjectPtr &obs2,
                                     double timestamp2) {
  double time_diff = timestamp2 - timestamp1;
  return (obs2->center - obs1->center -
          obs1->velocity.cast<double>() * time_diff)
      .head(2)
      .norm();
}


7: Camera and Radar Fusion Algorithm

[Figures: camera-radar fusion pipeline]
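A minimal sketch of the core association step for this section (my own illustration, not Apollo's API; it assumes radar-to-camera calibration (K, R, t) and a hypothetical Detection type): project each radar target into the image, then attach it to the detection box that contains the projected point, preferring the nearest box center.

#include <Eigen/Dense>
#include <cmath>
#include <limits>
#include <vector>

struct Detection { double x1, y1, x2, y2; };  // image-space bounding box

// Returns the index of the matched camera detection, or -1 if none.
int AssociateRadarTarget(const Eigen::Vector3d& radar_xyz,
                         const Eigen::Matrix3d& K,
                         const Eigen::Matrix3d& R,
                         const Eigen::Vector3d& t,
                         const std::vector<Detection>& boxes) {
  const Eigen::Vector3d p_cam = R * radar_xyz + t;
  if (p_cam.z() <= 0.0) return -1;  // target is behind the camera
  const Eigen::Vector3d uv = K * (p_cam / p_cam.z());
  int best = -1;
  double best_dist = std::numeric_limits<double>::max();
  for (size_t i = 0; i < boxes.size(); ++i) {
    const Detection& b = boxes[i];
    if (uv.x() < b.x1 || uv.x() > b.x2 || uv.y() < b.y1 || uv.y() > b.y2)
      continue;  // projected point not inside this box
    const double cx = 0.5 * (b.x1 + b.x2), cy = 0.5 * (b.y1 + b.y2);
    const double d = std::hypot(uv.x() - cx, uv.y() - cy);
    if (d < best_dist) { best_dist = d; best = static_cast<int>(i); }
  }
  return best;
}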

8: Lidar and Camera Fusion Algorithm

#include <cstdio>
#include <iostream>
#include <queue>
#include <set>
#include <vector>
using namespace std;

static const int MAX = 4;
static const double maxC = 10001.f;

vector< std::vector<double> > array_to_matrix(double* m, int rows, int cols);
void hungarian_print_Costmatrix(const vector<vector<double> >& Cost);
void hungarian_print_assignment(vector<int> &assignment);


double c[MAX][MAX];
double Fx[MAX];   // minimum element of each row of the cost matrix
double Fy[MAX];   // minimum element of each column of the cost matrix
int matchX[MAX];  // initialized to -1 | matchX[i]: element i of X is matched to Y[matchX[i]]
int matchY[MAX];  // initialized to -1 | matchY[i]: element i of Y is matched to X[matchY[i]]
int Trace[MAX];   // initialized to -1
int m;            // tracks.size();
int n;            // detections.size();
int k;            // k = max(m, n)
int start;
int finish;


double GetC(int i, int j);
void FindAugmentingPath();
void SubX_AddY();
void Enlarge();
void hungarian(vector<vector<double>>& DistMatrix, vector<int>& Assignment);

int main()
{
    //double r[3 * 3] =
    //{
    //	100,  300,   300,
    //	300,  100,   300,
    //	300,  300,   100
    //};
    double r[4 * 4] =
    {
        100,	100,	300,	300,

        300,	100,	100,	300,

        100,	100,	300,	300,

        300,	300,	100,	300
    };
    vector< vector<double> > Cost = array_to_matrix(r, 4, 4);

    hungarian_print_Costmatrix(Cost);

    vector<int> assignment;
    //Hungarian APS;
    //APS.Solve(Cost, assignment);
    hungarian(Cost, assignment);

    hungarian_print_assignment(assignment);


    return 0;
}


void hungarian(vector<vector<double>>& DistMatrix, vector<int>& Assignment)
{

    int i, j;
    m = DistMatrix.size(); 
    n = DistMatrix[0].size(); 

    k = m > n ? m : n;//k=max(m,n)

                      // build the padded k x k cost matrix c
    for (i = 0; i < k; i++)
    {
        for (j = 0; j < k; j++)
        {
            if (i >= m || j >= n)
            {
                c[i][j] = maxC;  // maxC = 10001.f | entries outside the original cost matrix get a very large cost
                continue;
            }
            if (DistMatrix[i][j] == 0)
                c[i][j] = maxC;  // a zero Euclidean distance between prediction and measurement also gets a very large cost
            else
                c[i][j] = DistMatrix[i][j];
        }
    }


    for (i = 0; i < MAX; i++)
    {
        matchX[i] = -1;
        matchY[i] = -1;
    }
    // find the minimum element of each row of the cost matrix
    for (i = 0; i < k; i++)
    {
        Fx[i] = maxC;
        for (j = 0; j < k; j++)
        {
            if (c[i][j] < Fx[i])
            {
                Fx[i] = c[i][j];
            }
        }
    }
    // subtract Fx[i] from every element of row i, then find the minimum of each column
    for (j = 0; j < k; j++)
    {
        Fy[j] = maxC;
        for (i = 0; i < k; i++)
        {
            if (c[i][j] - Fx[i] < Fy[j])
            {
                Fy[j] = c[i][j] - Fx[i];
            }
        }
    }



    for (int x = 0; x < k; x++)
    {
    
        start = x;
        finish = -1;
        do
        {
            FindAugmentingPath();  // search for an augmenting path along zero-reduced-cost edges
            if (finish == -1) 
                SubX_AddY();  // find a minimum vertex cover and adjust the vertex potentials
        } while (finish == -1);
        Enlarge();  // record the new X-Y pairing in matchX and matchY
    }


    for (int x = 0; x < m; x++)
    {
        Assignment.push_back(matchX[x]);
    }
    for (int x = 0; x < m; x++)
    {
        for (int y = 0; y < n; y++)
        {
            DistMatrix[x][y] = c[x][y];
            if (c[x][y] == maxC)
                DistMatrix[x][y] = 0.f;
        }
    }
    // optionally compute the total assignment cost
    //float cost = 0.f;
    //for (int x = 0; x < m; x++)
    //{
    //	int y = matchX[x];
    //	if (c[x][y] < maxC)
    //	{
    //		cost += c[x][y];
    //	}
    //}

}


double GetC(int i, int j)
{
    return c[i][j] - Fx[i] - Fy[j];
}

void FindAugmentingPath()
{
    queue<int> q;
    int i, j;

    for (i = 0; i < MAX; i++)
    {
        Trace[i] = -1;
    }
    ///memset(Trace, -1, sizeof(Trace));

    // breadth-first search (BFS) for an augmenting path starting from 'start'
    q.push(start);

    do
    {
        i = q.front();
        q.pop();
        for (j = 0; j < k; j++)
        {
            if (Trace[j] == -1 && GetC(i, j) == 0.0f)
            {
                Trace[j] = i;
                if (matchY[j] == -1)
                {
                    finish = j;
                    return;
                }
                q.push(matchY[j]);
            }
        }
    } while (!q.empty());
}

void SubX_AddY()
{
    int i, j, t;
    double Delta;
    set<int> VisitedX, VisitedY;
    
    VisitedX.insert(start);
    for (j = 0; j < k; j++)
    {
        if (Trace[j] != -1)
        {
            VisitedX.insert(matchY[j]);
            VisitedY.insert(j);
        }
    }
    
    Delta = maxC;
    for (i = 0; i < k; i++)
    {
        if (VisitedX.find(i) != VisitedX.end())
        {
            for (j = 0; j < k; j++)
            {
                if ((VisitedY.find(j) == VisitedY.end()) && (GetC(i, j) < Delta))
                    Delta = GetC(i, j);
            }
        }
    }

    for (t = 0; t < k; t++)
    {
    
        if (VisitedX.find(t) != VisitedX.end())
            Fx[t] = Fx[t] + Delta;
        
        if (VisitedY.find(t) != VisitedY.end())
            Fy[t] = Fy[t] - Delta;
    }
}

void Enlarge()
{
    int x, next;
    do
    {
        x = Trace[finish];
        next = matchX[x];
        matchX[x] = finish;
        matchY[finish] = x;
        finish = next;
    } while (finish != -1);
}


vector< std::vector<double> > array_to_matrix(double* m, int rows, int cols)
{
    int i, j;
    std::vector< std::vector<double> > r;
    r.resize(rows, std::vector<double>(cols, 0));

    for (i = 0; i < rows; i++)
    {
        for (j = 0; j < cols; j++)
            r[i][j] = m[i*cols + j];
    }
    return r;
}

void hungarian_print_Costmatrix(const vector<vector<double> >& Cost)
{

    int i, j;
    cout << endl;
    for (i = 0; i < Cost.size(); i++)
    {
        cout << "[";
        for (j = 0; j < Cost[0].size(); j++)
        {
            fprintf(stderr, "%-5.0f ", Cost[i][j]);
        }
        cout << "]" << endl;
    }
    cout << endl;

}

void hungarian_print_assignment(vector<int> &assignment)
{

    for (int i = 0; i < assignment.size(); i++)
    {
        cout << assignment[i] << "  ";
    }  // for the commented-out 3x3 example this prints: 0  1  2
    cout << endl;

}


9: Radar-Vision Fusion Application: Cruise Control + Lane Keeping

[Figures: cruise-control and lane-keeping demonstration]
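A simplified sketch of the longitudinal (cruise) side, assuming the fused pipeline supplies lead-vehicle range and speed; the gap policy and gains are illustrative assumptions, not from the original post:

#include <algorithm>

// Constant-time-gap ACC: returns an acceleration command in m/s^2.
double AccCommand(double range_m,      // fused distance to the lead vehicle (m)
                  double lead_speed,   // fused lead-vehicle speed (m/s)
                  double ego_speed,    // ego speed (m/s)
                  double set_speed) {  // driver-set cruise speed (m/s)
  const double d0 = 5.0, time_gap = 1.8;        // desired gap = d0 + tau * v_ego
  const double kp = 0.3, kv = 0.5, k_cruise = 0.4;
  const double desired_gap = d0 + time_gap * ego_speed;
  // Follow mode: regulate the gap and the relative speed.
  const double a_follow =
      kp * (range_m - desired_gap) + kv * (lead_speed - ego_speed);
  // Cruise mode: track the set speed when the road ahead is free.
  const double a_cruise = k_cruise * (set_speed - ego_speed);
  // Take the more conservative command and saturate to comfort limits.
  return std::max(-3.5, std::min(std::min(a_follow, a_cruise), 2.0));
}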
