While reading the VINS-Mono code, I found it very useful to organize and summarize the different data structures it uses, especially the formats and exact meanings of the various sensor_msgs types. I will keep filling this in over time!
from file:std_msgs/Header.msg
Header information that appears in essentially every sensor message (Image, PointCloud, IMU, etc.).
Definition:
uint32 seq # sequence ID: consecutively increasing ID
time stamp # timestamp; stamp.sec: seconds since epoch, stamp.nsec: nanoseconds since stamp.sec
string frame_id # ID of the coordinate frame
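As a quick illustration (not code from VINS-Mono; the frame name below is just an example), a header is typically filled like this in ROS C++:
#include <ros/ros.h>
#include <std_msgs/Header.h>
#include <string>

// Minimal sketch: build a Header by hand.
std_msgs::Header makeHeader(const std::string &frame_id)
{
    std_msgs::Header header;
    header.stamp = ros::Time::now(); // split internally into stamp.sec / stamp.nsec
    header.frame_id = frame_id;      // e.g. "world"
    // header.seq is normally filled in by the publisher when the message is published.
    return header;
}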
from file:sensor_msgs/Image.msg
The input of the callback img_callback() in feature_tracker_node.cpp; it represents a single image.
Definition:
std_msgs/Header header # header information
uint32 height # image height, number of rows
uint32 width # image width, number of columns
string encoding # Encoding of pixels -- channel meaning, ordering, size
# taken from the list of strings in include/sensor_msgs/image_encodings.h
uint8 is_bigendian # is the pixel data big endian (most significant byte stored at the lowest address) or little endian
uint32 step # Full row length in bytes
uint8[] data # actual matrix data, size is (step * rows)
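For reference, VINS-Mono converts this message into a cv::Mat with cv_bridge inside img_callback(); the sketch below shows the general pattern (error handling and the 8UC1 special case in the real callback are omitted):
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>

// Sketch: convert a sensor_msgs::Image into the grayscale cv::Mat the feature tracker works on.
cv::Mat toGray(const sensor_msgs::ImageConstPtr &img_msg)
{
    cv_bridge::CvImageConstPtr ptr =
        cv_bridge::toCvCopy(img_msg, sensor_msgs::image_encodings::MONO8);
    return ptr->image; // height rows x width cols, one byte per pixel, row stride = step
}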
from file:sensor_msgs/PointCloud.msg
Created and filled as feature_points inside img_callback() in feature_tracker_node.cpp and published on the topic "/feature_tracker/feature" (the publisher pub_img is advertised in main()); it carries the feature-point information of one image frame.
sensor_msgs::PointCloudPtr feature_points(new sensor_msgs::PointCloud);
pub_img.publish(feature_points);
pub_img = n.advertise<sensor_msgs::PointCloud>("feature", 1000);
Definition:
std_msgs/Header header # header information
#feature_points->header = img_msg->header;
#feature_points->header.frame_id = "world";
geometry_msgs/Point32[] points # 3D points (x, y, z)
sensor_msgs/ChannelFloat32[] channels # [feature point ID, pixel coordinates u, v, velocity vx, vy]
#feature_points->channels.push_back(id_of_point);
#feature_points->channels.push_back(u_of_point);
#feature_points->channels.push_back(v_of_point);
#feature_points->channels.push_back(velocity_x_of_point);
#feature_points->channels.push_back(velocity_y_of_point);
# e.g. img_msg->channels[0].values[i] is the ID of the i-th feature point
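Putting the definition together, the message is assembled roughly as follows. This is a condensed, single-camera sketch with hypothetical input arrays, not the exact code of img_callback() (which additionally encodes the ID as id * NUM_OF_CAM + camera_id):
#include <vector>
#include <opencv2/core/core.hpp>
#include <std_msgs/Header.h>
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/ChannelFloat32.h>
#include <geometry_msgs/Point32.h>

// Condensed sketch: pack per-feature data into one PointCloud message.
sensor_msgs::PointCloudPtr makeFeaturePoints(
    const std_msgs::Header &header,
    const std::vector<cv::Point2f> &un_pts,    // undistorted, normalized coordinates
    const std::vector<int> &ids,               // feature IDs
    const std::vector<cv::Point2f> &uv,        // raw pixel coordinates
    const std::vector<cv::Point2f> &velocity)  // feature velocity on the normalized plane
{
    sensor_msgs::PointCloudPtr feature_points(new sensor_msgs::PointCloud);
    feature_points->header = header;
    feature_points->header.frame_id = "world";

    sensor_msgs::ChannelFloat32 id_of_point, u_of_point, v_of_point,
                                velocity_x_of_point, velocity_y_of_point;
    for (size_t i = 0; i < ids.size(); i++)
    {
        geometry_msgs::Point32 p;
        p.x = un_pts[i].x;   // normalized camera coordinates, hence z == 1
        p.y = un_pts[i].y;
        p.z = 1;
        feature_points->points.push_back(p);

        id_of_point.values.push_back(ids[i]);
        u_of_point.values.push_back(uv[i].x);
        v_of_point.values.push_back(uv[i].y);
        velocity_x_of_point.values.push_back(velocity[i].x);
        velocity_y_of_point.values.push_back(velocity[i].y);
    }
    feature_points->channels.push_back(id_of_point);
    feature_points->channels.push_back(u_of_point);
    feature_points->channels.push_back(v_of_point);
    feature_points->channels.push_back(velocity_x_of_point);
    feature_points->channels.push_back(velocity_y_of_point);
    return feature_points;
}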
sensor_msgs::PointCloudConstPtr &points_msg uses the same message type.
Its data format is identical to feature_points above, but the channels carry different content.
It is created and filled as msg_match_points in findConnection() in keyframe.cpp
and published on the topic "/pose_graph/match_points" (the publisher is advertised in main() of pose_graph_node.cpp).
It mainly contains the matched points between the two frames involved in relocalization and their matching relation (relative transformation).
sensor_msgs::PointCloud msg_match_points;
pub_match_points.publish(msg_match_points);
pub_match_points = n.advertise<sensor_msgs::PointCloud>("match_points", 100);
This topic is subscribed in main() of estimator_node.cpp; the callback is relocalization_callback().
ros::Subscriber sub_relo_points = n.subscribe("/pose_graph/match_points", 2000, relocalization_callback);
void relocalization_callback(const sensor_msgs::PointCloudConstPtr &points_msg)
Definition:
std_msgs/Header header # header information
#msg_match_points.header.stamp = ros::Time(time_stamp);
geometry_msgs/Point32[] points # 3D points (x, y, z)
sensor_msgs/ChannelFloat32[] channels # [translation T (x, y, z) of the relocalization frame, rotation quaternion (w, x, y, z), and frame index]
#t_q_index.values.push_back(T.x());
#t_q_index.values.push_back(T.y());
#t_q_index.values.push_back(T.z());
#t_q_index.values.push_back(Q.w());
#t_q_index.values.push_back(Q.x());
#t_q_index.values.push_back(Q.y());
#t_q_index.values.push_back(Q.z());
#t_q_index.values.push_back(index);
#msg_match_points.channels.push_back(t_q_index);
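On the receiving side this packing is reversed; a hedged sketch of the decoding (cf. how relo_msg is handled in process() of estimator_node.cpp; unpackReloMsg is a made-up helper name):
#include <vector>
#include <Eigen/Dense>
#include <sensor_msgs/PointCloud.h>
#include <geometry_msgs/Point32.h>

// Sketch: unpack the relocalization message described above.
void unpackReloMsg(const sensor_msgs::PointCloudConstPtr &relo_msg)
{
    // channels[0].values = [T.x, T.y, T.z, Q.w, Q.x, Q.y, Q.z, index]
    const std::vector<float> &c = relo_msg->channels[0].values;
    Eigen::Vector3d relo_t(c[0], c[1], c[2]);
    Eigen::Quaterniond relo_q(c[3], c[4], c[5], c[6]); // Eigen takes (w, x, y, z) here
    int frame_index = static_cast<int>(c[7]);

    // points[] are the matched points between the two relocalization frames.
    for (const geometry_msgs::Point32 &p : relo_msg->points)
    {
        Eigen::Vector3d match_point(p.x, p.y, p.z);
        // ... hand match_point, relo_t, relo_q, frame_index to the estimator
    }
}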
from file:sensor_msgs/Imu.msg
The standard message for IMU data.
Header header # header information
geometry_msgs/Quaternion orientation # orientation quaternion [x, y, z, w]
float64[9] orientation_covariance # Row major about x, y, z axes
geometry_msgs/Vector3 angular_velocity # angular velocity about the [x, y, z] axes
float64[9] angular_velocity_covariance # corresponding covariance matrix, row major, about the x, y, z axes
geometry_msgs/Vector3 linear_acceleration # linear acceleration along [x, y, z]
float64[9] linear_acceleration_covariance # corresponding covariance matrix, row major, x, y, z
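VINS-Mono's IMU processing essentially only consumes the timestamp, linear_acceleration and angular_velocity fields; a minimal sketch of pulling them into Eigen types (readImu is a hypothetical helper):
#include <Eigen/Dense>
#include <sensor_msgs/Imu.h>

// Minimal sketch: extract the fields of sensor_msgs::Imu used by the estimator.
void readImu(const sensor_msgs::ImuConstPtr &imu_msg,
             double &t, Eigen::Vector3d &acc, Eigen::Vector3d &gyr)
{
    t = imu_msg->header.stamp.toSec();
    acc = Eigen::Vector3d(imu_msg->linear_acceleration.x,
                          imu_msg->linear_acceleration.y,
                          imu_msg->linear_acceleration.z);
    gyr = Eigen::Vector3d(imu_msg->angular_velocity.x,
                          imu_msg->angular_velocity.y,
                          imu_msg->angular_velocity.z);
}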
std::vector<std::pair<std::vector<sensor_msgs::ImuConstPtr>, sensor_msgs::PointCloudConstPtr>> measurements;
The data structure produced by getMeasurements() in estimator_node.cpp, which performs an initial alignment of IMU and image data so that each image frame is associated with all IMU measurements in the corresponding time interval.
std::vector<sensor_msgs::ImuConstPtr> holds all IMU messages whose timestamps fall between the previous image frame and the current one.
sensor_msgs::PointCloudConstPtr is the feature_points message of one image frame.
The two are combined into a std::pair, and a vector of such pairs is stored.
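A simplified sketch of the pairing logic (the real getMeasurements() additionally handles the camera-IMU time offset td and waits on a condition variable; the buffers below mirror the globals in estimator_node.cpp):
#include <queue>
#include <vector>
#include <utility>
#include <sensor_msgs/Imu.h>
#include <sensor_msgs/PointCloud.h>

std::queue<sensor_msgs::ImuConstPtr> imu_buf;            // filled by the IMU callback
std::queue<sensor_msgs::PointCloudConstPtr> feature_buf; // filled by the feature callback

// Simplified sketch: pair each image frame with all IMU messages that precede it.
std::vector<std::pair<std::vector<sensor_msgs::ImuConstPtr>, sensor_msgs::PointCloudConstPtr>>
getMeasurements()
{
    std::vector<std::pair<std::vector<sensor_msgs::ImuConstPtr>, sensor_msgs::PointCloudConstPtr>> measurements;
    while (!imu_buf.empty() && !feature_buf.empty())
    {
        sensor_msgs::PointCloudConstPtr img_msg = feature_buf.front();
        // wait until IMU data newer than the image has arrived
        if (imu_buf.back()->header.stamp <= img_msg->header.stamp)
            break;
        // throw away an image frame that has no IMU data before it
        if (imu_buf.front()->header.stamp >= img_msg->header.stamp)
        {
            feature_buf.pop();
            continue;
        }
        std::vector<sensor_msgs::ImuConstPtr> IMUs;
        while (imu_buf.front()->header.stamp < img_msg->header.stamp)
        {
            IMUs.emplace_back(imu_buf.front());
            imu_buf.pop();
        }
        feature_buf.pop();
        measurements.emplace_back(IMUs, img_msg);
    }
    return measurements;
}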
map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> image;
Built in process() in estimator_node.cpp and passed to Estimator::processImage().
It maps each feature_id to the observations (camera_id, [x, y, z, u, v, vx, vy]) of that feature point.
for (unsigned int i = 0; i < img_msg->points.size(); i++)
{
    int v = img_msg->channels[0].values[i] + 0.5;        // channel 0 encodes id * NUM_OF_CAM + camera_id
    int feature_id = v / NUM_OF_CAM;
    int camera_id = v % NUM_OF_CAM;
    double x = img_msg->points[i].x;                      // normalized camera coordinates
    double y = img_msg->points[i].y;
    double z = img_msg->points[i].z;
    double p_u = img_msg->channels[1].values[i];          // pixel coordinates
    double p_v = img_msg->channels[2].values[i];
    double velocity_x = img_msg->channels[3].values[i];   // feature velocity
    double velocity_y = img_msg->channels[4].values[i];
    ROS_ASSERT(z == 1);                                   // points lie on the normalized plane
    Eigen::Matrix<double, 7, 1> xyz_uv_velocity;
    xyz_uv_velocity << x, y, z, p_u, p_v, velocity_x, velocity_y;
    image[feature_id].emplace_back(camera_id, xyz_uv_velocity);
}
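The filled map is then handed to the estimator, as in estimator_node.cpp:
estimator.processImage(image, img_msg->header);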
map<double, ImageFrame> all_image_frame;
A member of class Estimator, declared in estimator.h.
The key is the timestamp of an image frame; the value is the corresponding ImageFrame object.
An ImageFrame can be constructed from the frame's feature points and timestamp; it additionally stores the pose R, T, the pre-integration object pre_integration, and a flag marking whether the frame is a keyframe.
class ImageFrame
{
    public:
        ImageFrame(){};
        ImageFrame(const map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>>& _points, double _t)
            : t{_t}, is_key_frame{false}
        {
            points = _points;
        };
        map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> points; // feature_id -> (camera_id, [x,y,z,u,v,vx,vy])
        double t;                          // timestamp of this frame
        Matrix3d R;                        // rotation of this frame
        Vector3d T;                        // translation of this frame
        IntegrationBase *pre_integration;  // IMU pre-integration since the previous frame
        bool is_key_frame;                 // whether this frame is a keyframe
};
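For context, a sketch of how all_image_frame gets filled in Estimator::processImage() (tmp_pre_integration is the pre-integration object accumulated since the previous frame):
// Wrap the current frame's features and timestamp, attach the IMU
// pre-integration since the last frame, and key the entry by timestamp.
ImageFrame imageframe(image, header.stamp.toSec());
imageframe.pre_integration = tmp_pre_integration;
all_image_frame.insert(std::make_pair(header.stamp.toSec(), imageframe));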