Projecting 3D Radar Point Clouds onto Images with the nuScenes Autonomous Driving Dataset

First, initialize the NuScenes class as nusc:

Choose the version and pass the absolute path of your dataset:

from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/Users/jiayansong/Desktop/nuscenes/v1.0-mini', verbose=True)
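If you are not sure which scene to pick, the devkit can print an overview of the split:

nusc.list_scenes()  # prints each scene's name, description and some statistics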

Then choose a scene.

For example:

scene0 = nusc.scene[0] # choose the first scene of this mini-dataset

Get the token of the first sample within this scene.

For example:

first_sample_token = scene0['first_sample_token']  # token of the first sample in this scene

Then you can use the sample token to fetch the full sample record.

For example:

first_sample = nusc.get('sample', first_sample_token)  # first sample's detail
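Samples within a scene form a linked list through their next tokens, so the whole scene can be walked from the first sample. A small sketch (not needed for the projection below):

sample_token = scene0['first_sample_token']
while sample_token:  # the last sample's 'next' field is an empty string
    sample = nusc.get('sample', sample_token)
    # ... work with this sample ...
    sample_token = sample['next']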

Then you can use the sample record to obtain each sensor's sample_data record.

For example:

radar_front_sample_data = nusc.get('sample_data', first_sample['data']['RADAR_FRONT'])  # sample_data record for the front radar
cam_sample_data = nusc.get('sample_data', first_sample['data']['CAM_FRONT'])  # sample_data record for the front camera

Then you should also derive the file path of the relevant sensor's data.

For example:

import os

radar_front_points_path = os.path.join(nusc.dataroot, radar_front_sample_data['filename'])  # radar point cloud filepath
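The same pattern gives the camera image path. Opening it here (with PIL; the variable names are my own) provides the im object used later when clipping and drawing the projected points:

from PIL import Image

cam_front_image_path = os.path.join(nusc.dataroot, cam_sample_data['filename'])  # camera image filepath
im = Image.open(cam_front_image_path)  # reused below for its size and as the drawing backdrop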

Now you can load the point cloud.

Use the devkit's RadarPointCloud class to read it:

from nuscenes.utils.data_classes import RadarPointCloud

pc = RadarPointCloud.from_file(radar_front_points_path)  # load the radar point cloud

pc is now a RadarPointCloud object whose pc.points attribute is an 18 × n NumPy array; its 18 fields describe each radar return:

'''
FIELDS x y z dyn_prop id rcs vx vy vx_comp vy_comp is_quality_valid ambig_state x_rms y_rms invalid_state pdh0 vx_rms vy_rms
SIZE 4 4 4 1 2 4 4 4 4 4 1 1 1 1 1 1 1 1
TYPE F F F I I F F F F F I I I I I I I I
COUNT 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
WIDTH 125
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 125
DATA binary

Below some of the fields are explained in more detail:

x is front, y is left

vx, vy are the velocities in m/s.
vx_comp, vy_comp are the velocities in m/s compensated by the ego motion.
We recommend using the compensated velocities.

invalid_state: state of Cluster validity state.
(Invalid states)
0x01	invalid due to low RCS
0x02	invalid due to near-field artefact
0x03	invalid far range cluster because not confirmed in near range
0x05	reserved
0x06	invalid cluster due to high mirror probability
0x07	Invalid cluster because outside sensor field of view
0x0d	reserved
0x0e	invalid cluster because it is a harmonics
(Valid states)
0x00	valid
0x04	valid cluster with low RCS
0x08	valid cluster with azimuth correction due to elevation
0x09	valid cluster with high child probability
0x0a	valid cluster with high probability of being a 50 deg artefact
0x0b	valid cluster but no local maximum
0x0c	valid cluster with high artefact probability
0x0f	valid cluster with above 95m in near range
0x10	valid cluster with high multi-target probability
0x11	valid cluster with suspicious angle

dynProp: Dynamic property of cluster to indicate if is moving or not.
0: moving
1: stationary
2: oncoming
3: stationary candidate
4: unknown
5: crossing stationary
6: crossing moving
7: stopped

ambig_state: State of Doppler (radial velocity) ambiguity solution.
0: invalid
1: ambiguous
2: staggered ramp
3: unambiguous
4: stationary candidates

pdh0: False alarm probability of cluster (i.e. probability of being an artefact caused by multipath or similar).
0: invalid
1: <25%
2: 50%
3: 75%
4: 90%
5: 99%
6: 99.9%
7: <=100%
'''
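These state fields also drive the devkit's built-in filtering: from_file accepts invalid_states, dynprop_states and ambig_states arguments, and by default keeps only valid, unambiguous returns. A short sketch of how you might use them:

# Keep only moving targets: dyn_prop in {0: moving, 2: oncoming, 6: crossing moving}
pc_moving = RadarPointCloud.from_file(radar_front_points_path, dynprop_states=[0, 2, 6])

# Or keep every return regardless of state, then restore the default filters
RadarPointCloud.disable_filters()
pc_all = RadarPointCloud.from_file(radar_front_points_path)
RadarPointCloud.default_filters()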

Projection Part:

First step, transform the points from the radar sensor frame to the ego vehicle frame.

Use the sensor's calibrated rotation (a quaternion) and translation, for example:

import numpy as np
from pyquaternion import Quaternion

radar_calibrate = nusc.get('calibrated_sensor', radar_front_sample_data['calibrated_sensor_token'])  # the radar's calibration record: translation, rotation and (for cameras) camera_intrinsic
pc.rotate(Quaternion(radar_calibrate['rotation']).rotation_matrix)
pc.translate(np.array(radar_calibrate['translation']))

Visualization:
[Figure 1: the radar point cloud after the sensor-to-ego transform]
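Note that this pipeline skips one subtlety: the radar and the camera are not triggered at exactly the same instant, so the devkit's map_pointcloud_to_image additionally routes through the global frame using the two ego_pose records. A sketch of those extra transforms, which would slot in right here between the two steps:

# ego frame at the radar's timestamp -> global frame
pose_radar = nusc.get('ego_pose', radar_front_sample_data['ego_pose_token'])
pc.rotate(Quaternion(pose_radar['rotation']).rotation_matrix)
pc.translate(np.array(pose_radar['translation']))

# global frame -> ego frame at the camera's timestamp
pose_cam = nusc.get('ego_pose', cam_sample_data['ego_pose_token'])
pc.translate(-np.array(pose_cam['translation']))
pc.rotate(Quaternion(pose_cam['rotation']).rotation_matrix.T)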

Second step, transform from the ego frame to the camera frame:

use the camera's calibration record; note the inverse transform order (translate first, then rotate with the transposed rotation matrix):

camera_calibrate = nusc.get('calibrated_sensor', cam_sample_data['calibrated_sensor_token'])
pc.translate(-np.array(camera_calibrate['translation']))
pc.rotate(Quaternion(camera_calibrate['rotation']).rotation_matrix.T)

Third step, project from the camera frame onto the image plane:

use the camera intrinsic matrix and normalize the coordinates, because projecting onto the image plane discards depth:

from nuscenes.utils.geometry_utils import view_points

points = view_points(pc.points[:3, :], np.array(camera_calibrate['camera_intrinsic']), normalize=True)
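For intuition, view_points with normalize=True is essentially a pinhole projection. A hand-rolled equivalent would look roughly like this (a sketch, not the devkit's actual implementation):

K = np.array(camera_calibrate['camera_intrinsic'])  # 3x3 intrinsic matrix

proj = K @ pc.points[:3, :]   # apply the intrinsics to camera-frame coordinates
proj = proj / proj[2:3, :]    # divide by depth: rows 0 and 1 are now pixel coordinates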

Now the projection is done, but we still need to filter out points that will not appear on the image:

make sure the points are at least 1 m in front of the camera, and remove points that fall outside the image bounds (leaving a 1-pixel margin):

depths = pc.points[2, :]  # camera-frame depth of each point (pc.points is unchanged by view_points)
coloring = depths         # we will color each drawn point by its depth

mask = np.ones(depths.shape[0], dtype=bool)
mask = np.logical_and(mask, depths > 1.0)                   # at least 1 m in front of the camera
mask = np.logical_and(mask, points[0, :] > 1)               # inside the left edge
mask = np.logical_and(mask, points[0, :] < im.size[0] - 1)  # inside the right edge
mask = np.logical_and(mask, points[1, :] > 1)               # inside the top edge
mask = np.logical_and(mask, points[1, :] < im.size[1] - 1)  # inside the bottom edge

points = points[:, mask]  # keep only the points that survive the mask

coloring = coloring[mask]

These are the points we can now scatter onto the corresponding camera image.
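A minimal drawing sketch with matplotlib (assumed here; the devkit's render_pointcloud_in_image wraps the whole pipeline end to end):

import matplotlib.pyplot as plt

plt.figure(figsize=(16, 9))
plt.imshow(im)  # camera image as the backdrop
plt.scatter(points[0, :], points[1, :], c=coloring, s=5)  # radar points colored by depth
plt.axis('off')
plt.show()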
[Figure 2: the radar points scattered onto the front camera image]
