Autonomous driving: a detailed walkthrough of the nuScenes dataset commands and methods

from nuscenes.nuscenes import NuScenes

nusc is an instance of the NuScenes class; everything below goes through nusc:

nusc = NuScenes(version='v1.0-mini', dataroot='/Users/jiayansong/Desktop/nuscenes/v1.0-mini', verbose=True)

table

There are 13 tables, and each of them is a list of records loaded from the corresponding .json file:

self.table_names = ['category', 'attribute', 'visibility', 'instance', 'sensor', 'calibrated_sensor',
                            'ego_pose', 'log', 'scene', 'sample', 'sample_data', 'sample_annotation', 'map']
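The devkit resolves tokens by indexing each table, so a lookup is just a dict access. A minimal sketch of that mechanism, using toy records rather than real dataset entries:

```python
# Each table is a list of dicts; the devkit builds a token -> record
# index so that nusc.get('scene', token) is a plain dict lookup.
# The records below are illustrative toys, not real dataset entries.
scene_table = [
    {"token": "cc8c0bf57f984915a77078b10eb33198", "name": "scene-0061"},
    {"token": "aaaabbbbccccddddeeeeffff00001111", "name": "scene-0103"},
]
scene_index = {record["token"]: record for record in scene_table}

def get_record(index, token):
    # Mimics nusc.get(table_name, token) for this toy table.
    return index[token]

print(get_record(scene_index, "cc8c0bf57f984915a77078b10eb33198")["name"])  # scene-0061
```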

scene

A scene record looks like this:

{
"token": "cc8c0bf57f984915a77078b10eb33198",
"log_token": "7e25a2c8ea1f41c5b0da1e69ecfa71a2",
"nbr_samples": 39,
"first_sample_token": "ca9a282c9e77460f8360f564131a8af5",
"last_sample_token": "ed5fc18c31904f96a8f0dbb99ff069c0",
"name": "scene-0061",
"description": "Parked truck, construction, intersection, turn left, following a van"
}

sample

{
"token": "ca9a282c9e77460f8360f564131a8af5",
"timestamp": 1532402927647951,
"prev": "",
"next": "39586f9d59004284a7114a68825e8eec",
"scene_token": "cc8c0bf57f984915a77078b10eb33198"
}
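The prev/next fields chain the samples of a scene into a linked list, with the empty string marking both ends. A sketch of walking that chain, using hypothetical tokens as stand-ins (real code would call nusc.get('sample', token) at each step, starting from the scene's first_sample_token):

```python
def walk_samples(samples_by_token, first_sample_token):
    # Follow the 'next' pointers from the scene's first sample until
    # the empty string marks the last sample.
    order = []
    token = first_sample_token
    while token != "":
        order.append(token)
        token = samples_by_token[token]["next"]
    return order

# Toy stand-ins for nusc.get('sample', ...) results.
samples = {
    "s1": {"token": "s1", "prev": "", "next": "s2"},
    "s2": {"token": "s2", "prev": "s1", "next": "s3"},
    "s3": {"token": "s3", "prev": "s2", "next": ""},
}
print(walk_samples(samples, "s1"))  # ['s1', 's2', 's3']
```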

Sample details can be fetched with the get method via the sample's token, nusc.get('sample', token_of_sample), and the result looks like this:

{
'token': 'ca9a282c9e77460f8360f564131a8af5',
'timestamp': 1532402927647951, 
'prev': '', 
'next': '39586f9d59004284a7114a68825e8eec', 
'scene_token': 'cc8c0bf57f984915a77078b10eb33198', 
'data': 
  {'RADAR_FRONT': '37091c75b9704e0daa829ba56dfa0906', 
   'RADAR_FRONT_LEFT': '11946c1461d14016a322916157da3c7d', 
   'RADAR_FRONT_RIGHT': '491209956ee3435a9ec173dad3aaf58b', 
   'RADAR_BACK_LEFT': '312aa38d0e3e4f01b3124c523e6f9776', 
   'RADAR_BACK_RIGHT': '07b30d5eb6104e79be58eadf94382bc1', 
   'LIDAR_TOP': '9d9bf11fb0e144c8b446d54a8a00184f', 
   'CAM_FRONT': 'e3d495d4ac534d54b321f50006683844', 
   'CAM_FRONT_RIGHT': 'aac7867ebf4f446395d29fbd60b63b3b', 
   'CAM_BACK_RIGHT': '79dbb4460a6b40f49f9c150cb118247e', 
   'CAM_BACK': '03bea5763f0f4722933508d5999c5fd8', 
   'CAM_BACK_LEFT': '43893a033f9c46d4a51b5e08a67a1eb7', 
   'CAM_FRONT_LEFT': 'fe5422747a7d4268a4b07fc396707b23'}, 
'anns': 
    ['ef63a697930c4b20a6b9791f423351da', '6b89da9bf1f84fd6a5fbe1c3b236f809', 							'924ee6ac1fed440a9d9e3720aac635a0', '91e3608f55174a319246f361690906ba', 'cd051723ed9c40f692b9266359f547af', '36d52dfedd764b27863375543c965376', '70af124fceeb433ea73a79537e4bea9e', '63b89fe17f3e41ecbe28337e0e35db8e', 'e4a3582721c34f528e3367f0bda9485d', 'fcb2332977ed4203aa4b7e04a538e309', 'a0cac1c12246451684116067ae2611f6', '02248ff567e3497c957c369dc9a1bd5c', '9db977e264964c2887db1e37113cddaa', 'ca9c5dd6cf374aa980fdd81022f016fd', '179b8b54ee74425893387ebc09ee133d', '5b990ac640bf498ca7fd55eaf85d3e12', '16140fbf143d4e26a4a7613cbd3aa0e8', '54939f11a73d4398b14aeef500bf0c23', '83d881a6b3d94ef3a3bc3b585cc514f8', '74986f1604f047b6925d409915265bf7', 'e86330c5538c4858b8d3ffe874556cc5', 'a7bd5bb89e27455bbb3dba89a576b6a1', 'fbd9d8c939b24f0eb6496243a41e8c41', '198023a1fb5343a5b6fad033ab8b7057', 'ffeafb90ecd5429cba23d0be9a5b54ee', 'cc636a58e27e446cbdd030c14f3718fd', '076a7e3ec6244d3b84e7df5ebcbac637', '0603fbaef1234c6c86424b163d2e3141', 'd76bd5dcc62f4c57b9cece1c7bcfabc5', '5acb6c71bcd64aa188804411b28c4c8f', '49b74a5f193c4759b203123b58ca176d', '77519174b48f4853a895f58bb8f98661', 'c5e9455e98bb42c0af7d1990db1df0c9', 'fcc5b4b5c4724179ab24962a39ca6d65', '791d1ca7e228433fa50b01778c32449a', '316d20eb238c43ef9ee195642dd6e3fe', 'cda0a9085607438c9b1ea87f4360dd64', 'e865152aaa194f22b97ad0078c012b21', '7962506dbc24423aa540a5e4c7083dad', '29cca6a580924b72a90b9dd6e7710d3e', 'a6f7d4bb60374f868144c5ba4431bf4c', 'f1ae3f713ba946069fa084a6b8626fbf', 'd7af8ede316546f68d4ab4f3dbf03f88', '91cb8f15ed4444e99470d43515e50c1d', 'bc638d33e89848f58c0b3ccf3900c8bb', '26fb370c13f844de9d1830f6176ebab6', '7e66fdf908d84237943c833e6c1b317a', '67c5dbb3ddcc4aff8ec5140930723c37', 'eaf2532c820740ae905bb7ed78fb1037', '3e2d17fa9aa5484d9cabc1dfca532193', 'de6bd5ffbed24aa59c8891f8d9c32c44', '9d51d699f635478fbbcd82a70396dd62', 'b7cbc6d0e80e4dfda7164871ece6cb71', '563a3f547bd64a2f9969278c5ef447fd', 'df8917888b81424f8c0670939e61d885', 
'bb3ef5ced8854640910132b11b597348', 'a522ce1d7f6545d7955779f25d01783b', '1fafb2468af5481ca9967407af219c32', '05de82bdb8484623906bb9d97ae87542', 'bfedb0d85e164b7697d1e72dd971fb72', 'ca0f85b4f0d44beb9b7ff87b1ab37ff5', 'bca4bbfdef3d4de980842f28be80b3ca', 'a834fb0389a8453c810c3330e3503e16', '6c804cb7d78943b195045082c5c2d7fa', 'adf1594def9e4722b952fea33b307937', '49f76277d07541c5a584aa14c9d28754', '15a3b4d60b514db5a3468e2aef72a90c', '18cc2837f2b9457c80af0761a0b83ccc', '2bfcc693ae9946daba1d9f2724478fd4']}

sensor

A sensor record looks like this; note that every channel has a unique token:

{
"token": "ce89d4f3050b5892b33b3d328c5e82a3",
"channel": "CAM_BACK",
"modality": "camera"
}

sample_data

If you want to obtain the details recorded by a specific sensor in one sample (sample_data), use:

nusc.get('sample_data', one_of_sample_detail['data']['one_of_sensor_channel'])
# sample_data details for this sample from this sensor channel;
# you can use this record to obtain the boxes seen by this sensor
{'token': 'e3d495d4ac534d54b321f50006683844', 
 'sample_token': 'ca9a282c9e77460f8360f564131a8af5', 
 'ego_pose_token': 'e3d495d4ac534d54b321f50006683844', 
 'calibrated_sensor_token': '1d31c729b073425e8e0202c5c6e66ee1', 
 'timestamp': 1532402927612460, 
 'fileformat': 'jpg', 
 'is_key_frame': True, 
 'height': 900, 
 'width': 1600, 
 'filename': 'samples/CAM_FRONT/n015-2018-07-24-11-22-45+0800__CAM_FRONT__1532402927612460.jpg', 
 'prev': '', 
 'next': '68e8e98cf7b0487baa139df808641db7', 
 'sensor_modality': 'camera', 
 'channel': 'CAM_FRONT'
}
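The filename field is relative to dataroot, so joining the two gives the path of the file on disk (this is essentially what the devkit's get_sample_data_path does). dataroot here is the one from the constructor call above:

```python
import os

# dataroot from the NuScenes(...) constructor call; filename from the
# sample_data record above.
dataroot = '/Users/jiayansong/Desktop/nuscenes/v1.0-mini'
sd_filename = 'samples/CAM_FRONT/n015-2018-07-24-11-22-45+0800__CAM_FRONT__1532402927612460.jpg'

# Absolute path to the image for this sample_data record.
image_path = os.path.join(dataroot, sd_filename)
print(image_path)
```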

sample_annotation

It is a dict; you can use nusc.get('sample_annotation', sample_detail['anns'][num_of_the_annotation]) to get the details of one annotation in a sample.

{'token': '924ee6ac1fed440a9d9e3720aac635a0', 
 'sample_token': 'ca9a282c9e77460f8360f564131a8af5', 
 'instance_token': 'bd26c2cdb22d4bb1834e808c89128898', 
 'visibility_token': '3', 
 'attribute_tokens': ['c3246a1e22a14fcb878aa61e69ae3329'], 
 'translation': [353.794, 1132.355, 0.602], 
 'size': [2.011, 4.633, 1.573], 
 'rotation': [0.9797276292877292, 0.0, 0.0, -0.20033415188191459], 
 'prev': '', 
 'next': 'f0cbd9dbafd74e20bcf6dd0357c97f59', 
 'num_lidar_pts': 5, 
 'num_radar_pts': 0, 
 'category_name': 'vehicle.car'
}
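The rotation field is a quaternion in [w, x, y, z] order. For a rotation purely about the z axis (as here, since x = y = 0), the yaw is 2·atan2(z, w); the devkit uses pyquaternion for the general case. Checking against the annotation above:

```python
import math

# Quaternion [w, x, y, z] from the sample_annotation record above.
w, x, y, z = [0.9797276292877292, 0.0, 0.0, -0.20033415188191459]

# Valid shortcut because x == y == 0 (pure rotation about the z axis).
yaw = 2.0 * math.atan2(z, w)
print(round(math.degrees(yaw), 2))  # -23.11
```

This agrees with the Box printout in the box section: a rotation of +23.11 degrees about axis [0, 0, -1] is the same rotation as -23.11 degrees about +z.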

box

nusc.get_box(sample_annotation_token)  # returns a single box for one annotation token
# the return type is nuscenes.utils.data_classes.Box
nusc.get_boxes(sample_data_token)  # returns the boxes for one sensor channel's sample_data
# the return value is a list of Box objects

A Box prints in this format:

label: nan, score: nan, xyz: [353.79, 1132.36, 0.60], wlh: [2.01, 4.63, 1.57], rot axis: [0.00, 0.00, -1.00], ang(degrees): 23.11, ang(rad): 0.40, vel: nan, nan, nan, name: vehicle.car, token: 924ee6ac1fed440a9d9e3720aac635a0

If you call get_boxes, the result is a list of such Box objects, one per annotation.

calibrated_sensor

You can get the camera intrinsics as follows (here fi is a sample_data token):

cs_record = nusc.get('calibrated_sensor', nusc.get('sample_data', fi)['calibrated_sensor_token'])
{'token': 'a183049901c24361a6b0b11b8013137c',
 'sensor_token': 'dc8b396651c05aedbb9cdaae573bb567',
 'translation': [0.943713, 0.0, 1.84023],
 'rotation': [0.7077955119163518, -0.006492242056004365, 0.010646214713995808, -0.7063073142877817], 'camera_intrinsic': []
}
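For camera channels, camera_intrinsic is a 3x3 pinhole matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]; it is empty, as in the record above, when the calibrated sensor is a lidar or radar. A sketch of projecting a point from the camera frame to pixel coordinates; the intrinsic values below are hypothetical, not taken from the dataset:

```python
# Hypothetical pinhole intrinsics: focal lengths (fx, fy) and
# principal point (cx, cy) in pixels.
fx, fy, cx, cy = 1266.4, 1266.4, 816.3, 491.5

def project(x, y, depth):
    # Perspective projection of a camera-frame point (x right, y down,
    # depth forward) to pixel coordinates (u, v).
    return (fx * x / depth + cx, fy * y / depth + cy)

u, v = project(2.0, 1.0, 10.0)
print(round(u, 2), round(v, 2))  # 1069.58 618.14
```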
