3D Object Detection Datasets: the nuScenes Dataset

1 Links

  1. [Official website] nuScenes
  2. [arXiv] nuScenes: A multimodal dataset for autonomous driving
  3. [GitHub] nuScenes devkit
  4. nuScenes devkit tutorial

2 Dataset Overview

2.1 Data Collection

2.1.1 Sensor Configuration

The nuScenes data collection vehicle is a Renault Zoe mini electric car equipped with six surround-view cameras, one roof-mounted lidar and five millimeter-wave radars, giving it full 360° sensing coverage.

| Sensor | Details |
| --- | --- |
| Camera | 6 color cameras, 1600×900 resolution, JPEG-compressed, 12 Hz capture rate; the rear camera has a 110° FOV, all other cameras 70° |
| Lidar | 1 spinning 32-beam lidar, 20 Hz, 360° horizontal FOV, −30° to +10° vertical FOV, 70 m range, 2 cm accuracy, ~1.4 million points per second |
| Millimeter-wave radar | 5 radars at 77 GHz, FMCW modulation, 13 Hz capture rate, 250 m range, velocity accuracy of ±0.1 km/h |
| GPS & IMU | RTK positioning with 20 mm accuracy, 1000 Hz sampling rate |

The detailed sensor placement is shown below:

A diagram of the six cameras' fields of view is shown below:

2.1.2 Sensor Calibration

For sensor data fusion and registration, a necessary first step is sensor calibration, including the intrinsic and extrinsic calibration of the cameras and the extrinsic calibration of the lidar and radars.

  • Camera extrinsic calibration: a cube-shaped calibration target is placed in front of the camera and the lidar (see the nuScenes website for the exact procedure; for cube-based calibration see https://www.researchgate.net/publication/327516843).
  • Camera intrinsic calibration: performed with a patterned planar board (the common planar checkerboard method).
  • Radar extrinsic calibration: the radars are mounted horizontally on the vehicle; radar returns are then collected while driving in urban environments, dynamic objects are filtered out, and the yaw angle is calibrated so as to minimize the compensated range rates of the remaining static objects.
  • Lidar extrinsic calibration: a laser liner is used to accurately measure the offset of the lidar relative to the ego-vehicle coordinate frame.

With these steps completed, the lidar-to-camera transformation matrix can be computed and data collection can begin; a short sketch of that computation follows.
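
As an illustration of that last step, the sketch below composes two calibrated extrinsics into a lidar-to-camera transform. The numeric translation/rotation values are made-up placeholders; in the released dataset they come from the calibrated_sensor table described in section 3.

import numpy as np
from pyquaternion import Quaternion  # pyquaternion is a dependency of nuscenes-devkit

def make_transform(translation, rotation_wxyz):
    """Build a 4x4 homogeneous transform from a translation and a w-x-y-z quaternion."""
    tm = np.eye(4)
    tm[:3, :3] = Quaternion(rotation_wxyz).rotation_matrix
    tm[:3, 3] = translation
    return tm

# Placeholder extrinsics (sensor frame -> ego-vehicle frame), as stored in calibrated_sensor.
lidar_to_ego = make_transform([0.94, 0.0, 1.84], [0.71, -0.02, 0.01, 0.70])
cam_to_ego = make_transform([1.70, 0.02, 1.51], [0.50, -0.50, 0.50, -0.50])

# Lidar -> camera: first into the ego frame, then from the ego frame into the camera frame.
lidar_to_cam = np.linalg.inv(cam_to_ego) @ lidar_to_ego
print(lidar_to_cam)

A homogeneous lidar point multiplied by lidar_to_cam ends up in the camera frame; applying the camera intrinsic matrix then projects it into the image.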

2.1.3 Data Collection

The nuScenes dataset was collected with two identically equipped Renault electric cars in Boston and Singapore, two cities known for dense traffic and challenging driving scenarios.

The full dataset contains 84 manually selected driving logs: about 15 hours and roughly 242 km of driving at an average speed of 16 km/h. The scenes cover urban, residential, suburban and industrial areas, as well as different times of day and weather conditions: daytime, night, sunshine, rain and clouds.

The final dataset is split into 1000 clips (scenes) of roughly 20 s each. Every scene contains 40 keyframes (key frames), i.e. 2 keyframes per second; the remaining frames are sweeps. Keyframes are manually annotated, each containing a number of annotations in the form of 3D bounding boxes that record not only size and extent but also category, visibility and more.

2.1.4 Sensor Synchronization

Like KITTI, nuScenes uses the lidar to trigger camera exposure, except that nuScenes has 6 cameras covering a 360° field of view.

In nuScenes, an image timestamp marks the start of the camera exposure, while a lidar timestamp marks the end of a full lidar sweep. When the roof lidar's beams sweep across the center of a camera's FOV, an exposure trigger is sent to that camera, so one lidar revolution triggers 6 camera exposures.

| Timing parameter | Meaning |
| --- | --- |
| expo_time_bet_adj_cams | Exposure time difference between two adjacent cameras, 8.5 ms on average; over 6 cameras this adds up to roughly 50 ms, which matches the lidar sweep period and confirms that each lidar revolution triggers 6 camera exposures |
| max_delta_time_bet_cams | Maximum gap between the exposure times of the 6 surround cameras, 42 ms on average; at a relative speed of 40 km/h, the first camera to expose (front-left) and the last (back-left) observe the same object at distances differing by almost half a meter |
| cam_refresh_time | Camera sampling interval, 500 ms on average, corresponding to the 2 Hz keyframe rate |
| lidar_refresh_time | Lidar sampling interval, 500 ms on average, corresponding to the 2 Hz keyframe rate |
| delta_lidar_cam_time | Difference between the lidar timestamp and the back-left camera's exposure time, only about 1 ms on average, indicating that the lidar sweep starts (and ends) near the back-left camera |

These relations can be checked directly against the timestamps in the data, as in the sketch below.
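
A minimal sketch of such a check, assuming the devkit has been initialized as in section 4.2 and `my_sample` is a keyframe record as obtained in section 4.4 (timestamps are in microseconds):

# Compare the camera and lidar timestamps of one keyframe.
cam_channels = ['CAM_FRONT_LEFT', 'CAM_FRONT', 'CAM_FRONT_RIGHT',
                'CAM_BACK_RIGHT', 'CAM_BACK', 'CAM_BACK_LEFT']
cam_times = {ch: nusc.get('sample_data', my_sample['data'][ch])['timestamp'] for ch in cam_channels}
lidar_time = nusc.get('sample_data', my_sample['data']['LIDAR_TOP'])['timestamp']

# Spread between the first and the last camera exposure (max_delta_time_bet_cams).
print('max camera delta [ms]:', (max(cam_times.values()) - min(cam_times.values())) / 1e3)
# Offset between the lidar timestamp and the back-left camera exposure (delta_lidar_cam_time).
print('lidar vs CAM_BACK_LEFT [ms]:', (lidar_time - cam_times['CAM_BACK_LEFT']) / 1e3)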

2.2 Dataset Download

The full dataset consists of 3 parts:

  • Mini: 10 scenes drawn from the train/val split, with full raw data and annotations; mainly for getting familiar with the dataset;
  • TrainVal: the train/validation split with 850 scenes, of which 700 are for training and 150 for validation;
  • Test: the test split with 150 scenes, without annotations.

The downloaded dataset contains 4 folders:

  • maps: map data; the four maps correspond to the 4 data-collection locations
  • samples: annotated keyframe data; training mainly uses this part
  • sweeps: the full time-sequential data, without annotations, typically used for tracking tasks
  • v1.0-version: the JSON files holding the data relations, annotations and calibration parameters

3 Dataset Parsing

3.1 Dataset Structure

nuScenes manages its data as a relational database with 13 tables, stored as JSON files in the ./v1.0-version directory.

The dataset uses tokens as globally unique identifiers: every entity in the dataset — objects, sensors, scenes, keyframes and so on — is assigned a token, and every token is unique, as in the short lookup example below.
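
A minimal token lookup with the devkit, assuming it has been initialized as in section 4.2 (the token below is the first-sample token of the mini split used again in section 4.4):

sample = nusc.get('sample', 'ca9a282c9e77460f8360f564131a8af5')  # table name + token -> record
scene = nusc.get('scene', sample['scene_token'])                 # follow a foreign key
print(scene['name'])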

The 13 tables, their record formats and their meanings are listed below.

1. attribute

attribute {
   "token":       <str> -- Unique record identifier.
   "name":        <str> -- Attribute name.
   "description": <str> -- Attribute description.
}

An attribute is a property of an instance that can change while the category stays the same, e.g. whether an annotated vehicle is parked or moving, or whether a bicycle currently has a rider.

2. calibrated_sensor

calibrated_sensor {
   "token":            <str> -- Unique record identifier.
   "sensor_token":     <str> -- Foreign key pointing to the sensor type.
   "translation":      <float> [3] -- Coordinate system origin in meters: x, y, z.
   "rotation":         <float> [4] -- Coordinate system orientation as quaternion: w, x, y, z.
   "camera_intrinsic": <float> [3, 3] -- Intrinsic camera calibration. Empty for sensors that are not cameras.
}

Describes the mounting extrinsics (and, for cameras, the intrinsic matrix) of a sensor on a particular vehicle. All extrinsics are given relative to the ego-vehicle coordinate frame.

3. category

category {
   "token":       <str> -- Unique record identifier.
   "name":        <str> -- Category name. Subcategories indicated by period.
   "description": <str> -- Category description.
   "index":       <int> -- The index of the label used for efficiency reasons in the .bin label files of nuScenes-lidarseg. This field did not exist previously.
}

The taxonomy of object categories. Subcategories are separated by a period, e.g. vehicle.truck.

4. ego_pose

ego_pose {
   "token":       <str> -- Unique record identifier.
   "translation": <float> [3] -- Coordinate system origin in meters: x, y, z. Note that z is always 0.
   "rotation":    <float> [4] -- Coordinate system orientation as quaternion: w, x, y, z.
   "timestamp":   <int> -- Unix time stamp.
}

The pose of the ego vehicle at a particular timestamp, given with respect to the global (map) coordinate system. The poses come from a lidar-map-based localization algorithm (see the localization section of the nuScenes paper), which outputs a 2D position (x, y).

5. instance

instance {
   "token":                  <str> -- Unique record identifier.
   "category_token":         <str> -- Foreign key pointing to the object category.
   "nbr_annotations":        <int> -- Number of times this instance is annotated within its scene.
   "first_annotation_token": <str> -- Foreign key. Points to the first annotation of this instance.
   "last_annotation_token":  <str> -- Foreign key. Points to the last annotation of this instance.
}

An object instance, e.g. a particular vehicle; this table enumerates all object instances observed. Note that instances are not tracked across scenes: within one scene an instance is tracked continuously (the same car appearing in a clip is annotated frame after frame), but instances in different scenes are unrelated.

6. log

log {
   "token":         <str> -- Unique record identifier.
   "logfile":       <str> -- Log file name.
   "vehicle":       <str> -- Vehicle name.
   "date_captured": <str> -- Date (YYYY-MM-DD).
   "location":      <str> -- Area where log was captured, e.g. singapore-onenorth.
}

Information about the log file from which the data was extracted.

7. map

map {
   "token":      <str> -- Unique record identifier.
   "log_tokens": <str> [n] -- Foreign keys.
   "category":   <str> -- Map category, currently only semantic_prior for drivable surface and sidewalk.
   "filename":   <str> -- Relative path to the file with the map mask.
}

Map data, stored as a binary semantic mask from a top-down view.

8. sample_annotation

sample_annotation {
   "token":            <str> -- Unique record identifier.
   "sample_token":     <str> -- Foreign key. The sample this annotation belongs to.
   "instance_token":   <str> -- Foreign key. The instance being annotated; one instance can have many annotations.
   "attribute_tokens": <str> [n] -- Foreign keys. The attributes of the object in this annotation; since an object's attributes can change over time, they are attached to the annotation rather than to the instance.
   "visibility_token": <str> -- Foreign key. The visibility of the object, which can also change over time.
   "translation":      <float> [3] -- Center of the bounding box in meters: x, y, z.
   "size":             <float> [3] -- Size of the bounding box in meters.
   "rotation":         <float> [4] -- Orientation of the bounding box as quaternion: w, x, y, z.
   "num_lidar_pts":    <int> -- Number of lidar points inside the box, counted during the lidar sweep of this sample.
   "num_radar_pts":    <int> -- Number of radar points in this box. Points are counted during the radar sweep identified with this sample. This number is summed across all radar sensors without any invalid point filtering.
   "next":             <str> -- Foreign key. The next sample_annotation of the same object instance in time. Empty if this is the last annotation for this object.
   "prev":             <str> -- Foreign key. Sample annotation from the same object instance that precedes this in time. Empty if this is the first annotation for this object.
}

A 3D bounding box describing the position, size and orientation of an object within one sample. All location information is given in the global coordinate system.

9. sample_data

sample_data {
   "token":                   <str> -- Unique record identifier.
   "sample_token":            <str> -- Foreign key. The sample this sample_data is associated with.
   "ego_pose_token":          <str> -- Foreign key.
   "calibrated_sensor_token": <str> -- Foreign key.
   "filename":                <str> -- Relative path to data-blob on disk.
   "fileformat":              <str> -- Data file format.
   "width":                   <int> -- If the sample data is an image, this is the image width in pixels.
   "height":                  <int> -- If the sample data is an image, this is the image height in pixels.
   "timestamp":               <int> -- Unix time stamp.
   "is_key_frame":            <bool> -- True if sample_data is part of key_frame, else False.
   "next":                    <str> -- Foreign key. Data from the same sensor at the next timestamp. Empty at the end of a scene.
   "prev":                    <str> -- Foreign key. Sample data from the same sensor that precedes this in time. Empty if start of scene.
}

A sensor reading such as a lidar/radar point cloud or an image. A sample_data with is_key_frame = true lies very close in time to its sample; one with is_key_frame = false points to the sample closest to it in time.

10. sample

sample {
   "token":       <str> -- Unique record identifier.
   "timestamp":   <int> -- Unix time stamp.
   "scene_token": <str> -- Foreign key pointing to the scene.
   "next":        <str> -- Foreign key. Sample that follows this in time. Empty if end of scene.
   "prev":        <str> -- Foreign key. Sample that precedes this in time. Empty if start of scene.
}

An annotated keyframe sampled every 0.5 s. Its data is captured at (almost) the same timestamp as part of a single lidar sweep.

11. scene

scene {
   "token":              <str> -- Unique record identifier.
   "name":               <str> -- Short string identifier.
   "description":        <str> -- A textual description of the scene, e.g. a vehicle driving along a road keeping to the right.
   "log_token":          <str> -- Foreign key. Points to the log from which the scene was extracted.
   "nbr_samples":        <int> -- Number of samples in this scene.
   "first_sample_token": <str> -- Foreign key. Points to the first sample in the scene.
   "last_sample_token":  <str> -- Foreign key. Points to the last sample in scene.
}

A 20 s clip of consecutive frames taken from a log file; several scenes can come from the same log. Instance identities are not preserved across scenes.

12. sensor

sensor {
   "token":    <str> -- Unique record identifier.
   "channel":  <str> -- Sensor channel name.
   "modality": <str> {camera, lidar, radar} -- Sensor modality. Supports category(ies) in brackets.
}

A description of a sensor type.

13. visibility

visibility {
   "token":       <str> -- Unique record identifier.
   "level":       <str> -- Visibility level.
   "description": <str> -- Description of visibility level.
}

The visibility of an instance.
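
The foreign keys above can be chained to move between tables. A small sketch, assuming `nusc` from section 4.2 and the keyframe token `first_sample_token` from section 4.4:

sample = nusc.get('sample', first_sample_token)                            # keyframe record
sd_rec = nusc.get('sample_data', sample['data']['LIDAR_TOP'])              # sample -> sample_data
cs_rec = nusc.get('calibrated_sensor', sd_rec['calibrated_sensor_token'])  # sample_data -> calibrated_sensor
sensor = nusc.get('sensor', cs_rec['sensor_token'])                        # calibrated_sensor -> sensor
pose = nusc.get('ego_pose', sd_rec['ego_pose_token'])                      # sample_data -> ego_pose
print(sensor['channel'], sensor['modality'])
print('ego translation:', pose['translation'])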

3.2 Database Schema

The official website describes the nuScenes database schema. All annotations and metadata (including calibration, maps and ego poses) are stored in a relational database; the figure below shows its tables. Every record is identified by its unique primary key token, and foreign keys such as sample_token link a record to the token of the sample table.

(Figure: nuScenes database schema diagram)

4 Using nuscenes-devkit

4.1 Installation

nuScenes provides an official development kit, nuscenes-devkit, which wraps the common operations of reading, indexing and visualizing the data. It can be installed directly with pip:

pip install nuscenes-devkit

4.2 Initialization

from nuscenes.nuscenes import NuScenes

# dataroot must point to the directory that contains the maps/, samples/, sweeps/ and v1.0-mini/ folders
nusc = NuScenes(version='v1.0-mini', dataroot='/workspace/dataset/open_dataset/Nuscenes/mini', verbose=True)

4.3 scene: Scene Information

nusc.list_scenes()

Output: the mini split contains only 10 scenes, each lasting roughly 20 s (a few are 19 s), i.e. about 20 seconds of recorded data per scene.

Inspect the information of one scene:

my_scene = nusc.scene[0]
my_scene

Output:

:::info
{‘token’: ‘cc8c0bf57f984915a77078b10eb33198’, ‘log_token’: ‘7e25a2c8ea1f41c5b0da1e69ecfa71a2’, ‘nbr_samples’: 39, ‘first_sample_token’: ‘ca9a282c9e77460f8360f564131a8af5’, ‘last_sample_token’: ‘ed5fc18c31904f96a8f0dbb99ff069c0’, ‘name’: ‘scene-0061’, ‘description’: ‘Parked truck, construction, intersection, turn left, following a van’}

:::

4.4 sample: Sample Information

Each scene lasts about 20 s, and a sample is taken every 500 ms. One way to picture the relation between sample and scene: a scene is like a 20 s video, and a sample is a frame taken from it every 0.5 s, as the short sketch below illustrates.
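
A minimal sketch walking every keyframe of a scene by following the `next` pointers (`my_scene` is the scene record from section 4.3):

sample_token = my_scene['first_sample_token']
while sample_token:
    sample = nusc.get('sample', sample_token)
    print(sample['timestamp'], len(sample['anns']), 'annotations')
    sample_token = sample['next']  # empty string at the end of the scene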

Get the token of a sample through my_scene:

first_sample_token = my_scene['first_sample_token']  # token of the first sample in the scene
first_sample_token

Output:

‘ca9a282c9e77460f8360f564131a8af5’

Once we have the token of the first sample, the full sample record can be retrieved with nusc.get:

my_sample = nusc.get('sample', first_sample_token)
my_sample

Output: the record contains the tokens of the sensor data, the annotations, and more;

:::info
{‘token’: ‘ca9a282c9e77460f8360f564131a8af5’, ‘timestamp’: 1532402927647951, ‘prev’: ‘’, ‘next’: ‘39586f9d59004284a7114a68825e8eec’, ‘scene_token’: ‘cc8c0bf57f984915a77078b10eb33198’, ‘data’: {‘RADAR_FRONT’: ‘37091c75b9704e0daa829ba56dfa0906’, ‘RADAR_FRONT_LEFT’: ‘11946c1461d14016a322916157da3c7d’, ‘RADAR_FRONT_RIGHT’: ‘491209956ee3435a9ec173dad3aaf58b’, ‘RADAR_BACK_LEFT’: ‘312aa38d0e3e4f01b3124c523e6f9776’, ‘RADAR_BACK_RIGHT’: ‘07b30d5eb6104e79be58eadf94382bc1’, ‘LIDAR_TOP’: ‘9d9bf11fb0e144c8b446d54a8a00184f’, ‘CAM_FRONT’: ‘e3d495d4ac534d54b321f50006683844’, ‘CAM_FRONT_RIGHT’: ‘aac7867ebf4f446395d29fbd60b63b3b’, ‘CAM_BACK_RIGHT’: ‘79dbb4460a6b40f49f9c150cb118247e’, ‘CAM_BACK’: ‘03bea5763f0f4722933508d5999c5fd8’, ‘CAM_BACK_LEFT’: ‘43893a033f9c46d4a51b5e08a67a1eb7’, ‘CAM_FRONT_LEFT’: ‘fe5422747a7d4268a4b07fc396707b23’}, ‘anns’: [‘ef63a697930c4b20a6b9791f423351da’, ‘6b89da9bf1f84fd6a5fbe1c3b236f809’, ‘924ee6ac1fed440a9d9e3720aac635a0’, ‘91e3608f55174a319246f361690906ba’, ‘cd051723ed9c40f692b9266359f547af’, ‘36d52dfedd764b27863375543c965376’, ‘70af124fceeb433ea73a79537e4bea9e’, ‘63b89fe17f3e41ecbe28337e0e35db8e’, ‘e4a3582721c34f528e3367f0bda9485d’, ‘fcb2332977ed4203aa4b7e04a538e309’, ‘a0cac1c12246451684116067ae2611f6’, ‘02248ff567e3497c957c369dc9a1bd5c’, ‘9db977e264964c2887db1e37113cddaa’, ‘ca9c5dd6cf374aa980fdd81022f016fd’, ‘179b8b54ee74425893387ebc09ee133d’, ‘5b990ac640bf498ca7fd55eaf85d3e12’, ‘16140fbf143d4e26a4a7613cbd3aa0e8’, ‘54939f11a73d4398b14aeef500bf0c23’, ‘83d881a6b3d94ef3a3bc3b585cc514f8’, ‘74986f1604f047b6925d409915265bf7’, ‘e86330c5538c4858b8d3ffe874556cc5’, ‘a7bd5bb89e27455bbb3dba89a576b6a1’, ‘fbd9d8c939b24f0eb6496243a41e8c41’, ‘198023a1fb5343a5b6fad033ab8b7057’, ‘ffeafb90ecd5429cba23d0be9a5b54ee’, ‘cc636a58e27e446cbdd030c14f3718fd’, ‘076a7e3ec6244d3b84e7df5ebcbac637’, ‘0603fbaef1234c6c86424b163d2e3141’, ‘d76bd5dcc62f4c57b9cece1c7bcfabc5’, ‘5acb6c71bcd64aa188804411b28c4c8f’, ‘49b74a5f193c4759b203123b58ca176d’, ‘77519174b48f4853a895f58bb8f98661’, ‘c5e9455e98bb42c0af7d1990db1df0c9’, ‘fcc5b4b5c4724179ab24962a39ca6d65’, ‘791d1ca7e228433fa50b01778c32449a’, ‘316d20eb238c43ef9ee195642dd6e3fe’, ‘cda0a9085607438c9b1ea87f4360dd64’, ‘e865152aaa194f22b97ad0078c012b21’, ‘7962506dbc24423aa540a5e4c7083dad’, ‘29cca6a580924b72a90b9dd6e7710d3e’, ‘a6f7d4bb60374f868144c5ba4431bf4c’, ‘f1ae3f713ba946069fa084a6b8626fbf’, ‘d7af8ede316546f68d4ab4f3dbf03f88’, ‘91cb8f15ed4444e99470d43515e50c1d’, ‘bc638d33e89848f58c0b3ccf3900c8bb’, ‘26fb370c13f844de9d1830f6176ebab6’, ‘7e66fdf908d84237943c833e6c1b317a’, ‘67c5dbb3ddcc4aff8ec5140930723c37’, ‘eaf2532c820740ae905bb7ed78fb1037’, ‘3e2d17fa9aa5484d9cabc1dfca532193’, ‘de6bd5ffbed24aa59c8891f8d9c32c44’, ‘9d51d699f635478fbbcd82a70396dd62’, ‘b7cbc6d0e80e4dfda7164871ece6cb71’, ‘563a3f547bd64a2f9969278c5ef447fd’, ‘df8917888b81424f8c0670939e61d885’, ‘bb3ef5ced8854640910132b11b597348’, ‘a522ce1d7f6545d7955779f25d01783b’, ‘1fafb2468af5481ca9967407af219c32’, ‘05de82bdb8484623906bb9d97ae87542’, ‘bfedb0d85e164b7697d1e72dd971fb72’, ‘ca0f85b4f0d44beb9b7ff87b1ab37ff5’, ‘bca4bbfdef3d4de980842f28be80b3ca’, ‘a834fb0389a8453c810c3330e3503e16’, ‘6c804cb7d78943b195045082c5c2d7fa’, ‘adf1594def9e4722b952fea33b307937’, ‘49f76277d07541c5a584aa14c9d28754’, ‘15a3b4d60b514db5a3468e2aef72a90c’, ‘18cc2837f2b9457c80af0761a0b83ccc’, ‘2bfcc693ae9946daba1d9f2724478fd4’]}

:::

4.5 sample_data: Sample Data

my_sample['data'] returns the sample's sensor data (sample_data) tokens.

my_sample['data']

Output: the sample_data tokens of the radars, the lidar and the cameras.

:::info
{‘RADAR_FRONT’: ‘37091c75b9704e0daa829ba56dfa0906’, ‘RADAR_FRONT_LEFT’: ‘11946c1461d14016a322916157da3c7d’, ‘RADAR_FRONT_RIGHT’: ‘491209956ee3435a9ec173dad3aaf58b’, ‘RADAR_BACK_LEFT’: ‘312aa38d0e3e4f01b3124c523e6f9776’, ‘RADAR_BACK_RIGHT’: ‘07b30d5eb6104e79be58eadf94382bc1’, ‘LIDAR_TOP’: ‘9d9bf11fb0e144c8b446d54a8a00184f’, ‘CAM_FRONT’: ‘e3d495d4ac534d54b321f50006683844’, ‘CAM_FRONT_RIGHT’: ‘aac7867ebf4f446395d29fbd60b63b3b’, ‘CAM_BACK_RIGHT’: ‘79dbb4460a6b40f49f9c150cb118247e’, ‘CAM_BACK’: ‘03bea5763f0f4722933508d5999c5fd8’, ‘CAM_BACK_LEFT’: ‘43893a033f9c46d4a51b5e08a67a1eb7’, ‘CAM_FRONT_LEFT’: ‘fe5422747a7d4268a4b07fc396707b23’}

:::

The data captured by these sensors can be retrieved and visualized with the following commands:

sensor_radar = 'RADAR_FRONT'  # choose the front millimeter-wave radar
radar_front_data = nusc.get('sample_data', my_sample['data'][sensor_radar])
radar_front_data

# visualization
nusc.render_sample_data(radar_front_data['token'])

Output:

:::info
{‘token’: ‘37091c75b9704e0daa829ba56dfa0906’, ‘sample_token’: ‘ca9a282c9e77460f8360f564131a8af5’, ‘ego_pose_token’: ‘37091c75b9704e0daa829ba56dfa0906’, ‘calibrated_sensor_token’: ‘f4d2a6c281f34a7eb8bb033d82321f79’, ‘timestamp’: 1532402927664178, ‘fileformat’: ‘pcd’, ‘is_key_frame’: True, ‘height’: 0, ‘width’: 0, ‘filename’: ‘samples/RADAR_FRONT/n015-2018-07-24-11-22-45+0800__RADAR_FRONT__1532402927664178.pcd’, ‘prev’: ‘’, ‘next’: ‘f0b8593e08594a3eb1152c138b312813’, ‘sensor_modality’: ‘radar’, ‘channel’: ‘RADAR_FRONT’}

:::

Visualization of the front millimeter-wave radar data:
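
Besides render_sample_data, the devkit can also overlay a point cloud onto a camera image. A small sketch using render_pointcloud_in_image (part of nuscenes-devkit) with the same keyframe:

# Project the front radar returns of this keyframe onto the front camera image.
nusc.render_pointcloud_in_image(my_sample['token'],
                                pointsensor_channel='RADAR_FRONT',
                                camera_channel='CAM_FRONT')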

4.6 sample_annotation: Annotation Information

my_sample contains both the sensor data and the annotations. The previous section covered the sensor data (sample_data); this section looks at the annotation data:

my_annotation_token = my_sample['anns'][18]
my_annotation_metadata = nusc.get('sample_annotation', my_annotation_token)
my_annotation_metadata

# visualization
nusc.render_annotation(my_annotation_metadata['token'])

Output:

:::info
{‘token’: ‘83d881a6b3d94ef3a3bc3b585cc514f8’, ‘sample_token’: ‘ca9a282c9e77460f8360f564131a8af5’, ‘instance_token’: ‘e91afa15647c4c4994f19aeb302c7179’, ‘visibility_token’: ‘4’, ‘attribute_tokens’: [‘58aa28b1c2a54dc88e169808c07331e3’], ‘translation’: [409.989, 1164.099, 1.623], ‘size’: [2.877, 10.201, 3.595], ‘rotation’: [-0.5828819500503033, 0.0, 0.0, 0.812556848660791], ‘prev’: ‘’, ‘next’: ‘f3721bdfd7ee4fd2a4f94874286df471’, ‘num_lidar_pts’: 495, ‘num_radar_pts’: 13, ‘category_name’: ‘vehicle.truck’}

:::

Visualization:
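
The translation, size and rotation fields above define the 3D box in global coordinates. A small sketch turning the annotation into a devkit Box object to obtain its 8 corner points (Box lives in nuscenes.utils.data_classes):

from nuscenes.utils.data_classes import Box
from pyquaternion import Quaternion

ann = my_annotation_metadata  # the record printed above
box = Box(center=ann['translation'], size=ann['size'], orientation=Quaternion(ann['rotation']))
print(box.corners().shape)  # (3, 8): x, y, z of the eight box corners in global coordinates
# The devkit also offers a one-liner for this: box = nusc.get_box(my_annotation_token)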

4.7 instance: Instances

Instances can be accessed through nusc.instance:

my_instance = nusc.instance[0]
my_instance

# visualization
instance_token = my_instance['token']
nusc.render_instance(instance_token)

Output:

:::info
{‘token’: ‘6dd2cbf4c24b4caeb625035869bca7b5’, ‘category_token’: ‘1fa93b757fc74fb197cdd60001ad8abf’, ‘nbr_annotations’: 39, ‘first_annotation_token’: ‘ef63a697930c4b20a6b9791f423351da’, ‘last_annotation_token’: ‘8bb63134d48840aaa2993f490855ff0d’}

:::

Visualization:
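
Since an instance links to its first and last annotations, all of its annotations can be walked through the `next` pointers of sample_annotation. A minimal sketch using `my_instance` from above:

ann_token = my_instance['first_annotation_token']
while ann_token:
    ann = nusc.get('sample_annotation', ann_token)
    print(ann['sample_token'], ann['category_name'], ann['num_lidar_pts'])
    ann_token = ann['next']  # empty string after the last annotation of the instance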

4.8 category: Categories

The categories can be listed with nusc.list_categories():

nusc.list_categories()

Output:

nusc.category[i] returns the record of the i-th category:

nusc.category[0]

Output:

:::info
{‘token’: ‘1fa93b757fc74fb197cdd60001ad8abf’, ‘name’: ‘human.pedestrian.adult’, ‘description’: ‘Adult subcategory.’}

:::

4.9 attribute: Attributes

The attributes can be listed with nusc.list_attributes():

nusc.list_attributes()

Output:

:::info
cycle.with_rider: 305

cycle.without_rider: 434

pedestrian.moving: 3875

pedestrian.sitting_lying_down: 111

pedestrian.standing: 1029

vehicle.moving: 2715

vehicle.parked: 4674

vehicle.stopped: 1545

:::

Note: an object's attributes can change within a scene. The sketch below finds the points where a pedestrian's attribute changes, e.g. from moving to standing.
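
A sketch, assuming `nusc` from section 4.2; the instance index used here is only illustrative, and any pedestrian or vehicle instance whose state changes will do:

inst = nusc.instance[27]  # illustrative index; pick any instance whose attribute changes
ann_token = inst['first_annotation_token']
last_attr = None
while ann_token:
    ann = nusc.get('sample_annotation', ann_token)
    if ann['attribute_tokens']:
        attr = nusc.get('attribute', ann['attribute_tokens'][0])['name']
        if attr != last_attr:
            print('attribute changed to:', attr)
            last_attr = attr
    ann_token = ann['next']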

4.10 visibility: Visibility

anntoken = my_sample['anns'][9]
visibility_token = nusc.get('sample_annotation', anntoken)['visibility_token']
print("Visibility: {}".format(nusc.get('visibility', visibility_token)))
nusc.render_annotation(anntoken)

Output:

:::info
Visibility: {‘description’: ‘visibility of whole object is between 80 and 100%’, ‘token’: ‘4’, ‘level’: ‘v80-100’}

:::

Visualization:

4.11 sensor: Sensors

The sensors can be inspected through nusc.sensor:

nusc.sensor

Output:

:::info
[{‘token’: ‘725903f5b62f56118f4094b46a4470d8’, ‘channel’: ‘CAM_FRONT’, ‘modality’: ‘camera’}, {‘token’: ‘ce89d4f3050b5892b33b3d328c5e82a3’, ‘channel’: ‘CAM_BACK’, ‘modality’: ‘camera’}, {‘token’: ‘a89643a5de885c6486df2232dc954da2’, ‘channel’: ‘CAM_BACK_LEFT’, ‘modality’: ‘camera’}, {‘token’: ‘ec4b5d41840a509984f7ec36419d4c09’, ‘channel’: ‘CAM_FRONT_LEFT’, ‘modality’: ‘camera’}, {‘token’: ‘2f7ad058f1ac5557bf321c7543758f43’, ‘channel’: ‘CAM_FRONT_RIGHT’, ‘modality’: ‘camera’}, {‘token’: ‘ca7dba2ec9f95951bbe67246f7f2c3f7’, ‘channel’: ‘CAM_BACK_RIGHT’, ‘modality’: ‘camera’}, {‘token’: ‘dc8b396651c05aedbb9cdaae573bb567’, ‘channel’: ‘LIDAR_TOP’, ‘modality’: ‘lidar’}, {‘token’: ‘47fcd48f71d75e0da5c8c1704a9bfe0a’, ‘channel’: ‘RADAR_FRONT’, ‘modality’: ‘radar’}, {‘token’: ‘232a6c4dc628532e81de1c57120876e9’, ‘channel’: ‘RADAR_FRONT_RIGHT’, ‘modality’: ‘radar’}, {‘token’: ‘1f69f87a4e175e5ba1d03e2e6d9bcd27’, ‘channel’: ‘RADAR_FRONT_LEFT’, ‘modality’: ‘radar’}, {‘token’: ‘df2d5b8be7be55cca33c8c92384f2266’, ‘channel’: ‘RADAR_BACK_LEFT’, ‘modality’: ‘radar’}, {‘token’: ‘5c29dee2f70b528a817110173c2e71b9’, ‘channel’: ‘RADAR_BACK_RIGHT’, ‘modality’: ‘radar’}]

:::

Since each sample_data record also carries its sensor information (channel and modality), nusc.sample_data[i] can be used to see which sensor produced a given reading:

nusc.sample_data[0]

Output:

:::info
{‘token’: ‘5ace90b379af485b9dcb1584b01e7212’, ‘sample_token’: ‘39586f9d59004284a7114a68825e8eec’, ‘ego_pose_token’: ‘5ace90b379af485b9dcb1584b01e7212’, ‘calibrated_sensor_token’: ‘f4d2a6c281f34a7eb8bb033d82321f79’, ‘timestamp’: 1532402927814384, ‘fileformat’: ‘pcd’, ‘is_key_frame’: False, ‘height’: 0, ‘width’: 0, ‘filename’: ‘sweeps/RADAR_FRONT/n015-2018-07-24-11-22-45+0800__RADAR_FRONT__1532402927814384.pcd’, ‘prev’: ‘f0b8593e08594a3eb1152c138b312813’, ‘next’: ‘978db2bcdf584b799c13594a348576d2’, ‘sensor_modality’: ‘radar’, ‘channel’: ‘RADAR_FRONT’}

:::

4.12 calibrated_sensor: Sensor Calibration

The calibration record of a sensor can be obtained with the following commands:

cs_record = nusc.calibrated_sensor[0]  # the first calibrated_sensor record
cs_record

Output:

:::info

{‘token’: ‘f4d2a6c281f34a7eb8bb033d82321f79’, ‘sensor_token’: ‘47fcd48f71d75e0da5c8c1704a9bfe0a’, ‘translation’: [3.412, 0.0, 0.5], ‘rotation’: [0.9999984769132877, 0.0, 0.0, 0.0017453283658983088], ‘camera_intrinsic’: []}

:::
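
The translation and rotation above are the sensor-to-ego extrinsics. A small sketch that maps a point from the sensor frame into the ego-vehicle frame with pyquaternion (a devkit dependency):

import numpy as np
from pyquaternion import Quaternion

cs = nusc.calibrated_sensor[0]
point_sensor = np.array([10.0, 0.0, 0.0])  # a point 10 m in front of the sensor, in the sensor frame
point_ego = Quaternion(cs['rotation']).rotate(point_sensor) + np.array(cs['translation'])
print(point_ego)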

4.13 ego_pose: Ego Vehicle Pose

nusc.ego_pose[0]

Output:

:::info
{‘token’: ‘5ace90b379af485b9dcb1584b01e7212’, ‘timestamp’: 1532402927814384, ‘rotation’: [0.5731787718287827, -0.0015811634307974854, 0.013859363182046986, -0.8193116095230444], ‘translation’: [410.77878632230204, 1179.4673290964536, 0.0]}

:::
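
ego_pose stores the ego-vehicle-to-global (map) transform at one timestamp. Continuing the previous sketch, an ego-frame point can be mapped into global coordinates the same way:

pose = nusc.ego_pose[0]
point_ego = np.array([10.0, 0.0, 0.0])  # a point 10 m ahead of the vehicle, in the ego frame
point_global = Quaternion(pose['rotation']).rotate(point_ego) + np.array(pose['translation'])
print(point_global)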

4.14 log: Logs

nusc.log[0]

Output:

:::info
{‘token’: ‘7e25a2c8ea1f41c5b0da1e69ecfa71a2’, ‘logfile’: ‘n015-2018-07-24-11-22-45+0800’, ‘vehicle’: ‘n015’, ‘date_captured’: ‘2018-07-24’, ‘location’: ‘singapore-onenorth’, ‘map_token’: ‘53992ee3023e5494b90c316c183be829’}

:::

4.15 map: Maps

nusc.map[0]

Output:

:::info
{‘category’: ‘semantic_prior’, ‘token’: ‘53992ee3023e5494b90c316c183be829’, ‘filename’: ‘maps/53992ee3023e5494b90c316c183be829.png’, ‘log_tokens’: [‘0986cb758b1d43fdaa051ab23d45582b’, ‘1c9b302455ff44a9a290c372b31aa3ce’, ‘e60234ec7c324789ac7c8441a5e49731’, ‘46123a03f41e4657adc82ed9ddbe0ba2’, ‘a5bb7f9dd1884f1ea0de299caefe7ef4’, ‘bc41a49366734ebf978d6a71981537dc’, ‘f8699afb7a2247e38549e4d250b4581b’, ‘d0450edaed4a46f898403f45fa9e5f0d’, ‘f38ef5a1e9c941aabb2155768670b92a’, ‘7e25a2c8ea1f41c5b0da1e69ecfa71a2’, ‘ddc03471df3e4c9bb9663629a4097743’, ‘31e9939f05c1485b88a8f68ad2cf9fa4’, ‘783683d957054175bda1b326453a13f4’, ‘343d984344e440c7952d1e403b572b2a’, ‘92af2609d31445e5a71b2d895376fed6’, ‘47620afea3c443f6a761e885273cb531’, ‘d31dc715d1c34b99bd5afb0e3aea26ed’, ‘34d0574ea8f340179c82162c6ac069bc’, ‘d7fd2bb9696d43af901326664e42340b’, ‘b5622d4dcb0d4549b813b3ffb96fbdc9’, ‘da04ae0b72024818a6219d8dd138ea4b’, ‘6b6513e6c8384cec88775cae30b78c0e’, ‘eda311bda86f4e54857b0554639d6426’, ‘cfe71bf0b5c54aed8f56d4feca9a7f59’, ‘ee155e99938a4c2698fed50fc5b5d16a’, ‘700b800c787842ba83493d9b2775234a’], ‘mask’: <nuscenes.utils.map_mask.MapMask object at 0x7fa8f63c1520>}

:::
