```
rosrun pcl_ros bag_to_pcd 111.bag /camera/color/image_raw ~/pcd_file
Provided topic '/camera/color/image_raw' is in the bag file, but is not of type sensor_msgs/PointCloud2 (type: sensor_msgs/Image)
Could not find a sensor_msgs/PointCloud2 type on topic '/camera/color/image_raw' in bag file 111.bag
Topics found in the bag file:
  /camera/color/image_raw (sensor_msgs/Image)
```

### Fixing the type error when converting a ROS image topic to PCD

The error occurs because the `bag_to_pcd` tool in `pcl_ros` **only accepts `sensor_msgs/PointCloud2` topics**, while `/camera/color/image_raw` is a `sensor_msgs/Image`. A lone RGB image carries no depth information, so it cannot become a 3D point cloud on its own: you first have to fuse it with a depth image (or obtain depth some other way) to produce a `PointCloud2` stream, and only then save PCD files. The steps below assume the bag also contains a depth stream.

---

### Solution

#### 1. Convert depth + RGB images into a point cloud (required)

In ROS 1 the converters in `depth_image_proc` ship as nodelets, so run `point_cloud_xyzrgb` standalone and remap its inputs. The depth image must be registered/aligned to the color frame. (Note: a `#` comment after a trailing `\` breaks a multi-line bash command, so the per-topic comments go above the command here.)

```bash
# Start the depth-to-point-cloud nodelet:
#   rgb/camera_info             - camera intrinsics
#   rgb/image_rect_color        - RGB image topic
#   depth_registered/image_rect - depth image topic (aligned to color)
#   depth_registered/points     - output point cloud topic
rosrun nodelet nodelet standalone depth_image_proc/point_cloud_xyzrgb \
  rgb/camera_info:=/camera/color/camera_info \
  rgb/image_rect_color:=/camera/color/image_raw \
  depth_registered/image_rect:=/camera/depth/image_raw \
  depth_registered/points:=/camera/color/points
```

**Key requirements**:
- The bag must contain a **depth image** (e.g. `/camera/depth/image_raw`) and the **camera intrinsics** (`/camera/color/camera_info`)
- The output topic (`/camera/color/points` here) has type `sensor_msgs/PointCloud2`

While the bag plays back (slow it down with `rosbag play -r 0.5` if the converter drops frames), you can either record the converted topic into a second bag for step 2, or save it live with `rosrun pcl_ros pointcloud_to_pcd input:=/camera/color/points`.
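If you would rather not shell out to `pointcloud_to_pcd`, a few lines of rospy do the same job. This is a minimal sketch, assuming the converted cloud arrives on `/camera/color/points` and that `open3d` is installed; the topic name and output directory are placeholders to adapt.

```python
#!/usr/bin/env python3
# Save every PointCloud2 received on a topic as a separate PCD file.
import os

import numpy as np
import open3d as o3d
import rospy
from sensor_msgs import point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

OUT_DIR = os.path.expanduser("~/pcd_output")  # assumed output directory


def callback(msg):
    # Read x/y/z, skipping the NaNs depth_image_proc emits for invalid pixels.
    pts = np.array(list(pc2.read_points(msg, field_names=("x", "y", "z"),
                                        skip_nans=True)))
    if pts.size == 0:
        return
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    # Name each file after the message timestamp (nanoseconds).
    o3d.io.write_point_cloud(
        os.path.join(OUT_DIR, "{}.pcd".format(msg.header.stamp.to_nsec())), pcd)


if __name__ == "__main__":
    os.makedirs(OUT_DIR, exist_ok=True)
    rospy.init_node("cloud_to_pcd")
    rospy.Subscriber("/camera/color/points", PointCloud2, callback, queue_size=2)
    rospy.spin()
```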
#### 2. Convert the point cloud topic to PCD files

Run `bag_to_pcd` on a bag that actually contains the converted `PointCloud2` topic:

```bash
rosrun pcl_ros bag_to_pcd <input_bag.bag> /camera/color/points output_pcd_dir
```

- Create `output_pcd_dir` beforehand

---

### Common problems

#### ❌ Problem 1: no depth image in the bag

**Symptom**: `[ERROR] No depth image found in bag file`

**Fix**: inspect what the bag actually contains:

```bash
rosbag info your_bag.bag | grep -E "depth|info"
```

#### ❌ Problem 2: missing PCL headers (ref. [2])

**Symptom**: `fatal error: pcl/point_cloud.h: No such file or directory`

**Fix**: install the PCL dependencies:

```bash
sudo apt-get install ros-$ROS_DISTRO-pcl-ros libpcl-dev
```

#### ❌ Problem 3: wrong coordinate frame (refs. [1][4])

**Symptom**: the saved clouds are offset or misaligned

**Fix**: make sure the conversion uses the correct frame:

```bash
rosrun pcl_ros bag_to_pcd <bag> <topic> <output_dir> _frame_id:=base_link
```

---

### Complete example

```bash
# Create the output directory
mkdir ~/pcd_output

# Start the depth-to-point-cloud nodelet (here with an aligned depth stream)
rosrun nodelet nodelet standalone depth_image_proc/point_cloud_xyzrgb \
  rgb/camera_info:=/camera/color/camera_info \
  rgb/image_rect_color:=/camera/color/image_raw \
  depth_registered/image_rect:=/camera/aligned_depth_to_color/image_raw \
  depth_registered/points:=/color_points &

# Record the converted topic into a new bag while the original plays;
# bag_to_pcd reads from a bag file, so the cloud has to end up in one
rosbag record -O points.bag /color_points &

# Play the original bag (slow down with -r if frames are dropped)
rosbag play --clock recorded.bag

# Convert the re-recorded cloud topic to PCD files
rosrun pcl_ros bag_to_pcd points.bag /color_points ~/pcd_output
```

---

### Fallback: offline conversion when there is no depth image

If you only have RGB images:

1. Read the image with OpenCV:
```python
import cv2
img = cv2.imread("image.png")
```
2. Generate a depth map with a **monocular depth-estimation model** (e.g. MiDaS)
3. Build the point cloud with PCL:
```cpp
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
// iterate over the pixels and fill in XYZRGB data
```
4. Save it as PCD:
```cpp
pcl::io::savePCDFile("output.pcd", *cloud);
```

A worked Python version of steps 2-4 is sketched right after this list.
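Here is a minimal Python sketch of the back-projection in steps 2-4 above, assuming a pinhole camera model and a metric depth map already exported to `depth.npy` (e.g. from MiDaS after scale calibration); the intrinsics `fx, fy, cx, cy` and all file names are placeholder assumptions to replace with your own.

```python
# Back-project an RGB image plus a depth map into a colored point cloud.
import cv2
import numpy as np
import open3d as o3d

fx = fy = 600.0        # assumed focal lengths in pixels
cx, cy = 320.0, 240.0  # assumed principal point

rgb = cv2.cvtColor(cv2.imread("image.png"), cv2.COLOR_BGR2RGB)
depth = np.load("depth.npy")  # assumed HxW depth in meters

# Pixel grid; u runs along columns (x), v along rows (y).
h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))

# Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
mask = z > 0  # keep only pixels with valid depth

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(
    np.stack([x[mask], y[mask], z[mask]], axis=-1))
pcd.colors = o3d.utility.Vector3dVector(rgb[mask].astype(np.float64) / 255.0)
o3d.io.write_point_cloud("output.pcd", pcd)
```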

Related posts

```
lixing@lixing:~$ roslaunch fast_livo mapping_mid360.launch
... logging to /home/lixing/.ros/log/26a19042-87dc-11f0-9ea5-7d3779c15a43/roslaunch-lixing-214852.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
WARNING: disk usage in log directory [/home/lixing/.ros/log] is over 1GB.
It's recommended that you use the 'rosclean' command.

started roslaunch server http://lixing:40781/

SUMMARY
========

PARAMETERS
 * /common/img_en: 1
 * /common/img_topic: /camera/color/ima...
 * /common/imu_topic: /mavros/imu/data
 * /common/lid_topic: /scan
 * /common/lidar_en: 1
 * /common/ros_driver_bug_fix: False
 * /evo/pose_output_en: False
 * /evo/seq_name: CBD_Building_01
 * /extrin_calib/Pcl: [-0.01363, -0.140...
 * /extrin_calib/Rcl: [-0.34925, -0.928...
 * /extrin_calib/extrinsic_R: [1, 0, 0, 0, 1, 0...
 * /extrin_calib/extrinsic_T: [0.011, 0.02329, ...
 * /imu/acc_cov: 0.8
 * /imu/b_acc_cov: 0.001
 * /imu/b_gyr_cov: 0.001
 * /imu/gyr_cov: 0.5
 * /imu/imu_en: True
 * /imu/imu_int_frame: 30
 * /laserMapping/cam_cx: 320
 * /laserMapping/cam_cy: 240
 * /laserMapping/cam_d0: 0
 * /laserMapping/cam_d1: 0
 * /laserMapping/cam_d2: 0
 * /laserMapping/cam_d3: 0
 * /laserMapping/cam_fx: 639.997649
 * /laserMapping/cam_fy: 639.997649
 * /laserMapping/cam_height: 480
 * /laserMapping/cam_model: pinhole
 * /laserMapping/cam_width: 640
 * /laserMapping/scale: 1.0
 * /lio/beam_err: 0.15
 * /lio/dept_err: 0.02
 * /lio/layer_init_num: [5, 5, 5, 5, 5]
 * /lio/max_iterations: 5
 * /lio/max_layer: 2
 * /lio/max_points_num: 50
 * /lio/min_eigen_value: 0.005
 * /lio/voxel_size: 0.5
 * /local_map/half_map_size: 100
 * /local_map/map_sliding_en: False
 * /local_map/sliding_thresh: 8
 * /pcd_save/colmap_output_en: True
 * /pcd_save/filter_size_pcd: 0.15
 * /pcd_save/interval: -1
 * /pcd_save/pcd_save_en: True
 * /preprocess/blind: 0.5
 * /preprocess/filter_size_surf: 0.1
 * /preprocess/lidar_type: 1
 * /preprocess/point_filter_num: 1
 * /preprocess/scan_line: 4
 * /publish/blind_rgb_points: 0.0
 * /publish/dense_map_en: True
 * /publish/pub_effect_point_en: False
 * /publish/pub_plane_en: False
 * /publish/pub_scan_num: 1
 * /rosdistro: noetic
 * /rosversion: 1.17.4
 * /time_offset/exposure_time_init: 0.0
 * /time_offset/img_time_offset: 0.1
 * /time_offset/imu_time_offset: 0.0
 * /uav/gravity_align_en: True
 * /uav/imu_rate_odom: False
 * /vio/exposure_estimate_en: True
 * /vio/img_point_cov: 100
 * /vio/inv_expo_cov: 0.1
 * /vio/inverse_composition_en: False
 * /vio/max_iterations: 5
 * /vio/normal_en: True
 * /vio/outlier_threshold: 1000
 * /vio/patch_pyrimid_level: 4
 * /vio/patch_size: 8
 * /vio/raycast_en: False

NODES
  /
    laserMapping (fast_livo/fastlivo_mapping)
    republish (image_transport/republish)
    rviz (rviz/rviz)

ROS_MASTER_URI=http://localhost:11311

process[laserMapping-1]: started with pid [214877]
process[rviz-2]: started with pid [214878]
process[republish-3]: started with pid [214879]
Not found: /home/lixing/catkin_ws/src/FAST-LIVO2/Log/Colmap/images
Removed: /home/lixing/catkin_ws/src/FAST-LIVO2/Log/Colmap/sparse/0
Created: /home/lixing/catkin_ws/src/FAST-LIVO2/Log/Colmap/images
Created: /home/lixing/catkin_ws/src/FAST-LIVO2/Log/Colmap/sparse/0
[INFO] [1756804953.461777121, 995.921000000]: Found parameter: laserMapping/cam_model, value: pinhole
terminate called after throwing an instance of 'std::runtime_error'
  what():  Camera model not correctly specified.
[laserMapping-1] process has died [pid 214877, exit code -6, cmd /home/lixing/catkin_ws/devel/lib/fast_livo/fastlivo_mapping __name:=laserMapping __log:=/home/lixing/.ros/log/26a19042-87dc-11f0-9ea5-7d3779c15a43/laserMapping-1.log].
log file: /home/lixing/.ros/log/26a19042-87dc-11f0-9ea5-7d3779c15a43/laserMapping-1*.log
[rviz-2] process has finished cleanly
log file: /home/lixing/.ros/log/26a19042-87dc-11f0-9ea5-7d3779c15a43/rviz-2*.log
```

```
WARNING: disk usage in log directory [/home/sx/.ros/log] is over 1GB.
It's recommended that you use the 'rosclean' command.

started roslaunch server http://sx-NUC8i5BEH:34675/

SUMMARY
========

PARAMETERS
 * /common/imu_topic: /livox/imu
 * /common/lid_topic: /livox/lidar
 * /common/time_offset_lidar_to_imu: 0.0
 * /common/time_sync_en: False
 * /cube_side_length: 1000.0
 * /feature_extract_enable: False
 * /filter_size_map: 0.5
 * /filter_size_surf: 0.5
 * /mapping/acc_cov: 0.1
 * /mapping/b_acc_cov: 0.0001
 * /mapping/b_gyr_cov: 0.0001
 * /mapping/det_range: 100.0
 * /mapping/extrinsic_R: [1, 0, 0, 0, 1, 0...
 * /mapping/extrinsic_T: [-0.011, -0.02329...
 * /mapping/extrinsic_est_en: False
 * /mapping/fov_degree: 360
 * /mapping/gyr_cov: 0.1
 * /max_iteration: 3
 * /pcd_save/interval: -1
 * /pcd_save/pcd_save_en: False
 * /pcd_save_en: False
 * /point_filter_num: 3
 * /pointcloud_to_laserscan/angle_increment: 0.0087
 * /pointcloud_to_laserscan/angle_max: 3.14159
 * /pointcloud_to_laserscan/angle_min: -3.14159
 * /pointcloud_to_laserscan/concurrency_level: 1
 * /pointcloud_to_laserscan/inf_epsilon: 1.0
 * /pointcloud_to_laserscan/max_height: 1.0
 * /pointcloud_to_laserscan/min_height: 0.0
 * /pointcloud_to_laserscan/range_max: 30.0
 * /pointcloud_to_laserscan/range_min: 0.05
 * /pointcloud_to_laserscan/scan_time: 10
 * /pointcloud_to_laserscan/transform_tolerance: 0.01
 * /pointcloud_to_laserscan/use_inf: True
 * /preprocess/blind: 0.5
 * /preprocess/lidar_type: 1
 * /preprocess/scan_line: 4
 * /publish/dense_publish_en: True
 * /publish/path_en: False
 * /publish/scan_bodyframe_pub_en: True
 * /publish/scan_publish_en: True
 * /rosdistro: noetic
 * /rosversion: 1.17.0
 * /runtime_pos_log_enable: False

NODES
  /
    global_localization (fast_lio_localization/global_localization.py)
    laserMapping (fast_lio/fastlio_mapping)
    map_publishe (pcl_ros/pcd_to_pointcloud)
    map_server (map_server/map_server)
    pointcloud_to_laserscan (pointcloud_to_laserscan/po
```

```
lixing@lixing:~/fast_livo2/devel$ source setup.bash
lixing@lixing:~/fast_livo2/devel$ roslaunch fast_livo mapping_avia.launch
... logging to /home/lixing/.ros/log/d7f4aeb0-6608-11f0-af32-3db24e61e935/roslaunch-lixing-73455.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://lixing:35845/

SUMMARY
========

PARAMETERS
 * /common/img_en: 0
 * /common/img_topic: /left_camera/image
 * /common/imu_topic: /livox/imu
 * /common/lid_topic: /livox/lidar
 * /common/lidar_en: 1
 * /common/ros_driver_bug_fix: False
 * /evo/pose_output_en: False
 * /evo/seq_name: CBD_Building_01
 * /extrin_calib/Pcl: [0.0194384, 0.104...
 * /extrin_calib/Rcl: [0.00610193, -0.9...
 * /extrin_calib/extrinsic_R: [1, 0, 0, 0, 1, 0...
 * /extrin_calib/extrinsic_T: [0.04165, 0.02326...
 * /imu/acc_cov: 0.5
 * /imu/b_acc_cov: 0.0001
 * /imu/b_gyr_cov: 0.0001
 * /imu/gyr_cov: 0.3
 * /imu/imu_en: True
 * /imu/imu_int_frame: 30
 * /laserMapping/cam_cx: 626.91359
 * /laserMapping/cam_cy: 522.799224
 * /laserMapping/cam_d0: -0.07616
 * /laserMapping/cam_d1: 0.123001
 * /laserMapping/cam_d2: -0.00113
 * /laserMapping/cam_d3: 0.000251
 * /laserMapping/cam_fx: 1293.56944
 * /laserMapping/cam_fy: 1293.3155
 * /laserMapping/cam_height: 1024
 * /laserMapping/cam_model: Pinhole
 * /laserMapping/cam_width: 1280
 * /laserMapping/scale: 0.5
 * /lio/beam_err: 0.05
 * /lio/dept_err: 0.02
 * /lio/layer_init_num: [5, 5, 5, 5, 5]
 * /lio/max_iterations: 5
 * /lio/max_layer: 2
 * /lio/max_points_num: 50
 * /lio/min_eigen_value: 0.0025
 * /lio/voxel_size: 0.5
 * /local_map/half_map_size: 100
 * /local_map/map_sliding_en: False
 * /local_map/sliding_thresh: 8
 * /pcd_save/colmap_output_en: False
 * /pcd_save/filter_size_pcd: 0.15
 * /pcd_save/interval: -1
 * /pcd_save/pcd_save_en: False
 * /preprocess/blind: 0.8
 * /preprocess/filter_size_surf: 0.1
 * /preprocess/lidar_type: 1
 * /preprocess/point_filter_num: 1
 * /preprocess/scan_line: 6
 * /publish/blind_rgb_points: 0.0
 * /publish/dense_map_en: True
 * /publish/pub_effect_point_en: False
 * /publish/pub_plane_en: False
 * /publish/pub_scan_num: 1
 * /rosdistro: noetic
 * /rosversion: 1.17.4
 * /time_offset/exposure_time_init: 0.0
 * /time_offset/img_time_offset: 0.1
 * /time_offset/imu_time_offset: 0.0
 * /uav/gravity_align_en: False
 * /uav/imu_rate_odom: False
 * /vio/exposure_estimate_en: True
 * /vio/img_point_cov: 100
 * /vio/inv_expo_cov: 0.1
 * /vio/inverse_composition_en: False
 * /vio/max_iterations: 5
 * /vio/normal_en: True
 * /vio/outlier_threshold: 1000
 * /vio/patch_pyrimid_level: 4
 * /vio/patch_size: 8
 * /vio/raycast_en: False

NODES
  /
    laserMapping (fast_livo/fastlivo_mapping)
    republish (image_transport/republish)
    rviz (rviz/rviz)

auto-starting new master
process[master]: started with pid [73464]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to d7f4aeb0-6608-11f0-af32-3db24e61e935
process[rosout-1]: started with pid [73474]
started core service [/rosout]
process[laserMapping-2]: started with pid [73481]
process[rviz-3]: started with pid [73484]
process[republish-4]: started with pid [73486]
[INFO] [1753084862.799631899]: Found parameter: laserMapping/cam_model, value: Pinhole
[INFO] [1753084862.801137872]: Found parameter: laserMapping/cam_width, value: 1280
[INFO] [1753084862.801585278]: Found parameter: laserMapping/cam_height, value: 1024
[INFO] [1753084862.802377201]: Found parameter: laserMapping/scale, value: 0.5
[INFO] [1753084862.802890892]: Found parameter: laserMapping/cam_fx, value: 1293.57
[INFO] [1753084862.803310245]: Found parameter: laserMapping/cam_fy, value: 1293.32
[INFO] [1753084862.803787630]: Found parameter: laserMapping/cam_cx, value: 626.914
[INFO] [1753084862.804230044]: Found parameter: laserMapping/cam_cy, value: 522.799
[INFO] [1753084862.804656402]: Found parameter: laserMapping/cam_d0, value: -0.07616
[INFO] [1753084862.805159958]: Found parameter: laserMapping/cam_d1, value: 0.123001
[INFO] [1753084862.805688943]: Found parameter: laserMapping/cam_d2, value: -0.00113
[INFO] [1753084862.806193840]: Found parameter: laserMapping/cam_d3, value: 0.000251
scale: 0.5
[laserMapping-2] process has died [pid 73481, exit code -11, cmd /home/lixing/fast_livo2/devel/lib/fast_livo/fastlivo_mapping __name:=laserMapping __log:=/home/lixing/.ros/log/d7f4aeb0-6608-11f0-af32-3db24e61e935/laserMapping-2.log].
log file: /home/lixing/.ros/log/d7f4aeb0-6608-11f0-af32-3db24e61e935/laserMapping-2*.log
```

```
vae@vae-ASUS-TUF-Gaming-A15-FA507UU-FA507UU:~/primitive_ws/src/Primitive-Planner/src/planner/plan_manage$ sudo rm /etc/apt/sources.list.d/ros2.list  # if present
rm: cannot remove '/etc/apt/sources.list.d/ros2.list': No such file or directory
vae@vae-ASUS-TUF-Gaming-A15-FA507UU-FA507UU:~/primitive_ws/src/Primitive-Planner/src/planner/plan_manage$ sudo rm /etc/apt/sources.list.d/ros-fuerte*  # if present (ref. [3])
rm: cannot remove '/etc/apt/sources.list.d/ros-fuerte*': No such file or directory
vae@vae-ASUS-TUF-Gaming-A15-FA507UU-FA507UU:~/primitive_ws/src/Primitive-Planner/src/planner/plan_manage$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu focal main" > /etc/apt/sources.list.d/ros-latest.list'
vae@vae-ASUS-TUF-Gaming-A15-FA507UU-FA507UU:~/primitive_ws/src/Primitive-Planner/src/planner/plan_manage$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
Executing: /tmp/apt-key-gpghome.wHCpRjn1jw/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
gpg: key F42ED6FBAB17C654: "Open Robotics <[email protected]>" 1 new signature
gpg: Total number processed: 1
gpg:         new signatures: 1
vae@vae-ASUS-TUF-Gaming-A15-FA507UU-FA507UU:~/primitive_ws/src/Primitive-Planner/src/planner/plan_manage$ dpkg -l | grep pcl-ros
ii  ros-noetic-pcl-ros  1.7.4-1focal.20250521.004323  amd64  PCL (Point Cloud Library) ROS interface stack.
vae@vae-ASUS-TUF-Gaming-A15-FA507UU-FA507UU:~/primitive_ws/src/Primitive-Planner/src/planner/plan_manage$ rosrun pcl_ros pointcloud_to_pcd input:=/points2
[INFO] [1752722398.222521893]: Saving as ASCII PCD
[INFO] [1752722398.223133113]: Listening for incoming data on topic /points2
```

```
[ 97%] Built target livox_ros_driver2_node
In file included from /opt/ros/noetic/include/ros/ros.h:40,
                 from /home/by/livox_ws/src/pcd2pgm_package/pcd2pgm/src/pcd2pgm.cpp:1:
/home/by/livox_ws/src/pcd2pgm_package/pcd2pgm/src/pcd2pgm.cpp: In function ‘void SetMapTopicMsg(pcl::PointCloud::Ptr, nav_msgs::OccupancyGrid&)’:
/home/by/livox_ws/src/pcd2pgm_package/pcd2pgm/src/pcd2pgm.cpp:204:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 8 has type ‘std::vector<signed char, std::allocator<signed char> >::size_type’ {aka ‘long unsigned int’} [-Wformat=]
  204 |   ROS_INFO("data size = %d\n", msg.data.size());
      |            ^~~~~~~~~~~~~~~~~~  ~~~~~~~~~~~~~~~
      |                                |
      |                                std::vector<signed char, std::allocator<signed char> >::size_type {aka long unsigned int}
/opt/ros/noetic/include/ros/console.h:351:165: note: in definition of macro ‘ROSCONSOLE_PRINT_AT_LOCATION_WITH_FILTER’
  351 |   ::ros::console::print(filter, __rosconsole_define_location__loc.logger_, __rosconsole_define_location__loc.level_, __FILE__, __LINE__, __ROSCONSOLE_FUNCTION__, __VA_ARGS__)
      |                                                                                                                                                                   ^~~~~~~~~~~
/opt/ros/noetic/include/ros/console.h:390:7: note: in expansion of macro ‘ROSCONSOLE_PRINT_AT_LOCATION’
  390 |       ROSCONSOLE_PRINT_AT_LOCATION(__VA_ARGS__); \
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/opt/ros/noetic/include/ros/console.h:575:35: note: in expansion of macro ‘ROS_LOG_COND’
  575 | #define ROS_LOG(level, name, ...) ROS_LOG_COND(true, level, name, __VA_ARGS__)
      |                                   ^~~~~~~~~~~~
/opt/ros/noetic/include/rosconsole/macros_generated.h:110:23: note: in expansion of macro ‘ROS_LOG’
  110 | #define ROS_INFO(...) ROS_LOG(::ros::console::levels::Info, ROSCONSOLE_DEFAULT_NAME, __VA_ARGS__)
      |                       ^~~~~~~
/home/by/livox_ws/src/pcd2pgm_package/pcd2pgm/src/pcd2pgm.cpp:204:3: note: in expansion of macro ‘ROS_INFO’
  204 |   ROS_INFO("data size = %d\n", msg.data.size());
      |   ^~~~~~~~
/home/by/livox_ws/src/pcd2pgm_package/pcd2pgm/src/pcd2pgm.cpp:204:26: note: format string is defined here
  204 |   ROS_INFO("data size = %d\n", msg.data.size());
      |                          ~^
      |                           |
      |                           int
      |                          %ld
[100%] Linking CXX executable /home/by/livox_ws/devel/lib/pcd2pgm/pcd2pgm
```

```
era/src/2lidar_camera_node.cpp:262:30: error: no match for ‘operator&&’ (operand types are ‘bool’ and ‘sensor_msgs::msg::PointCloud2’ {aka ‘sensor_msgs::msg::PointCloud2_<std::allocator<void> >’})
  262 |   if (rgb_msg && image_msg && cloud_msg) {
      |       ~~~~~~~~~~~~~~~~~~~~ ^~ ~~~~~~~~~
      |       |                       |
      |       bool                    sensor_msgs::msg::PointCloud2 {aka sensor_msgs::msg::PointCloud2_<std::allocator<void> >}
/home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:262:30: note: candidate: ‘operator&&(bool, bool)’ (built-in)
  262 |   if (rgb_msg && image_msg && cloud_msg) {
      |       ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
/home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:262:30: note: no known conversion for argument 2 from ‘sensor_msgs::msg::PointCloud2’ {aka ‘sensor_msgs::msg::PointCloud2_<std::allocator<void> >’} to ‘bool’
/home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:333:29: error: ‘make_shared’ is not a member of ‘boost’
  333 |   auto cloud_ptr = boost::make_shared(cloud2);
      |                           ^~~~~~~~~~~
/home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:333:29: note: suggested alternatives:
In file included from /usr/include/c++/11/memory:77,
                 from /opt/ros/humble/include/rclcpp/rclcpp/rclcpp.hpp:153,
                 from /home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:1:
/usr/include/c++/11/bits/shared_ptr.h:875:5: note: ‘std::make_shared’
  875 |     make_shared(_Args&&... __args)
      |     ^~~~~~~~~~~
In file included from /usr/include/pcl-1.12/pcl/io/pcd_io.h:42,
                 from /home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/include/lidar_camera/transforms.h:1,
                 from /home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:10:
/usr/include/pcl-1.12/pcl/memory.h:122:66: note: ‘pcl::make_shared’
  122 | enable_if_t<!has_custom_allocator<T>::value, shared_ptr<T>> make_shared(Args&&... args)
      |                                                                  ^~~~~~~~~~~
/home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:333:52: error: expected primary-expression before ‘>’ token
  333 |   auto cloud_ptr = boost::make_shared(cloud2);
      |                                                    ^
/home/xingchen/gongzuo/2lidar_camera/calibrat_lidarcamera/ros2_ws/src/2lidar_camera/src/2lidar_camera_node.cpp:344:18: error: cannot bind non-const lvalue reference of type ‘Eigen::Vector3f&’ {aka ‘Eigen::Matrix<float, 3, 1>&’} to an rvalue of type ‘Eigen::Vector3f’ {aka ‘Eigen::Matrix<float, 3, 1>’}
  344 |   ensureNormal(n1, n2);
```

```python
import os
from argparse import ArgumentParser

import av
import numpy as np
import open3d as o3d
import rosbag

av.logging.set_level(av.logging.PANIC)
codec_ctx = av.codec.Codec('hevc', 'r')
h265_code = codec_ctx.create()


def parse_img(msg, dataset_dir):
    # 13-digit millisecond timestamp from the message header
    timestamp = int(msg.header.stamp.secs * 1000 + msg.header.stamp.nsecs / 1000000)
    file_path = os.path.join(dataset_dir, '{}.jpg'.format(timestamp))
    packet = av.packet.Packet(msg.data)
    try:
        out = h265_code.decode(packet)
        for frame in out:
            if frame.format.name != 'rgb24':
                frame = frame.reformat(format='rgb24')
            img = frame.to_image()
            img.save(file_path)
    except Exception as e:
        print("{} frame can not trans to jpg".format(timestamp), e)


def parse_pcd(msg, dataset_dir):
    timestamp = int(msg.header.stamp.secs * 1000 + msg.header.stamp.nsecs / 1000000)
    file_path = os.path.join(dataset_dir, '{}.pcd'.format(timestamp))
    # Raw buffer of float32 [x, y, z, intensity]; keep x, y, z only
    points = np.frombuffer(msg.data, dtype=np.float32).reshape(-1, 4)[:, :3]
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    o3d.io.write_point_cloud(file_path, pcd)


def parse_params():
    parser = ArgumentParser()
    parser.add_argument('--topic', default='')
    # parser.add_argument('--type', default='pcd')
    return parser.parse_args()


def main():
    # params = parse_params()
    output_dir = 'output_data'
    camera_dir = os.path.join(output_dir, 'dataset', 'camera')
    lidar_dir = os.path.join(output_dir, 'dataset', 'lidar')
    os.makedirs(camera_dir, exist_ok=True)
    os.makedirs(lidar_dir, exist_ok=True)

    bag_path = "/home/inwinic/Lixu_Pro/fangcheng_data/A3_A4_71/2025-07-14-10-54-53.bag"
    bag = rosbag.Bag(bag_path, "r")
    for topic, msg, t in bag.read_messages():
        if topic == '/mdc_camera_instance_71':
            parse_img(msg, camera_dir)
        elif topic == '/rs16PtClound_B2':
            parse_pcd(msg, lidar_dir)


if __name__ == '__main__':
    main()
```
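Before running a parser like this, it is worth confirming that the hard-coded topic names actually exist in the bag. A small sketch using the `rosbag` API (the bag path is a placeholder):

```python
# List every topic in a bag with its message type and count.
import rosbag

with rosbag.Bag("/path/to/your.bag") as bag:  # assumed path
    info = bag.get_type_and_topic_info()
    for name, topic in info.topics.items():
        print(name, topic.msg_type, topic.message_count)
```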

```python
"""Export camera/radar/protobuf channels from rosbags into per-sensor folders."""
import math
import os
from argparse import ArgumentParser

import av
import numpy as np
import open3d as o3d
import rosbag
import yaml
from sensor_msgs import point_cloud2

from protoc.octopudata_localizationinfo_pb2 import LocalizationInfoFrame, LocalizationInfo
from protoc.octopudata_trackedobject_pb2 import TrackedObjectFrame, Object, TrackedObject
from protoc.octopusdata_controlcommand_pb2 import CommandFrame, ControlCommand
from protoc.octopusdata_gnss_pb2 import GnssPoint, GnssPoints
from protoc.octopusdata_plantrajectory_pb2 import Trajectory, TrajectoryPoint, PlanTrajectory
from protoc.octopusdata_predictionobstacles_pb2 import PerceptionObstacle, \
    Obstacle, PredictionTrajectory, PathPoint, PredictionObstacles
from protoc.octopusdata_routingpath_pb2 import RoutingPath, Path, Point, RoutingFrames
from protoc.octopusdata_vehicleinfo_pb2 import VehicleFrame, VehicleInfo

av.logging.set_level(av.logging.PANIC)
codec_ctx = av.codec.Codec('hevc', 'r')
h265_code = codec_ctx.create()

# Output folders, one per sensor topic (leading '/' stripped).
CAMERA_TOPICS = ['mdc_camera_instance_74', 'mdc_camera_instance_73',
                 'mdc_camera_instance_72', 'mdc_camera_instance_71']
RADAR_TOPICS = ['radar_track_array_0', 'radar_track_array_3']   # Continental ARS408
LIDAR_TOPICS = ['hwPtClound_A4', 'hwPtClound_B2']               # extraction disabled below


class Pose:
    def __init__(self, q0, q1, q2, q3, x, y, z):
        self.q0 = q0
        self.q1 = q1
        self.q2 = q2
        self.q3 = q3
        self.x = x
        self.y = y
        self.z = z


def get_ts(secs, nsecs):
    """13-digit millisecond timestamp."""
    return int(secs * 1000 + nsecs / 1000000)


def get_modulo(x, y, z):
    return math.sqrt(x * x + y * y + z * z)


def yaw_from_quaternions(w, x, y, z):
    a = 2 * (w * z + x * y)
    b = 1 - 2 * (y * y + z * z)
    return math.atan2(a, b)


def pose_has_nan(p):
    return math.isnan(p.x) or math.isnan(p.y) or math.isnan(p.z) or \
        math.isnan(p.q0) or math.isnan(p.q1) or math.isnan(p.q2) or \
        math.isnan(p.q3)


def get_t(q0, q1, q2, q3, x, y, z):
    """Homogeneous transform built from a unit quaternion and a translation."""
    aa = 1 - 2 * (q2 * q2 + q3 * q3)
    ab = 2 * (q1 * q2 - q0 * q3)
    ac = 2 * (q1 * q3 + q0 * q2)
    ba = 2 * (q1 * q2 + q0 * q3)
    bb = 1 - 2 * (q1 * q1 + q3 * q3)
    bc = 2 * (q2 * q3 - q0 * q1)
    ca = 2 * (q1 * q3 - q0 * q2)
    cb = 2 * (q2 * q3 + q0 * q1)
    cc = 1 - 2 * (q1 * q1 + q2 * q2)
    return np.mat([[aa, ab, ac, x],
                   [ba, bb, bc, y],
                   [ca, cb, cc, z],
                   [0, 0, 0, 1]])


def get_label(perception_typ, subtype):
    if perception_typ == 3:
        return 'Pedestrian'
    elif perception_typ == 4:
        return 'Bi_Tricycle'
    elif perception_typ == 5:
        if subtype == 5:
            return 'Truck'
        elif subtype == 6:
            return 'Bus'
        else:
            return 'Car'
    else:
        return 'unknow'


def save_image(msg, out_dir, time_stamp):
    """Decode one H.265 packet and save its frames as <timestamp>.jpg; returns frames saved."""
    file_path = os.path.join(out_dir, '{}.jpg'.format(time_stamp))
    saved = 0
    packet = av.packet.Packet(msg.data)
    try:
        for frame in h265_code.decode(packet):
            if frame.format.name != 'rgb24':
                frame = frame.reformat(format='rgb24')
            frame.to_image().save(file_path)
            saved += 1
    except Exception as e:
        print("{} frame can not trans to jpg".format(time_stamp), e)
    return saved


def save_lidar_pcd(msg, out_dir, time_stamp):
    """Save a PointCloud2 as .pcd, mapping intensity to a gray color
    (topics are disabled in the loop below)."""
    file_path = os.path.join(out_dir, '{}.pcd'.format(time_stamp))
    points = list(point_cloud2.read_points(
        msg, field_names=["x", "y", "z", "intensity"], skip_nans=True))
    if not points:
        return False
    np_points = np.array(points)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np_points[:, :3])
    pcd.colors = o3d.utility.Vector3dVector(
        np.tile(np_points[:, 3:4] / np_points[:, 3:4].max(), (1, 3)))
    o3d.io.write_point_cloud(file_path, pcd)
    return True


def save_radar_pcd(msg, out_dir, time_stamp):
    """Dump one radar track list as an ASCII PCD with per-track attributes."""
    file_path = os.path.join(out_dir, '{}.pcd'.format(time_stamp))
    points = []
    for track in msg.trackList:
        if np.isnan(track.x) or np.isnan(track.y) or np.isnan(track.z):
            continue  # skip invalid tracks
        points.append([track.x, track.y, track.z, track.vx, track.vy,
                       track.ax, track.ay, track.rcs, track.snr, track.yawRate,
                       track.id, track.trackType, track.lifetime])
    if not points:
        print("no valid radar points")
        return False
    points = np.array(points)
    with open(file_path, 'w') as f:
        f.write("# .PCD v0.7 - Point Cloud Data file format\n")
        f.write("VERSION 0.7\n")
        f.write("FIELDS x y z vx vy ax ay rcs snr yawRate id trackType lifetime\n")
        f.write("SIZE 4 4 4 4 4 4 4 4 4 4 4 4 4\n")
        f.write("TYPE F F F F F F F F F F I I I\n")  # F = float, I = int
        f.write("COUNT 1 1 1 1 1 1 1 1 1 1 1 1 1\n")
        f.write("WIDTH {}\n".format(points.shape[0]))
        f.write("HEIGHT 1\n")
        f.write("VIEWPOINT 0 0 0 1 0 0 0\n")
        f.write("POINTS {}\n".format(points.shape[0]))
        f.write("DATA ascii\n")
        np.savetxt(f, points,
                   fmt="%.6f %.6f %.6f %.6f %.6f %.6f %.6f %.6f %.6f %.6f %d %d %d")
    return True


def dump_pb(output_dir, folders, frames, folder, sensor_type, container_cls, field, filename):
    """Serialize a list of protobuf frames into <output_dir>/<folder>/<filename>."""
    if not frames:
        return
    os.makedirs(os.path.join(output_dir, folder), exist_ok=True)
    folders.append({'folder': folder, 'sensor_type': sensor_type})
    out = container_cls()
    getattr(out, field).extend(frames)
    with open(os.path.join(output_dir, folder, filename), "wb") as fh:
        fh.write(out.SerializeToString())


def main(args):
    output_dir = os.getenv('output_dir')
    for file in os.listdir(args.input):
        if not file.endswith('.bag'):
            continue
        bag = rosbag.Bag(os.path.join(args.input, file), "r")

        for name in CAMERA_TOPICS + RADAR_TOPICS + LIDAR_TOPICS:
            os.makedirs(os.path.join(output_dir, name), exist_ok=True)

        routes, controls, plans, preds = [], [], [], []
        gnss, vehs, locs, objs = [], [], [], []
        ego_pose = None
        has_data = {name: False for name in CAMERA_TOPICS + RADAR_TOPICS + LIDAR_TOPICS}
        lidar_num = image_num = radar_num = 0

        for topic, msg, t in bag.read_messages():
            # Use the rosbag receive time as a 13-digit millisecond timestamp.
            time_stamp = int(t.to_sec() * 1000)
            name = topic.lstrip('/')

            if name in CAMERA_TOPICS:
                saved = save_image(msg, os.path.join(output_dir, name), time_stamp)
                if saved:
                    image_num += saved
                    has_data[name] = True
            elif name in RADAR_TOPICS:
                if save_radar_pcd(msg, os.path.join(output_dir, name), time_stamp):
                    radar_num += 1
                    has_data[name] = True
            # Lidar extraction (/hwPtClound_A4, /hwPtClound_B2) is disabled;
            # re-enable it like this if needed:
            # elif name in LIDAR_TOPICS:
            #     if save_lidar_pcd(msg, os.path.join(output_dir, name), time_stamp):
            #         lidar_num += 1
            #         has_data[name] = True
            elif topic == '/routing/routing_response_viz':
                rv = RoutingPath()
                rv.timestamp = get_ts(t.secs, t.nsecs)
                rv.stamp_secs = t.secs
                rv.stamp_nsecs = t.nsecs
                mark_list = []
                for mark in msg.markers:
                    path_pb = Path()
                    path_pb.id = mark.id
                    point_list = []
                    for point in mark.points:
                        point_pb = Point()
                        point_pb.x = point.x
                        point_pb.y = point.y
                        point_pb.z = point.z
                        point_list.append(point_pb)
                    path_pb.path_point.extend(point_list)
                    mark_list.append(path_pb)
                rv.routing_path_info.extend(mark_list)
                routes.append(rv)
            elif topic == '/holo/ControlCommand':
                cf = CommandFrame()
                cf.timestamp = get_ts(t.secs, t.nsecs)
                cf.stamp_secs = t.secs
                cf.stamp_nsecs = t.nsecs
                cf.acceleration = msg.acceleration
                cf.front_wheel_angle = msg.front_wheel_angle
                cf.gear = msg.gear
                controls.append(cf)
            elif topic == '/planning/trajectory':
                tj = Trajectory()
                tj.timestamp = get_ts(t.secs, t.nsecs)
                tj.stamp_secs = t.secs
                tj.stamp_nsecs = t.nsecs
                p_list = []
                for point in msg.trajectory_points:
                    p = TrajectoryPoint()
                    p.x = point.path_point.point.x
                    p.y = point.path_point.point.y
                    p.z = point.path_point.point.z
                    p.theta = point.path_point.theta
                    p.kappa = point.path_point.kappa
                    p.v = point.v
                    p.a = point.a
                    p.relative_time = point.relative_time
                    p_list.append(p)
                tj.trajectory_points.extend(p_list)
                plans.append(tj)
            elif topic == '/prediction/prediction_obstacles':
                tr_pb = PerceptionObstacle()
                tr_pb.timestamp = get_ts(msg.header.stamp.secs, msg.header.stamp.nsecs)
                tr_pb.stamp_secs = msg.header.stamp.secs
                tr_pb.stamp_nsecs = msg.header.stamp.nsecs
                obj_list = []
                for obj in msg.prediction_obstacle:
                    ob_pb = Obstacle()
                    ob_pb.obstacle_timestamp = int(obj.timestamp * 1000)
                    ob_pb.id = obj.perception_obstacle.id
                    ob_pb.x = obj.perception_obstacle.position.x
                    ob_pb.y = obj.perception_obstacle.position.y
                    ob_pb.z = obj.perception_obstacle.position.z
                    traj_pbs = []
                    for traj in obj.trajectory:
                        traj_pb = PredictionTrajectory()
                        points_pbs = []
                        for trajectory_point in traj.trajectory_points:
                            point_pb = PathPoint()
                            point_pb.x = trajectory_point.path_point.point.x
                            point_pb.y = trajectory_point.path_point.point.y
                            point_pb.z = trajectory_point.path_point.point.z
                            point_pb.theta = trajectory_point.path_point.theta
                            point_pb.kappa = trajectory_point.path_point.kappa
                            point_pb.lane_id = trajectory_point.path_point.lane_id
                            point_pb.v = trajectory_point.v
                            point_pb.a = trajectory_point.a
                            point_pb.relative_time = trajectory_point.relative_time
                            points_pbs.append(point_pb)
                        traj_pb.path_point.extend(points_pbs)
                        traj_pbs.append(traj_pb)
                    ob_pb.prediction_trajectory.extend(traj_pbs)
                    obj_list.append(ob_pb)
                tr_pb.obstacle_info.extend(obj_list)
                preds.append(tr_pb)
            elif topic == '/inspvax':
                pb_loc_gnss = GnssPoint()
                pb_loc_gnss.stamp_secs = msg.header.stamp.secs
                pb_loc_gnss.stamp_nsecs = msg.header.stamp.nsecs
                pb_loc_gnss.timestamp = get_ts(msg.header.stamp.secs, msg.header.stamp.nsecs)
                pb_loc_gnss.latitude = msg.latitude
                pb_loc_gnss.longitude = msg.longitude
                pb_loc_gnss.elevation = msg.altitude
                gnss.append(pb_loc_gnss)
            elif topic == '/holo/VehicleInfoMagotan':
                veh_pb = VehicleFrame()
                veh_pb.stamp_secs = msg.timestamp.secs
                veh_pb.stamp_nsecs = msg.timestamp.nsecs
                veh_pb.timestamp = get_ts(veh_pb.stamp_secs, veh_pb.stamp_nsecs)
                veh_pb.gear_value = msg.gear
                veh_pb.vehicle_speed = msg.vehicle_speed * 3.6  # m/s -> km/h
                veh_pb.steering_angle = msg.steering_angle
                veh_pb.longitude_acc = msg.longitude_acc
                veh_pb.lateral_acc = msg.lateral_acc
                veh_pb.turn_left_light = msg.turn_left_light
                veh_pb.turn_right_light = msg.turn_right_light
                veh_pb.brake = msg.brake_torque
                veh_pb.autonomy_status = 0
                vehs.append(veh_pb)
            elif topic == '/localization/localization_info':
                lo_pb = LocalizationInfoFrame()
                lo_pb.timestamp = get_ts(msg.header.stamp.secs, msg.header.stamp.nsecs)
                lo_pb.stamp_secs = msg.header.stamp.secs
                lo_pb.stamp_nsecs = msg.header.stamp.nsecs
                lo_pb.pose_position_x = msg.pose.position.x
                lo_pb.pose_position_y = msg.pose.position.y
                lo_pb.pose_position_z = msg.pose.position.z
                lo_pb.pose_orientation_x = msg.pose.orientation.x
                lo_pb.pose_orientation_y = msg.pose.orientation.y
                lo_pb.pose_orientation_z = msg.pose.orientation.z
                lo_pb.pose_orientation_w = msg.pose.orientation.w
                lo_pb.pose_orientation_yaw = yaw_from_quaternions(
                    msg.pose.orientation.w, msg.pose.orientation.x,
                    msg.pose.orientation.y, msg.pose.orientation.z)
                lo_pb.velocity_linear = get_modulo(msg.pose.linear_velocity.x,
                                                   msg.pose.linear_velocity.y,
                                                   msg.pose.linear_velocity.z)
                lo_pb.velocity_angular = get_modulo(msg.pose.angular_velocity.x,
                                                    msg.pose.angular_velocity.y,
                                                    msg.pose.angular_velocity.z)
                lo_pb.acceleration_linear = get_modulo(msg.pose.linear_acceleration_vrf.x,
                                                       msg.pose.linear_acceleration_vrf.y,
                                                       msg.pose.linear_acceleration_vrf.z)
                lo_pb.acceleration_angular = get_modulo(msg.pose.angular_velocity_vrf.x,
                                                        msg.pose.angular_velocity_vrf.y,
                                                        msg.pose.angular_velocity_vrf.z)
                locs.append(lo_pb)
                ego_pose = Pose(msg.pose.orientation.w, msg.pose.orientation.x,
                                msg.pose.orientation.y, msg.pose.orientation.z,
                                msg.pose.position.x, msg.pose.position.y,
                                msg.pose.position.z)
            elif topic == '/perception/perception_obstacles':
                if ego_pose is None or pose_has_nan(ego_pose):
                    continue
                tr_pb = TrackedObjectFrame()
                tr_pb.timestamp = get_ts(msg.header.stamp.secs, msg.header.stamp.nsecs)
                tr_pb.stamp_secs = msg.header.stamp.secs
                tr_pb.stamp_nsecs = msg.header.stamp.nsecs
                obj_list = []
                for obstacle in msg.perception_obstacle:
                    ob_pb = Object()
                    ob_pb.id = obstacle.id
                    ob_pb.label = get_label(obstacle.type, obstacle.sub_type)
                    ob_pb.pose_position_x = obstacle.position.x
                    ob_pb.pose_position_y = obstacle.position.y
                    ob_pb.pose_position_z = obstacle.position.z
                    ob_pb.pose_orientation_x = 0
                    ob_pb.pose_orientation_y = 0
                    ob_pb.pose_orientation_z = math.sin(obstacle.theta / 2)
                    ob_pb.pose_orientation_w = math.cos(obstacle.theta / 2)
                    ob_pb.pose_orientation_yaw = obstacle.theta
                    ob_pb.dimensions_x = obstacle.length
                    ob_pb.dimensions_y = obstacle.width
                    ob_pb.dimensions_z = obstacle.height
                    ob_pb.speed_vector_linear_x = obstacle.velocity.x
                    ob_pb.speed_vector_linear_y = obstacle.velocity.y
                    ob_pb.speed_vector_linear_z = obstacle.velocity.z
                    # Transform the world-frame position into the ego/vehicle frame.
                    world_obj = np.transpose(np.array([[obstacle.position.x,
                                                        obstacle.position.y,
                                                        obstacle.position.z, 1]]))
                    world_ego_t = get_t(ego_pose.q0, ego_pose.q1, ego_pose.q2,
                                        ego_pose.q3, ego_pose.x, ego_pose.y, ego_pose.z)
                    try:
                        world_ego_invt = np.linalg.pinv(world_ego_t)
                    except Exception as err:
                        print('pinv failed:', world_ego_t)
                        raise err
                    vehicle_obj = world_ego_invt * world_obj
                    ob_pb.relative_position_x = float(vehicle_obj[0])
                    ob_pb.relative_position_y = float(vehicle_obj[1])
                    ob_pb.relative_position_z = float(vehicle_obj[2])
                    obj_list.append(ob_pb)
                tr_pb.objects.extend(obj_list)
                objs.append(tr_pb)

        print(f"lidar_num : {lidar_num}")
        print(f"image_num : {image_num}")
        print(f"radar_num : {radar_num}")

        folders = []
        dump_pb(output_dir, folders, routes, 'routing_routing_response_viz',
                'routing_path', RoutingFrames, 'routing_frame', 'route.pb')
        dump_pb(output_dir, folders, controls, 'holo_ControlCommand',
                'control', ControlCommand, 'command_frame', 'control.pb')
        dump_pb(output_dir, folders, plans, 'planning_trajectory',
                'planning_trajectory', PlanTrajectory, 'trajectory_info', 'planning.pb')
        dump_pb(output_dir, folders, preds, 'prediction_prediction_obstacles',
                'predicted_objects', PredictionObstacles, 'perception_obstacle',
                'predicted.pb')
        dump_pb(output_dir, folders, gnss, 'inspvax',
                'gnss', GnssPoints, 'gnss_points', 'gnss.pb')
        dump_pb(output_dir, folders, vehs, 'holo_VehicleInfoMagotan',
                'vehicle', VehicleInfo, 'vehicle_info', 'vehicle.pb')
        dump_pb(output_dir, folders, locs, 'localization_localization_info',
                'ego_tf', LocalizationInfo, 'localization_info', 'ego_tf.pb')
        dump_pb(output_dir, folders, objs, 'perception_perception_obstacles',
                'object_array_vision', TrackedObject, 'tracked_object',
                'object_array_vision.pb')

        for name in CAMERA_TOPICS:
            if has_data[name]:
                folders.append({'folder': name, 'sensor_type': 'camera'})
        for name in LIDAR_TOPICS:
            if has_data[name]:
                entry = {'folder': name, 'sensor_type': 'lidar'}
                if args.calibration_id:
                    entry['calibration_item_id'] = args.calibration_id
                folders.append(entry)
        for name in RADAR_TOPICS:
            if has_data[name]:
                folders.append({'folder': name, 'sensor_type': 'radar'})

        with open(os.path.join(output_dir, "opendata_to_platform.yaml"), 'w',
                  encoding='utf-8') as collect_file:
            yaml.safe_dump({'folders': folders}, collect_file)

    with open(os.path.join(os.getenv('output_dir'), '_SUCCESS'), 'w') as f:
        f.write("")
    os.system('chmod -R a+r ${output_dir}/*')


if __name__ == '__main__':
    parser = ArgumentParser()
    parser.add_argument('-i', '--input', help="input bag path",
                        default=os.getenv('rosbag_path'))
    parser.add_argument('-o', '--output', default=os.getenv('output_dir'),
                        help="result output directory, default to ./bags/")
    parser.add_argument('-ci', '--calibration_id', type=int)
    main(parser.parse_args())
```

'calibration_item_id': args.calibration_id}) else: folders.append({'folder': 'innoPtClound_B2', 'sensor_type': 'lidar'}) if has_radar_0: folders.append({'folder': 'radar_track_array_0', 'sensor_type': 'radar'}) if has_radar_3: folders.append({'folder': 'radar_track_array_3', 'sensor_type': 'radar'}) collect_yaml = {'folders': folders} with open(os.path.join(output_dir, "opendata_to_platform.yaml"), 'w', encoding='utf-8') as collect_file: yaml.safe_dump(collect_yaml, collect_file) with open(os.path.join(os.getenv('output_dir'), '_SUCCESS'), 'w') as f: f.write("") os.system('chmod -R a+r ${output_dir}/*') if __name__ == '__main__': parser = ArgumentParser() parser.add_argument('-i', '--input', help="input bag path", default=os.getenv('rosbag_path')) parser.add_argument('-o', '--output', default=os.getenv('output_dir'), help="result output directory, default to ./bags/") parser.add_argument('-ci', '--calibration_id', type=int) params = parser.parse_args() main(params)
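For reference, a minimal invocation of the exporter could look like the sketch below. The script name `bag_export.py` and all paths are placeholders; note that the `output_dir` environment variable must be set in any case, because the script reads it directly via `os.getenv('output_dir')` when writing the `_SUCCESS` marker and fixing permissions.

```bash
# Hypothetical usage sketch -- bag_export.py and the paths are assumed names.
export rosbag_path=/data/bags/demo.bag   # default value for --input
export output_dir=/data/export/demo     # read directly by the script; default for --output
mkdir -p "$output_dir"
python bag_export.py -i "$rosbag_path" -o "$output_dir" -ci 123

# Spot-check the header of one exported radar frame:
first_pcd=$(ls "$output_dir"/radar_track_array_0/*.pcd | head -n 1)
head -n 11 "$first_pcd"
```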

The C++ node below goes the other way: it loads a PCD file from disk, removes outliers with PCL's StatisticalOutlierRemoval filter, and republishes the result as a sensor_msgs/PointCloud2 on the `pcl_output` topic.

#include <ros/ros.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/io/pcd_io.h>
#include <iostream>
#include <vector>
#include <ctime>
#include <pcl/PCLPointCloud2.h>
#include <pcl/filters/statistical_outlier_removal.h>

// struct MyPoint { float X, Y, Z; };  // field names must match the PCD file exactly
// POINT_CLOUD_REGISTER_POINT_STRUCT(MyPoint, (float, X, X)(float, Y, Y)(float, Z, Z))

int main(int argc, char **argv)
{
    ros::init(argc, argv, "pcl_create");
    ros::NodeHandle nh;
    ros::Publisher pcl_pub = nh.advertise<sensor_msgs::PointCloud2>("pcl_output", 1);

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud3(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PCLPointCloud2 cloud1;
    sensor_msgs::PointCloud2 output;

    // pcl::io::loadPCDFile("/home/zjh/cloud_test/table_scene_lms400.pcd", cloud1);
    // pcl::io::loadPCDFile(file_path, cloud1);
    // pcl::io::loadPCDFile("/tmp/table_scene_lms400.pcd", cloud1);
    pcl::io::loadPCDFile("/tmp/costmap_obstacles.pcd", cloud1);
    pcl::fromPCLPointCloud2(cloud1, *cloud3);

    // Statistical outlier removal: each point is compared against its 50 nearest
    // neighbors; points whose mean neighbor distance lies more than 1.0 standard
    // deviation above the global mean are treated as outliers and removed.
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(cloud3);
    sor.setMeanK(50);             // number of neighbors used for the mean-distance estimate
    sor.setStddevMulThresh(1.0);  // standard-deviation multiplier threshold
    sor.filter(*cloud_filtered);  // result is stored in cloud_filtered

    std::cerr << "Cloud after filtering: " << std::endl;
    std::cerr << *cloud_filtered << std::endl;

    pcl::toROSMsg(*cloud_filtered, output);
    output.header.frame_id = "odom";

    ros::Rate loop_rate(1);
    while (ros::ok())
    {
        pcl_pub.publish(output);
        ros::spinOnce();
        loop_rate.sleep();
    }
    return 0;
}
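Assuming the node lives in a catkin package (the package name `pcl_create_pkg` below is made up), building and sanity-checking it could look like this; the package's `CMakeLists.txt` would need `roscpp`, `sensor_msgs`, `pcl_ros`, and `pcl_conversions` among its dependencies:

```bash
# Hypothetical build-and-run sketch; "pcl_create_pkg" is a placeholder package name.
cd ~/catkin_ws
catkin_make --pkg pcl_create_pkg
source devel/setup.bash
rosrun pcl_create_pkg pcl_create

# In a second terminal, confirm the filtered cloud is republished at ~1 Hz:
rostopic hz /pcl_output
rostopic echo /pcl_output -n 1 --noarr
```

In RViz, adding a PointCloud2 display on `/pcl_output` with the fixed frame set to `odom` shows the filtered cloud.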
