External Synchronization of Intel RealSense Depth Cameras (D435 and D435i)

This article introduces the new Genlock feature of the Intel RealSense series of depth cameras, which allows users to trigger depth-frame capture at arbitrary times or frequencies, supports synchronization to external signal sources, and improves the flexibility of multi-camera operation. It covers the principle of operation, connection setup, application scenarios, and the programming interface, as well as the differences from existing synchronization modes.


Authors: Anders Grunnet-Jepsen, Aki Takagi, John Sweetser, Tri Khuong, Dave Tong
Rev 1.3

1. Introduction

In this whitepaper we introduce a new Genlock function for all Intel® RealSense™ depth cameras that are based on Global Shutter imagers. This currently includes the D435 and D435i cameras. “Genlock”, or generator locking, describes the technique whereby one or more video streams can be exactly synchronized in time to an external trigger. With this feature, depth frames can be triggered from an external source to capture at almost arbitrary times or frequencies, within an allowable time and frequency window. This includes being able to trigger at irregular intervals as well as offering the option to start a specified burst of frames with each single trigger.

In a previous whitepaper (Multi-Camera configurations - D400 Series Stereo Cameras) we covered a similar capability, in which one Intel RealSense camera could act as the Master and others as Slaves, hardware synchronized via a connecting cable. This was extremely helpful for multi-camera operations that required simultaneous capture from multiple cameras. However, the limitation of that approach was that only discrete pre-defined frequencies could be used, such as 6, 15, 30, 60, and 90 frames per second. Moreover, the allowable frequency window for the synchronization trigger signal was so strict as to make it nearly infeasible to use an external signal generator as the source. The new Genlock mode lifts some of these restrictions but adds others, as detailed below. Appendix A provides a table with a more detailed comparison of the different synchronization techniques.
We note that this Genlock mode can be considered a superset of PPS (Pulse-Per-Second) operation, which describes signals commonly used in GPS, robotics, and frequency standards for precision synchronization of devices or time clocks. However, care must be taken to adjust voltage levels, as they may differ: PPS signals are usually 5V TTL, while Genlock for Intel RealSense cameras requires 1.8V signals.

2. Principles of Operation

We start by defining the characteristics of the external HW SYNC trigger: inputs are high impedance, the trigger voltage level is 1.8 V, and the pulse width should be wider than 100 microseconds. The Genlocked camera triggers a frame on the rising edge of the signal.
When Genlock mode is selected, the camera will wait indefinitely for a trigger. Once the trigger arrives, exposure will start, as shown in Figure 1. This detail is important when temporally synchronizing with events from other cameras or devices, as there is up to one frame of delay between trigger and readout. Since the stereo imagers are global shutter, the sensor accumulates charge during the exposure time and reads it out when done. The exposure time can either be set manually or be determined inside the device by its auto-exposure algorithm. However, note that auto-exposure is based on previous frame brightness and requires a few frames of data to stabilize. This means that auto-exposure will only respond at the trigger rate, and it should therefore only be used if only minor lighting changes are expected between triggers. A manual exposure can be set through the SDK, as sketched after Figure 1.

Figure 1. Timing diagrams for trigger and camera frame.
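A minimal sketch (not from the whitepaper) of fixing the exposure manually with the pyrealsense2 Python bindings; the 8500 µs value and first-device indexing are illustrative assumptions:

```python
import pyrealsense2 as rs

ctx = rs.context()
depth_sensor = ctx.query_devices()[0].first_depth_sensor()  # first connected camera

# Turn off auto-exposure so brightness cannot drift between sparse triggers.
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)

# Fixed manual exposure in microseconds; 8500 us is an arbitrary example value.
depth_sensor.set_option(rs.option.exposure, 8500)
```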

When streaming in Genlock mode, the camera can be configured during initialization to any of the normal input resolutions and frame rates. Specifically, the “native” frame rate of the camera can be set to 6, 15, 30, 60, or 90 fps. This native frame rate affects two significant aspects:
First, it does not set the trigger rate, but it does set an upper limit on the frame rate that can be supported by an external trigger. When sending a trigger pulse train, the trigger signal cannot be repeated faster than HALF the native camera mode frame rate, as shown in the table below. In other words, once a trigger is received the camera will be blind to any other trigger signals for 2x the frame time of the “native mode” of the camera. This means that the trigger frequency can be any value inside the allowable range in Table 1.

Table 1: The allowed trigger frequencies as a function of the camera mode. (The entries below are reconstructed from the half-native-rate rule stated above.)

  Native frame rate | Maximum trigger frequency
  6 fps             | 3 Hz
  15 fps            | 7.5 Hz
  30 fps            | 15 Hz
  60 fps            | 30 Hz
  90 fps            | 45 Hz

Second, when Genlock mode is selected, there is the option to set a “burst count” from 1 to 255 frames. Once a trigger is received, the specified number of frames will be burst at the native frame rate. For example, if the native camera frame rate is 90fps and the burst count is set to 3, then each time a trigger is received the camera will burst 3 frames at 90fps (i.e. with 11.1ms separation). The camera will then wait for the next trigger, as shown in Figure 2. This means it is possible to approach the full frame rate, in this case 90fps, with a 1Hz trigger: for example, by setting the native camera mode to 90 fps and the burst count to 89 while triggering at 1Hz. A small feasibility check for these constraints is sketched after Figure 2.

Figure 2. Timing diagrams for multiple triggers and camera frames. The frames will be burst at the native camera frame rate. The burst count can be specified from 1-255.
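Based only on the two constraints stated above (the half-rate rule, and the burst needing to finish before the next trigger), a hypothetical helper, not part of the SDK, can check whether a proposed configuration is feasible:

```python
def genlock_trigger_ok(native_fps: float, burst_count: int, trigger_hz: float) -> bool:
    """Check a proposed trigger rate against the native mode and burst count."""
    # Rule 1: the trigger may not repeat faster than half the native frame rate.
    if trigger_hz > native_fps / 2.0:
        return False
    # Rule 2: the burst (burst_count frames at the native rate) must complete
    # before the next trigger arrives.
    return burst_count * (1.0 / native_fps) <= 1.0 / trigger_hz

# Example from the text: 89 frames at 90 fps fit within a 1 Hz trigger period.
assert genlock_trigger_ok(native_fps=90, burst_count=89, trigger_hz=1.0)
```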

The next question is whether an Intel RealSense camera can act as the external trigger for another Intel RealSense camera. Yes, this is possible: one Intel RealSense camera can be placed into “Master” mode, while all other cameras are placed into “Genlock” mode. However, the same frame rate restrictions apply as outlined in Table 1. So, in order to have all cameras stream at 30fps, for example, the Master must be set to 30fps, and all Genlock cameras must be natively set to the 90fps mode with a burst count of 1. The master camera will then deliver the trigger at 30fps, and all Genlock cameras will trigger in sync with it. This does limit the maximum exposure time of the slave cameras to 1/90 seconds instead of 1/30 seconds. A configuration sketch follows below.
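A minimal pyrealsense2 sketch of this configuration, under stated assumptions: the first enumerated device is the one wired as the trigger source, the Genlock mode value comes from Section 5 (mode = burst count + 3), and the MASTER value of 1 comes from the earlier multi-camera whitepaper:

```python
import pyrealsense2 as rs

ctx = rs.context()
devices = list(ctx.query_devices())

# Assumption: the first enumerated device is the one wired as the trigger source.
master = devices[0].first_depth_sensor()
master.set_option(rs.option.inter_cam_sync_mode, 1)  # 1 = MASTER

# All remaining devices become Genlock slaves with a burst count of 1
# (mode = burst count + 3, so mode 4). Per Table 1, their native mode must be
# 90 fps to accept a 30 fps trigger; set the options before streaming starts.
for dev in devices[1:]:
    dev.first_depth_sensor().set_option(rs.option.inter_cam_sync_mode, 4)
```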

There are several additional restrictions we need to address.

  1. Global Shutter: The Genlock feature only works on Intel RealSense cameras that are based on Global Shutter stereo imagers. This means that it currently works on the D435 and D435i, but not on D415.
  2. Color: For the D435/D435i the color imager is rolling shutter and it is not hardware synced to the depth. This is an HW limitation, so the color sensor will NOT be Genlocked, but will instead be free-running at the frame rate specified for that imager.
  3. Laser Projector: When Genlock mode is activated, the laser projector will switch into “always-on” mode. This means that the projector will not pulse at the frame rate but will automatically be set to be on continuously.
  4. Auto-exposure: Auto-exposure works but responds in proportion to the camera frame rate, so it is not advisable to use auto-exposure when the frame rate is very low and lighting conditions may change between triggers.
  5. Firmware and SDK: This new feature is supported in FW 5.12.4.0 and later. It requires Intel RealSense SDK 2.34 or later.

3. Camera Connections

In this section we provide more details on how to connect a signal generator to a camera. Figure 3 shows a picture of the D435. It has a 9-pin connector that is hidden by a small magnetic latch. The sync signal needs to be connected to pin 5 (SYNC). Pin 9 is Ground.

To connect multiple cameras, you can use a daisy-chain or a star-configuration. For connections longer than 3m, some care must be taken with ESD protection, noise filtering, and shielded twisted-pair cables, as well as connector termination (resistors and capacitors), as shown in Figure 4. The shield can be connected to ground (pin 9) at the master camera connector and left unconnected at other connectors.

Figure 4. Connecting a MASTER camera to multiple GENLOCKed cameras. The proposed wiring helps mitigate electrical noise.

The connector has manufacturer part number ASSHSSH28K152, and the housing has part number SHR-09V-S. The manufacturer's web site is www.jst-mfg.com. Figure 5 shows an example of a D435 camera wired to a very simple computer-controlled signal generator: an Arduino Micro with a level-shifter to bring the sync signal to 1.8V.

In our example, it was possible to control multiple cameras simultaneously. Alternatively, one can also create a signal generator that individually controls multiple output sync signals, with control over the frequency as well as the phase (time delay) between each trigger. The next section shows why this is useful.

4. Genlock usages

There are numerous reasons why one might be interested in this Genlock operation, and we will address the most common ones here:

  1. Unique frequencies: As stated, Intel RealSense cameras currently support only a few predefined “native” frequencies, derived from an internal crystal clock. However, some applications may require other frequencies. For example, filmmaking applications call for 24fps, or sometimes 23.98 or 25fps. Other users have expressed interest in very slow frame rates that are not currently supported.
  2. Arbitrary times: For Genlock, no regular repetition rate is needed. The external source could be erratic or random in time, and the capture would still work and be synced exactly to the stimulus trigger. For example, it could be synced to a human heartbeat or a door-entry pressure pad.
  3. Externally controlled Sync: Probably the most common need for Genlock is when you need to synchronize multiple sources. For example, users may have a separate RGB camera that they want to be the master, and they want Intel RealSense cameras to exactly synchronize to its capture. This is often needed for applications like 3D scene reconstruction which may use a very high-resolution color camera and an Intel RealSense depth camera.
  4. Connecting multiple cameras: A very common application of Genlock is to link cameras together so they are exactly HW synchronized and all fire at the same time. While it is certainly possible to use free-running cameras and software-synchronize by aligning frames according to time-stamps and frame counters, this will only guarantee synchronization to within a frame time. To capture scenes in motion, much finer synchronization is normally needed. One example use-case is to have multiple cameras pointing in different directions away from each other (i.e. with non-overlapping views) in order to create a single synchronized point-cloud capture. Another common usage is to scan objects, people, or scenes by having multiple cameras arrayed in a circle around a scene, all looking inwards to capture it from different angles. One great advantage of Intel RealSense depth cameras is that they do not interfere with each other, so they can all point at the same scene and capture depth. In fact, Genlock mode automatically sets the projector into CW (continuous wave) mode, so it is on all the time. For more details on multi-camera usages we encourage the user to read the white paper entitled "Using the Intel RealSense Depth Camera 4xx in Multi-Camera configuration." While that paper details a different frame synchronization mode, it also covers in more detail the many other considerations involved in connecting many Intel RealSense cameras at the same time, including power, bandwidth, and CPU limitations.
  5. Capture high-speed motion: It is possible to create a synthetic high-speed “bullet-time” camera array through time-staggered triggering. Assuming the scene is bright enough to require, for example, <1ms exposure for an outdoor scene, it is possible to trigger multiple cameras to fire staggered with, say, 1ms delays. So, if 10 cameras are arrayed to capture the same scene at 90fps each, with a 1.1ms delay between each camera trigger, it is possible to capture motion at an effective 900fps. In fact, with the newly released 300fps mode, the same camera configuration would capture at 3000fps. See our white paper on high-speed capture here.
  6. Time-Lapse Photography: There is now the ability to capture at extremely slow frame rates, which could be used for stop-motion, for observation of very slowly changing scenes, or simply to reduce the computational load on a CPU platform. When using this mode, it is of course necessary to make sure the timeout for the SDK “wait for frame” call is set longer than the expected time between triggers. We have verified this works for inter-trigger times of 33 seconds; see the sketch after this list.
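A minimal pyrealsense2 sketch of widening that timeout, assuming the Genlock mode has already been set on the sensor; the 848x480 depth stream and the 60-second timeout are illustrative choices:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 90)  # native 90 fps mode
pipeline.start(cfg)

# The default 5000 ms timeout would raise an error for a trigger arriving only
# every 33 s, so widen it beyond the expected inter-trigger time.
frames = pipeline.wait_for_frames(60000)  # timeout in milliseconds
depth = frames.get_depth_frame()
```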

5. Programming Camera Sync

To get started using this feature, we have included it in the Intel RealSense Viewer 2.34+. It is an option that must be selected BEFORE streaming is started. The feature is called “Inter Cam Sync Mode” and uses modes 4 through 258. Mode 4 will trigger a single frame capture with each trigger, mode 5 will capture 2 frames, and so on (i.e., burst count = mode − 3).

Accessing the feature via the SDK is equally simple: use the rs2_set_option call with the RS2_OPTION_INTER_CAM_SYNC_MODE option, as sketched below.

Again, we emphasize that the command should be sent to the device BEFORE streaming is started, or the mode will not take effect until streaming has been stopped and restarted. To read back the INTER_CAM_SYNC_MODE value, use rs2_get_option.
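A minimal sketch using the pyrealsense2 Python bindings, assuming a single connected camera; the stream profile is an arbitrary illustration:

```python
import pyrealsense2 as rs

ctx = rs.context()
sensor = ctx.query_devices()[0].first_depth_sensor()

# Select Genlock with a burst of 1 frame per trigger (mode = burst count + 3).
# This must be done BEFORE streaming starts.
sensor.set_option(rs.option.inter_cam_sync_mode, 4)

# Read the value back to confirm it took effect.
print("inter_cam_sync_mode =", sensor.get_option(rs.option.inter_cam_sync_mode))

# Only now start streaming.
pipeline = rs.pipeline(ctx)
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 90)
pipeline.start(cfg)
```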

 

6. Conclusion

The ability to synchronize and slave an Intel RealSense depth camera to an almost arbitrary external input signal has been demonstrated. This feature can be an enabler for applications that require exact hardware time synchronization to an external RGB master camera, or wherever an external sync signal is desired to control the exact frame rate and capture time electronically. This can be useful for numerous usages, ranging from ultra-slow stop-motion applications to ultra-fast bullet-time 3D captures with multiple cameras.

Appendix A

There are two distinct HW sync slave modes that can be selected in the SDK using the rs2_set_option command with RS2_OPTION_INTER_CAM_SYNC_MODE.

Setting the option value to 2 or 3 will set the depth camera to SLAVE or FULL SLAVE mode, respectively. The only difference between these two modes is that FULL SLAVE mode also synchronizes the color camera.
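For example, continuing the hypothetical pyrealsense2 sketch from Section 5:

```python
import pyrealsense2 as rs

sensor = rs.context().query_devices()[0].first_depth_sensor()

# 2 = SLAVE (depth only); 3 = FULL SLAVE (also synchronizes the color camera).
sensor.set_option(rs.option.inter_cam_sync_mode, 3)
```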

The Genlock Slave mode is selected using value 4 (and above). In the table below we explain some of the differences in more detail.

 
