Article Catalog
- Summary of Classroom Questions and Answers for Synchronization Theory for Multiple Sensors_20210703
- 1. Is the calibration of multiple sensors a pairwise calibration or a joint calibration?
- 2. LiDAR and camera fusion: due to motion, especially rotation, a small time difference can cause a large fusion error. For example, for a small target such as a traffic cone, fusing the two kinds of data may produce an offset of half a body length or even a full body length. How should this be handled?
- 3. Is any data that is not timestamped thrown away? Or will it be stored for another use?
- 4. If there is no device such as ADU or Trigger box that can directly access the PPS signal, is it possible to set LiDAR as the master of PTP?
- 5. What basic knowledge is needed to teach space synchronization in the classroom?
- 6. Is Odom a wheeled odometer?
- 7. What is the significance of motion compensation?
- 8. More concerned about how the IMU removes useless information in the odometry fusion part. Linear acceleration accuracy is poor, so how is the IMU actually used?
- 9. Do the different sampling frequencies of different radars indicate differences in radar performance? For example, 13 Hz and 15 Hz
- 10. Why is a rolling shutter camera used for automated driving, and what are the advantages?
- 11. Specifically, how do you timestamp a camera, and how do you timestamp a radar?
- 12. Is time synchronization accuracy at the ms level sufficient in practice? In which cases is higher time synchronization accuracy required?
- 13. What does the class mean by resampling the ROI area from a telephoto camera to a medium- or short-focal-length camera for fusion? Why can't we just project it? What is the general practice for resampling?
- 14. After spatially synchronized calibration between the lidars when the vehicle is stationary, the attitude of the lidars changes when the vehicle is in motion. How is this situation resolved?
- 15. I would like to know how the PPS signal is frequency-multiplied and converted into multiple 20 Hz signal outputs.
- 16. After the PPS signal triggers the camera exposure, how is the GPRMC time timestamped to the camera image frame?
- 17. Looking at the schematic, the pulses output from the Trigger box are only given to multiple cameras, and there is no indication that the trigger signal is given to the LIDAR, so after the synchronization of the camera's time with the GPS time is completed, how is the LIDAR's time synchronized with the GPS time?
- 18. Looking at the "Vehicle-side Time Hard Synchronization Schematic" in Figure 5, does the time synchronization of the LIDAR and the camera require a combination of GPS+PPS and PTP to complete the synchronization?
- 19. Assume the radar runs at 10 Hz and the camera at 20 Hz, with data acquisition triggered at the same moment. If the camera drops its first frame, the first radar frame has no corresponding camera frame. How is camera frame loss handled?
- 20. Synchronization using the GPS+PPS method should result in a failure to synchronize in places with weak GPS signals, how is this situation handled?
- 21. Is the data from the various sensors in the hard-synchronized vehicle-side time schematic cached before it is collected and processed by the algorithm? If cached, should there be a delay?
- 22. Camera + millimeter wave radar fusion matching strategy?
- 29. What is the corresponding pre-processing that needs to be done for fusion when the radar sensing results are output to fusion?
- 30. How can vehicle-mounted LiDAR be time-synchronized with GPS?
- 31. On page 18 of the PowerPoint in the video, what is the point of the difference between the two images regarding the green and pink point clouds?
- 32. In multi-camera calibration, it is possible to fix the translation and calibrate only the rotation. Do you have details on this, or code?
- 33. For fusion with LiDAR, can the angle be measured with the LiDAR and the attitude adjusted through the IMU?
- 34. How is multi-LiDAR calibration done? That part just didn't click. And how is the main LiDAR coordinate system related to the vehicle body coordinate system by measuring the position of a target relative to the center of the vehicle's rear axle and in the LiDAR frame, and then converting the coordinates?
- 35. How are the timestamps of the sensors obtained?
- 36. At present, most on-board cameras only support a PPS-based external trigger synchronization mechanism, while LiDAR generally uses the IEEE 1588 (PTP) time synchronization mechanism.
- 37. The Continental 430 comes with SDA (self-calibration), which utilizes roadside point calibration.
- 38. 1) Is there an industry standard for the sensor calibration process? What is it? 2) What quantitative results should be achieved for calibration between sensors? 3) Is sensor calibration a process of manually writing rules? 4) What is the relationship between deep learning and sensor calibration?
- 39. May I ask what is the best way to project the laser sensing results to an image? It feels like this has something to do with multi-sensor calibration
- 40. With two different cameras there is bound to be some distortion and parallax; how is this eliminated when matching, or is it not? After calibrating the long-focus and short-focus cameras, how is the scaling of the picture determined? How do you achieve the match shown in the picture?
- 41. What are the traditional methods of doing target recognition with LiDAR?
- 42. The calibrated distance values between the LiDAR and the IMU or integrated navigation unit are not very accurate; is there a good way to handle this?
- 43. Is there any method or information recommended for online calibration?
- 10. Camera and Lidar synchronization practice
Summary of Classroom Questions and Answers for Synchronization Theory for Multiple Sensors_20210703
1. Is the calibration of multiple sensors a pairwise calibration or a joint calibration?
Answer: Multi-sensor calibration includes the pairwise case, and the pairwise calibrations between sensors can be combined into a joint calibration. In most cases a primary sensor should be selected on the vehicle; the other sensors establish calibration links to the primary sensor, and the primary sensor in turn establishes a link to the vehicle body.
2. LiDAR and camera fusion: due to motion, especially rotation, a small time difference can cause a large fusion error. For example, for a small target such as a traffic cone, fusing the two kinds of data may produce an offset of half a body length or even a full body length. How should this be handled?
Answer: Exactly for this reason, LiDAR needs motion compensation to remove the effect of motion distortion. (Camera imaging also shows motion blur; whether it needs to be handled depends on how much it affects the actual use case.) Whether you do result-level or data-level fusion of LiDAR and camera in practice, this situation must be taken into account. Usually you also consider the angle at which the LiDAR starts scanning, and trigger the camera exposure only when the scan reaches the appropriate angle.
In addition, LiDAR-camera fusion takes the influence of moving targets into account: velocity information can be used to distinguish stationary obstacles from moving ones, and after the two sensors are matched, the positions of moving obstacles are recovered. In practical applications, different fusion strategies may also be selected depending on the scene; for example, when turning, a camera-dominant fusion strategy can be chosen.
3. Is any data that is not timestamped thrown away? Or will it be stored for another use?
Answer: It is usually not thrown away immediately; the code keeps a cache queue, and historical data is deleted only when the cache is full.
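A minimal sketch of such a cache, assuming a simple timestamp-keyed queue (the class name, tolerance, and sizes are illustrative, not from the lecture):

```python
from collections import deque

class SensorCache:
    """Minimal bounded cache: unmatched frames are kept until the queue is full."""
    def __init__(self, maxlen=100):
        self.buffer = deque(maxlen=maxlen)  # oldest entries are dropped automatically

    def push(self, stamp, frame):
        self.buffer.append((stamp, frame))

    def pop_closest(self, query_stamp, tolerance=0.01):
        """Return the cached frame whose timestamp is nearest to query_stamp (within tolerance)."""
        if not self.buffer:
            return None
        stamp, frame = min(self.buffer, key=lambda item: abs(item[0] - query_stamp))
        return frame if abs(stamp - query_stamp) <= tolerance else None
```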
4. If there is no device such as ADU or Trigger box that can directly access the PPS signal, is it possible to set LiDAR as the master of PTP?
Answer: Usually GPS time serves as the PTP master; if even GPS is unavailable, NTP is chosen instead. The LiDAR is generally the device being synchronized, not the master.
5. What basic knowledge is needed to teach space synchronization in the classroom?
Answer: outer (cross) product, inner (dot) product, properties of rotation matrices, Euclidean transforms, Euler angles, quaternions, 2D-3D and 3D-3D registration, PnP, and more.
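As a small illustration of a few of these items, here is a hedged example (NumPy and SciPy are assumptions, not part of the class material) that converts a quaternion to a rotation matrix and Euler angles and applies a Euclidean transform:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Unit quaternion (x, y, z, w): 90 degree rotation about the z-axis
q = [0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]
rot = R.from_quat(q)                        # quaternion -> Rotation object
R_mat = rot.as_matrix()                     # 3x3 rotation matrix
euler = rot.as_euler("zyx", degrees=True)   # equivalent Euler angles

# Euclidean (rigid-body) transform: p' = R p + t
t = np.array([1.0, 0.0, 0.0])
p = np.array([1.0, 0.0, 0.0])
p_transformed = R_mat @ p + t
print(R_mat, euler, p_transformed)
```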
6. Is Odom a wheeled odometer?
Answer: Odom is more accurately called an odometer; a wheeled odometer is "wheel odom", but odometry is actually much broader in scope. An odometer provides two important quantities, namely pose (position and heading) and velocity (forward and steering speed). In the context of a car, it can refer specifically to a wheeled odometer.
7. What is the significance of motion compensation?
Answer: Take a mechanical LiDAR as an example and assume the target is stationary while the LiDAR moves as it scans. Because the LiDAR frame rate is 10 Hz, one frame spans 100 ms; at low speed this is acceptable, but under fast motion the point cloud a target produces within one frame may no longer reflect the target's real contour. In practice the target is often moving as well, so both the sensor's motion and the target's motion have to be compensated.
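A toy sketch of per-point motion compensation under a constant-velocity, constant-yaw-rate assumption (the function name and exact sign convention are illustrative; real pipelines interpolate full poses from odometry or an IMU):

```python
import numpy as np

def compensate_sweep(points, point_times, t_ref, v, omega_z):
    """
    Toy per-point motion compensation for one LiDAR sweep.

    points      : (N, 3) points in the sensor frame at their individual capture times
    point_times : (N,) per-point timestamps [s]
    t_ref       : time the whole frame should be expressed at (e.g. end of the sweep)
    v           : (3,) assumed constant linear velocity of the sensor [m/s]
    omega_z     : assumed constant yaw rate [rad/s]

    Each point is re-expressed in the sensor pose at t_ref via the relative motion
    between its capture time and t_ref (constant-velocity model, rotation about z).
    """
    v = np.asarray(v, dtype=float)
    out = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, point_times)):
        dt = t - t_ref                      # negative for points captured before t_ref
        yaw = omega_z * dt
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        out[i] = R @ p + v * dt             # pose(t_ref)^-1 * pose(t) applied to p
    return out
```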
8. More concerned about how the IMU removes useless information in the odometry fusion part. Linear acceleration accuracy is poor, so how is the IMU actually used?
Answer: Does "useless information" refer to IMU noise? If so, it can be characterized by computing the Allan variance from the sampled values. Of course there are quite a few other ways to denoise or to model IMU noise.
Linear acceleration accuracy is indeed poor; IMU parameterization generally focuses on the effect of linear acceleration on the angular velocity, so this too can be attributed to the noise model of the linear acceleration.
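A minimal, illustrative Allan-variance computation for one IMU axis (non-overlapping clusters; dedicated calibration tools do this more rigorously):

```python
import numpy as np

def allan_variance(samples, dt, max_clusters=100):
    """
    Simple non-overlapping Allan variance of a 1-D IMU signal (e.g. one gyro axis).

    samples : 1-D array of raw sensor samples (should be reasonably long)
    dt      : sample period in seconds
    Returns (taus, avar): averaging times and the corresponding Allan variance.
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    taus, avar = [], []
    # cluster sizes m spaced logarithmically between 1 and n // 9
    for m in np.unique(np.logspace(0, np.log10(max(n // 9, 2)), max_clusters).astype(int)):
        n_clusters = n // m
        # average the signal inside consecutive clusters of length m
        means = samples[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        # Allan variance: half the mean squared difference of successive cluster means
        avar.append(0.5 * np.mean(np.diff(means) ** 2))
        taus.append(m * dt)
    return np.array(taus), np.array(avar)
```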
9. Do the different sampling frequencies of different radars indicate differences in radar performance? For example, 13 Hz and 15 Hz
Answer: For millimeter-wave radar, the sampling frequency is actually related to the number of sampling points and the frequency resolution. For a given frequency resolution (the frequency resolution is roughly equivalent to the millimeter-wave radar's processing time), a higher sampling frequency means more sampling points.
In addition, radar performance is generally related to the frequency resolution and has little to do with the number of sampling points or the sampling frequency.
10. Why is a rolling shutter camera used for automated driving, and what are the advantages?
Answer: Mainly because it is cheaper. In terms of hardware, rolling shutter cameras use CMOS sensors while global shutter cameras use CCDs. When converting the electrical signal to a digital signal, a CCD reads out serially while CMOS reads out in parallel (well suited to integrated circuits), which makes CMOS faster and lower noise.
On the software side, current image algorithms can eliminate some of the shortcomings of CMOS (such as the jello/skew effect), so rolling shutter is more widely used. But if CCD production processes and costs come down, I personally feel that global shutter cameras will become more widely used.
11. Specifically, how do you timestamp a camera, and how do you timestamp a radar?
Answer: For cameras, it is recommended to record two timestamps if available, i.e. the trigger timestamp and the system imaging (receive) timestamp. In use, the trigger time and imaging time are used to estimate the time at the center of the exposure, which serves as the actual timestamp of the image.
As for radar, I am not sure what is meant. LiDAR carries its own timestamp for every point; millimeter-wave radar usually outputs periodically and the CAN signal carries its own timestamp.
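A tiny illustration of the exposure-center idea (purely a sketch; which fallback is appropriate depends on the camera's readout and transport delays):

```python
def exposure_center_stamp(trigger_stamp, imaging_stamp, exposure_time=None):
    """
    Estimate the image timestamp at the center of the exposure.

    If the exposure time is known, the center is trigger + exposure/2;
    otherwise fall back to the midpoint of trigger and imaging (receive) time.
    All values are in seconds.
    """
    if exposure_time is not None:
        return trigger_stamp + 0.5 * exposure_time
    return 0.5 * (trigger_stamp + imaging_stamp)

# e.g. triggered at t = 100.000 s, image received by the SOC at t = 100.030 s
print(exposure_center_stamp(100.000, 100.030))  # 100.015
```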
12. Is time synchronization accuracy at the ms level sufficient in practice? In which cases is higher time synchronization accuracy required?
Answer: ms-level accuracy is enough; by ms level I mean basically within 1 ms, and in practice close to 0 ms can be reached. As for the second question, you can work it out yourself: at 120 km/h, 1 ms corresponds to a few centimeters of vehicle motion, so judge for yourself whether that is sufficient. Unless traffic regulations raise the speed limit to 200 km/h, higher-precision timestamps are rarely needed.
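The arithmetic behind that estimate:

```python
# How far does the car move during 1 ms of time-sync error at 120 km/h?
v_kmh = 120
v_ms = v_kmh / 3.6          # ~33.3 m/s
dt = 1e-3                   # 1 ms
print(v_ms * dt)            # ~0.033 m, i.e. about 3 cm
```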
13. What does the class mean by resampling the ROI area from a telephoto camera to a medium- or short-focal-length camera for fusion? Why can't we just project it? What is the general practice for resampling?
Answer: A long focal length is used to detect distant scenes (small field of view), while a short focal length detects nearby scenes (large field of view). The so-called ROI fusion essentially normalizes certain regions of the short-focal-length image. Because the depths of field of the two cameras are not consistent, there is longitudinal parallax; direct projection would therefore produce a large error, so epipolar rectification is usually carried out before projection. Resampling can be understood as better aligning the co-visible area. The live class also briefly works through an example of projecting a 2D target box from the telephoto camera into the medium- or short-focal-length image.
14. After spatially synchronized calibration between the lidars when the vehicle is stationary, the attitude of the lidars changes when the vehicle is in motion. How is this situation resolved?
Answer: This is where online calibration (an online calibration module) needs to come into play.
15. I would like to know how the PPS signal is frequency-multiplied and converted into multiple 20 Hz signal outputs.
Answer: Generally there are dedicated frequency-multiplier circuits that multiply the input signal before output. As for the specific implementation principle, sorry, I do not specialize in circuits and have not looked into the internal details.
16. After the PPS signal triggers the camera exposure, how is the GPRMC time timestamped to the camera image frame?
Answer: The image frame itself carries no timestamp information; the moment at which the SOC side receives the image is taken as the timestamp of the image.
17. Looking at the schematic, the pulses output from the Trigger box go only to multiple cameras, and there is no indication that the trigger signal is given to the LiDAR. After the camera's time is synchronized with GPS time, how is the LiDAR's time synchronized with GPS time?
Answer: This is just a schematic diagram meant to illustrate the connection relationship between the components. For example, a camera that supports triggering must have a corresponding device to trigger it, while LiDAR supports both PTP and NTP modes.
18. Looking at the "Vehicle-side Time Hard Synchronization Schematic" in Figure 5, does the time synchronization of the LIDAR and the camera require a combination of GPS+PPS and PTP to complete the synchronization?
Answer: This is just a schematic diagram. In practice, GPS timing can be used and passed to the ADU, which then disciplines the other sensors through a PTP gateway. PTP is a protocol for high-precision time synchronization.
19. Assume the radar runs at 10 Hz and the camera at 20 Hz, with data acquisition triggered at the same moment. If the camera drops its first frame, the first radar frame has no corresponding camera frame. How is camera frame loss handled?
Answer: If the frame was lost right at power-up, so be it; just find the next one that matches. If it was lost at some point during motion, the image or target can be predicted using prior knowledge.
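A hedged sketch of nearest-timestamp matching with a tolerance gate, where an unmatched LiDAR frame marks a dropped camera frame (the numbers and tolerance are illustrative):

```python
def match_frames(lidar_stamps, camera_stamps, tol=0.025):
    """
    For each 10 Hz LiDAR frame, find the nearest 20 Hz camera frame within `tol` seconds.
    Returns a list of (lidar_stamp, camera_stamp or None); None marks a dropped camera
    frame, which would then be skipped or handled by prediction from previous frames.
    """
    matches = []
    for tl in lidar_stamps:
        candidates = [(abs(tc - tl), tc) for tc in camera_stamps]
        if not candidates:
            matches.append((tl, None))
            continue
        d, tc = min(candidates)
        matches.append((tl, tc if d <= tol else None))
    return matches

# Example: the first camera frame (t = 0.00) was dropped
lidar = [0.00, 0.10, 0.20]
camera = [0.05, 0.10, 0.15, 0.20]
print(match_frames(lidar, camera))   # [(0.0, None), (0.1, 0.1), (0.2, 0.2)]
```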
20. Synchronization using the GPS+PPS method should result in a failure to synchronize in places with weak GPS signals, how is this situation handled?
Answer: A short outage is not a problem, but over a long period it will affect synchronization. A commonly used scheme is: under normal conditions use PTP; if the GPS signal is poor or absent for a period of time, fall back to NTP. This may also be why NTP, although optional, is usually included.
21. Is the data from the various sensors in the hard-synchronized vehicle-side time schematic in the figure cached before it is collected and processed by the algorithm? If cached, should there be a delay?
Answer: Generally the timestamp of each sensor's data has already been recorded when it enters the ADU, so the data entering the ADU is processed directly; at present the ADU's compute power and transmission capability are strong enough that its cache delay does not need to be considered.
22. Camera + millimeter wave radar fusion matching strategy?
Answer: Common post-fusion strategies such as nearest-neighbor matching, Kalman-filter tracking fusion, JPDA, etc. are relatively mature. Image-based pre-fusion strategies can rely on the detected masks or bounding boxes in the image to reject erroneous millimeter-wave detections, and can also optimize the millimeter-wave data directly after a 2D-3D conversion.
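As one possible illustration of the post-fusion side, here is a sketch of gated Hungarian association between camera and radar targets in a common ground-plane frame (the positions, gate, and use of SciPy are assumptions, not the lecture's implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(camera_targets, radar_targets, gate=2.0):
    """
    Toy post-fusion association of camera and radar targets in the same (x, y) ground
    plane, using Hungarian assignment with Euclidean-distance cost and a gate [m].

    camera_targets, radar_targets : (N, 2) and (M, 2) arrays of positions
    Returns a list of (camera_index, radar_index) pairs that pass the gate.
    """
    cost = np.linalg.norm(camera_targets[:, None, :] - radar_targets[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]

cam = np.array([[10.0, 1.0], [25.0, -3.0]])
rad = np.array([[10.4, 0.8], [40.0, 5.0], [24.6, -2.7]])
print(associate(cam, rad))   # [(0, 0), (1, 2)]
```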
29. What is the corresponding pre-processing that needs to be done for fusion when the radar sensing results are output to fusion?
Answer: The fusion here refers to post-fusion, right? The first thing is to ensure the validity of the targets, i.e. there may be false detections. Then do data association well: not only association between the multiple sensors, but also association between consecutive frames of the same sensor.
30. How can vehicle-mounted LiDAR be time-synchronized with GPS?
Answer: LiDAR for autonomous driving must support clock synchronization with the host computer or other sensors, usually at the millisecond level. There are two common synchronization techniques: one is the GPS-based "PPS+NMEA" method; the other is the Ethernet-based IEEE 1588 clock synchronization protocol.
GPS can obtain high-precision clock signals from satellites and is therefore usually used as the clock source for the entire system. Conventional GPS units output a pulse-per-second (PPS) signal, accurate to the millisecond, and an NMEA sentence containing the year, month, day, hour, minute, and second; by combining PPS and NMEA, millisecond-level clock synchronization of the LiDAR or host computer can be achieved. As long as the LiDAR supports PPS+NMEA clock synchronization input over an RS232 interface, millisecond-level clock synchronization is possible.
The advantage of PPS+NMEA is that the protocol is simple and easy to implement; the disadvantage is that it relies on RS232 and is difficult to use to synchronize many devices. IEEE 1588 is an Ethernet-based high-precision clock synchronization protocol that enables sub-microsecond synchronization between multiple slave nodes (the sensors) and the master node (the host) on an Ethernet network, provided all nodes are interconnected via Ethernet and each node supports the 1588 protocol.
If the LiDAR supports the 1588 protocol, clock synchronization can be achieved using the following architecture:
1. The host computer achieves clock synchronization with GPS through PPS+NMEA;
2. Other nodes such as LIDAR achieve clock synchronization with the host through the 1588 protocol;
Besides the (x, y, z) coordinates, another important field of each point in the point cloud output by the LiDAR is its timestamp. Compared with a camera, LiDAR is a slow scanning device, and the timestamps of different points within one frame of the point cloud differ. For example, for a LiDAR running at 10 frames per second, each frame takes 100 ms, so there is a difference of about 100 ms between the first and the last point of a frame. When scanning fast-moving objects the raw point cloud is "deformed", similar to how a camera with too slow a shutter blurs and elongates a moving object; the per-point timestamps must be used to correct the point cloud in order to recover the true shape of the scanned object.
Once the LiDAR achieves high-precision clock synchronization with the host or GPS, it generates a timestamp for every laser point based on this clock. With these timestamps, a great deal of work, such as multi-sensor fusion, becomes much more convenient.
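A hedged sketch of the PPS+NMEA idea: the PPS edge marks the top of the second and the following GPRMC sentence names which second it was, giving a mapping from the host clock to GPS/UTC time (the function and example sentence are illustrative):

```python
from datetime import datetime, timezone

def stamp_from_gprmc(nmea_sentence, pps_host_time):
    """
    Toy PPS+NMEA time sync: the PPS edge marks the exact top of the second, and the
    following GPRMC sentence tells us *which* second it was. The returned offset lets
    the host map its local clock onto GPS/UTC time.

    nmea_sentence : e.g. "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
    pps_host_time : host clock value captured at the PPS rising edge [s]
    """
    fields = nmea_sentence.split(",")
    hhmmss, ddmmyy = fields[1][:6], fields[9]        # UTC time and date fields of GPRMC
    utc = datetime.strptime(ddmmyy + hhmmss, "%d%m%y%H%M%S").replace(tzinfo=timezone.utc)
    # host_time -> UTC offset; sensor data stamped with the host clock can later be
    # converted to GPS/UTC time via this offset
    offset = utc.timestamp() - pps_host_time
    return utc, offset
```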
31. On page 18 of the PowerPoint in the video, what is the point of the difference between the two images regarding the green and pink point clouds?
Answer: The purpose is to show that the linear assumption on position error is more reasonable than the linear assumption on position. As shown in the figure, the left image is the result of removing motion distortion from the whole point cloud using laser odometry and ICP, with the LiDAR moving from left to right. The green and pink points in the right half of the point cloud basically overlap, but zooming in on that region reveals that the pink point cloud is slightly ahead of the green point cloud while the shapes remain similar; this is suspected to be related to the IMU data not being calibrated. (The IMU's calibration manual specifies a bias, analogous to a camera's intrinsics; "the IMU data not being calibrated" here means the bias was not taken into account. The bias can be interpreted as an error, and in the figure the right half is indeed shifted by a constant value overall.)
32. In multi-camera calibration, it is possible to fix the translation and calibrate only the rotation. Do you have details on this, or code?
Answer:
33. For fusion with LiDAR, can the angle be measured with the LiDAR and the attitude adjusted through the IMU?
Answer: It is certainly possible, haha; the LiDAR gives an accurate target position, so the derived speed will be accurate as well. What distance range do you need, haha? The coverage distance of the LiDAR could be an issue.
34. How is multi-LiDAR calibration done? That part just didn't click. And how is the main LiDAR coordinate system related to the vehicle body coordinate system by measuring the position of a target relative to the center of the vehicle's rear axle and in the LiDAR frame, and then converting the coordinates?
Answer: A sufficient number of co-vision area points will do, otherwise you can use a calibration booth.
35. How are the timestamps of the sensors obtained?
Answer: The LiDAR point cloud carries its own timestamps; the millimeter-wave radar timestamp can be obtained from the CAN data; for the camera, the timestamp is usually applied after the image is received by the SOC, industrial PC, or other host, or the work can be done in the driver layer, which can reach nearly 0 ms error.
36. At present, most on-board cameras only support a PPS-based external trigger synchronization mechanism, while LiDAR generally uses the IEEE 1588 (PTP) time synchronization mechanism.
37. The Continental 430 comes with SDA (self-calibration), which utilizes roadside point calibration.
38. 1) Is there an industry standard for the sensor calibration process? What is it? 2) What quantitative results should be achieved for calibration between sensors? 3) Is sensor calibration a process of manually writing rules? 4) What is the relationship between deep learning and sensor calibration?
Answer: First question: there is no industry standard; calibration can be considered each company's own "specialty".
Second question: usually the reprojection error can be used as a metric; there is also a way of evaluating calibration results based on DS (Dempster-Shafer) evidence theory.
Third: each self-driving company has its own way of calibrating and the rules are not public, but the most basic principles are similar.
Fourth: calibration can use deep learning methods, such as CalibNet; as for the relationship, once you grasp the idea of calibration you can understand how deep learning methods approach it.
39. May I ask what is the best way to project the laser sensing results to an image? It feels like this has something to do with multi-sensor calibration
Answer: With the camera intrinsics and the camera-LiDAR extrinsics calibrated, the sensing results can be projected directly onto the image.
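A minimal sketch of that projection, assuming the intrinsics K and the LiDAR-to-camera extrinsics (R, t) are already calibrated (lens distortion is ignored here for brevity):

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R_cl, t_cl):
    """
    Project LiDAR points into the image using the camera intrinsics K (3x3) and
    the LiDAR-to-camera extrinsics: p_cam = R_cl @ p_lidar + t_cl.

    points_lidar : (N, 3) points in the LiDAR frame
    Returns (M, 2) pixel coordinates of the points that lie in front of the camera.
    """
    p_cam = points_lidar @ R_cl.T + t_cl          # transform into the camera frame
    in_front = p_cam[:, 2] > 0.1                  # keep points in front of the camera
    p_cam = p_cam[in_front]
    uvw = p_cam @ K.T                             # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv
```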
40. With two different cameras there is bound to be some distortion and parallax; how is this eliminated when matching, or is it not? After calibrating the long-focus and short-focus cameras, how is the scaling of the picture determined? How do you achieve the match shown in the picture?
Answer: There are translations and rotations between the two cameras, haha, and distortion must be eliminated. The first thing to eliminate is the longitudinal parallax, i.e. resampling so that the different focal lengths become effectively the same; after that it works the same as a stereo (binocular) setup.
41. What are the traditional methods of doing target recognition with LiDAR?
Answer: Target recognition? I understand this as target detection, haha, not something like face recognition. Point cloud detection is generally done with segmentation algorithms; traditional methods include edge-based point cloud segmentation, region-based point cloud segmentation, and model-based point cloud segmentation (the common RANSAC belongs to this last class, as does line/plane extraction). If you insist on "recognition", then it comes down to feature distances.
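A small sketch of the model-based branch: a plain RANSAC ground-plane fit of the kind used to separate ground from obstacle points before clustering (parameters are illustrative):

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, threshold=0.2, seed=0):
    """
    Minimal RANSAC plane fit, the classic model-based segmentation step used to
    separate the ground from obstacle points before clustering/detection.

    points    : (N, 3) point cloud
    threshold : inlier distance to the plane [m]
    Returns (plane (a, b, c, d) with ax+by+cz+d = 0, boolean inlier mask).
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-6:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (*normal, d)
    return best_plane, best_inliers
```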
42. The calibrated distance values between the LiDAR and the IMU or integrated navigation unit are not very accurate; is there a good way to handle this?
Answer: Inaccurate extrinsics between the LiDAR and IMU can be corrected with the following methods:
Based on the continuous-time states of the LiDAR and IMU, estimate the relative pose between the two: form the residual between the angular velocities implied by the LiDAR's pose sequence over the historical time window and the IMU measurements at the corresponding moments, and then solve the optimization that minimizes this residual;
Or use point cloud matching: extract key objects in space through point cloud segmentation, ground filtering, and key-object extraction, then use 3D point cloud matching to turn the joint calibration of the coordinate systems into a point cloud registration problem; with ICP or other algorithms, the relative transform between the LiDAR and IMU coordinate systems can be solved.
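A simplified sketch of the first approach, reduced to the rotation part: align the angular velocities derived from the LiDAR pose sequence with the IMU gyro measurements and solve Wahba's problem in closed form (this ignores time offset, bias, and translation, which a full calibration would also estimate):

```python
import numpy as np

def estimate_extrinsic_rotation(omega_lidar, omega_imu):
    """
    Given angular velocities derived from the LiDAR pose sequence (omega_lidar, (N, 3),
    in the LiDAR frame) and the IMU gyro measurements at the corresponding moments
    (omega_imu, (N, 3), in the IMU frame), find the rotation R minimizing
    || omega_lidar - R @ omega_imu || over all samples (Wahba's problem, solved via SVD).
    """
    H = omega_imu.T @ omega_lidar                  # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # enforce a proper rotation
    return Vt.T @ S @ U.T                          # R such that omega_lidar ~ R @ omega_imu
```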
43. Is there any method or information recommended for online calibration?
10. Camera and Lidar synchronization practice
Data processing for time synchronization
NDT was used for the synchronization, which is a better approach; IMU-based synchronization was also mentioned.