
Reconfigurable optical time delay array for 3D lidar scene projector


Abstract

The 3D lidar scene projector (LSP) plays an important role in hardware-in-the-loop (HIL) simulation for autonomous driving systems (ADS). It generates a simulated 3D lidar scene in the laboratory by producing a 2D array of optical time delay signals. The reconfigurable optical time delay array (ROTDA) is crucial for the LSP. However, current ROTDA solutions cannot support a LSP with a spatial resolution larger than 10×10. In this paper, we propose a novel ROTDA design based on the time slicing method. The optical signals with the same time delay but different spatial coordinates are treated as one time slice. Different time slices are superimposed into a composite image by a microlens-array-based imaging system to obtain a 3D lidar scene, and a spatial light modulator (SLM) is utilized to configure the time delay of each lidar scene pixel. We developed a ROTDA prototype with 64×64 pixels, in which each pixel can be reconfigured with up to 180 different time delays in one frame. The time delay resolution is 1 ns, the maximum time delay is 5000 s, and the 3D frame rate is 20 Hz. The prototype can generate a continuous lidar scene with a distance span of 27 m, and it can also generate up to 8 short scenes that are separated from each other along the lidar observation direction, each covering a distance span of 3 m or 3.75 m. The design method proposed in this paper can also be applied to other occasions that demand a large number of time delay generators.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Over the past decade, imaging lidar has attracted considerable attention as an important sensor for autonomous vehicles [1–4]. Generally, a hardware-in-the-loop (HIL) simulation needs to be performed before the vehicles go on the road to test the dynamic characteristics and safety performance of the autonomous driving systems (ADS) [5–10]. The block diagram of a typical HIL simulation system for an ADS with imaging lidar as the perception unit is shown in Fig. 1.

Fig. 1. Block diagram of a HIL simulation system for ADS.

The simulation computer is the master control of the HIL simulation system. It runs the dynamics model of the ADS and other objects in the driving scene. The ADS consists of a perception unit, a decision unit and an execution unit. The imaging lidar is an important part of the perception unit that senses the 3D information of the driving scene. The decision unit runs algorithms to process the scene information and decides what motions the execution unit should take. The ADS then sends the feedback data to the simulation computer to close the loop.

The LSP is a kind of simulator used for testing the imaging lidar of an ADS. Generally, a LSP consists of a scene generation computer, a reconfigurable optical time delay array (ROTDA), and a projection lens [11–14]. The scene generation computer first generates a digital 3D scene according to the target position data from the simulation computer. The ROTDA then converts the digital 3D scene into an optical 3D scene by generating a 2D array of optical time delay signals with different delays relative to the trigger signal. The optical 3D scene is then projected into the imaging lidar by the projection lens. The ROTDA is a core component of a LSP. The three most important parameters of the ROTDA are delay range, delay resolution, and spatial resolution, which respectively determine the distance range, distance resolution, and spatial resolution of the LSP.

Current ROTDA solutions for LSP are mostly based on electronic delay systems. Multiple electrical delay signals are generated first and then used to stimulate a laser diode array to emit an array of optical delay signals. The most commonly used electronic delay system is the FPGA delay board [11,12]. The delay of each signal can be flexibly configured using FPGA boards. However, it is difficult to generate a large number of delay signals at one time since a single FPGA delay board has relatively limited output channels (usually fewer than 20 channels). Dozens of FPGA boards have to be combined together to form a large array, which makes the system very bulky and costly. In addition, synchronization between dozens of FPGA boards also brings challenges. To our knowledge, the largest ROTDA based on FPGA delay boards is only 10×10 at present [12].

With the increasing spatial resolution of imaging lidar, future LSPs should be able to generate 3D lidar scenes with matching spatial resolutions, such as 32×32, 64×64, or even larger ones. However, current ROTDA solutions cannot support a LSP with a spatial resolution larger than 10×10. To solve this problem, we propose a novel ROTDA design based on the time slicing method. A ROTDA prototype with an array size of 64×64 was developed, and an experimental demonstration of the prototype was performed. The ROTDA design proposed in this paper also has the potential to expand to even larger arrays, which can meet the coming needs of LSP.

2. Method

2.1 Time matrix

Generally, a lidar scene is modeled by a point cloud [15]: P(x, y, r), where (x, y) is the pixel coordinates of the detector array, and r is the distance from the lidar to the target. A point cloud can be converted into a time matrix T(x, y, t) according to the ranging equation:

$$r = c \cdot t/2$$
where c is the speed of light and t is the time delay of the laser return signal at pixel (x, y). Since most of this paper discusses how to process the time delay signals, the time matrix is used as the lidar scene model in this paper instead of the point cloud.
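As a minimal sketch of this conversion (the array shape and the example range value below are purely illustrative, not data from the paper), a range map can be turned into a time matrix with Eq. (1):

```python
import numpy as np

C = 3e8  # speed of light (m/s), as used in the paper

def point_cloud_to_time_matrix(r_map):
    """Convert a range map r(x, y) in metres into a time matrix t(x, y)
    in seconds using the ranging equation r = c * t / 2."""
    return 2.0 * np.asarray(r_map, dtype=float) / C

# Hypothetical example: a 64x64 range map of a flat target 40 m away
r_map = np.full((64, 64), 40.0)
t_map = point_cloud_to_time_matrix(r_map)
print(t_map[0, 0])  # ~2.67e-7 s, i.e. a round-trip delay of about 267 ns
```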

Traditional ROTDA solutions provide a delay generator for every pixel of the time matrix, which makes it difficult to integrate a large array (>10×10). However, we noticed that there are always many pixels with the same delay in a time matrix. The total number of delay generators can be greatly reduced if the pixels with the same delay share the same delay generator. Based on this consideration, we proposed a time slicing method to deal with the time matrix.

2.2 Time slicing of time matrix: 3D to 2D

The time matrix is a 3D data matrix containing a time delay dimension, which is difficult to display directly by optical means. Here we introduce a time slicing method that first converts a 3D time matrix into a series of 2D time-slice images and then uses a SLM to display all the time-slice images on a 2D plane. For example, suppose there is a pyramid-shaped target in the lidar scene, as shown in Fig. 2. The time matrix of the target is sliced into a series of time-slice images at different time delay values. The activated pixels (the small white dots in Fig. 2) in each time-slice image have the same t value but different (x, y) coordinates.

Fig. 2. Schematic diagram of the time slicing operation at data level.

The (x, y) coordinates of all the activated pixels that are located on the i-th time-slice image are extracted as a coordinate set and denoted as Ti (x, y):

$$\begin{array}{l} {T_i}(x,y) = \{ (x,y)\} ,\\ (x,y) \in T(x,y,t){|_{t = {t_i}}},\quad i = 1,2,3, \cdots ,S \end{array}.$$
Suppose the minimum delay is tmin, the maximum delay is tmax, the time interval between adjacent time-slice images is τ, and the total number of time-slice images is S. Then we have:
$${t_i} = {t_{\min }} + \tau \cdot (i - 1)$$
$$S\textrm{ = }( {t_{\max }} - {t_{\min }})/\tau .$$
Suppose the original time matrix has M×N pixels. Through the time slicing operation, these M×N pixels are allocated into S time-slice images. The activated pixels on each time-slice image have the same time delay value and thus can share the same delay generator. Now the number of delay generators required becomes S, while the traditional ROTDA solutions require a total of M×N delay generators. Generally, we have S ≤ M×N, which means that if we use the time slicing method to deal with the time matrix, the number of delay generators required can be reduced compared with the traditional ROTDA solutions.

For example, the length of a car will not exceed 12 m in a usual driving scene. Taking the speed of light as c = 3×10^8 m/s, according to Eq. (1), the time delay range of the time matrix of a car target will not exceed tmax − tmin = 80 ns. Suppose the time interval between adjacent time-slice images is τ = 1 ns (representing a ranging resolution of 0.15 m); then, according to Eq. (4), the time matrix of a car target will be sliced into no more than S = 80 time-slice images. Assuming M×N = 64×64, obviously S = 80 ≪ M×N = 4096. This means we only need 80 delay generators to fully generate a car target with a spatial resolution of 64×64.
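The slicing of a time matrix into coordinate sets Ti(x, y) can be sketched as follows (a hypothetical helper with illustrative array sizes; in the actual system this step would be performed in software, presumably by the scene generation computer):

```python
import numpy as np

def slice_time_matrix(t_map, t_min, tau):
    """Group the pixels of a time matrix into time-slice coordinate sets.

    Pixels whose delay falls into the i-th bin t_i = t_min + (i - 1) * tau
    share one delay generator, following Eqs. (2)-(4).
    Returns a dict mapping slice index i (1-based) to a list of (x, y).
    """
    slices = {}
    idx = np.round((np.asarray(t_map) - t_min) / tau).astype(int) + 1
    for (x, y), i in np.ndenumerate(idx):
        slices.setdefault(int(i), []).append((x, y))
    return slices

# Hypothetical 64x64 car-like target spanning 80 ns of delay, tau = 1 ns
rng = np.random.default_rng(0)
t_map = 1e-6 + rng.integers(0, 80, size=(64, 64)) * 1e-9
slices = slice_time_matrix(t_map, t_min=1e-6, tau=1e-9)
print(len(slices))  # at most 80 slices instead of 64*64 = 4096 delay generators
```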

The above operation realizes the time slicing at the data level. At the physical level, the time slicing operation can be realized by sequentially irradiating laser pulses with different time delays onto different blocks of a SLM, as shown in Fig. 3. The time-domain expression of each delayed laser pulse is:

$${T_i}(t) = \left\{ \begin{array}{l} 1,\textrm{ }t \in [{t_i},{t_i} + W]\\ 0,\textrm{ }t \notin [{t_i},{t_i} + W] \end{array} \right.$$
where W is the pulse width of the laser pulse.

Fig. 3. Schematic diagram of the time slicing operation at physical level.

Figure 3(a) shows the relationship between the delayed laser pulses, the time-slice images, and the SLM blocks. Figure 3(b) shows how the delayed laser pulses illuminate the SLM blocks. The yellow arrows represent the delayed laser pulses, the darker the yellow, the longer the delay.

The SLM blocks are illuminated at different time moments, thereby carrying different time delay information. Each SLM block displays a time-slice image. We use (x, y, i) to indicate the coordinates of the pixels on the i-th time-slice image. Then the pixel state is expressed as:

$$G(x,y,i) = \left\{ \begin{array}{l} 1,\textrm{ }(x,y) \in {T_i}(x,y)\\ 0,\textrm{ }(x,y) \notin {T_i}(x,y) \end{array} \right.$$
where the state “1” means the pixel is activated and allows light to pass, and the state “0” means the pixel is shut off and does not allow light to pass. It is easy to see that the spatial resolution of a time-slice image is the same as that of the time matrix, i.e., M×N.

Each SLM block only allows pixels activated by its time-slice image to pass light. In other words, the SLM will filter laser pulses with specific time delay and specific spatial distribution according to the time-slice image sequence.
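A sketch of how one SLM frame could be assembled from the time-slice images is given below. The row-major placement of slice i on the block grid and the mapping of (x, y) to array indices are assumptions for illustration; the paper does not specify the block ordering.

```python
import numpy as np

def build_slm_frame(slices, M=64, N=64, blocks=(18, 10)):
    """Assemble one binary SLM frame: block i displays the time-slice
    image G(x, y, i) of Eq. (6), where 1 = pixel activated (passes light).
    """
    rows, cols = blocks
    frame = np.zeros((rows * M, cols * N), dtype=np.uint8)
    for i, coords in slices.items():
        g = np.zeros((M, N), dtype=np.uint8)
        for (x, y) in coords:
            g[x, y] = 1                      # activated pixel of slice i
        r, c = divmod(i - 1, cols)           # assumed row-major block layout
        frame[r * M:(r + 1) * M, c * N:(c + 1) * N] = g
    return frame
```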

2.3 Superimposition of time-slice images: 2D to 3D

Now we have a series of 2D time-slice images displayed on the SLM plane. These 2D time-slice images need to be superimposed into one composite image to finally obtain a 3D lidar scene. This process is completed by the so-called superimposed imaging system (SIS), which includes a microlens array and a main lens, as shown in Fig. 4.

Fig. 4. Schematic diagram of the superimposition of time-slice images.

Microlens arrays have been used for image superimposition in applications such as 3D imaging [16], high-performance projection [17–19], and enhanced RGB display [20,21]. In this paper, we use a microlens array to superimpose the time-slice images for 3D lidar scene generation. The object-image relationship of the SIS is shown in Fig. 5.

Fig. 5. The object-image relationship of the SIS.

The optical axes of all the microlenses and the main lens are parallel to each other, i.e., $\overrightarrow {{O_a}{F_a}} {\parallel }\overrightarrow {{O_b}{F_b}} {\parallel }\overrightarrow {OF^{\prime}} $. For the microlenses, Oa and Ob are the principal points, Fa and Fb are the focal points, and the focal length is $|{\overrightarrow {{O_a}{F_a}} } |= |{\overrightarrow {{O_b}{F_b}} } |= {f_1}$. $\overrightarrow {{A_a}{F_a}} $ and $\overrightarrow {{A_b}{F_b}} $ are two objects on the focal planes of the two microlenses, and $|{\overrightarrow {{A_a}{F_a}} } |= |{\overrightarrow {{A_b}{F_b}} } |= h$. For the main lens, O is the principal point, F and F’ are the focal points, and the focal length is $|{\overrightarrow {OF} } |= |{\overrightarrow {OF^{\prime}} } |= {f_2}$. $\overrightarrow {A^{\prime}F^{\prime}} $ is the composite image on the focal plane of the main lens, and $|{\overrightarrow {A^{\prime}F^{\prime}} } |= h^{\prime}$.

According to the theory of geometric optics, the rays emitted from a point on the focal plane of a lens become parallel after passing through the lens; therefore, $\overrightarrow {{P_a}{P_1}} {\parallel }\overrightarrow {{O_a}{P_2}}$ and $\overrightarrow {{P_b}{P_3}} {\parallel }\overrightarrow {{O_b}{P_4}}$. A ray that passes through the principal point of a lens does not change its direction; therefore, $\overrightarrow {{A_a}{O_a}} {\parallel }\overrightarrow {{O_a}{P_2}}$ and $\overrightarrow {{A_b}{O_b}} {\parallel }\overrightarrow {{O_b}{P_4}}$. From the geometric relationship in Fig. 5, we know that $\overrightarrow {{A_a}{O_a}} {\parallel }\overrightarrow {{A_b}{O_b}}$; therefore, $\overrightarrow {{P_a}{P_1}} {\parallel }\overrightarrow {{O_a}{P_2}} {\parallel }\overrightarrow {{P_b}{P_3}} {\parallel }\overrightarrow {{O_b}{P_4}}$. Suppose $\overrightarrow {FP}$ is a virtual ray that passes through the focal point of the main lens, and $\overrightarrow {FP} {\parallel } \overrightarrow {{P_a}{P_1}} $. Then $\overrightarrow {FP}$ becomes parallel to the optical axis after passing through the main lens, i.e., $\overrightarrow {PA^{\prime}} {\parallel } \overrightarrow {OF^{\prime}}$. Suppose $\overrightarrow {OA^{\prime}}$ is another virtual ray that passes through the principal point of the main lens, and $\overrightarrow {OA^{\prime}} {\parallel }\overrightarrow {FP}$. Parallel rays that pass through a lens converge to the same point on the focal plane of the lens; therefore, all the parallel rays, including $\overrightarrow {OA^{\prime}}$, $\overrightarrow {FP}$, $\overrightarrow {{P_a}{P_1}}$, $\overrightarrow {{O_a}{P_2}}$, $\overrightarrow {{P_b}{P_3}}$, and $\overrightarrow {{O_b}{P_4}}$, finally converge to the same point A’ after passing through the main lens.

From the geometric relationship in Fig. 5, we know that $\angle {A_a}{O_a}{F_a}\textrm{ = }\angle A^{\prime}OF^{\prime}$, therefore:

$$\tan \angle {A_a}{O_a}{F_a} = h/{f_1} = \tan \angle A^{\prime}OF^{\prime} = h^{\prime}/{f_2}.$$
Therefore, the magnification of the SIS is:
$$\beta {\rm = }h^{\prime}/h = f_2/f_1.$$

The light rays emitted from point ${A_a}$ and point ${A_b}$ finally converge to the same point A’ after passing through the SIS. In other words, the SIS superimposes $\overrightarrow {{A_a}{F_a}} $ and $\overrightarrow {{A_b}{F_b}}$ into a composite image $\overrightarrow {A^{\prime}F^{\prime}}$. This explains the feasibility of using the SIS to superimpose all the time-slice images into a composite image (see Fig. 4). The above ray-tracing analysis describes an ideal imaging situation. However, an actual image superimposition system will inevitably have aberrations; the relevant analysis can be found in Ref. [17].
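The superimposition property derived above can also be checked numerically with a simple paraxial (first-order) ray trace. The sketch below uses the prototype focal lengths quoted later in Section 3 (f1 = 50 mm, f2 = 500 mm); the microlens offsets, ray angles, and lens spacing are arbitrary illustrative values:

```python
def trace_to_composite(h, theta, delta, f1=50e-3, f2=500e-3, d=0.55):
    """Trace one paraxial ray through the SIS of Fig. 5.

    h     : object height relative to the axis of its own microlens (m)
    theta : initial ray angle (rad)
    delta : lateral offset of that microlens from the main-lens axis (m)
    d     : microlens-to-main-lens spacing (m), arbitrary in this model
    Returns the ray height on the focal plane of the main lens.
    """
    y = delta + h                    # start on the focal plane of the microlens
    y += f1 * theta                  # propagate to the microlens
    th = theta - (y - delta) / f1    # refraction by the (decentred) microlens
    y += d * th                      # propagate to the main lens
    th -= y / f2                     # refraction by the main lens
    y += f2 * th                     # propagate to the composite image plane
    return y

# Rays from different microlenses, same relative height h, different angles:
for delta, theta in [(0.0, 0.01), (2e-3, -0.02), (-4e-3, 0.005)]:
    print(trace_to_composite(h=0.5e-3, theta=theta, delta=delta))
# All land at -(f2/f1) * h = -5e-3 m, independent of delta, theta, and d,
# which is exactly the superimposition with magnification beta = f2/f1.
```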

The spatial resolution of the composite image is the same as each time-slice image, i.e., M×N. Each pixel of the composite image comes from a different time-slice image, thereby carrying different time delay information. In other words, the composite image is essentially a 2D array of optical time delay signals. The time-domain expression of the optical signal that arrives at the composite image pixel with (x, y) coordinates is:

$$T(x,y,t) = {T_i}(t) \cdot G(x,y,i).$$
The SIS superimposes all the 2D time-slice images into a composite image to obtain an optical 3D scene. When we change the time-slice image displayed by each SLM block, the time delays of the pixels on the composite image will be reconfigured accordingly. This allows us to generate a dynamic 3D lidar scene by refreshing the image written into the SLM. The 3D frame rate of the lidar scene depends on the image refreshing rate of the SLM.
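Conversely, the per-pixel delay carried by the composite image, i.e., the time matrix that the lidar under test would measure, can be reconstructed from the slice sets and Eq. (3). The helper below is a hypothetical sketch, not part of the prototype software:

```python
import numpy as np

def composite_delay_map(slices, t_min, tau, M=64, N=64):
    """Per-pixel delay of the composite image, following Eq. (9):
    pixel (x, y) carries the delay t_i of the slice that activates it."""
    t = np.full((M, N), np.nan)          # NaN = pixel never activated (no return)
    for i, coords in slices.items():
        t_i = t_min + tau * (i - 1)      # Eq. (3)
        for (x, y) in coords:
            t[x, y] = t_i
    return t
```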

3. Simulation and experiment

3.1 Ray tracing

We designed a ROTDA prototype including a SLM, a microlens array, and a main lens. A liquid crystal display (LCD) was adopted as the SLM. The spatial resolution of the LCD is 1920×1080, the size of each LCD pixel is 19×19 µm, and the refresh rate of the LCD is 20 Hz. The entire SLM plane was divided into 18×10 blocks and illuminated by 18×10 laser pulses with different time delays. Each SLM block contained 104×104 pixels, of which the central 64×64 pixels were used for displaying a time-slice image. The microlens array was made of COC-480R and contained 18×10 square microlenses. The size of each microlens was equal to the size of a SLM block, i.e., 1.976×1.976 mm, and the focal length of each microlens was f1 = 50 mm. The main lens was made of BK7 with a focal length of f2 = 500 mm and a diameter of 76.2 mm. The ray tracing inside the ROTDA prototype is shown in Fig. 6.

Fig. 6. Ray tracing of using the ROTDA to generate a 3D model with 32 distance gradients.

In Fig. 6, the time matrix of a pyramid shaped target is sliced into 32 time-slice images. Each time-slice image is displayed by a SLM block. The SLM blocks are illuminated by 32 different colors of light representing 32 laser pulses with different time delays. The color of the light represents the pulse delay, and the darker the color, the longer the delay. The ray tracing shows that a square pattern with 32 color gradients is obtained on the composite image, which represents a 3D pyramid model with 32 time (distance) gradients. The spatial resolution of the composite image is determined by the spatial resolution of each time-slice image, which is 64×64. Since the SLM is divided into 180 blocks, we can superimpose up to 180 time-slice images into a composite image. This means each pixel of the composite image can be reconfigured with up to 180 optional time delays, thereby generating a 3D model with up to 180 distance gradients.
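The numbers quoted above are mutually consistent, as the short check below shows. It only re-derives quantities already stated in the text, under the assumption of the ideal magnification of Eq. (8):

```python
pixel_pitch = 19e-6          # LCD pixel size (m)
block_pixels = 104           # pixels per SLM block side
active_pixels = 64           # central pixels used for a time-slice image
f1, f2 = 50e-3, 500e-3       # microlens / main-lens focal lengths (m)

block_size = block_pixels * pixel_pitch      # 1.976 mm, equal to the microlens pitch
beta = f2 / f1                               # magnification of the SIS, Eq. (8): 10x
slice_size = active_pixels * pixel_pitch     # 1.216 mm active area per block
composite_size = beta * slice_size           # 12.16 mm composite image side
print(block_size, beta, slice_size, composite_size)
```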

3.2 Experimental setup

An experiment was performed to demonstrate the ROTDA prototype, as shown in Fig. 7. We developed a two-stage delay system that can generate 180 laser pulses with different time delays relative to the trigger signal. The 180 laser pulses were emitted through a tightly arranged 2D optical fiber array, and then guided by the illumination lens to illuminate the 18×10 SLM blocks. The 32 time-slice images of a pyramid model were written to the SLM by a laptop. A detector was mounted on a 3-axis positioning platform to sense the optical time delay signals generated on the composite image plane. A time counter (FCA3100, Tektronix, Inc., USA) was utilized to accurately measure the time interval between the delay signal and the trigger signal. At the same time, an oscilloscope was used to monitor the waveforms of the delay signal and the trigger signal to make sure that the detector moved to the proper position.

Fig. 7. Experimental demonstration of the designed ROTDA prototype.

3.3 Results and discussion

By precisely adjusting the 3-axis positioning platform, we scanned the composite image plane and detected all the 64×64 optical time delay signals generated there, and the time delays of all the 64×64 signals were measured. The results are shown in Fig. 8(a). In addition, we also captured an intensity image of the simulated pyramid model generated on the composite image plane; the result is shown in Fig. 8(b).

Fig. 8. The time matrix and the intensity image of the simulated 3D pyramid model. (a) the time matrix. (b) the intensity image.

Both the time matrix and the intensity image show that a 3D lidar scene containing a pyramid model is successfully generated. The spatial resolution of the lidar scene is 64×64. The time delay resolution is about 1 ns, which represents a distance resolution of 0.15 m.

The two-stage delay system consists of an 8-channel digital delay generator (Quantum 9528, Quantum Composers Inc., USA), an 8-channel pulsed laser (1064 nm), and a 2D fiber array with 180 fibers of different lengths (refractive index: n = 1.4496 @ 1064 nm). The 8-channel digital delay generator and the 8-channel pulsed laser form a primary delay system that can generate 8 primary delay signals with adjustable time delays. The 8 primary delay signals are then coupled into 8 fiber bundles, respectively. Among the 8 fiber bundles, four contain 20 fibers each and the other four contain 25 fibers each, for a total of 180 fibers. The other ends of the 180 fibers are tightly arranged into an 18×10 array (see Fig. 7). The 20 (or 25) fibers in each bundle have different lengths that increase by a step of 0.206 m, as shown in Fig. 9(a). Suppose a 0.206 m long fiber causes a delay of $\Delta t$ to a 1064 nm laser pulse; then:

$$\Delta t = \displaystyle{{nL} \over c} = \displaystyle{{1.4496 \times 0.206} \over {3 \times {10}^8}} = 9.954 \times 10^{-10}{\rm s}\approx {\rm 1 ns}.$$
Therefore, the fibers inside a fiber bundle cause different secondary delays, increasing by a step of 1 ns, to a primary delay signal. The timing diagram of the primary delay signals and the secondary delay signals is shown in Fig. 9(b).

Fig. 9. Delay signals in the two-stage delay system. (a) the fibers with different lengths in a fiber bundle. (b) timing diagram of the primary delay signals and the secondary delay signals.
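The relation of Eq. (10) between fiber length and secondary delay can be expressed as a small helper (a sketch; the function names are hypothetical):

```python
C = 3e8            # speed of light in vacuum (m/s)
N_FIBER = 1.4496   # fiber refractive index at 1064 nm (from the text)

def fiber_delay(length):
    """Delay caused by a fiber of the given length (m): dt = n * L / c, Eq. (10)."""
    return N_FIBER * length / C

def fiber_length_for_delay(dt):
    """Fiber length (m) that produces a delay of dt seconds: L = c * dt / n."""
    return C * dt / N_FIBER

print(fiber_delay(0.206))             # ~0.995 ns, the 1 ns step used in the prototype
print(fiber_length_for_delay(1e-9))   # ~0.207 m of fiber per 1 ns of delay
```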

The primary delay is adjustable thanks to the digital delay generator; the primary delay range is 0∼5000 s, and the primary delay resolution is 1 ns. The secondary delay is fixed by the unchangeable fiber lengths; the secondary delay resolution is 1 ns because of the 0.206 m step increment of the fiber length. The ROTDA prototype can work in two modes: (1) The 8 primary delay signals are adjusted so that the delays of the 180 secondary delay signals increase continuously by 1 ns, thereby generating a relatively long lidar scene that covers a delay span of 180 ns (representing a distance span of 27 m). (2) The 8 primary delay signals are adjusted to divide the 180 secondary delay signals into 8 groups. The delays of the 20 (or 25) secondary delay signals within each group increase continuously by 1 ns, while the gaps between different groups are larger than 1 ns. This mode can generate up to 8 short scenes, each covering a delay span of 20 ns or 25 ns (representing a distance span of 3 m or 3.75 m), that are separated from each other along the lidar observation direction.
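A sketch of how the 8 primary delays could be chosen in the two modes is given below. The ordering of the 20-fiber and 25-fiber bundles and the example start delays are assumptions for illustration only:

```python
def primary_delays_continuous(t0, bundle_sizes=(20, 20, 20, 20, 25, 25, 25, 25), tau=1e-9):
    """Mode (1): stack the bundles end to end so the 180 secondary delays
    increase continuously by tau, giving one scene spanning 180 ns (27 m)."""
    delays, offset = [], 0
    for n in bundle_sizes:
        delays.append(t0 + offset * tau)   # each bundle starts where the previous ended
        offset += n
    return delays

def primary_delays_segmented(starts):
    """Mode (2): give each bundle an independent start delay, producing up to
    8 separated short scenes of 20 ns or 25 ns each (3 m or 3.75 m)."""
    return list(starts)

print(primary_delays_continuous(t0=1e-6))
print(primary_delays_segmented([1.0e-6, 1.3e-6, 2.0e-6, 2.5e-6, 3.0e-6, 3.6e-6, 4.0e-6, 5.0e-6]))
```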

4. Conclusion

In this paper, we proposed a novel ROTDA design based on the time slicing method. The optical signals with the same delay but different spatial coordinates were treated as a time slice and shared the same delay generator. Different time slices were superimposed into a composite image by a microlens-array-based imaging system to obtain a 3D lidar scene, and a spatial light modulator (SLM) was utilized to configure the time delay of each lidar scene pixel.

A ROTDA prototype with 64×64 pixels was developed, and each pixel can be reconfigured with up to 180 optional delay values in one frame. The maximum delay is determined by the digital delay generator inside the prototype, which is 5000 s (representing a maximum distance of 7.5×10^11 m). The delay resolution is determined by the 0.206 m length increment of the 2D fiber array, which is 1 ns (representing a distance resolution of 0.15 m). The 3D frame rate is determined by the refresh rate of the SLM, which is 20 Hz for the LCD used here. However, if a high-speed SLM such as a digital micromirror device (DMD) were utilized, the 3D frame rate of the ROTDA prototype could be increased to more than 20000 Hz. The ROTDA can generate a continuous lidar scene with a distance span of 27 m, and it can also generate up to 8 short scenes that are separated from each other along the lidar observation direction, each covering a distance span of 3 m or 3.75 m.

The ROTDA proposed in this paper was designed for the LSP. It expands the spatial resolution of the LSP from the previous 10×10 to 64×64, and the spatial resolution can be expanded even further if we enlarge the working area of each SLM block. The ROTDA provides a feasible solution to support HIL simulation for future ADS with higher-resolution imaging lidar. However, the application of the ROTDA is not limited to the LSP; it is also applicable to other occasions that require a large number of optical time delay generators. For example, a ROTDA could be used to control the phases of the microwave antennas in a phased array antenna system. Researchers in related fields can also get inspiration from this paper.

Funding

China Postdoctoral Science Foundation (2020TQ0036).

Disclosures

The authors declare no conflicts of interest.

References

1. V. Molebny, P. F. McManamon, O. Steinvall, T. Kobayashi, and W. B. Chen, “Laser radar: historical prospective—from the East to the West,” Opt. Eng. 56(3), 031220 (2016). [CrossRef]  

2. J. Hecht, “Lidar for self-driving Cars,” Opt. Photonics News 29(1), 26–33 (2018). [CrossRef]  

3. Y. Li and J. I. Guzman, “Lidar for autonomous driving: the principles, challenges, and trends for automotive lidar and perception systems,” IEEE Signal Process. Mag. 37(4), 50–61 (2020). [CrossRef]  

4. Y. P. Chang, C. N. Liu, Z. W. Pei, S. M. Lee, Y. K. Lai, P. Han, H. K. Shih, and W. H. Cheng, “New scheme of LiDAR-embedded smart laser headlight for autonomous vehicles,” Opt. Express 27(20), A1481–A1489 (2019). [CrossRef]  

5. O. J. Gietelink, J. Ploeg, B. D. Schutter, and M. Verhaegen, “Development of a driver information and warning system with vehicle hardware-in-the-loop simulations,” Mechatronics 19(7), 1091–1104 (2009). [CrossRef]  

6. Ş. Y. Gelbal, S. Tamilarasan, M. R. Cantaş, L. Güvenç, and B. Aksun-Güvenç, “A connected and autonomous vehicle hardware-in-the-loop simulator for developing automated driving algorithms,” 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, 3397–3402 (2017).

7. P. Lepej, A. Santamaria-Navarro, and J. Solà, “A flexible hardware-in-the-loop architecture for UAVs,” 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 1751–1756 (2017).

8. J. Fang, D. F. Zhou, F. L. Yan, T. T. Zhao, F. H. Zhang, Y. Ma, L. Wang, and R. G. Yang, “Augmented LiDAR simulator for autonomous driving,” IEEE Robot. Autom. Lett. 5(2), 1931–1938 (2020). [CrossRef]  

9. C. Brogle, C. Zhang, K. L. Lim, and T. Bräunl, “Hardware-in-the-loop autonomous driving simulation without real-time constraints,” IEEE Trans. Intell. Veh. 4(3), 375–384 (2019). [CrossRef]  

10. S. Chen, Y. Chen, S. Zhang, and N. Zheng, “A novel integrated simulation and testing platform for self-driving cars with hardware in the loop,” IEEE Trans. Intell. Veh. 4(3), 425–436 (2019). [CrossRef]  

11. H. J. Kim, C. B. Naumann, and M. C. Cornell, “Hardware-in-the-loop projector system for light detection and ranging sensor testing,” Opt. Eng. 51(8), 083609 (2012). [CrossRef]  

12. R. Xu, X. Wang, Y. Tian, and Z. Li, “Ladar scene projector for a hardware-in-the-loop simulation system,” Appl. Opt. 55(21), 5745–5755 (2016). [CrossRef]  

13. R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, “Hardware in the loop performance assessment of lidar-based spacecraft pose determination,” Sensors 17(10), 2197 (2017). [CrossRef]  

14. Y. Z. Gao, L. Zhou, X. Wang, H. Yan, K. Z. Hao, S. H. Yang, and Z. Li, “A programmable all-optical delay array for light detection and ranging scene generation,” IEEE Access 7, 93489–93500 (2019). [CrossRef]  

15. Y. Z. Gao, X. Wang, Y. Y. Li, L. Zhou, Q. F. Shi, and Z. Li, “Modeling method of a ladar scene projector based on physically based rendering technology,” Appl. Opt. 57(28), 8303–8313 (2018). [CrossRef]  

16. R. F. Stevens and T. G. Harvey, “Lens arrays for a three-dimensional imaging system,” J. Opt. A: Pure Appl. Opt. 4(4), S17 (2002). [CrossRef]  

17. Y. Liu, D. W. Cheng, T. Yang, and Y. T. Wang, “High precision integrated projection imaging optical design based on microlens array,” Opt. Express 27(9), 12264–12281 (2019). [CrossRef]  

18. M. Sieler, S. Fischer, P. Schreiber, P. Dannberg, and A. Bräuer, “Microoptical array projectors for free-form screen applications,” Opt. Express 21(23), 28702–28709 (2013). [CrossRef]  

19. M. Sieler, P. Schreiber, P. Dannberg, A. Bräuer, and A. Tünnermann, “Ultraslim fixed pattern projectors with inherent homogenization of illumination,” Appl. Opt. 51(1), 64–74 (2012). [CrossRef]  

20. J. Tanida, R. Shogenji, Y. Kitamura, K. Yamada, M. Miyamoto, and S. Miyatake, “Color imaging with an integrated compound imaging system,” Opt. Express 11(18), 2109–2117 (2003). [CrossRef]  

21. M. Sieler, P. Schreiber, and A. Bräuer, “Microlens array based LCD projection display with software-only focal distance control,” Proc. SPIE 8643, 86430B (2013). [CrossRef]  
