
Smooth motion parallax method for 3D light-field displays with a narrow pitch based on optimizing the light beam divergence angle

Open Access

Abstract

A three-dimensional (3D) light-field display (LFD) with dense views can provide smooth motion parallax for the human eye. However, increasing the number of views widens the lens pitch, which in turn decreases the view resolution. In this paper, an approach to smooth motion parallax based on optimizing the divergence angle of the light beam (DALB) for 3D LFDs with a narrow pitch is proposed. The DALB is controlled through lens design. A views-fitting optimization algorithm is established based on a mathematical model relating the DALB to the view distribution, and the lens is then reverse-designed from the optimization results. A co-designed convolutional neural network (CNN) is used to implement the algorithm. Optical experiments show that a 3D image with smooth motion parallax is achievable through the proposed method.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) light-field display (LFD) technology is considered one of the developmental directions of next-generation display technology. Significant effort has been dedicated in recent years to achieving natural and comfortable 3D visual perception [1–9]. Smooth motion parallax plays an important role in achieving high-quality 3D LFD: it enhances the experience of viewing 3D images and reduces viewing fatigue. A viewer moving horizontally will notice significant ghosting in a 3D light field that lacks smooth motion parallax, as depicted in Fig. 1. A 3D light field with dense views can provide smooth and consecutive motion parallax for the human eye. To increase the number of views, however, a lens with a wide pitch [10] is needed as the light control element. Since the lens is the smallest display unit in a 3D LFD, the spatial resolution is inversely proportional to the lens pitch, making this a critical trade-off in 3D LFD. Various efforts have been devoted to obtaining a 3D display with motion parallax. Eye tracking [11,12] and time-sequential displays [13,14] have been discussed as ways to provide smooth motion parallax; however, the eye-tracking method cannot serve multiple viewers. Frontal- and rear-projection three-dimensional displays based on lenticular sheets were presented [15], achieving high resolution, large size, and wide viewing angles. However, the large number of projectors makes the prototype complex, requiring considerable calibration effort and substantial data transfer. Non-uniform angular sampling [16–18] resolves the trade-off between angular resolution and field of view, but the low spatial resolution still needs improvement. Wang et al. proposed and developed a high-resolution integral imaging 3D display system with an anisotropic backlight unit [19,20]. This system was designed to maintain a small voxel size for high 3D resolution and to eliminate the graininess issue between subpixels, and smooth 3D images with high resolution were presented. In recent research, scholars have proposed a 3D flat panel called the visually equivalent light field 3D display [21–23]. It provides smooth motion parallax and high-quality 3D images, but its application is limited by the complex structure of its hardware devices, which hinders its use in fields such as medical diagnosis, teaching, and industrial design. In our previous work, deep learning has been applied to optimize the quality of 3D scene reconstruction, and achievements such as aberration correction [24] and image edge smoothing [25] have provided researchers with new ideas.

Fig. 1. Light-field reconstruction image of dense views and sparse views.

In the present study, a smooth motion parallax method for 3D light-field displays with a narrow pitch based on optimizing the light beam divergence angle is proposed. Each view corresponds to a light beam (LB) emitted from a subpixel of the displayed two-dimensional (2D) image. In reality, the LB diverges, generating overlap between adjacent LBs, which affects the view distribution. Therefore, we optimize the divergence angle of the light beam (DALB) through lens design to achieve smooth motion parallax in a 3D LFD. In detail, the mathematical model between the DALB and the view distribution is analyzed to establish a views-fitting optimization algorithm that fits the overlapping regions of adjacent LBs to a multi-view light field. A co-designed convolutional neural network (CNN) is used to implement the algorithm, and the optical components are designed based on the output results. The process is performed on two distinct 3D scenes to verify the effectiveness of the proposed method. In the experiment, a 3D LFD with smooth motion parallax can be viewed within a 100-degree viewing angle.

2. Principle

2.1 Mathematical model between DALB and view distribution

In the analysis of a traditional 3D LFD, the LCD is placed on the focal plane of the lens array, and the LBs carrying the subpixel information are collimated into parallel light after being emitted from the light control element. However, because of aberrations in the lens elements, the LBs diverge. We use an ideal lens unit covering N subpixels as an example to analyze the mathematical model between the DALB and the view distribution. The LB emitted from a subpixel diverges at an angle β after passing through the ideal lens and forms a view of width w on the viewing plane at depth D, as shown in Fig. 2(a).

Fig. 2. (a) Optical paths of an ideal lens; (b) optical path and intensity distribution at a 0-degree incidence angle for the ideal lens and the designed lens.

The lateral distance between the center of the lens and the center of the nth view is denoted $\Delta x_n$ $(1 \le n \le N)$, where n represents the nth subpixel covered by the lens. The DALB $\beta_n$ can be calculated by

$${\beta_n} = \arctan\left(\frac{\Delta x_n + \frac{1}{2}w_n}{D}\right) - \arctan\left(\frac{\Delta x_n - \frac{1}{2}w_n}{D}\right)$$
$$\Delta x_n = D\tan\left[\frac{\alpha(n-1)}{N-1} - \frac{\alpha}{2}\right]$$

In these equations, α is the viewing angle of the lens. Given that D is usually much larger than w for an enhanced stereo experience, further derivation of Eq. (1) and Eq. (2) yields

$$\tan\beta_n = \frac{w_n D}{D^2 + \Delta x_n^2 - \frac{1}{4}w_n^2} \approx \frac{w_n D}{D^2 + \Delta x_n^2} = \frac{w_n}{D + D\tan^2\left[\frac{\alpha(n-1)}{N-1} - \frac{\alpha}{2}\right]}$$

Afterward, $w_n$ can be deduced from the above results and approximately expressed as a function of $\beta_n$

$$w_n = \left(1 + \tan^2\left[\frac{\alpha(n-1)}{N-1} - \frac{\alpha}{2}\right]\right) D\tan\beta_n$$

The derived formula demonstrates the proportional relationship between the view width and the DALB. Increasing the DALB to a specific designed value $\beta_{\textrm{designed}}$ therefore controls the view distribution, as illustrated in Fig. 2(b).
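For concreteness, the geometry of Eqs. (2) and (4) can be sketched numerically as follows. This is a minimal illustration only: the lens viewing angle, viewing distance, subpixel count, and DALB values below are assumed example figures, not parameters taken from the paper.

```python
import numpy as np

def view_geometry(alpha_deg, D, betas_deg):
    """Return (dx, w): view-center offsets Delta x_n and view widths w_n."""
    betas = np.deg2rad(np.asarray(betas_deg, dtype=float))
    N = betas.size
    alpha = np.deg2rad(alpha_deg)
    n = np.arange(1, N + 1)
    theta = alpha * (n - 1) / (N - 1) - alpha / 2     # chief-ray angle of subpixel n
    dx = D * np.tan(theta)                            # Eq. (2)
    w = (1 + np.tan(theta) ** 2) * D * np.tan(betas)  # Eq. (4)
    return dx, w

# Example: 4 subpixels, a 10-degree lens viewing angle, D = 500 mm, and a
# uniform 4-degree DALB (all values assumed for illustration only).
dx, w = view_geometry(alpha_deg=10, D=500, betas_deg=[4.0] * 4)
print(dx)   # view centers on the viewing plane
print(w)    # view widths, proportional to tan(beta_n)
```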

Therefore, to achieve a continuous and ghost-free 3D LFD, we propose a smooth motion parallax method based on optimizing the DALB, as shown in Fig. 3. The designed lens enlarges the DALB, causing adjacent LBs to overlap; each overlapping region presents a new view to the eye, so the number of view regions is increased visually and the motion parallax becomes smoother and more consecutive. We take a lens covering four subpixels as an example to illustrate the detailed process. The LB emitted from the subpixel with serial number 1 diverges at an angle of β1 and forms a view v1 with width w1; v2, v3, and v4 follow the same numbering scheme, and the number also represents the order in which these views enter the human eye. Owing to the DALB, view v1 overlaps with the adjacent view v2, generating an overlapping region v1,2 with width w1,2; the regions v2,3 and v3,4 in Fig. 3 are labeled in the same way. The remaining regions of the original views that are not overlapped are denoted v1–v4. The eye thus sees the view regions in the order v1, v1,2, v2, . . ., v4, which means that the number of view regions is visually increased. In this way, we produce, by optimizing the DALB, a view distribution similar to that of a dense-view 3D light field. The width wn,n+1 of the region where the nth and the (n+1)th LBs overlap can be calculated by

$$w_{n,n+1} = \frac{\frac{w_n}{2} + \frac{w_{n+1}}{2} - (\Delta x_{n+1} - \Delta x_n)}{2}$$

Fig. 3. New view regions are generated by overlapping regions between adjacent LBs.

According to the display principle of geometric optics, the widths of the original views and of the overlapping regions can be used as intensity-distribution weights to calculate the energy distribution of the LBs on the viewing plane. Therefore, the energy information of the view regions at different locations can be obtained by

$$\left\{ \begin{array}{l} v_{2n-1} = I_n\left(w_n - w_{n,n-1} - w_{n,n+1}\right)\\ v_{2n} = \left(I_n + I_{n+1}\right)w_{n,n+1} \end{array} \right.$$
where In represents the intensity information of the nth subpixel, and vm represents the energy information of the view region with serial number m (1 ≤ m ≤ 2N−1) in the new view distribution. By further extracting the common weight matrix, the reconstruction of the 3D light field E can be represented by the following matrix calculation
$$E = V^{1\times(2N-1)} = UI^{1\times N} \times W^{N\times(2N-1)}$$
where matrix V denotes the set of intensity distributions of all visible view regions, matrix UI denotes the intensity distribution of the image on the LCD covered by the lens, and W denotes a weighting matrix derived from Eq. (4) and Eq. (5). We take a lens covering four subpixels as an example to illustrate $W^{4\times 7}$ as follows
$$W^{4\times 7} = \begin{bmatrix} w_1 - w_{1,2} & w_{1,2} & & & & & \\ & w_{1,2} & w_2 - w_{1,2} - w_{2,3} & w_{2,3} & & & \\ & & & w_{2,3} & w_3 - w_{2,3} - w_{3,4} & w_{3,4} & \\ & & & & & w_{3,4} & w_4 - w_{3,4} \end{bmatrix}$$
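The structure of W can also be sketched in code. The following builds the weighting matrix of Eqs. (5) and (8) from the view centers and widths computed in the previous sketch; it is an illustration under the same assumed parameters, and the subpixel intensities are arbitrary example values.

```python
import numpy as np

def weighting_matrix(dx, w):
    """Build W^{N x (2N-1)} from view centers dx and widths w (Eqs. (5), (8))."""
    N = len(w)
    # Eq. (5): overlap width between the nth and (n+1)th light beams.
    w_ov = (w[:-1] / 2 + w[1:] / 2 - np.diff(dx)) / 2
    W = np.zeros((N, 2 * N - 1))
    for n in range(N):
        left = w_ov[n - 1] if n > 0 else 0.0
        right = w_ov[n] if n < N - 1 else 0.0
        W[n, 2 * n] = w[n] - left - right     # non-overlapped residue region
        if n > 0:
            W[n, 2 * n - 1] = left            # overlap shared with beam n-1
        if n < N - 1:
            W[n, 2 * n + 1] = right           # overlap shared with beam n+1
    return W

I = np.array([0.2, 0.9, 0.5, 0.7])   # subpixel intensities (assumed values)
W = weighting_matrix(dx, w)          # dx, w from the previous sketch
E = I @ W                            # Eq. (7): one lens unit's light field
```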

Finally, the multi-view light-field fitting problem is formulated as follows

$$\mathop{\arg\min}\limits_{\beta_1 \sim \beta_N} {\left\| E - E_{\textrm{multi-view}} \right\|^2}$$
where Emulti-view denotes the intensity distribution of the multi-view light field. By solving this minimization problem, the lens can be reverse-designed based on the optimization results.
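To make the minimization of Eq. (9) concrete, the toy sketch below fits the per-subpixel DALBs by plain finite-difference gradient descent; the paper itself solves the problem with a co-designed CNN (Section 2.2). It reuses view_geometry() and weighting_matrix() from the earlier sketches, and the target light field, intensities, and step sizes are assumed values chosen so that adjacent views actually overlap.

```python
import numpy as np

def reconstruct(betas_deg, I, alpha_deg=10, D=500):
    dx, w = view_geometry(alpha_deg, D, betas_deg)
    return I @ weighting_matrix(dx, w)          # Eq. (7)

I = np.array([0.2, 0.9, 0.5, 0.7])              # subpixel intensities (assumed)
E_target = reconstruct(np.array([4.5, 4.2, 4.8, 4.4]), I)  # synthetic target
betas = np.full(4, 4.0)                         # initial DALB guess, degrees
lr, eps = 1e-4, 1e-5
for _ in range(2000):
    base = np.sum((reconstruct(betas, I) - E_target) ** 2)
    grad = np.zeros_like(betas)
    for k in range(betas.size):                 # finite-difference gradient
        b = betas.copy(); b[k] += eps
        grad[k] = (np.sum((reconstruct(b, I) - E_target) ** 2) - base) / eps
    betas -= lr * grad                          # descent step on Eq. (9)
print(betas)   # optimized DALBs; the lens is then reverse-designed from these
```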

2.2 Implementation of the multi-view light-field fitting by CNN

The series of weight adjustments and iterative calculations required by the multi-view light-field fitting algorithm can be performed effectively with a CNN. At the same time, we jointly optimize the encoding of the displayed 2D images to ensure the fidelity of the optical scene. The implementation of multi-view light-field fitting by the co-designed CNN consists of three stages, as shown in Fig. 4: the digital capture stage, modeling of the optical elements, and the iterative optimization process of the co-designed CNN.

Fig. 4. The process of using a co-designed CNN to fit multi-view light-field.

Digital capture stage: a virtual camera array is used to capture the 3D scenes and obtain a parallax image set. Synthetic image coding techniques are then used to obtain a low-resolution unit image array (UIA) and a high-resolution unit image array (HUIA) with twice the lateral resolution. Since the lens parameters are identical across the array, each UIA is divided into periodic unit images (UIs), which are fed to the network one at a time to reduce its load.
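As one minimal sketch of what such coding can look like (the paper does not spell out its exact coding scheme, so the column-wise interleaving below is an assumption), N parallax views are interleaved so that the nth subpixel under every lens carries a sample of the nth view:

```python
import numpy as np

def encode_uia(views):
    """views: (N, H, W) parallax stack -> (H, W*N) unit image array."""
    N, H, W = views.shape
    uia = np.zeros((H, W * N))
    for n in range(N):
        uia[:, n::N] = views[n]   # nth subpixel column of each lens period
    return uia

uia = encode_uia(np.random.rand(4, 144, 130))   # toy view stack (assumed sizes)
```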

Modeling the optical elements: the details of the lens-information loading process during the joint optimization are shown in Fig. 5. The DALB varies with the position of the subpixel covered by the lens. A lens unit with a pitch of p is taken as an example to characterize the continuously varying DALB in the optical system. A one-dimensional plane coordinate system XO is employed to describe the UIA plane, with X representing the object-field coordinate, so the lateral distance between the center of the lens and the center of a subpixel is x. According to Eq. (4), the width of the view region corresponding to different subpixel positions can be expressed as:

$$w_x = \left(1 + \tan^2\left[\frac{\alpha\left\lfloor \frac{xN}{p} \right\rfloor}{N-1} - \frac{\alpha}{2}\right]\right) D\tan\beta_x$$

Further, we divide the UI in the object field into 10 equal sub-regions Field-i, denoted Fi (i = −5, −4, . . ., −1, 1, 2, . . ., 5), as shown in Fig. 5. Because each sub-region is very small, the DALB of each sub-region is represented by the DALB at its center position; for example, the DALB in sub-region F2 is represented by the DALB β2 at field position x = 0.15p. Thus, for a designed lens unit we can obtain a weighting matrix with the parameters β−5 ∼ β−1 and β1 ∼ β5. For a designed lens array, we can further extract a weighting matrix array (WMA) consisting of multiple weighting matrices. A relatively complex optical device can therefore be represented effectively by a set of weighting matrices and fed as input to the subsequent convolutional neural network.
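The field division can be sketched as follows; the pitch, viewing angle, distance, subpixel count, and per-field DALB values are assumed figures used only to show how Eq. (10) is evaluated at the ten sub-region centers.

```python
import numpy as np

p, alpha, D, N = 1.0, np.deg2rad(10), 500.0, 10   # all assumed example values
centers = (np.arange(10) - 4.5) / 10 * p          # F_i centers: x = -0.45p..0.45p
betas = np.deg2rad(np.full(10, 4.0))              # per-field DALBs (assumed)
# Eq. (10): view-region width for the sub-region centered at x.
theta = alpha * np.floor(centers * N / p) / (N - 1) - alpha / 2
w_x = (1 + np.tan(theta) ** 2) * D * np.tan(betas)
```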

Fig. 5. The lens information modeling and loading process.

Iterative optimization process of the co-designed CNN: the UIA and WMA are both used as inputs of the co-designed CNN. After iteration, the process yields a pre-processed unit image array (PUIA) and a pre-processed weighting matrix array (PWMA) as outputs. We then perform matrix multiplication on the PUIA and PWMA according to the multi-view light-field fitting algorithm to simulate the optical imaging process of the designed lens, obtaining the displayed unit image array (DUIA). Since the number of overlapping regions (N−1) is approximately equal to the number of LBs (N), we use the HUIA, which has twice the lateral resolution, as the target image. The loss function of the co-designed CNN is the structural similarity (SSIM) index between the DUIA and the HUIA; SSIM accounts for the relationship between adjacent subpixels, which is consistent with the proposed approach of fitting a multi-view light field from the overlapping regions of adjacent LBs. Figure 6 shows the schematic diagram of the auto-encoder employed in the co-designed CNN. It should be pointed out that, in addition to the UIA and WMA, we also feed their matrix product into the network so that the network can identify which parameters represent the lens and which represent the image.
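The following PyTorch sketch is a toy stand-in for the co-designed network, not the authors' auto-encoder: a small fully connected model jointly refines one unit image and its weighting matrix under a simplified single-window SSIM loss, and the concatenated input includes their matrix product, as described above. All shapes, layer sizes, and target values are assumptions.

```python
import torch
import torch.nn as nn

N = 4

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified single-window SSIM; the paper uses the standard SSIM index.
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    s = ((2*mx*my + c1) * (2*cov + c2)) / ((mx*mx + my*my + c1) * (vx + vy + c2))
    return 1 - s

class CoDesign(nn.Module):
    def __init__(self, n=N):
        super().__init__()
        m = n * (2*n - 1)
        # Input: UI, flattened W, and their product UI @ W, so the network can
        # tell which parameters describe the lens and which the image (see text).
        self.net = nn.Sequential(nn.Linear(n + m + 2*n - 1, 64), nn.ReLU(),
                                 nn.Linear(64, n + m), nn.Sigmoid())
        self.n = n

    def forward(self, ui, W):
        out = self.net(torch.cat([ui, W.flatten(), ui @ W]))
        return out[:self.n], out[self.n:].reshape(self.n, -1)  # PUIA, PWMA

model, ui = CoDesign(), torch.rand(N)                   # one unit image (assumed)
W0, huia = torch.rand(N, 2*N - 1), torch.rand(2*N - 1)  # assumed W and target
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    puia, pwma = model(ui, W0)
    loss = ssim_loss(puia @ pwma, huia)                 # DUIA vs. HUIA target
    opt.zero_grad(); loss.backward(); opt.step()
```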

Fig. 6. Schematic diagram of the auto-encoder of the co-designed CNN.

By training the co-designed CNN, a set of β values for the different fields is solved. The lens is then reverse-designed from the optimization results to achieve a view distribution similar to that of a dense-view 3D light field. The PUIA is displayed on the 3D LFD with the designed lens, and after the optical transformation of the designed lens array, the reconstructed 3D scene with smooth motion parallax is obtained.

3. Experiment

To confirm the feasibility of the proposed method, we performed the corresponding optical experiments. Owing to the symmetry of the lens unit, only β1, β2, β3, β4, and β5 are used as matrix-element parameters. Part of the optimized target obtained by the joint design is shown in Table 1.

Table 1. The optimized target of DALB obtained by the joint design.

In a far-field light-field system, the far-field divergence angle of a fundamental-mode Gaussian beam is usually defined by the full angle at half maximum (FAHM). Therefore, the FAHM is used as the optimization operand to guide the fine-tuning of the lens parameters (aperture, radius of curvature, and refractive index). Because the actual manufacturing of optical components introduces certain errors, the goal of the lens structure design is to make the FAHM as close as possible to the CNN output βtarget. Considering structural complexity and manufacturing difficulty, a convex lens and an aperture are employed. The optimized structure and its corresponding parameters are shown in Fig. 7.
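As a side note on the FAHM definition, for a fundamental-mode Gaussian beam with far-field intensity proportional to exp(−2θ²/θ₀²), the FAHM follows directly from θ₀, the 1/e² half-angle; the sketch below computes it for an assumed θ₀:

```python
import numpy as np

theta0 = np.deg2rad(4.0)                      # 1/e^2 half-angle (assumed value)
fahm = 2 * theta0 * np.sqrt(np.log(2) / 2)    # full angle at half maximum
print(np.rad2deg(fahm))                       # ~= 1.18 * theta0, about 4.7 deg
```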

Fig. 7. Structure of the designed lens element and light path of adjacent subpixels.

The FAHM of the designed lens in five different fields is listed in Table 2. The results show that the designed DALB is close to the optimized target, and the error is acceptable. Furthermore, Fig. 7 shows the simulation results for the light path of adjacent subpixels, where an obvious and uniform overlap of adjacent views can be seen.

Table 2. The DALB (equal to the FAHM) of the designed lens element.

In the optical experiment, a light-field display system with a 100-degree horizontal viewing angle, composed of a lens array and an LCD, was demonstrated. A 65-inch flat-panel LCD with a resolution of 7680 × 4320 is used to load the PUIA, and 916 designed lens elements form the lens array, which is located 1.68 mm above the LCD. According to the periodicity of the lens elements, so that each minimum period covers an integer number of subpixels, the UIA is divided into 59 × 30 periodic UIs, each consisting of 130 × 144 pixels.

Figures 8(a) and 8(b) present the displayed 3D architectural scene from different views. The first column of each group shows the 3D images without the proposed co-designed CNN, while the second column shows the corresponding 3D images improved by the co-designed CNN. The bottom row of each group shows details from the captured images. Comparing these details, the optimized 3D images in the second column exhibit smoother motion parallax within the 100-degree viewing angle.

Fig. 8. 3D LFD for an architectural scene based on the proposed optimized lens array and the un-optimized lens.

4. Conclusion

A smooth motion parallax method for 3D LFDs based on optimizing the DALB is demonstrated in the present study. The goal of the proposed method is to produce, by optimizing the DALB, a view distribution similar to that of a dense-view 3D light field. The mapping relation between the DALB and the view distribution is deduced to establish a multi-view light-field fitting algorithm, whose implementation by a co-designed CNN jointly optimizes the physical lens structure and the re-coding of the displayed 2D image. The optical components are designed based on the DALB optimization results. Optical experiments demonstrate that the proposed approach smooths the motion parallax remarkably: the proposed narrow-pitch 3D LFD produces smooth motion parallax 3D images within a 100-degree viewing angle.

Funding

National Key Research and Development Program of China (2023YFB3611500).

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. X. Yu, H. Dong, X. Gao, et al., “360-degree directional micro prism array for tabletop flat-panel light field displays,” Opt. Express 31(20), 32273–32286 (2023).

2. X. Yu, Z. Zhang, B. Liu, et al., “True-color light-field display system with large depth-of-field based on joint modulation for size and arrangement of halftone dots,” Opt. Express 31(12), 20505–20517 (2023).

3. E. Chen, J. Cai, X. Zeng, et al., “Ultra-large moiré-less autostereoscopic three-dimensional light-emitting-diode displays,” Opt. Express 27(7), 10355–10369 (2019).

4. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018).

5. Q. Ma, L. Cao, Z. He, et al., “Progress of three-dimensional light-field display,” Chin. Opt. Lett. 17(11), 111001 (2019).

6. X. Yan, Z. Yan, T. Jing, et al., “Enhancement of effective viewable information in integral imaging display systems with holographic diffuser: Quantitative characterization, analysis, and validation,” Opt. Laser Technol. 161, 109101 (2023).

7. Z. Yan, X. Yan, X. Jiang, et al., “Calibration of the lens’ axial position error for macrolens array based integral imaging display system,” Opt. Lasers Eng. 142, 106585 (2021).

8. Z. Yan, X. Yan, Y. Huang, et al., “Characteristics of the holographic diffuser in integral imaging display systems: A quantitative beam analysis approach,” Opt. Lasers Eng. 139, 106484 (2021).

9. A. Karimzadeh, “Integral imaging system optical design with aberration consideration,” Appl. Opt. 54(7), 1765–1769 (2015).

10. X. Yu, X. Sang, D. Chen, et al., “Autostereoscopic three-dimensional display with high dense views and the narrow structure pitch,” Chin. Opt. Lett. 12(6), 060008 (2014).

11. Y. Zhu, X. Sang, X. Yu, et al., “Wide field of view tabletop light field display based on piece-wise tracking and off-axis pickup,” Opt. Commun. 402, 41–46 (2017).

12. L. Yang, X. Sang, X. Yu, et al., “A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction,” Opt. Express 26(26), 34412–34427 (2018).

13. L. Yang, X. Sang, X. Yu, et al., “Viewing-angle and viewing-resolution enhanced integral imaging based on time multiplexed lens stitching,” Opt. Express 27(11), 15679–15692 (2019).

14. B. Liu, X. Sang, X. Yu, et al., “Time-multiplexed light field display with 120-degree wide viewing angle,” Opt. Express 27(24), 35728–35739 (2019).

15. S. Yoshida, “fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays,” Opt. Express 24(12), 13194–13203 (2016).

16. L. Ni, Z. Li, H. Li, et al., “360-degree large-scale multiprojection light-field 3D display system,” Appl. Opt. 57(8), 1817–1823 (2018).

17. J. Hua, E. Hua, F. Zhou, et al., “Foveated glasses-free 3D display with ultrawide field of view via a large-scale 2D-metagrating complex,” Light: Sci. Appl. 10(1), 213 (2021).

18. J. Hua, F. Zhou, Z. Xia, et al., “Large-scale metagrating complex-based light field 3D display with space-variant resolution for non-uniform distribution of information and energy,” Nanophotonics 12(2), 285–295 (2023).

19. C. J. Zhao, Z. D. Guo, H. Deng, et al., “Integral imaging three-dimensional display system with anisotropic backlight for the elimination of voxel aliasing and separation,” Opt. Express 31(18), 29132–29144 (2023).

20. T. H. Wang, H. Deng, Y. Xing, et al., “High-resolution integral imaging display with precise light control unit and error compensation,” Opt. Commun. 518, 128363 (2022).

21. M. Date, S. Shimizu, H. Kimata, et al., “Depth range control in visually equivalent light field 3D,” IEICE Trans. Electron. E104.C(2), 52–58 (2021).

22. M. Date, M. Isogai, and H. Kimata, “Full parallax visually equivalent light field 3D display using RGB stripe liquid crystal display panel,” 3DSA DIGEST LFII-4 (2018).

23. M. Date, Y. Tanaka, M. Isogai, et al., “56-5: Late-news paper: table top visually equivalent light field 3D display using 15.6-inch 4 K LCD panel,” Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 50(1), 791–794 (2019).

24. X. Yu, H. Li, X. Sang, et al., “Aberration correction based on a pre-correction convolutional neural network for light-field displays,” Opt. Express 29(7), 11009–11020 (2021).

25. X. Yu, H. Li, X. Su, et al., “Image edge smoothing method for light-field displays based on joint design of optical structure and elemental images,” Opt. Express 31(11), 18017–18025 (2023).
