Abstract

A novel panoramic stereo imaging system with double panoramic annular lenses (PAL) is introduced in this paper. The proposed system consists of two coaxial PAL units that image onto a single sensor, forming concentric image rings. The object position can be calculated by triangulation. The novelty of this system is that the central blind zone of one PAL unit is partly used by the other as an image zone. The stereo field of view is 60°~105° × 360°. The farthest depth that can be extracted at a resolution of 1 line/cm is about 500 mm. The F# of the system is about 3, which allows the system to be used in low-illumination environments. A design is presented and the depth extraction precision is analyzed.

©2012 Optical Society of America

1. Introduction

Panoramic optical systems provide a wide field of view. They are extensively used in applications such as surveillance, robotic vision, navigation and military systems [1, 2]. If stereo vision could be obtained in real time in these applications, such systems would be far more useful. There are several approaches to obtaining stereo panoramic views [3]: mosaicking pictures, or taking two or more pictures at different viewpoints with one panoramic camera. Mosaicking requires several cameras fixed at different places or one freely moving camera. The latter method can be categorized into single-viewpoint (SVP) optics and non-single-viewpoint (non-SVP) optics.

SVP optics usually consists of one or more reflective surfaces and a standard camera, and is a popular way to obtain panoramic images [4]. G. Kweon reported a double-reflective-mirror system with ideal equi-distance projection [5]. The two successive mirror profiles are fitted with 8th-order polynomials. Rays coming from one object are reflected by the two different mirrors and image at two positions on one image plane. G. Jang also proposed a panoramic stereo system using a double-lobed mirror, which is a combined coaxial mirror pair [6]. The sensor obtains two distinct viewpoints from the two mirrors. This stereo system has a long baseline, so its depth resolution is higher than that of the first one. The biggest advantage of SVP optics is its simple construction.

Non-SVP optics for panoramic stereo imaging is less often reported. The advantage of this technique is that it does not need complicated reflective surfaces, and its tolerances are less sensitive than those of SVP optics. Fisheye lenses and PAL lenses are non-SVP optics that can provide panoramic views over 180°. Their difference is that the distortion of a fisheye lens is hard to control at the margin of the field without using aspheric surfaces, while that of a PAL is not. Z. G. Zhu used double PAL systems to measure object distance [7]. The system is carried by two platforms: one fixed on top of a stationary shelf, the other on a mobile robot car. As the robot car moves, the baseline varies, so the system forms a dynamic stereo view. Gilbert proposed a stereoscopic system using only one PAL lens, which is moved along the optical axis by a step motor [8]. A 3D structure can thus be acquired by comparing the images taken at two positions. It is claimed that the resolution of the system mainly depends on the moving part.

However, the mirrors used in SVP optics are usually large and irregular, which makes them hard to fabricate, and they require high-precision alignment. The individual differences of the off-the-shelf camera lenses must also be considered. In ref [5], the authors noted that the camera nodal point is hard to adjust, so the precision is not high. Moreover, to meet the SVP condition, the system's entrance pupil is usually too small for use in low-luminance environments or infrared bands. The above PAL stereo systems also have disadvantages: the system in ref [7] needs a complex baseline calibration in which the two platforms "look" at each other, while the system in ref [8] is time consuming and cannot obtain real-time information. The biggest shortcoming of a PAL system, however, is the blind zone in the central portion of the imaging plane. Due to the imaging principle of the PAL, it is impossible to eliminate it; it can only be reduced.

In this paper, a novel real-time stereo system using double coaxial PALs is proposed, with the special characteristic of forming its double ring images on one CMOS/CCD sensor without using any complicated surfaces. It belongs to non-SVP optics. Since only a ring zone, not the whole area, of the front reflective surface is used in a PAL block, part of the central zone of the lower PAL block is used to let the upper PAL's rays pass through and image in the blind zone of the lower PAL image plane. The stereo field of view of this system is 60°~105° × 360°. The baseline is about 100 mm. The farthest depth that can be extracted at a resolution of 1 line/cm is about 500 mm. A full-frame black-and-white CCD sensor is used. The depth information of all objects over 360° can be calculated by software in real time. All the surfaces are spherical and the lenses are made of common glasses.

2. Stereo imaging principle of double PAL system

The configuration of the proposed stereo PAL imaging system is shown in Fig. 1; it is composed of two PAL units. PALupper and RLupper are the upper PAL block and its relay lenses, respectively; PALlower and RLlower are the lower PAL block and its relay lenses, respectively. The front reflective surface of PALupper carries a fully reflective coating. On the lower block, only a ring carries a reflective coating, and the central zone has a refractive coating. αupper and βupper are the maximum field angles from the horizon of the upper PAL; αlower and βlower are those of the lower PAL. Obviously, stereo vision can only be acquired in the overlapping zone. Assume P is a point in object space radiating two rays. One ray (red dashed) goes through the upper unit: it is refracted by the first refractive surface, reflected by the rear reflective surface, and reflected by the fully reflective coated surface. It then leaves the PALupper block, passes through RLupper, penetrates PALlower through the refractively coated central zone, passes through RLlower, and forms image P1. The other ray (blue dashed) goes through the lower unit: it is refracted by the first refractive surface, reflected by the rear reflective surface, and reflected by the ring reflective coating. It then leaves the PALlower block, passes through RLlower, and forms image P2. The height from P to the imaging plane is H, and the distance from P to the optical axis is S. θupper is the incidence angle of the red dashed ray and θlower that of the blue dashed ray; d is the distance between the two entrance pupil positions for the same object point, and d' is the distance between the lower entrance pupil position and the imaging plane. The object position can easily be calculated from θupper, θlower and d by triangulation. The PAL system follows the principle of cylindrical perspective, so each concentric circle on the image plane corresponds to a single field angle in object space, and the depth of field is almost infinite. The field angle can therefore be calculated by measuring the image radius from the circle center. The system does not contain any moving components.

Fig. 1 Configuration of the stereo PAL imaging system. It is composed of two PAL units. Two rays coming from object point P travel along two different paths (colored dashed lines) and form images P1 and P2 on the image plane. By measuring the radii of the images to obtain θupper, θlower and the baseline d, the location parameters of P, H and S, can be calculated by triangulation.

3. Circles distribution

The image plane of the proposed system is shown in Fig. 2. The blue ring is the image area of the PALlower unit; the red ring is the image area of the PALupper unit; the green circle is the blind area of the PALupper unit. The red and green areas together are configured to be the blind area of the PALlower unit. Φ is the azimuth angle of P, measured from an agreed coordinate. P1 and P2 are the images of P, whose radii are rupper and rlower, respectively. rlower(max), rlower(min), rupper(max) and rupper(min) are the boundaries of the two PAL units' rings. In this design, the two images lie on the same imaging plane but on opposite sides of the optical axis. This differs from other stereo systems, whose images are on the same side [4, 5]. The inner margin of PALlower's image and the outer margin of PALupper's image are close to each other. If the system obeys f-θ distortion, the focal lengths of the two PAL units satisfy Eq. (1):

rlower(min) = flower · θlower(min)
rupper(max) = M · fupper · θupper(max)
rlower(min) = rupper(max)
rlower(max) = flower · θlower(max)     (1)
where fupper and flower are the two PAL units' focal lengths, and M is the magnification of the reversed lower PAL unit, which will be introduced in a following section. rlower(max) should be smaller than half the length of the sensor's short side.
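The ring-matching condition of Eq. (1) can be checked numerically. The sketch below is illustrative only: the function name is ours, and it assumes an ideal f-θ mapping (r = f·θ, with θ in radians), which the practical design deviates from (see Eq. (2)).

```python
import math

def ring_focal_lengths(r_lower_min, r_lower_max, r_upper_max,
                       th_lower_min, th_lower_max, th_upper_max):
    """Solve Eq. (1) assuming an ideal f-theta mapping r = f * theta.

    Radii in mm, angles in degrees. Returns (f_lower, M_f_upper, r_outer),
    where M_f_upper is the product of the upper unit's focal length and
    the relay magnification M.
    """
    # The two rings must touch: inner edge of lower = outer edge of upper.
    if abs(r_lower_min - r_upper_max) > 1e-9:
        raise ValueError("ring-matching condition of Eq. (1) violated")
    f_lower = r_lower_min / math.radians(th_lower_min)
    m_f_upper = r_upper_max / math.radians(th_upper_max)
    # Outer ring radius implied by f_lower; it should not exceed half
    # the sensor's short side.
    r_outer = f_lower * math.radians(th_lower_max)
    return f_lower, m_f_upper, r_outer
```

For example, with a nominal inner-circle radius of 6.5 mm and the 60°~105° stereo field, the ideal-mapping estimate is f_lower ≈ 6.21 mm; the real units differ because the practical distortion follows Eq. (2) rather than pure f-θ.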

Fig. 2 The blue ring is the image area of the PALlower unit. The red ring is the image area of the PALupper unit. The green circle is the blind area of the PALupper unit. Φ is the azimuth angle of P in Fig. 1. P1, P2 are the images of P, whose radii are rupper and rlower, respectively. rlower(max), rlower(min), rupper(max) and rupper(min) are the boundaries of the two PAL units' rings.

Optics with large field angles, like PAL or fisheye lenses, obey f-θ distortion instead of f-tanθ distortion. Once the maximal field angle is fixed and the PAL system obeys the f-θ rule, the two circle radii are determined as well. However, the size of the sensor limits the outer circle diameter. To maximize the use of the sensor pixels, the f-θ mapping has to be multiplied by a constant k (0 < k < 1) to limit the image size; the ratio of the blind zone, however, remains the same. Ideally, the inner diameter should be as small as possible. The distortion is therefore assumed to obey Eq. (2):

(θ − θ1)/(θ2 − θ1) = (r − r1)/(r2 − r1),  0 < θ1 < θ2     (2)
where r1 and r2 are the image radii at the smallest and largest field angles θ1 and θ2. Under this equation the distortion is still linear, but the central blind zone is smaller.
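The modified mapping of Eq. (2) is simply an affine relation between θ and r. A minimal sketch (function name ours; the radii in the test values are the nominal ring boundaries of section 3):

```python
def radius_from_angle(theta, theta1, r1, theta2, r2):
    """Eq. (2): affine theta -> r mapping through the two calibration
    points (theta1, r1) and (theta2, r2). The distortion stays linear,
    but because r1 need not equal f * theta1, the central blind zone can
    be smaller than under a pure f-theta rule."""
    return r1 + (theta - theta1) * (r2 - r1) / (theta2 - theta1)
```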

In practical optical design, the fisheye lens suffers from large negative f-θ distortion at the margin of the image. If the number of lenses is limited, this is hard to reduce without using aspheric surfaces, and the volume is large. Using a PAL structure, however, an optical system with a field angle of more than 180° can still be designed without any complicated optical structure, and the f-θ distortion can even be positive. This is a big advantage for applications that need information at the margin. Considering optical aberrations, a practical PAL lens will not obey Eq. (2) exactly, so calibration is needed after assembly. In particular, the relation between r and θ is fitted to a polynomial after optical design, obtained by polynomial fitting as in Eq. (3):

r(θ) = r0 + r1·θ + r2·θ² + … + rN·θ^N,  0 < θmin ≤ θ ≤ θmax     (3)
The preferred layout on the full-frame sensor is a blind spot diameter of 4 mm, an inner circle diameter of 14 mm, and an outer diameter of 24 mm. The two rings then have the same width, 5 mm, so the sagittal resolution is the same, which benefits the final depth resolution. However, the required deviation of the PALupper unit from f-θ distortion is hard to achieve in the design. After several trials, a tradeoff was made: the blind spot diameter is designed to be 5.6 mm, the inner circle diameter 13 mm, and the outer diameter 23.6 mm.

4. Entrance pupil shift

In [8, 9], J. A. Gilbert used three equations to describe the entrance pupil's position in Cartesian coordinates, treating it as a fixed position off the optical axis. Large-field-angle optics like fisheye or PAL lenses suffer from large entrance pupil aberration, which makes them difficult to optimize [10]. Considering rotational symmetry, however, the center of the entrance pupil is still on the optical axis [11].

In this design, the normal angle of the entrance pupil is assumed to equal the field angle θ, and the position of the entrance pupil center varies with θ. Figure 3 shows the variation of the baseline in the double PAL system. The distance between the two PAL units' pupil positions at the minimum angles (θupper(min) and θlower(min)) is set as the initial baseline length d0. If an object point P1 radiates two rays to both PAL units at angles θupper and θlower, both entrance pupils move upward, as in a fisheye lens, and the baseline becomes d(θupper, θlower).

Fig. 3 d0 is the initial baseline length at the minimal field angles (θupper(min) and θlower(min)) of PAL units. P0 is located by angles and baseline length. P1 is another object point located by θupper, θlower and d(θupper, θlower). Δd(θupper) and Δd(θlower) are the shifts of entrance pupils.

As with the distortion, the relation between the field angle and the pupil position can be obtained by polynomial fitting. The baseline length can be described by Eq. (4) as the position of P in Fig. 3 changes:

d(θupper, θlower) = d0 + Δd(θupper) − Δd(θlower)     (4)
where Δd(θupper) and Δd(θlower) are polynomials like Eq. (3), and d0 is the initial baseline length.

5. Coordinates extracting algorithm

Figure 4 is the schematic diagram of the double PAL system for calculating the location of P in object space. The origin of the coordinate system is set at the lower PAL system's lowest entrance pupil center. Φ is the azimuth angle, which can be measured in the imaging circle. Using polar coordinates, the location parameters of P, s and h, can be calculated by the trigonometry shown in Eqs. (5) and (6):

s = d(θupper, θlower) · sinθupper · sinθlower / sin(θupper − θlower)     (5)
h = d(θupper, θlower) · sinθupper · cosθlower / sin(θupper − θlower)     (6)
where s, h and Φ are the location parameters of P. The Cartesian coordinates of P can be written as x0 = s·cosΦ, y0 = s·sinΦ, z0 = h.
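Eqs. (5) and (6), plus the polar-to-Cartesian conversion, can be sketched as follows (a hypothetical helper; angles in degrees, distances in the same units as the baseline):

```python
import math

def locate_point(theta_upper, theta_lower, d, phi):
    """Triangulate P from the two field angles (Eqs. (5)-(6)) and the
    baseline d, then convert (s, h, phi) to Cartesian coordinates."""
    tu = math.radians(theta_upper)
    tl = math.radians(theta_lower)
    denom = math.sin(tu - tl)          # vanishes when the two rays are parallel
    s = d * math.sin(tu) * math.sin(tl) / denom   # distance to the optical axis
    h = d * math.sin(tu) * math.cos(tl) / denom   # height above the origin
    p = math.radians(phi)
    return s * math.cos(p), s * math.sin(p), h    # (x0, y0, z0)
```

For instance, with θupper = 90° (a horizontal ray from the upper pupil), θlower = 45°, d = 100 and Φ = 0, the point lies at (100, 0, 100), as the geometry requires.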

Fig. 4 Schematic diagram of double PAL system. The coordinates of P can be calculated by θupper, θlower and d(θupper, θlower) by using trigonometry.

6. Implementation

In our design, SVS-VISTEK's full-frame monochrome camera svs16000 was chosen as the image sensor. The resolution of the camera is 4872 × 3248 with a pixel size of 7.4 μm, and the spectral range is 0.486 μm~0.656 μm. Since rays from the upper PAL unit pass through more lenses, its aperture should be a little larger: the F# of the upper PAL unit is set to 3.18, while that of the lower is 3.22. Both field angle ranges are 60°~105° × 360°.

6.1 Principle of joining two PAL units

The difficulty of this design lies in putting the two PAL units' images on one sensor. Tracing a system with large entrance pupil aberration in optical software is difficult and time-consuming, and optimizing the double PAL system by multi-configuration is even harder; this approach was abandoned after some failures. As an alternative, the two PAL units are designed separately and then conjugated together. There are two ways to join two independent optical units: entrance pupil matching and secondary imaging. Because the PAL's entrance pupil is inside its block and bent, it is hard to match, so after several trials the second way was chosen.

The focal length of any PAL system is negative, since the rays reflect twice in the PAL block. Figure 5 shows a typical PAL block. Ray bundles coming from the same direction converge in the PAL block, as shown in the blue region A, and are then reflected by the front mirror surface. If region A is mirrored into the front object space by the front reflective surface S2, there is a virtual bent imaging area, shown as the pink region B. It is a ring region whose center is empty, because the field angle of a PAL lens does not start from 0°. If this region is extended to the optical axis, objects in the green region C will image on the same plane as objects in region B, since the imaging region is continuous. Because only a ring region between surfaces S1 and S2 is used to reflect rays, the PAL system still works if surface S2 carries a ring-shaped reflective coating while surface S1 carries a refractive coating. Objects in region C will then image in the central blind zone of the sensor. In the double PAL system design, the lower PAL unit is designed first; then the position and size of regions B and C are determined; after that, the upper PAL unit can be designed; finally, the whole system is optimized together.

Fig. 5 A typical PAL block and its ray tracing. Blue region A is the inner curved imaging region in PAL block, pink region B is the region A mirrored by surface S2, green region C is the extension region of B to optical axis. Surface S1 is the surface with refractive coating. Surface S2 is the front mirror of PAL block with ring reflective coating.

6.2 Lower PAL unit

The lower PAL unit is composed of two cemented blocks, whose materials are ZK9 and ZF10 (in the Chinese glass catalog), which are favorable for achromatic design [12]. The structure of the PAL part and its relay lens is shown in Fig. 6(a). The focal length is −6.46 mm, the total length is about 124 mm, the diameter of the PAL block is Φ96 mm, and all the surfaces are spherical. The MTF is higher than 0.65 at 70 lp/mm in all fields, as shown in Fig. 6(b).

Fig. 6 (a). Structure of the lower PAL optical unit. (b). MTF of the 60°, 70°, 80°, 90°, 100°, and 105° fields at 70 lp/mm. Different colors represent the tangential and sagittal MTF of different fields. The black curve denotes the diffraction-limited MTF.

6.3 Reverse lower PAL unit

The next step is to determine the position and size of regions B and C in Fig. 5. The unit in section 6.2 is reversed to make it a finite-conjugate system. To join the upper PAL unit successfully, two glass groups are added. This design has several benefits: it makes the virtual image side telecentric, which makes it easier to join the upper unit [13]; it flattens the virtual image surface; it reduces aberrations such as chromatic aberration; it reduces the magnification M to prevent the upper PAL unit from becoming too large; it accepts some distortion to reduce the design difficulty of the upper PAL unit; and it increases the baseline length so that the whole system can "look" further. The reverse system is shown in Fig. 7.

Fig. 7 The reverse lower PAL unit with added glass groups. These groups are used to conjugate the upper unit with the lower unit. The imaging plane is the intermediate imaging plane of the upper PAL unit.

6.4 Upper PAL unit

Besides image quality, the key point in designing the upper PAL unit is to make the image side telecentric and to match the image size of the reversed lower PAL unit. The material of the PAL block is ZK9. The structure of the unit is shown in Fig. 8(a). The focal length is −3.70 mm, the total length is about 100 mm, the diameter of the PAL block is Φ100 mm, and all surfaces are spherical. The MTF is higher than 0.60 at 70 lp/mm in all fields, as shown in Fig. 8(b).

Fig. 8 (a). Structure of the upper PAL optical system. (b). MTF of the 60°, 70°, 80°, 90°, 100°, and 105° fields at 70 lp/mm. Different colors represent the tangential and sagittal MTF of different fields. The black curve denotes the diffraction-limited MTF.

6.5 Combination of two PAL units

The last step is to combine the two units and optimize them together. The structure of the combined system is shown in Fig. 9(a); its focal length is 3.19 mm and its total track is 218 mm. The resulting MTF is greater than 0.68 at 70 lp/mm in all fields, as shown in Fig. 9(b). It can be seen from Fig. 9(a) that rays below 60° can still enter the upper PAL block (purple rays). The actual field range of this unit is 28°~105° × 360°, and the actual blind zone is Φ3.26 mm, which takes up only 1.88% of the whole image circle (Φ23.8 mm). Figure 10 shows the profile of the whole double PAL system. It can be seen that the boundary images of the two rings come from opposite fields.

Fig. 9 (a). Structure of the combined optical system. (b). MTF of the 60°, 70°, 80°, 90°, 100°, and 105° fields at 70 lp/mm. Different colors represent the tangential and sagittal MTF of different fields. The black curve denotes the diffraction-limited MTF.

Fig. 10 The optical structure of the combination of two PAL units.

7. Depth extracting accuracy analysis

As mentioned in the sections above, the relations between θ and Δd(θ), and between θ and r(θ), can be written as polynomials. The data of field angle vs. image radius and field angle vs. entrance pupil shift were obtained in ZEMAX at 10° intervals and fitted to polynomials. The orders of the polynomials are determined by the fitting errors, which should be less than one pixel. The polynomials are shown in Eqs. (7)–(10):

θupper(r1) = 7.5506 + 12.9445·r1 + 4.3679·r1² − 0.9897·r1³ + 0.0621·r1⁴ − 8.4180×10⁻⁴·r1⁵     (7)
θlower(r2) = 8.2310 + 5.4260·r2 + 0.5897·r2² − 0.0305·r2³     (8)
θupper(Δdupper) = 58.6473 + 13.6664·Δdupper − 2.3130·Δdupper² + 0.3028·Δdupper³ − 0.0050·Δdupper⁴     (9)
θlower(Δdlower) = 58.8308 + 7.0935·Δdlower − 0.5176·Δdlower² + 0.0304·Δdlower³ − 4.8514×10⁻⁴·Δdlower⁴     (10)
where θupper ∈ (60°, 105°), θlower ∈ (60°, 105°), and d0 = 107.7589 mm.
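The fitted polynomials can be evaluated directly from their coefficient lists. In the sketch below the helper name is ours, and the coefficient signs of Eqs. (7) and (8) are our reconstruction, chosen to be consistent with the derivative expressions in Eqs. (11) and (12):

```python
def poly_eval(coeffs, x):
    """Horner evaluation of a polynomial given ascending coefficients."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# Eq. (7): theta_upper (degrees) vs. image radius r1 (mm)
THETA_UPPER = [7.5506, 12.9445, 4.3679, -0.9897, 0.0621, -8.4180e-4]
# Eq. (8): theta_lower (degrees) vs. image radius r2 (mm)
THETA_LOWER = [8.2310, 5.4260, 0.5897, -0.0305]
```

As a sanity check, r1 = 2.8 mm (the edge of the Φ5.6 mm blind spot) gives θupper ≈ 60°, and r2 = 6.5 mm (the Φ13 mm inner circle) gives θlower ≈ 60°, matching the minimum stereo field angle of the design.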

Among all the parameters, the object depth s is the most important. In Fig. 11, S0 is a point object in object space whose distance from the optical axis is s0. This distance is determined by d(θupper, θlower), θupper and θlower. There is an area around this point in which object points image on the same pixel as S0, so they cannot be distinguished on the imaging plane. The two extreme positions are S1 and S2, shown in Fig. 11. The reciprocal of the distance between S1 and S2 represents the resolution at S0.

Fig. 11 S0 is an object point. S1 and S2 are the two extreme positions within one pixel that cannot be distinguished by the sensor. Their distances to the optical axis are s0, s1 and s2, respectively.

The sensor's pixel size dr is 7.4 μm. Differentiating Eq. (7) gives Eq. (11), in which r1 can be substituted via the inverse transform of Eq. (7), r1(θupper). The corresponding relation for dθlower is shown in Eq. (12):

dθupper = (12.9445 + 8.7358·r1 − 2.9691·r1² + 0.2484·r1³ − 4.209×10⁻³·r1⁴) · dr     (11)
dθlower = (5.4260 + 1.1794·r2 − 0.0915·r2²) · dr     (12)
s1 and s2 can then be expressed as s1(θupper′, θlower′, d′) and s2(θupper″, θlower″, d″), with the perturbed quantities given by Eq. (13):
θupper′ = θupper + dθupper,  θlower′ = θlower − dθlower,  d′ = d(θupper′, θlower′)
θupper″ = θupper − dθupper,  θlower″ = θlower + dθlower,  d″ = d(θupper″, θlower″)     (13)
The resolution R(θupper, θlower) is given by Eq. (14), in lines per centimeter (line/cm):

R(θupper, θlower) = 10/(s2 − s1)     (14)
Using MATLAB, the 3D plot of the system's resolution is shown in Fig. 12. The maximal resolution is 9.17 line/cm at θlower = 60°, θupper = 95°. The resolution becomes lower as the two angles get closer. Taking 1 line/cm as the minimal acceptable resolution, we can calculate the extreme positions of P from θupper, θlower and d(θupper, θlower). These positions and the ray trajectories are shown in Fig. 13. P2 (s = 584.8236 mm) is the furthest position that can be resolved, where θupper = 92.85° and θlower = 82.5°. The average furthest depth is between 450 mm and 550 mm under this condition.
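The resolution evaluation of Eqs. (13) and (14) can be sketched as below. This is a simplified illustration with hypothetical names: it uses a fixed baseline d instead of the angle-dependent d(θupper, θlower) of Eq. (4), and takes the one-pixel field-angle increments dθupper and dθlower (which Eqs. (11) and (12) supply) as inputs.

```python
import math

def s_radial(theta_u, theta_l, d):
    """Radial distance s of Eq. (5); angles in degrees, d in mm."""
    tu, tl = math.radians(theta_u), math.radians(theta_l)
    return d * math.sin(tu) * math.sin(tl) / math.sin(tu - tl)

def depth_resolution(theta_u, theta_l, d, dth_u, dth_l):
    """Eq. (14): resolvable lines per centimeter at (theta_u, theta_l).

    s1 and s2 are the two extreme confusable positions of Eq. (13),
    obtained by perturbing both field angles by one pixel's worth of
    angle in opposite directions (degrees)."""
    s1 = s_radial(theta_u + dth_u, theta_l - dth_l, d)  # nearest confusable point
    s2 = s_radial(theta_u - dth_u, theta_l + dth_l, d)  # farthest confusable point
    return 10.0 / abs(s2 - s1)   # s in mm -> lines/cm
```

Doubling the pixel-equivalent angle increments roughly halves the resolution, which illustrates why the resolution drops as the two field angles approach each other: the triangulation becomes ill-conditioned.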

Fig. 12 The resolution vs. two field angles of the system. The unit is line/cm.

Fig. 13 Incident ray trajectories at the extreme positions of P, where the depth resolution is 1 line/cm.

Because the baseline is still not long enough and the focal lengths of the two PAL units are short, the resolution is not very high. Resolution and a limited system size are contradictory requirements.

This result is obtained under ideal conditions only. The final resolution is affected not only by the optical design but also by the assembly accuracy and tolerances. The software used to extract depth also matters, since a real object is not a point but has a finite size. Besides, it is difficult to match the same object in the two rings because of the distortion of the PAL system, so the development of pattern recognition software is important [14]. The speed of the extraction algorithm directly affects real-time performance. The accuracy of the software will be tested on the final prototype.

8. Conclusions

A novel real-time panoramic 3D system based on double PALs is proposed. The system has a long baseline. By "looking" at one object point, the two PAL units obtain two different angles, and the object position is calculated from these angles and the baseline length by triangulation. The two ring images are on one sensor; the upper PAL unit's image lies in the blind zone of the lower one, so the pixel usage ratio is increased. The depth information can be obtained in real time by measuring, in software, the radii of the images of the same object in the two rings. The depth extraction algorithm is presented and its accuracy is analyzed. This system can be used in lunar or Mars rover robots: since it provides a panoramic view and object distances in real time, a robot can bypass surrounding obstacles by itself with an embedded intelligent computer program. In vehicle protection, this system can give drivers not only proximity warnings but also a view of the real environment without any dead zones.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (NSFC) under grant 10576028.

References and links

1. W. J. Fowski and M. M. Birnbaum, “Panoramic annular lens attitude determination system (PALADS),” Proc. SPIE 2466, 108–117 (1995).

2. F. A. Allahdadi and M. Chrisp, “SEDS, earth, atmosphere, and space imaging system (SEASIS),” Proc. SPIE 2214, 257–268 (1994).

3. T. Svoboda and T. Pajdla, “Panoramic cameras for 3D computation,” in Proceedings of Czech Pattern Recognition Workshop, 63–70 (Czech Society for Pattern Recognition, February 2000).

4. S. Baker and S. K. Nayar, “A theory of catadioptric image formation,” in Proceedings of the Sixth International Conference on Computer Vision (1998).

5. G. Kweon, K. Kim, Y. Choi, G. Kim, H. Kim, and S. Yang, “A catadioptric double-panoramic lens with the equi-distance projection for a rangefinder application,” Proc. SPIE 5613, 29–42 (2004).

6. G. Jang, S. Kim, and I. Kweon, “Single-camera panoramic stereo system with single-viewpoint optics,” Opt. Lett. 31(1), 41–43 (2006).

7. Z. Zhu and K. D. Rajasekar, “Panoramic virtual stereo vision of cooperative mobile robots for localizing 3D moving objects,” in Proceedings of the IEEE Workshop on Omnidirectional Vision, 29–36 (2000).

8. G. N. E. Weech, J. A. Gilbert, and D. R. Matthys, “A stereoscopic system for radial metrology,” in Proceedings of the 2001 SEM Annual Conference and Exposition, 199–202 (2001).

9. D. R. Matthys, J. A. Gilbert, and S. B. Fair, “Characterization of optical systems for radial metrology,” in Proceedings of the SEM IX International Congress on Experimental Mechanics, 104–107 (2000).

10. H. R. Fallah and J. Maxwell, “Higher-order pupil aberrations in wide angle and panoramic optical systems,” Proc. SPIE 2774, 342–351 (1996).

11. Z. L. Feng, “Studies on the characteristics and applications of the annular imaging system with high resolving power,” Ph.D. dissertation (2008).

12. S. Niu, J. Bai, X. Y. Hou, and G. G. Yang, “Design of a panoramic annular lens with a long focal length,” Appl. Opt. 46(32), 7850–7857 (2007).

13. I. Powell, “Design study of an infrared panoramic optical system,” Appl. Opt. 35(31), 6190–6194 (1996).

14. I. Kopilović, B. Vágvölgyi, and T. Szirányi, “Application of panoramic annular lens for motion analysis tasks: surveillance and smoke detection,” in Proceedings of the International Conference on Pattern Recognition (ICPR'00), Barcelona, Spain, 4, 714–717 (2000).

References

  • View by:
  • |
  • |
  • |

  1. W. J. Fowski and M. M. Birnbaum, “Panoramic annular lens attitude determination system (PALADS),” Proc. SPIE 2466, 108–117 (1995).
    [Crossref]
  2. F. A. Allahdadi and M. Chrisp, “SEDS, earth, atmosphere, and space imaging system (SEASIS),” Proc. SPIE 2214, 257–268 (1994).
    [Crossref]
  3. T. Svoboda and T. Pajdla, “Panoramic cameras for 3D computation,” in Proceedings of Czech Pattern Recognition Workshop, 63–70, (Czech Society for Pattern Recognition, February 2000).
  4. S. Baker and S. K. Nayar, “A theory of catadioptric image formation” in Proceedings of IEEE Conference on Computer Vision 35(2) 1999. Sixth International Conference, 175–196(1998).
  5. G. Kweon, K. Kim, Y. Choi, G. Kim, H. Kim, and S. Yang, “A catadioptric double-panoramic lens with the equi-distance projection for a rangefinder application,” Proc. SPIE 5613, 29–42 (2004).
    [Crossref]
  6. G. Jang, S. Kim, and I. Kweon, “Single-camera panoramic stereo system with single-viewpoint optics,” Opt. Lett. 31(1), 41–43 (2006).
    [Crossref] [PubMed]
  7. Z. Zhu and K. D. Rajasekar, “Panoramic virtual stereo vision of cooperative mobile robots for localizing 3D moving objects,” in Proceedings of Omnidirectional Vision, IEEE Workshop on 2000, 29–36(2000).
  8. G. N. E. Weech, J. A. Gilbert, and D. R. Matthys, “A stereoscopic system for radial metrology,” in Proceedings of the 2001 SEM Annual Conference and Exposition, 199–202 (2001).
  9. D. R. Matthys, J. A. Gilbert, and S. B. Fair, “Characterization of optical systems for radial metrology,” in Proceedings of the SEM IX International Congress on Experimental Mechanics, 104–107 (2000).
  10. H. R. Fallah and J. Maxwell, “Higher-order pupil aberrations in wide angle and panoramic optical systems,” Proc. SPIE 2774, 342–351 (1996).
    [Crossref]
  11. Z. L. Feng, “Studies on the characteristics and applications of the annular imaging system with high resolving power,” Ph.D. dissertation (2008).
  12. S. Niu, J. Bai, X. Y. Hou, and G. G. Yang, “Design of a panoramic annular lens with a long focal length,” Appl. Opt. 46(32), 7850–7857 (2007).
    [Crossref] [PubMed]
  13. I. Powell, “Design study of an infrared panoramic optical system,” Appl. Opt. 35(31), 6190–6194 (1996).
    [Crossref] [PubMed]
  14. I. Kopilovié, B. Vágvőlgyi, and T. Szirányi, “Application of panoramic annular lens for motion analysis tasks: surveillance and smoke detection,” in Proceedings of the International Conference on Pattern Recognition (ICPR'00), (Barcelona, Spain) 4, 714–717(2000).



Figures (13)

Fig. 1
Fig. 1 Configuration of the stereo PAL imaging system. It is composed of two PAL units. Two rays coming from object point P travel through two different paths (colored dashed lines) and form images P1 and P2 on the image plane. By measuring the image radii to obtain θupper, θlower and the baseline d, the location parameters of P, namely H and S, can be calculated by triangulation.
Fig. 2
Fig. 2 The blue ring is the image area of the PALlower unit. The red ring is the image area of the PALupper unit. The green circle is the blind area of the PALupper unit. Φ is the azimuth angle of P in Fig. 1. P1 and P2 are the images of P, whose radii are rupper and rlower respectively. rlower(max), rlower(min), rupper(max) and rupper(min) are the boundaries of the two PAL units.
Fig. 3
Fig. 3 d0 is the initial baseline length at the minimal field angles (θupper(min) and θlower(min)) of PAL units. P0 is located by angles and baseline length. P1 is another object point located by θupper, θlower and d(θupper, θlower). Δd(θupper) and Δd(θlower) are the shifts of entrance pupils.
Fig. 4
Fig. 4 Schematic diagram of the double PAL system. The coordinates of P can be calculated from θupper, θlower and d(θupper, θlower) using trigonometry.
Fig. 5
Fig. 5 A typical PAL block and its ray tracing. Blue region A is the inner curved imaging region of the PAL block, pink region B is region A mirrored by surface S2, and green region C is the extension of region B to the optical axis. Surface S1 carries a refractive coating. Surface S2 is the front mirror of the PAL block, coated with a ring-shaped reflective coating.
Fig. 6
Fig. 6 (a). Structure of the lower PAL optical unit. (b). MTF of the 60°, 70°, 80°, 90°, 100°, and 105° fields at 70 lp/mm. Different colors represent the tangential and sagittal MTF of different fields. The black curve denotes the diffraction-limited MTF.
Fig. 7
Fig. 7 The reversed lower PAL unit with added lens groups. These groups conjugate the upper unit with the lower unit. The image plane shown is the intermediate image plane of the upper PAL unit.
Fig. 8
Fig. 8 (a). Structure of the upper PAL optical system. (b). MTF of the 60°, 70°, 80°, 90°, 100°, and 105° fields at 70 lp/mm. Different colors represent the tangential and sagittal MTF of different fields. The black curve denotes the diffraction-limited MTF.
Fig. 9
Fig. 9 (a). Structure of the upper PAL optical unit. (b). MTF of the 60°, 70°, 80°, 90°, 100°, and 105° fields at 70 lp/mm. Different colors represent the tangential and sagittal MTF of different fields. The black curve denotes the diffraction-limited MTF.
Fig. 10
Fig. 10 The optical structure of the combination of two PAL units.
Fig. 11
Fig. 11 S0 is an object point. S1 and S2 are the two extreme positions within one pixel that cannot be distinguished by the sensor. Their distances to the optical axis are s0, s1, and s2, respectively.
Fig. 12
Fig. 12 The depth resolution versus the two field angles of the system. The unit is line/cm.
Fig. 13
Fig. 13 Incident ray trajectories at the extreme positions of P, where the depth resolution is 1 line/cm.

Equations (14)


$$\begin{cases} r_{\mathrm{lower}}(\min)=f_{\mathrm{lower}}\,\theta_{\mathrm{lower}}(\min)\\ r_{\mathrm{upper}}(\max)=M f_{\mathrm{upper}}\,\theta_{\mathrm{upper}}(\max)\\ r_{\mathrm{lower}}(\min)=r_{\mathrm{upper}}(\max)\\ r_{\mathrm{lower}}(\max)=f_{\mathrm{lower}}\,\theta_{\mathrm{lower}}(\max) \end{cases}\tag{1}$$
$$\frac{\theta-\theta_1}{\theta_2-\theta_1}=\frac{r-r_1}{r_2-r_1}\quad(0<\theta_1<\theta_2)\tag{2}$$
$$r(\theta)=r_0+r_1\theta+r_2\theta^2+\dots+r_N\theta^N,\quad 0<\theta_{\min}\le\theta\le\theta_{\max}\tag{3}$$
$$d(\theta_{\mathrm{upper}},\theta_{\mathrm{lower}})=d_0+\Delta d(\theta_{\mathrm{upper}})-\Delta d(\theta_{\mathrm{lower}})\tag{4}$$
$$s=d(\theta_{\mathrm{upper}},\theta_{\mathrm{lower}})\,\frac{\sin\theta_{\mathrm{upper}}\sin\theta_{\mathrm{lower}}}{\sin(\theta_{\mathrm{upper}}-\theta_{\mathrm{lower}})}\tag{5}$$
$$h=d(\theta_{\mathrm{upper}},\theta_{\mathrm{lower}})\,\frac{\sin\theta_{\mathrm{upper}}\cos\theta_{\mathrm{lower}}}{\sin(\theta_{\mathrm{upper}}-\theta_{\mathrm{lower}})}\tag{6}$$
$$\theta_{\mathrm{upper}}(r_1)=7.5506+12.9445\,r_1+4.3679\,r_1^2-0.9897\,r_1^3+0.0621\,r_1^4-8.4180\times10^{-4}\,r_1^5\tag{7}$$
$$\theta_{\mathrm{lower}}(r_2)=8.2310+5.4260\,r_2+0.5897\,r_2^2-0.0305\,r_2^3\tag{8}$$
$$\theta_{\mathrm{upper}}(\Delta d_{\mathrm{upper}})=58.6473+13.6664\,\Delta d_{\mathrm{upper}}-2.3130\,\Delta d_{\mathrm{upper}}^2+0.3028\,\Delta d_{\mathrm{upper}}^3-0.0050\,\Delta d_{\mathrm{upper}}^4\tag{9}$$
$$\theta_{\mathrm{lower}}(\Delta d_{\mathrm{lower}})=58.8308+7.0935\,\Delta d_{\mathrm{lower}}-0.5176\,\Delta d_{\mathrm{lower}}^2+0.0304\,\Delta d_{\mathrm{lower}}^3-4.8514\times10^{-4}\,\Delta d_{\mathrm{lower}}^4\tag{10}$$
$$d\theta_{\mathrm{upper}}=\left(12.9445+8.7358\,r_1-2.9691\,r_1^2+0.2484\,r_1^3-4.209\times10^{-3}\,r_1^4\right)dr\tag{11}$$
$$d\theta_{\mathrm{lower}}=\left(5.4260+1.1794\,r_2-0.0915\,r_2^2\right)dr\tag{12}$$
$$\begin{cases}\theta_{\mathrm{upper}}'=\theta_{\mathrm{upper}}+d\theta_{\mathrm{upper}}\\ \theta_{\mathrm{lower}}'=\theta_{\mathrm{lower}}-d\theta_{\mathrm{lower}}\\ d'=d_0+\Delta d(\theta_{\mathrm{upper}}',\theta_{\mathrm{lower}}')\end{cases}\quad\text{and}\quad\begin{cases}\theta_{\mathrm{upper}}''=\theta_{\mathrm{upper}}-d\theta_{\mathrm{upper}}\\ \theta_{\mathrm{lower}}''=\theta_{\mathrm{lower}}+d\theta_{\mathrm{lower}}\\ d''=d_0+\Delta d(\theta_{\mathrm{upper}}'',\theta_{\mathrm{lower}}'')\end{cases}\tag{13}$$
$$R(\theta_{\mathrm{upper}},\theta_{\mathrm{lower}})=\frac{10}{s_2-s_1}\tag{14}$$
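Put together, the depth-extraction pipeline above (radius-to-angle polynomial fits, triangulation for s and h, and the line/cm resolution measure) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the baseline is treated as a fixed constant (the paper additionally corrects it with the entrance-pupil shifts of the two units), and the sample radii are arbitrary; only the polynomial coefficients are taken from the fits quoted above.

```python
import math

# Radius-to-field-angle calibration fits quoted in the text
# (ascending polynomial coefficients; angles in degrees, radii in mm).
UPPER_COEFFS = [7.5506, 12.9445, 4.3679, -0.9897, 0.0621, -8.4180e-4]
LOWER_COEFFS = [8.2310, 5.4260, 0.5897, -0.0305]


def poly(coeffs, x):
    """Evaluate a polynomial with ascending-order coefficients (Horner's rule)."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc


def locate(r_upper_mm, r_lower_mm, baseline_mm):
    """Triangulate an object point from its two image-ring radii.

    Returns (s, h): the distance to the optical axis and the height of the
    point, computed from the two field angles and the baseline. The baseline
    is simplified to a constant here (an assumption of this sketch)."""
    theta_u = math.radians(poly(UPPER_COEFFS, r_upper_mm))
    theta_l = math.radians(poly(LOWER_COEFFS, r_lower_mm))
    denom = math.sin(theta_u - theta_l)
    s = baseline_mm * math.sin(theta_u) * math.sin(theta_l) / denom
    h = baseline_mm * math.sin(theta_u) * math.cos(theta_l) / denom
    return s, h


def depth_resolution(s1_mm, s2_mm):
    """Depth-extracting resolution in line/cm: 10 mm per cm divided by the
    span of object distances (s1, s2) that map into a single pixel."""
    return 10.0 / (s2_mm - s1_mm)


if __name__ == "__main__":
    # Illustrative radii (mm) and baseline (mm), not values from the paper.
    s, h = locate(5.0, 5.0, 100.0)
    print(f"s = {s:.1f} mm, h = {h:.1f} mm")
    print(f"resolution for a 0.5 mm ambiguity span: "
          f"{depth_resolution(100.0, 100.5):.1f} line/cm")
```

A practical system would invert this per pixel: measure rupper and rlower on the two concentric image rings at the same azimuth Φ, then apply the two fits and the triangulation step.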
