Abstract

This paper presents a new method of structured-light-based 3D reconstruction, referred to here as the Boundary Inheritance Codec, that provides high accuracy and low noise in projector–camera correspondence. The proposed method features (1) real-boundary recovery: the exact locations of the region boundaries defined by a coded pattern are identified in terms of their real coordinates on the image plane. To this end, a radiance-independent recovery of accurate boundaries and a disambiguation of true and false boundaries are presented. (2) Boundary inheritance: the consistency among the same boundaries across different layers of the pattern hierarchy is exploited to further enhance the accuracy of region correspondence and boundary estimation. Extensive experiments were carried out to verify the performance of the proposed Boundary Inheritance Codec, in particular against a number of well-known methods, including Gray code (GC) plus line shift (LS) or phase shift (PS). The results indicate that the proposed method of recovering real boundaries with boundary inheritance is superior in accuracy and robustness to Gray-code inverse (GCI) and to GC+LS/PS. For instance, the error standard deviation and the percentage of outliers of the proposed method were 0.152 mm and 0.089%, respectively, while those of GCI were 0.312 mm and 3.937%, and those of GC+LS/PS were 0.280/0.321 mm and 0.159/7.074%.

© 2013 Optical Society of America

Figures (31)

Fig. 1. Brief taxonomy of structured light patterns.

Fig. 2. Three layers of GC inverse. Each layer consists of a pattern and its inverse.

Fig. 3. GC (top) and PS (bottom) patterns of 32 stripes.

Fig. 4. GC (top) and LS (bottom) patterns of 32 stripes.

Fig. 5. Hierarchical code layers. Each layer consists of four patterns, and each code in the upper layer is divided into four sub-codes in the lower layer.
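
The base-4 refinement in Fig. 5 implies that a full stripe address can be composed layer by layer. A minimal sketch of such base-4 composition, assuming the digits are listed coarsest-first (a hypothetical helper, not the paper's implementation):

```python
def hoc_address(digits):
    """Compose a projector stripe address from per-layer code digits
    (0-3), coarsest layer first; each layer refines the address by a
    factor of four, mirroring Fig. 5."""
    addr = 0
    for d in digits:
        addr = 4 * addr + d
    return addr

assert hoc_address([2, 0, 3]) == 2 * 16 + 0 * 4 + 3   # 3 layers -> 4**3 codes
```
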

Fig. 6. Conceptual description of the HOC-based signal decoding process: (a) the signal of pixel values at the ith position, (b) signal separation by multiplication with an orthogonal matrix, (c) selection of a set of probable codes, and (d) decoding of the correct address.
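
The caption's steps can be made concrete with a toy decoder. The sketch below assumes the simplest possible orthogonal code matrix, the 4x4 identity (each pattern of a layer illuminates one of the four sub-regions); the paper's HOC uses its own orthogonal code set, so this only illustrates the multiply-separate-select flow:

```python
import numpy as np

# Simplest orthogonal code matrix for illustration: code word j is the
# j-th row of the identity, i.e., the j-th pattern of the layer
# illuminates sub-region j (an assumption for this sketch).
CODES = np.eye(4)

def decode_layer_pixel(pixel_values):
    """Steps (b)-(d) of Fig. 6 for one pixel of one layer: separate the
    signals by multiplying with the orthogonal code matrix, rank the
    responses, and return the most probable code digit."""
    v = np.asarray(pixel_values, dtype=float)
    v = v - v.min()                    # crude ambient-light removal
    responses = CODES @ v              # (b) orthogonal separation
    return int(np.argmax(responses))   # (c)+(d) most probable code

digit = decode_layer_pixel([12, 180, 15, 11])   # -> 1
```
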

Fig. 7. Block diagram of the proposed Boundary Inheritance Codec.

Fig. 8. Criterion for selecting a detected boundary: a boundary is rejected if its normalized edge is not in the form of a rising or falling edge (a); otherwise it is selected (b).
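
A rough numpy rendering of this accept/reject test; the profile length and the swing/monotonicity tolerances are assumptions, not values from the paper:

```python
import numpy as np

def is_true_boundary(profile, swing=0.8, tol=0.05):
    """Fig. 8 criterion (sketch): accept a candidate boundary only if
    its normalized intensity profile is a clean rising or falling edge.
    'profile' is sampled across the candidate boundary, scaled to [0, 1]."""
    p = np.asarray(profile, dtype=float)
    d = np.diff(p)
    rising = p[-1] - p[0] > swing and np.all(d > -tol)    # monotone up
    falling = p[0] - p[-1] > swing and np.all(d < tol)    # monotone down
    return bool(rising or falling)

is_true_boundary([0.02, 0.1, 0.55, 0.93, 0.99])   # True (rising edge)
```
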

Fig. 9. Shadow in the camera view. The region [Y2, Y3] corresponds to the shadow [B, C].

Fig. 10. Shadow in the projector view. The region [X2, X3] corresponds to the shadow [B, C].

Fig. 11. Four cases of the pattern's boundaries with shadow in the projector view.

Fig. 12. Bottom-up inheritance for detecting shadow boundaries in the projector view. In layer n, the shadow boundary cannot be detected (marked "X"); in layer n+1, the shadow boundaries B and C are detected (marked "O"; in the camera view, B coincides with C), so B and C are inherited from layer n+1 up to layer n.

Fig. 13. Example of defining regions in each layer. Each region is formed by two consecutive boundaries, and the stripe regions are associated with their projector correspondences.

Fig. 14. Signal of one HOC pattern and its boundaries in layers k and k+1. (a) The common boundaries in the upper layer are inherited by the lower layer. (b) The false boundary in layer k has no common boundary in layer k+1 and is therefore rejected as noise.
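
The rejection rule of Fig. 14(b) amounts to a nearest-neighbor match between the boundary lists of adjacent layers. A sketch, with a hypothetical pixel tolerance:

```python
import numpy as np

def reject_false_boundaries(bounds_k, bounds_k1, tol=0.5):
    """Keep a layer-k boundary only if layer k+1 contains a boundary at
    (nearly) the same image position; an unmatched boundary is rejected
    as noise, as in Fig. 14(b). 'tol' (pixels) is an assumed tolerance."""
    b1 = np.sort(np.asarray(bounds_k1, dtype=float))
    kept = []
    for b in bounds_k:
        j = np.searchsorted(b1, b)
        nearby = b1[max(j - 1, 0):j + 1]        # closest candidates
        if nearby.size and np.min(np.abs(nearby - b)) <= tol:
            kept.append(b)
    return kept
```
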

Fig. 15. HOC and the corresponding addresses.

Fig. 16. Inheritance principle: each region in the lower layer k+1 updates its correspondence by inheriting the correspondence of the containing region in the upper layer k.
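
The inheritance principle can be sketched as follows; the (start, end, value) tuple layout for regions is hypothetical, chosen only to illustrate containment plus base-4 refinement:

```python
def inherit_correspondence(upper, lower):
    """Fig. 16 as a sketch. 'upper' holds (start, end, address) regions
    of layer k; 'lower' holds (start, end, digit) regions of layer k+1,
    with 'digit' the locally decoded code (0-3). Each lower region
    inherits the address of the upper region containing it, refined
    base-4."""
    out = []
    for (s, e, d) in lower:
        mid = 0.5 * (s + e)
        for (S, E, addr) in upper:
            if S <= mid < E:                    # containing upper region
                out.append((s, e, 4 * addr + d))
                break
    return out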

Fig. 17. The structured light system consists of a Canon DLP projector and a PGR Flea2 camera; the baseline is 23 cm.

Fig. 18. Scene to be captured.

Fig. 19. 3D point cloud of the scene captured by the Boundary Inheritance Codec. (a) Front view, (b) top view, and (c) side view.

Fig. 20. 3D point cloud of the scene captured by GC+LS. (a) Front view, (b) top view, and (c) side view.

Fig. 21. 3D point cloud of the scene captured by GCI with boundary estimation. (a) Front view, (b) top view, and (c) side view.

Fig. 22. 3D point cloud of the scene captured by HOC. (a) Front view, (b) top view, and (c) side view.

Fig. 23. 3D point cloud of the scene captured by GCI. (a) Front view, (b) top view, and (c) side view.

Fig. 24. 3D point cloud of the scene captured by GC+PS. (a) Front view, (b) top view, and (c) side view.

Fig. 25. Rendered 3D surface of the milk box reconstructed by the different methods.

Fig. 26. Calibration block to be measured and compared.

Fig. 27. Exponential staircase. The steps are 1, 2, 4, and 8 mm from the base plane.

Fig. 28. Perspective and side views of the 3D rendered surface of the staircase captured by the Boundary Inheritance Codec at 1 m.

Fig. 29. Depth values of the different methods along a single row of the staircase at 1 m.

Fig. 30. (a) Checkerboard pattern in which each square is one pixel. Captured images of the pattern projected by (b) the Pico projector and (c) the Canon DLP projector.

Fig. 31. Signals of the patterns projected by (a) the Pico projector and (b) the Canon DLP projector. Due to image resizing in the Pico projector, the boundary in layer 4 is not the same as in layer 1; the Canon projector projects pixels exactly, so the boundaries in layers 4 and 1 coincide.

Tables (7)

Table 1. Number of 3D Points and Outliers in the Results of Boundary Inheritance Codec, GC+LS, GCI with Boundary Estimation, HOC, GCI, and GC+PS

Table 2. Errors of Reconstructed 3D Data of the Calibration Block's Face at Distances of 0.7, 1, and 1.5 m Using HOC, HOC with Boundary Estimation, GC+PS, GC+LS, GCI, GCI with Boundary Estimation, and Boundary Inheritance Codec

Table 3. Three-Dimensional Measurement of the Staircase at a Distance of 0.7 m Using HOC, HOC with Boundary Estimation, GC+PS, GC+LS, GCI, GCI with Boundary Estimation, and Boundary Inheritance Codec

Table 4. Three-Dimensional Measurement of the Staircase at a Distance of 1 m Using HOC, HOC with Boundary Estimation, GC+PS, GC+LS, GCI, GCI with Boundary Estimation, and Boundary Inheritance Codec

Table 5. Three-Dimensional Measurement of the Staircase at a Distance of 1.5 m Using HOC, HOC with Boundary Estimation, GC+PS, GC+LS, GCI, GCI with Boundary Estimation, and Boundary Inheritance Codec

Table 6. Errors of Reconstructed 3D Data of a Nontextured Flat Surface Captured by a Projector–Camera System Using a Pico Projector (Optoma PK301) with Different Methods

Table 7. Errors of Reconstructed 3D Data of a Nontextured Flat Surface Captured by a Projector–Camera System Using a Canon DLP Projector (LDP-3260K) with Different Methods

Equations (10)

$$I_i(j,k) = A(j,k) + \frac{B(j,k)}{2}\cos\!\left(\Phi(j,k) - \frac{i\pi}{2}\right),$$

$$\Phi(j,k) = \tan^{-1}\frac{I_1(j,k) - I_3(j,k)}{I_0(j,k) - I_2(j,k)}.$$
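
A direct numpy implementation of the wrapped-phase formula above:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Four-step phase shifting: recover the wrapped phase Phi(j,k)
    from the four captured images; arctan2 keeps the quadrant and
    tolerates zero denominators."""
    I0, I1, I2, I3 = (np.asarray(I, dtype=float) for I in (I0, I1, I2, I3))
    return np.arctan2(I1 - I3, I0 - I2)
```
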
$$g_4(i) = f(i-2) + f(i-1) - f(i+1) - f(i+2).$$

$$g_8(i) = f(i-4) + f(i-3) + f(i-2) + f(i-1) - f(i+1) - f(i+2) - f(i+3) - f(i+4).$$
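
Both operators are correlations of the signal with a step-shaped kernel; a compact sketch (zero padding at the array ends is an implementation choice, not specified by the equations):

```python
import numpy as np

def edge_response(f, half_width=2):
    """Boundary response of the equations above: the sum of the
    half_width samples before i minus the sum after i (half_width=2
    gives g4, half_width=4 gives g8). np.convolve flips the kernel,
    hence the [-1, ..., 0, ..., +1] ordering."""
    kernel = np.concatenate([-np.ones(half_width), [0.0], np.ones(half_width)])
    return np.convolve(np.asarray(f, dtype=float), kernel, mode="same")
```
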
$$f_s(x) = \big((s(x) * g_p(x,\sigma_p))\,R(x) + A(x)\big) * g_c(x,\sigma_c) + W(x),$$

$$s(x) = \begin{cases} H, & x \ge 0 \\ L, & x < 0, \end{cases}$$

$$g_i(x,\sigma_i) = \frac{1}{\sigma_i\sqrt{2\pi}}\, e^{-\frac{x^2}{2\sigma_i^2}},$$
$$f_1(x) = \big(H\,R(x) + A(x)\big) * g_c(x,\sigma_c) + W_1(x), \qquad f_0(x) = \big(L\,R(x) + A(x)\big) * g_c(x,\sigma_c) + W_0(x).$$

$$f_c(x) \cong \frac{\operatorname{deconvlucy}\!\big(f_s(x) - f_0(x),\, \sigma_c\big)}{\operatorname{deconvlucy}\!\big(f_1(x) - f_0(x),\, \sigma_c\big)}\,(H - L),$$

$$\bar{f}_s(x) = \frac{f_s(x) - f_0(x)}{f_1(x) - f_0(x)}.$$
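
A sketch of this radiance-independent normalization, with a minimal 1D Richardson-Lucy iteration standing in for deconvlucy; the Gaussian PSF width sigma_c and the iteration count are assumed values, not taken from the paper:

```python
import numpy as np

def rl_deconv(y, psf, n_iter=10):
    """Minimal 1D Richardson-Lucy deconvolution; y must be nonnegative."""
    est = np.full_like(y, max(y.mean(), 1e-9))
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-9)
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

def normalized_signal(fs, f0, f1, sigma_c=1.5, eps=1e-6):
    """Deblur (fs - f0) and (f1 - f0) with an assumed Gaussian camera
    PSF of width sigma_c, then divide to cancel reflectance R(x) and
    ambient light A(x), as in the last two equations above."""
    x = np.arange(-8.0, 9.0)
    psf = np.exp(-x**2 / (2.0 * sigma_c**2))
    psf /= psf.sum()
    num = rl_deconv(np.clip(np.asarray(fs, float) - f0, 0.0, None), psf)
    den = rl_deconv(np.clip(np.asarray(f1, float) - f0, 0.0, None), psf)
    return num / np.maximum(den, eps)
```
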
