Abstract

In recent years, correlation-filter (CF)-based face recognition algorithms have attracted increasing interest in the field of pattern recognition and have achieved impressive results in discrimination, efficiency, localization accuracy, and robustness. In this tutorial paper, our goal is to help the reader gain a broad overview of CFs in three respects: design, implementation, and application. We review typical face recognition algorithms with implications for the design of CFs. We discuss and compare the numerical and optical implementations of correlators. Some newly proposed implementation schemes and application examples are also presented to demonstrate the feasibility and effectiveness of CFs as a powerful recognition tool.
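The core operation behind all of the filter designs surveyed here is frequency-domain correlation: a filter is built from the spectrum of a reference image, and recognition decisions are read off the resulting correlation plane, whose peak marks the target's location. As a minimal illustration (not any particular composite design from the paper), the sketch below builds a classical matched filter from a single reference and locates a cyclically shifted copy of it; all names and the toy data are ours:

```python
import numpy as np

def matched_filter_correlate(reference, scene):
    """Correlate a scene with a matched filter built from a reference image.

    The matched filter is the complex conjugate of the reference spectrum,
    so correlation costs one FFT pair instead of a sliding spatial sum.
    Returns the (real) correlation plane and the peak coordinates.
    """
    R = np.fft.fft2(reference)
    S = np.fft.fft2(scene)
    # Correlation plane: inverse FFT of scene spectrum times conjugate filter.
    plane = np.fft.ifft2(S * np.conj(R)).real
    # The peak location gives the (cyclic) shift of the reference in the scene.
    peak = np.unravel_index(np.argmax(plane), plane.shape)
    return plane, peak

# Toy example: a random reference pattern and a cyclically shifted copy of it.
rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
scene = np.roll(ref, shift=(5, 9), axis=(0, 1))
plane, peak = matched_filter_correlate(ref, scene)
print(peak)  # correlation peak at the applied shift: (5, 9)
```

Composite designs such as MACE or OT-MACH replace the single-image spectrum with a filter synthesized from many training images under peak and noise constraints, but the detection step, locating and scoring the correlation peak, is the same.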

© 2017 Optical Society of America



[Crossref]

A. Alfalou, G. Keryer, and J. L. de Bougrenet de la Tocnaye, “Optical implementation of segmented composite filtering,” Appl. Opt. 38, 6129–6135 (1999).
[Crossref]

1996 (1)

T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distribution,” Pattern Recogn. 29, 51–59 (1996).
[Crossref]

1995 (2)

C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn. 20, 273–297 (1995).

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

1994 (4)

1993 (3)

A. Mahalanobis, B. Kumar, and S. R. F. Sims, “Distance classifier CFs for distortion tolerance, discrimination and clutter rejection,” Proc. SPIE 2026, 325–335 (1993).
[Crossref]

R. Juday, “Optimal realizable filters and the minimum Euclidean distance principle,” Appl. Opt. 32, 5100–5111 (1993).
[Crossref]

M. Alam and M. Karim, “Fringe-adjusted joint transform correlation,” Appl. Opt. 32, 4344–4350 (1993).
[Crossref]

1992 (4)

1991 (1)

1990 (2)

1988 (1)

D. Mendlovic, E. Marom, and N. Konforti, “Shift- and scale-invariant pattern recognition using Mellin radial harmonics,” Opt. Commun. 67, 172–176 (1988).
[Crossref]

1987 (2)

A. Mahalanobis, B. Kumar, and D. Casasent, “Minimum average correlation energy filters,” Appl. Opt. 26, 3633–3640 (1987).
[Crossref]

L. Sirovich and M. Kirby, “Low-dimensional procedure for the characterization of human face,” J. Opt. Soc. Am. 4, 519–524 (1987).
[Crossref]

1986 (1)

1984 (2)

J. Horner and P. Gianino, “Phase-only matched filtering,” Appl. Opt. 23, 812–816 (1984).
[Crossref]

D. Psaltis, E. Paek, and S. Venkatesh, “Optical image correlation with binary spatial light modulator,” Opt. Eng. 23, 698–704 (1984).

1982 (1)

1980 (1)

1966 (1)

1964 (1)

A. VanderLugt, “Signal detection by complex spatial filtering,” IEEE Trans. Inf. Theory 10, 139–145 (1964).
[Crossref]

1963 (1)

D. North, “An analysis of the factors which determine signal/noise discriminations in pulsed carrier systems,” Proc. IEEE 51, 1016–1027 (1963).
[Crossref]

Abiantun, R.

R. Abiantun, M. Savvides, and B. Kumar, “Generalized low dimensional feature subspace for robust face recognition on unseen datasets using kernel correlation feature analysis,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2007).

Adam, A.

A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2006), Vol. 1.

Ahmed, J.

M. Rodriguez, J. Ahmed, and M. Shah, “Action MACH a spatio-temporal maximum average correlation height filter for action recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2008).

Alam, M.

A. Alfalou, C. Brosseau, and M. Alam, “Smart pattern recognition,” Proc. SPIE 8748, 874809 (2013).
[Crossref]

M. Alam and M. Karim, “Fringe-adjusted joint transform correlation,” Appl. Opt. 32, 4344–4350 (1993).
[Crossref]

Alam, M. S.

Y. Ouerhani, M. Jridi, A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter,” Opt. Commun. 289, 33–44 (2013).
[Crossref]

A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Decision optimization for face recognition based on an alternate correlation plane quantification metric,” Opt. Lett. 37, 1562–1564 (2012).
[Crossref]

I. Leonard, A. Alfalou, M. S. Alam, and A. Arnold-Bos, “Adaptive nonlinear fringe-adjusted joint transform correlator,” Opt. Eng. 51, 098201 (2012).
[Crossref]

P. Katz, A. Alfalou, C. Brosseau, and M. S. Alam, “Correlation and independent component analysis based approaches for biometric recognition,” in Face Recognition Methods: Applications, and Technology, A. Quaglia and C. M. Epifano, eds. (Nova Science, 2011).

Alfalou, A.

D. Benarab, T. Napoléon, A. Alfalou, A. Verney, and P. Hellard, “Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques,” Opt. Commun. 356, 256–268 (2015).
[Crossref]

Y. Ouerhani, M. Desthieux, and A. Alfalou, “Road sign recognition using Viapix module and correlation,” Proc. SPIE 9477, 94770H (2015).
[Crossref]

Y. Ouerhani, M. Jridi, A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter,” Opt. Commun. 289, 33–44 (2013).
[Crossref]

A. Alfalou, C. Brosseau, and M. Alam, “Smart pattern recognition,” Proc. SPIE 8748, 874809 (2013).
[Crossref]

I. Leonard, A. Alfalou, M. S. Alam, and A. Arnold-Bos, “Adaptive nonlinear fringe-adjusted joint transform correlator,” Opt. Eng. 51, 098201 (2012).
[Crossref]

A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Decision optimization for face recognition based on an alternate correlation plane quantification metric,” Opt. Lett. 37, 1562–1564 (2012).
[Crossref]

A. Alfalou and C. Brosseau, “Robust and discriminating method for face recognition based on correlation technique and independent component analysis model,” Opt. Lett. 36, 645–647 (2011).
[Crossref]

M. Elbouz, A. Alfalou, and C. Brosseau, “Fuzzy logic and optical correlation-based face recognition method for patient monitoring application in home video surveillance,” Opt. Eng. 50, 067003 (2011).
[Crossref]

A. Alfalou, G. Keryer, and J. L. de Bougrenet de la Tocnaye, “Optical implementation of segmented composite filtering,” Appl. Opt. 38, 6129–6135 (1999).
[Crossref]

A. Alfalou, “Implementation of optical multichannel correlation: application to pattern recognition,” Ph.D. thesis (Université de Rennes, 1999).

P. Katz, A. Alfalou, C. Brosseau, and M. S. Alam, “Correlation and independent component analysis based approaches for biometric recognition,” in Face Recognition Methods: Applications, and Technology, A. Quaglia and C. M. Epifano, eds. (Nova Science, 2011).

A. Alfalou, M. Farhat, and A. Mansour, “Independent component analysis based approach to biometric recognition, information and communication technologies: from theory to applications,” in 3rd International Conference on International Conference on Information & Communication Technologies: from Theory to Applications (ICTTA), 7–11 April2008, pp. 1–4.

Alkanhal, M.

M. Alkanhal and B. Kumar, “Polynomial distance classifier CF for pattern recognition,” Appl. Opt. 42, 4688–4708 (2003).
[Crossref]

B. Kumar and M. Alkanhal, “Eigen-extended maximum average correlation height filters for automatic target recognition,” Proc. SPIE 4379, 424–431 (2001).
[Crossref]

M. Alkanhal, B. Kumar, and A. Mahalanobis, “Improving the false alarm capabilities of the maximum average correlation height CF,” Opt. Eng. 39, 1133–1141 (2000).
[Crossref]

K. Al-Mashouq, B. Kumar, and M. Alkanhal, “Analysis of signal-to-noise ratio of polynomial CFs,” Proc. SPIE 3715, 407–413 (1999).
[Crossref]

Al-Mashouq, K.

K. Al-Mashouq, B. Kumar, and M. Alkanhal, “Analysis of signal-to-noise ratio of polynomial CFs,” Proc. SPIE 3715, 407–413 (1999).
[Crossref]

Arnold-Bos, A.

I. Leonard, A. Alfalou, M. S. Alam, and A. Arnold-Bos, “Adaptive nonlinear fringe-adjusted joint transform correlator,” Opt. Eng. 51, 098201 (2012).
[Crossref]

Arsenault, H.

Attia, M.

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

Babenko, B.

B. Babenko, M. H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 983–990.

B. Babenko, M.-H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in Conference on Computer Vision and Pattern Recognition (CVPR) (2009).

Banerjee, P.

A. Datta, M. Datta, and P. Banerjee, Face Detection and Recognition: Theory and Practice (Chapman & Hall/CRC Press, 2015).

Banerjee, P. K.

P. K. Banerjee and A. K. Datta, Techniques of Frequency Domain Correlation for Face Recognition and its Photonic Implementation, A. Quaglia and C. M. Epifano, eds. (NOVA, 2012), Chap. 9, pp. 165–186.

Basu, D. K.

A. Seal, S. Ganguly, D. Bhattacharjee, M. Nasipuri, and D. K. Basu, “Automated thermal face recognition based on minutiae extraction,” Int. J. Comput. Intell. Stud. 2, 133–156 (2013).

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Batista, J.

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 583–596 (2015).
[Crossref]

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in Computer Vision–European Conference on Computer Vision (ECCV) (Springer, 2012), pp. 702–715.

Belongie, S.

B. Babenko, M.-H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in Conference on Computer Vision and Pattern Recognition (CVPR) (2009).

B. Babenko, M. H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 983–990.

Benarab, D.

D. Benarab, T. Napoléon, A. Alfalou, A. Verney, and P. Hellard, “Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques,” Opt. Commun. 356, 256–268 (2015).
[Crossref]

Berg, T.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: a database for studying face recognition in unconstrained environments,” (University of Massachusetts, 2007).

Beri, V.

Beveridge, J.

D. Bolme, J. Beveridge, B. Draper, and Y. Lui, “Visual object tracking using adaptive correlation filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 2544–2550.

D. Bolme, B. Draper, and J. Beveridge, “Average of synthetic exact filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 2105–2112.

Bhattacharjee, D.

A. Seal, S. Ganguly, D. Bhattacharjee, M. Nasipuri, and D. K. Basu, “Automated thermal face recognition based on minutiae extraction,” Int. J. Comput. Intell. Stud. 2, 133–156 (2013).

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Bhowmik, M. K.

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Blanz, V.

V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1063–1074 (2003).
[Crossref]

Boddeti, V.

J. Fernandez, V. Boddeti, A. Rodriguez, and B. Kumar, “Zero-aliasing CFs for object recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 1702–1715 (2015).
[Crossref]

A. Rodriguez, V. Boddeti, B. Kumar, and A. Mahalanobis, “Maximum margin CF: a new approach for localization and classification,” IEEE Trans. Image Process. 22, 631–643 (2013).
[Crossref]

Bolme, D.

D. Bolme, B. Draper, and J. Beveridge, “Average of synthetic exact filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 2105–2112.

D. Bolme, J. Beveridge, B. Draper, and Y. Lui, “Visual object tracking using adaptive correlation filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 2544–2550.

Boser, B.

B. Boser, I. Guyon, and V. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (1992), pp. 144–152.

Bottou, L.

M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014).

Bowyer, K. W.

K. W. Bowyer, K. Chang, and P. Flynn, “A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition,” Comput. Vis. Image Underst. 101, 1–15 (2006).
[Crossref]

Brosseau, C.

Y. Ouerhani, M. Jridi, A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter,” Opt. Commun. 289, 33–44 (2013).
[Crossref]

A. Alfalou, C. Brosseau, and M. Alam, “Smart pattern recognition,” Proc. SPIE 8748, 874809 (2013).
[Crossref]

A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Decision optimization for face recognition based on an alternate correlation plane quantification metric,” Opt. Lett. 37, 1562–1564 (2012).
[Crossref]

A. Alfalou and C. Brosseau, “Robust and discriminating method for face recognition based on correlation technique and independent component analysis model,” Opt. Lett. 36, 645–647 (2011).
[Crossref]

M. Elbouz, A. Alfalou, and C. Brosseau, “Fuzzy logic and optical correlation-based face recognition method for patient monitoring application in home video surveillance,” Opt. Eng. 50, 067003 (2011).
[Crossref]

P. Katz, A. Alfalou, C. Brosseau, and M. S. Alam, “Correlation and independent component analysis based approaches for biometric recognition,” in Face Recognition Methods: Applications, and Technology, A. Quaglia and C. M. Epifano, eds. (Nova Science, 2011).

Calderara, S.

A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: an experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 1442–1468 (2014).
[Crossref]

Cao, Z.

E. Zhou, Z. Cao, and Q. Yin, “Naive-deep face recognition: touching the limit of LFW benchmark or not?” arXiv:1501.04690 (2015).

Cardot, H.

H. Cardot, F. Ferraty, and P. Sarda, “Linear functional model,” Stat. Probab. Lett. 45, 11–22 (1999).
[Crossref]

Carlson, D.

Casasent, D.

Caseiro, R.

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 583–596 (2015).
[Crossref]

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in Computer Vision–European Conference on Computer Vision (ECCV) (Springer, 2012), pp. 702–715.

Caviris, N.

Chang, K.

K. W. Bowyer, K. Chang, and P. Flynn, “A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition,” Comput. Vis. Image Underst. 101, 1–15 (2006).
[Crossref]

Chao, T.

O. Johnson, W. Edens, T. Lu, and T. Chao, “Optimization of OT-MACH filter generation for target recognition,” Proc. SPIE 7340, 734008 (2009).
[Crossref]

Chatfield, K.

K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: delving deep into convolutional nets,” arXiv:1405.3531 (2014).

Chen, T.

X. Liu, T. Chen, and B. Kumar, “Face authentication for multiple subjects using eigenflow,” Pattern Recogn. 36, 313–328 (2003).
[Crossref]

Chen, Y.

Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Advances in Neural Information Processing Systems (2014), pp. 1988–1996.

Chen, Z.

Z. Chen, Z. Hong, and D. Tao, “An experimental survey on correlation filter-based tracking,” arXiv:1509.05520 (2015).

Chu, D. M.

A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: an experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 1442–1468 (2014).
[Crossref]

Cimpoi, M.

M. Cimpoi, S. Maji, and A. Vedaldi, “Deep filter banks for texture recognition and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3828–3836.

Comon, P.

P. Comon, “Independent component analysis, a new concept?” Signal Process. 36, 287–314 (1994).
[Crossref]

Cortes, C.

C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn. 20, 273–297 (1995).

Cucchiara, R.

A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: an experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 1442–1468 (2014).
[Crossref]

Dan, D.

Danelljan, M.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Coloring channel representations for visual tracking,” in Scandinavian Conference on Image Analysis (Springer, 2015), pp. 117–129.

M. Danelljan, F. Shahbaz Khan, M. Felsberg, and J. van de Weijer, “Adaptive color attributes for real-time visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1090–1097.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Learning spatially regularized correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 4310–4318.

M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proceedings of the British Machine Vision Conference (BMVC) (2014).

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66.

Datta, A.

A. Datta, M. Datta, and P. Banerjee, Face Detection and Recognition: Theory and Practice (Chapman & Hall/CRC Press, 2015).

Datta, A. K.

P. K. Banerjee and A. K. Datta, Techniques of Frequency Domain Correlation for Face Recognition and its Photonic Implementation, A. Quaglia and C. M. Epifano, eds. (NOVA, 2012), Chap. 9, pp. 165–186.

Datta, M.

A. Datta, M. Datta, and P. Banerjee, Face Detection and Recognition: Theory and Practice (Chapman & Hall/CRC Press, 2015).

de Bougrenet de la Tocnaye, J. L.

A. Alfalou, G. Keryer, and J. L. de Bougrenet de la Tocnaye, “Optical implementation of segmented composite filtering,” Appl. Opt. 38, 6129–6135 (1999).
[Crossref]

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

Dehghan, A.

A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: an experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 1442–1468 (2014).
[Crossref]

Desthieux, M.

Y. Ouerhani, M. Desthieux, and A. Alfalou, “Road sign recognition using Viapix module and correlation,” Proc. SPIE 9477, 94770H (2015).
[Crossref]

Draper, B.

D. Bolme, B. Draper, and J. Beveridge, “Average of synthetic exact filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 2105–2112.

D. Bolme, J. Beveridge, B. Draper, and Y. Lui, “Visual object tracking using adaptive correlation filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 2544–2550.

Edens, W.

O. Johnson, W. Edens, T. Lu, and T. Chao, “Optimization of OT-MACH filter generation for target recognition,” Proc. SPIE 7340, 734008 (2009).
[Crossref]

Elbouz, M.

M. Elbouz, A. Alfalou, and C. Brosseau, “Fuzzy logic and optical correlation-based face recognition method for patient monitoring application in home video surveillance,” Opt. Eng. 50, 067003 (2011).
[Crossref]

El-Maraghi, T.

A. Jepson, D. Fleet, and T. El-Maraghi, “Robust online appearance models for visual tracking,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1296–1311 (2003).
[Crossref]

Epperson, J.

B. Kumar, A. Mahalanobis, S. Song, S. Sims, and J. Epperson, “Minimum squared error synthetic discriminant functions,” Opt. Eng. 31, 915–922 (1992).
[Crossref]

Epperson, J. F.

Farhat, M.

A. Alfalou, M. Farhat, and A. Mansour, “Independent component analysis based approach to biometric recognition, information and communication technologies: from theory to applications,” in 3rd International Conference on International Conference on Information & Communication Technologies: from Theory to Applications (ICTTA), 7–11 April2008, pp. 1–4.

Felsberg, M.

M. Felsberg, “Enhanced distribution field tracking using channel representations,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (2013), pp. 121–128.

M. Danelljan, F. Shahbaz Khan, M. Felsberg, and J. van de Weijer, “Adaptive color attributes for real-time visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1090–1097.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Coloring channel representations for visual tracking,” in Scandinavian Conference on Image Analysis (Springer, 2015), pp. 117–129.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Learning spatially regularized correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 4310–4318.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66.

M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proceedings of the British Machine Vision Conference (BMVC) (2014).

Fernandez, J.

J. Fernandez, V. Boddeti, A. Rodriguez, and B. Kumar, “Zero-aliasing CFs for object recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 1702–1715 (2015).
[Crossref]

J. Fernandez and B. Kumar, “Zero-aliasing CFs,” in International Symposium on Image and Signal Processing and Analysis (2013), pp. 101–106.

Ferraty, F.

H. Cardot, F. Ferraty, and P. Sarda, “Linear functional model,” Stat. Probab. Lett. 45, 11–22 (1999).
[Crossref]

Fleet, D.

A. Jepson, D. Fleet, and T. El-Maraghi, “Robust online appearance models for visual tracking,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1296–1311 (2003).
[Crossref]

Flynn, P.

K. W. Bowyer, K. Chang, and P. Flynn, “A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition,” Comput. Vis. Image Underst. 101, 1–15 (2006).
[Crossref]

Ganguly, S.

A. Seal, S. Ganguly, D. Bhattacharjee, M. Nasipuri, and D. K. Basu, “Automated thermal face recognition based on minutiae extraction,” Int. J. Comput. Intell. Stud. 2, 133–156 (2013).

Gao, J.

J. Gao, H. Ling, W. Hu, and J. Xing, “Transfer learning based visual tracking with Gaussian processes regression,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

García-Reyes, E.

D. Rizo-Rodríguez, H. Méndez-Vázquez, and E. García-Reyes, “Illumination invariant face recognition using quaternion-based CFs,” J. Math. Imaging Vis. 45, 164–175 (2013).
[Crossref]

Gianino, P.

Goodman, J.

Goyal, S.

Guan, X.

X. Guan, H. H. Szu, and Z. Markowitz, “Local ICA for the most wanted face recognition,” Proc. SPIE 4056, 539–551 (2000).
[Crossref]

Guibert, L.

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

Gupta, A.

Guyon, I.

B. Boser, I. Guyon, and V. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (1992), pp. 144–152.

Häger, G.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Learning spatially regularized correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 4310–4318.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Coloring channel representations for visual tracking,” in Scandinavian Conference on Image Analysis (Springer, 2015), pp. 117–129.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66.

M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proceedings of the British Machine Vision Conference (BMVC) (2014).

Han, D.

Han, S.

K. Jeong, W. Liu, S. Han, E. Hasanbelliu, and J. Principe, “The correntropy MACE filter,” Pattern Recogn. 42, 871–885 (2009).
[Crossref]

Hare, S.

S. Hare, A. Saffari, and P. H. S. Torr, “Struck: structured output tracking with kernels,” in International Conference on Computer Vision (IEEE, 2011), pp. 263–270.

Harwood, D.

T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distribution,” Pattern Recogn. 29, 51–59 (1996).
[Crossref]

Hasanbelliu, E.

K. Jeong, W. Liu, S. Han, E. Hasanbelliu, and J. Principe, “The correntropy MACE filter,” Pattern Recogn. 42, 871–885 (2009).
[Crossref]

Hasegawa, O.

M. Kaneko and O. Hasegawa, “Processing of face images and its applications,” IEICE Trans. Inf. Syst. E82-D, 589–600 (1999).

Hassebrook, L.

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

Hellard, P.

D. Benarab, T. Napoléon, A. Alfalou, A. Verney, and P. Hellard, “Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques,” Opt. Commun. 356, 256–268 (2015).
[Crossref]

Henriques, J. F.

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 583–596 (2015).
[Crossref]

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in Computer Vision–European Conference on Computer Vision (ECCV) (Springer, 2012), pp. 702–715.

Heo, J.

J. Heo, M. Savvides, and B. V. K. Vijayakumar, “Performance evaluation of face recognition using visual and thermal imagery with advanced correlation filters,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)-Workshops (IEEE, 2005).

Hester, C.

Hinton, G. E.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

Hong, Z.

Z. Chen, Z. Hong, and D. Tao, “An experimental survey on correlation filter-based tracking,” arXiv:1509.05520 (2015).

Horner, J.

Hsu, Y.

Hu, W.

X. Zhang, W. Hu, S. Maybank, and X. Li, “Graph based discriminative learning for robust and efficient object tracking,” in IEEE 11th International Conference on Computer Vision (ICCV) (IEEE, 2007).

J. Gao, H. Ling, W. Hu, and J. Xing, “Transfer learning based visual tracking with Gaussian processes regression,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

Huang, G. B.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: a database for studying face recognition in unconstrained environments,” (University of Massachusetts, 2007).

Ioffe, S.

S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).

Iton, M.

Jeong, K.

K. Jeong, W. Liu, S. Han, E. Hasanbelliu, and J. Principe, “The correntropy MACE filter,” Pattern Recogn. 42, 871–885 (2009).
[Crossref]

Jepson, A.

A. Jepson, D. Fleet, and T. El-Maraghi, “Robust online appearance models for visual tracking,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1296–1311 (2003).
[Crossref]

Jia, X.

X. Jia, H. Lu, and M. H. Yang, “Visual tracking via adaptive structural local sparse appearance model,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 1822–1829.

Johnson, O.

O. Johnson, W. Edens, T. Lu, and T. Chao, “Optimization of OT-MACH filter generation for target recognition,” Proc. SPIE 7340, 734008 (2009).
[Crossref]

Jolliffe, I.

I. Jolliffe, Principal Component Analysis (Wiley, 2002).

Jridi, M.

Y. Ouerhani, M. Jridi, A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter,” Opt. Commun. 289, 33–44 (2013).
[Crossref]

Juday, R.

Kalal, Z.

Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning: bootstrapping binary classifiers by structural constraints,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 49–56.

Kanade, T.

T. Kanade, “Picture processing system by computer complex and recognition of human face,” Ph.D. thesis (Department of Information Science, Kyoto University, 1973).

Kaneko, M.

M. Kaneko and O. Hasegawa, “Processing of face images and its applications,” IEICE Trans. Inf. Syst. E82-D, 589–600 (1999).

Kanterakis, E.

Karim, M.

Katz, A.

Katz, P.

Y. Ouerhani, M. Jridi, A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter,” Opt. Commun. 289, 33–44 (2013).
[Crossref]

A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Decision optimization for face recognition based on an alternate correlation plane quantification metric,” Opt. Lett. 37, 1562–1564 (2012).
[Crossref]

P. Katz, A. Alfalou, C. Brosseau, and M. S. Alam, “Correlation and independent component analysis based approaches for biometric recognition,” in Face Recognition: Methods, Applications, and Technology, A. Quaglia and C. M. Epifano, eds. (Nova Science, 2011).

Keryer, G.

A. Alfalou, G. Keryer, and J. L. de Bougrenet de la Tocnaye, “Optical implementation of segmented composite filtering,” Appl. Opt. 38, 6129–6135 (1999).
[Crossref]

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

Khan, F. S.

M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proceedings of the British Machine Vision Conference (BMVC) (2014).

Khosla, P.

M. Savvides, B. Kumar, and P. Khosla, “Eigenphases vs. eigenfaces,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR) (IEEE, 2004), Vol. 3.

M. Savvides, B. Kumar, and P. Khosla, “Corefaces: robust shift-invariant PCA-based CF for illumination tolerant face recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004).

M. Savvides, B. Kumar, and P. Khosla, “Face verification using correlation filters,” in 3rd IEEE Automatic Identification Advanced Technologies (2002), pp. 56–61.

Kiani Galoogahi, H.

H. Kiani Galoogahi, T. Sim, and S. Lucey, “Correlation filters with limited boundaries,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4630–4638.

Kirby, M.

L. Sirovich and M. Kirby, “Low-dimensional procedure for the characterization of human faces,” J. Opt. Soc. Am. A 4, 519–524 (1987).
[Crossref]

Kodate, K.

E. Watanabe and K. Kodate, “Implementation of high-speed face recognition system using an optical parallel correlator,” Appl. Opt. 44, 666–676 (2005).
[Crossref]

E. Watanabe and K. Kodate, “High speed holographic optical correlator for face recognition,” in State of the Art in Face Recognition, J. Ponce and A. Karahoca, eds. (InTech, 2009).

Konforti, N.

D. Mendlovic, E. Marom, and N. Konforti, “Shift- and scale-invariant pattern recognition using Mellin radial harmonics,” Opt. Commun. 67, 172–176 (1988).
[Crossref]

Korotkova, O.

Krizhevsky, A.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

Kumar, B.

J. Fernandez, V. Boddeti, A. Rodriguez, and B. Kumar, “Zero-aliasing CFs for object recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 1702–1715 (2015).
[Crossref]

A. Rodriguez, V. Boddeti, B. Kumar, and A. Mahalanobis, “Maximum margin CF: a new approach for localization and classification,” IEEE Trans. Image Process. 22, 631–643 (2013).
[Crossref]

S. Wijaya, M. Savvides, and B. Kumar, “Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced CFs for handheld devices,” Appl. Opt. 44, 655–665 (2005).
[Crossref]

C. Xie, M. Savvides, and B. Kumar, “Kernel CF based redundant class-dependence feature analysis on FRGC2.0 data,” Lect. Notes Comput. Sci. 3723, 32–43 (2005).
[Crossref]

B. Kumar, M. Savvides, C. Xie, K. Venkataramani, J. Thornton, and A. Mahalanobis, “Biometric verification with composite filters,” Appl. Opt. 43, 391–402 (2004).
[Crossref]

M. Alkanhal and B. Kumar, “Polynomial distance classifier CF for pattern recognition,” Appl. Opt. 42, 4688–4708 (2003).
[Crossref]

M. Savvides and B. Kumar, “Illumination normalization using logarithm transforms for face authentication,” Lect. Notes Comput. Sci. 2688, 549–556 (2003).
[Crossref]

X. Liu, T. Chen, and B. Kumar, “Face authentication for multiple subjects using eigenflow,” Pattern Recogn. 36, 313–328 (2003).
[Crossref]

B. Kumar and M. Alkanhal, “Eigen-extended maximum average correlation height filters for automatic target recognition,” Proc. SPIE 4379, 424–431 (2001).
[Crossref]

M. Alkanhal, B. Kumar, and A. Mahalanobis, “Improving the false alarm capabilities of the maximum average correlation height CF,” Opt. Eng. 39, 1133–1141 (2000).
[Crossref]

K. Al-Mashouq, B. Kumar, and M. Alkanhal, “Analysis of signal-to-noise ratio of polynomial CFs,” Proc. SPIE 3715, 407–413 (1999).
[Crossref]

A. Mahalanobis, B. Kumar, S. R. F. Sims, and J. F. Epperson, “Unconstrained CFs,” Appl. Opt. 33, 3751–3759 (1994).
[Crossref]

B. Kumar, D. Carlson, and A. Mahalanobis, “Optimal trade-off synthetic discriminant function filters for arbitrary devices,” Opt. Lett. 19, 1556–1558 (1994).
[Crossref]

A. Mahalanobis, B. Kumar, and S. R. F. Sims, “Distance classifier CFs for distortion tolerance, discrimination and clutter rejection,” Proc. SPIE 2026, 325–335 (1993).
[Crossref]

B. Kumar, A. Mahalanobis, S. Song, S. Sims, and J. Epperson, “Minimum squared error synthetic discriminant functions,” Opt. Eng. 31, 915–922 (1992).
[Crossref]

B. Kumar, “Tutorial survey of composite filter designs for optical correlators,” Appl. Opt. 31, 4773–4801 (1992).
[Crossref]

B. Kumar and L. Hassebrook, “Performance measures for correlation filters,” Appl. Opt. 29, 2997–3006 (1990).
[Crossref]

A. Mahalanobis, B. Kumar, and D. Casasent, “Minimum average correlation energy filters,” Appl. Opt. 26, 3633–3640 (1987).
[Crossref]

B. Kumar, “Minimum variance synthetic discriminant functions,” J. Opt. Soc. Am. A 3, 1579–1584 (1986).
[Crossref]

J. Thornton, M. Savvides, and B. Kumar, “Linear shift-invariant maximum margin SVM correlation filter,” in Proceedings of the Intelligent Sensors, Sensor Networks and Information Processing Conference (IEEE, 2004), pp. 183–188.

C. Xie, M. Savvides, and B. Kumar, “Quaternion CF for face recognition in wavelet domain,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2005).

J. Fernandez and B. Kumar, “Zero-aliasing CFs,” in International Symposium on Image and Signal Processing and Analysis (2013), pp. 101–106.

M. Savvides, B. Kumar, and P. Khosla, “Face verification using correlation filters,” in 3rd IEEE Automatic Identification Advanced Technologies (2002), pp. 56–61.

M. Savvides and B. Kumar, “Efficient design of advanced CFs for robust distortion-tolerant face recognition,” in Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (2003).

C. Xie and B. Kumar, “Comparison of kernel class-dependence feature analysis (KCFA) with kernel discriminant analysis (KDA) for face recognition,” in First IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS) (IEEE, 2007).

R. Abiantun, M. Savvides, and B. Kumar, “Generalized low dimensional feature subspace for robust face recognition on unseen datasets using kernel correlation feature analysis,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2007).

M. Savvides, B. Kumar, and P. Khosla, “Corefaces: robust shift-invariant PCA-based CF for illumination tolerant face recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004).

C. Xie and B. Kumar, “Face class code based feature extraction for face recognition,” in Fourth IEEE Workshop on Automatic Identification Advanced Technologies (2005).

A. Mahalanobis and B. Kumar, “Polynomial filters for higher-order and multi-input information fusion,” in 11th Euro-American Opto-Electronic Information Processing Workshop (1999), pp. 221–231.

M. Savvides, B. Kumar, and P. Khosla, “Eigenphases vs. eigenfaces,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR) (IEEE, 2004), Vol. 3.

Lai, H.

H. Lai, V. Ramanathan, and H. Wechsler, “Reliable face recognition using adaptive and robust CFs,” Comput. Vis. Image Underst. 111, 329–350 (2008).
[Crossref]

Laptev, I.

M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014).

Learned-Miller, E.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: a database for studying face recognition in unconstrained environments” (University of Massachusetts, 2007).

L. Sevilla-Lara and E. Learned-Miller, “Distribution fields for tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 1910–1917.

Lei, F.

Lei, M.

Leonard, I.

I. Leonard, A. Alfalou, M. S. Alam, and A. Arnold-Bos, “Adaptive nonlinear fringe-adjusted joint transform correlator,” Opt. Eng. 51, 098201 (2012).
[Crossref]

Levine, M. D.

M. D. Levine and Y. Yu, “Face recognition subject to variations in facial expression, illumination and pose using CFs,” Comput. Vis. Image Underst. 104, 1–15 (2006).
[Crossref]

Li, R.

Li, X.

R. Muise, A. Mahalanobis, R. Mohapatra, X. Li, D. Han, and W. Mikhael, “Constrained quadratic CFs for target detection,” Appl. Opt. 43, 304–314 (2004).
[Crossref]

X. Zhang, W. Hu, S. Maybank, and X. Li, “Graph based discriminative learning for robust and efficient object tracking,” in IEEE 11th International Conference on Computer Vision (ICCV) (IEEE, 2007).

Li, Y.

Y. Li and J. Zhu, “A scale adaptive kernel CF tracker with feature integration,” in Computer Vision–European Conference on Computer Vision (ECCV) Workshops (Springer, 2014), pp. 254–265.

Lim, J.

D. Ross, J. Lim, R. Lin, and M. Yang, “Incremental learning for robust visual tracking,” Int. J. Comput. Vis. 77, 125–141 (2008).
[Crossref]

Y. Wu, J. Lim, and M. H. Yang, “Online object tracking: a benchmark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2411–2418.

Lin, R.

D. Ross, J. Lim, R. Lin, and M. Yang, “Incremental learning for robust visual tracking,” Int. J. Comput. Vis. 77, 125–141 (2008).
[Crossref]

Ling, H.

J. Gao, H. Ling, W. Hu, and J. Xing, “Transfer learning based visual tracking with Gaussian processes regression,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

Liu, S.

Q. Wang and S. Liu, “Morphological fringe-adjusted joint transform correlation,” Opt. Eng. 45, 087002 (2006).
[Crossref]

Liu, W.

K. Jeong, W. Liu, S. Han, E. Hasanbelliu, and J. Principe, “The correntropy MACE filter,” Pattern Recogn. 42, 871–885 (2009).
[Crossref]

Liu, X.

X. Liu, T. Chen, and B. Kumar, “Face authentication for multiple subjects using eigenflow,” Pattern Recogn. 36, 313–328 (2003).
[Crossref]

Lu, H.

X. Jia, H. Lu, and M. H. Yang, “Visual tracking via adaptive structural local sparse appearance model,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 1822–1829.

Lu, T.

O. Johnson, W. Edens, T. Lu, and T. Chao, “Optimization of OT-MACH filter generation for target recognition,” Proc. SPIE 7340, 734008 (2009).
[Crossref]

Lu, X.

X. Lu, A. Katz, E. Kanterakis, and N. Caviris, “Joint transform correlator that uses wavelet transforms,” Opt. Lett. 17, 1700–1702 (1992).
[Crossref]

X. Lu, “Image analysis for face recognition,” Department of Computer Science and Engineering; Michigan State University, East Lansing, Michigan 48824 (Private Lecture Notes, 2003).

Lucey, S.

H. Kiani Galoogahi, T. Sim, and S. Lucey, “Correlation filters with limited boundaries,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4630–4638.

Lui, Y.

D. Bolme, J. Beveridge, B. Draper, and Y. Lui, “Visual object tracking using adaptive correlation filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 2544–2550.

Ma, C.

C. Ma, Y. Xu, B. Ni, and X. Yang, “When correlation filters meet convolutional neural networks for visual tracking,” IEEE Signal Process. Lett. 23, 1454–1458 (2016).
[Crossref]

Ma, S.

J. Zhang, S. Ma, and S. Sclaroff, “MEEM: robust tracking via multiple experts using entropy minimization,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

Mackenzie, H.

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

Maddah, M.

Mahalanobis, A.

A. Rodriguez, V. Boddeti, B. Kumar, and A. Mahalanobis, “Maximum margin CF: a new approach for localization and classification,” IEEE Trans. Image Process. 22, 631–643 (2013).
[Crossref]

B. Kumar, M. Savvides, C. Xie, K. Venkataramani, J. Thornton, and A. Mahalanobis, “Biometric verification with composite filters,” Appl. Opt. 43, 391–402 (2004).
[Crossref]

R. Muise, A. Mahalanobis, R. Mohapatra, X. Li, D. Han, and W. Mikhael, “Constrained quadratic CFs for target detection,” Appl. Opt. 43, 304–314 (2004).
[Crossref]

A. Mahalanobis, R. Muise, S. Stanfill, and A. Nevel, “Design and application of quadratic CFs for target detection,” IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004).

A. Nevel and A. Mahalanobis, “Comparative study of maximum average correlation height filter variants using ladar imagery,” Opt. Eng. 42, 541–550 (2003).
[Crossref]

M. Alkanhal, B. Kumar, and A. Mahalanobis, “Improving the false alarm capabilities of the maximum average correlation height CF,” Opt. Eng. 39, 1133–1141 (2000).
[Crossref]

A. Mahalanobis, B. Kumar, S. R. F. Sims, and J. F. Epperson, “Unconstrained CFs,” Appl. Opt. 33, 3751–3759 (1994).
[Crossref]

B. Kumar, D. Carlson, and A. Mahalanobis, “Optimal trade-off synthetic discriminant function filters for arbitrary devices,” Opt. Lett. 19, 1556–1558 (1994).
[Crossref]

A. Mahalanobis, B. Kumar, and S. R. F. Sims, “Distance classifier CFs for distortion tolerance, discrimination and clutter rejection,” Proc. SPIE 2026, 325–335 (1993).
[Crossref]

B. Kumar, A. Mahalanobis, S. Song, S. Sims, and J. Epperson, “Minimum squared error synthetic discriminant functions,” Opt. Eng. 31, 915–922 (1992).
[Crossref]

A. Mahalanobis, B. Kumar, and D. Casasent, “Minimum average correlation energy filters,” Appl. Opt. 26, 3633–3640 (1987).
[Crossref]

A. Mahalanobis and B. Kumar, “Polynomial filters for higher-order and multi-input information fusion,” in 11th Euro-American Opto-Electronic Information Processing Workshop (1999), pp. 221–231.

Maji, S.

M. Cimpoi, S. Maji, and A. Vedaldi, “Deep filter banks for texture recognition and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3828–3836.

Majumder, G.

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Majumder, S.

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Mansfield, A.

A. Mansfield and J. Wayman, “Best practices in testing and reporting performance of biometric devices,” version 2.01 (Reproduced by Permission of the Controller of HMSO, 2002).

Mansour, A.

A. Alfalou, M. Farhat, and A. Mansour, “Independent component analysis based approach to biometric recognition, information and communication technologies: from theory to applications,” in 3rd International Conference on Information & Communication Technologies: From Theory to Applications (ICTTA), 7–11 April 2008, pp. 1–4.

Markowitz, Z.

X. Guan, H. H. Szu, and Z. Markowitz, “Local ICA for the most wanted face recognition,” Proc. SPIE 4056, 539–551 (2000).
[Crossref]

Marom, E.

D. Mendlovic, E. Marom, and N. Konforti, “Shift- and scale-invariant pattern recognition using Mellin radial harmonics,” Opt. Commun. 67, 172–176 (1988).
[Crossref]

Martins, P.

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 583–596 (2015).
[Crossref]

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in Computer Vision–European Conference on Computer Vision (ECCV) (Springer, 2012), pp. 702–715.

Matas, J.

Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning: bootstrapping binary classifiers by structural constraints,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 49–56.

Maybank, S.

X. Zhang, W. Hu, S. Maybank, and X. Li, “Graph based discriminative learning for robust and efficient object tracking,” in IEEE 11th International Conference on Computer Vision (ICCV) (IEEE, 2007).

Méndez-Vázquez, H.

D. Rizo-Rodríguez, H. Méndez-Vázquez, and E. García-Reyes, “Illumination invariant face recognition using quaternion-based CFs,” J. Math. Imaging Vis. 45, 164–175 (2013).
[Crossref]

Mendlovic, D.

D. Mendlovic, E. Marom, and N. Konforti, “Shift- and scale-invariant pattern recognition using Mellin radial harmonics,” Opt. Commun. 67, 172–176 (1988).
[Crossref]

Mikhael, W.

Mikolajczyk, K.

Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning: bootstrapping binary classifiers by structural constraints,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 49–56.

Mohapatra, R.

Mozaffari, S.

Muise, R.

R. Muise, A. Mahalanobis, R. Mohapatra, X. Li, D. Han, and W. Mikhael, “Constrained quadratic CFs for target detection,” Appl. Opt. 43, 304–314 (2004).
[Crossref]

A. Mahalanobis, R. Muise, S. Stanfill, and A. Nevel, “Design and application of quadratic CFs for target detection,” IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004).

Napoléon, T.

D. Benarab, T. Napoléon, A. Alfalou, A. Verney, and P. Hellard, “Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques,” Opt. Commun. 356, 256–268 (2015).
[Crossref]

Nasipuri, M.

A. Seal, S. Ganguly, D. Bhattacharjee, M. Nasipuri, and D. K. Basu, “Automated thermal face recognition based on minutiae extraction,” Int. J. Comput. Intell. Stud. 2, 133–156 (2013).

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Nath Sarma, A.

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Nevel, A.

A. Mahalanobis, R. Muise, S. Stanfill, and A. Nevel, “Design and application of quadratic CFs for target detection,” IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004).

A. Nevel and A. Mahalanobis, “Comparative study of maximum average correlation height filter variants using ladar imagery,” Opt. Eng. 42, 541–550 (2003).
[Crossref]

Ni, B.

C. Ma, Y. Xu, B. Ni, and X. Yang, “When correlation filters meet convolutional neural networks for visual tracking,” IEEE Signal Process. Lett. 23, 1454–1458 (2016).
[Crossref]

Nishchal, N.

North, D.

D. North, “An analysis of the factors which determine signal/noise discrimination in pulsed-carrier systems,” Proc. IEEE 51, 1016–1027 (1963).
[Crossref]

Ojala, T.

T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recogn. 29, 51–59 (1996).
[Crossref]

Oquab, M.

M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014).

Ouerhani, Y.

Y. Ouerhani, M. Desthieux, and A. Alfalou, “Road sign recognition using Viapix module and correlation,” Proc. SPIE 9477, 94770H (2015).
[Crossref]

Y. Ouerhani, M. Jridi, A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter,” Opt. Commun. 289, 33–44 (2013).
[Crossref]

Oza, N. C.

N. C. Oza, “Online ensemble learning,” Ph.D. thesis (University of California, 2001).

Paek, E.

D. Psaltis, E. Paek, and S. Venkatesh, “Optical image correlation with binary spatial light modulator,” Opt. Eng. 23, 698–704 (1984).

Patnaik, R.

R. Patnaik and D. Casasent, “Illumination invariant face recognition and impostor rejection using different MINACE filter algorithms,” Proc. SPIE 5816, 94–104 (2005).
[Crossref]

Pellat-Finet, P.

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

Peng, T.

Pietikäinen, M.

T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recogn. 29, 51–59 (1996).
[Crossref]

Principe, J.

K. Jeong, W. Liu, S. Han, E. Hasanbelliu, and J. Principe, “The correntropy MACE filter,” Pattern Recogn. 42, 871–885 (2009).
[Crossref]

Psaltis, D.

D. Psaltis, E. Paek, and S. Venkatesh, “Optical image correlation with binary spatial light modulator,” Opt. Eng. 23, 698–704 (1984).

Ramanathan, V.

H. Lai, V. Ramanathan, and H. Wechsler, “Reliable face recognition using adaptive and robust CFs,” Comput. Vis. Image Underst. 111, 329–350 (2008).
[Crossref]

Ramesh, M.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: a database for studying face recognition in unconstrained environments” (University of Massachusetts, 2007).

Ravichandran, G.

Rechtsteiner, A.

M. Wall, A. Rechtsteiner, and L. Rocha, “Singular value decomposition and principal component analysis,” in A Practical Approach to Microarray Data Analysis, D. P. Berrar, W. Dubitzky, and M. Granzow, eds. (Springer, 2003), pp. 91–109.

Refregier, P.

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

Rivlin, E.

A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2006), Vol. 1.

Rizo-Rodríguez, D.

D. Rizo-Rodríguez, H. Méndez-Vázquez, and E. García-Reyes, “Illumination invariant face recognition using quaternion-based CFs,” J. Math. Imaging Vis. 45, 164–175 (2013).
[Crossref]

Roberge, D.

D. Roberge and Y. Sheng, “Optical composite wavelet-matched filters,” Opt. Eng. 33, 2290–2295 (1994).
[Crossref]

Rocha, L.

M. Wall, A. Rechtsteiner, and L. Rocha, “Singular value decomposition and principal component analysis,” in A Practical Approach to Microarray Data Analysis, D. P. Berrar, W. Dubitzky, and M. Granzow, eds. (Springer, 2003), pp. 91–109.

Rodriguez, A.

J. Fernandez, V. Boddeti, A. Rodriguez, and B. Kumar, “Zero-aliasing CFs for object recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 1702–1715 (2015).
[Crossref]

A. Rodriguez, V. Boddeti, B. Kumar, and A. Mahalanobis, “Maximum margin CF: a new approach for localization and classification,” IEEE Trans. Image Process. 22, 631–643 (2013).
[Crossref]

Rodriguez, M.

M. Rodriguez, J. Ahmed, and M. Shah, “Action MACH: a spatio-temporal maximum average correlation height filter for action recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2008).

Ross, D.

D. Ross, J. Lim, R. Lin, and M. Yang, “Incremental learning for robust visual tracking,” Int. J. Comput. Vis. 77, 125–141 (2008).
[Crossref]

Saffari, A.

S. Hare, A. Saffari, and P. H. S. Torr, “Struck: structured output tracking with kernels,” in International Conference on Computer Vision (IEEE, 2011), pp. 263–270.

Saha, A.

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Saha, K.

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

Sarda, P.

H. Cardot, F. Ferraty, and P. Sarda, “Functional linear model,” Stat. Probab. Lett. 45, 11–22 (1999).
[Crossref]

Savvides, M.

C. Xie, M. Savvides, and B. Kumar, “Kernel CF based redundant class-dependence feature analysis on FRGC2.0 data,” Lect. Notes Comput. Sci. 3723, 32–43 (2005).
[Crossref]

S. Wijaya, M. Savvides, and B. Kumar, “Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced CFs for handheld devices,” Appl. Opt. 44, 655–665 (2005).
[Crossref]

B. Kumar, M. Savvides, C. Xie, K. Venkataramani, J. Thornton, and A. Mahalanobis, “Biometric verification with composite filters,” Appl. Opt. 43, 391–402 (2004).
[Crossref]

M. Savvides and B. Kumar, “Illumination normalization using logarithm transforms for face authentication,” Lect. Notes Comput. Sci. 2688, 549–556 (2003).
[Crossref]

M. Savvides, B. Kumar, and P. Khosla, “Face verification using correlation filters,” in 3rd IEEE Automatic Identification Advanced Technologies (2002), pp. 56–61.

M. Savvides, B. Kumar, and P. Khosla, “Corefaces: robust shift-invariant PCA-based CF for illumination tolerant face recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004).

M. Savvides and B. Kumar, “Efficient design of advanced CFs for robust distortion-tolerant face recognition,” in Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (2003).

R. Abiantun, M. Savvides, and B. Kumar, “Generalized low dimensional feature subspace for robust face recognition on unseen datasets using kernel correlation feature analysis,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2007).

J. Heo, M. Savvides, and B. V. K. Vijayakumar, “Performance evaluation of face recognition using visual and thermal imagery with advanced correlation filters,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)-Workshops (IEEE, 2005).

M. Savvides and B. V. K. Vijaya Kumar, “Quad phase minimum average correlation energy filters for reduced memory illumination tolerant face authentication,” in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), Surrey, UK, 2003.

M. Savvides, B. Kumar, and P. Khosla, “Eigenphases vs. eigenfaces,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR) (IEEE, 2004), Vol. 3.

J. Thornton, M. Savvides, and B. Kumar, “Linear shift-invariant maximum margin SVM correlation filter,” in Proceedings of the Intelligent Sensors, Sensor Networks and Information Processing Conference (IEEE, 2004), pp. 183–188.

C. Xie, M. Savvides, and B. Kumar, “Quaternion CF for face recognition in wavelet domain,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2005).

Sclaroff, S.

J. Zhang, S. Ma, and S. Sclaroff, “MEEM: robust tracking via multiple experts using entropy minimization,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

Seal, A.

A. Seal, S. Ganguly, D. Bhattacharjee, M. Nasipuri, and D. K. Basu, “Automated thermal face recognition based on minutiae extraction,” Int. J. Comput. Intell. Stud. 2, 133–156 (2013).

Servel, A.

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

Sevilla-Lara, L.

L. Sevilla-Lara and E. Learned-Miller, “Distribution fields for tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 1910–1917.

Shah, M.

A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: an experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 1442–1468 (2014).
[Crossref]

M. Rodriguez, J. Ahmed, and M. Shah, “Action MACH: a spatio-temporal maximum average correlation height filter for action recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2008).

Shahbaz Khan, F.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Learning spatially regularized correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 4310–4318.

M. Danelljan, F. Shahbaz Khan, M. Felsberg, and J. van de Weijer, “Adaptive color attributes for real-time visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1090–1097.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Coloring channel representations for visual tracking,” in Scandinavian Conference on Image Analysis (Springer, 2015), pp. 117–129.

Sheng, Y.

D. Roberge and Y. Sheng, “Optical composite wavelet-matched filters,” Opt. Eng. 33, 2290–2295 (1994).
[Crossref]

Shimshoni, I.

A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2006), Vol. 1.

Sim, T.

H. Kiani Galoogahi, T. Sim, and S. Lucey, “Correlation filters with limited boundaries,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4630–4638.

Simonyan, K.

K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: delving deep into convolutional nets,” arXiv:1405.3531 (2014).

Sims, S.

B. Kumar, A. Mahalanobis, S. Song, S. Sims, and J. Epperson, “Minimum squared error synthetic discriminant functions,” Opt. Eng. 31, 915–922 (1992).
[Crossref]

Sims, S. R. F.

A. Mahalanobis, B. Kumar, S. R. F. Sims, and J. F. Epperson, “Unconstrained CFs,” Appl. Opt. 33, 3751–3759 (1994).
[Crossref]

A. Mahalanobis, B. Kumar, and S. R. F. Sims, “Distance classifier CFs for distortion tolerance, discrimination and clutter rejection,” Proc. SPIE 2026, 325–335 (1993).
[Crossref]

Sirovich, L.

L. Sirovich and M. Kirby, “Low-dimensional procedure for the characterization of human faces,” J. Opt. Soc. Am. A 4, 519–524 (1987).
[Crossref]

Sivic, J.

M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014).

Skowron, A.

R. W. Świniarski and A. Skowron, “Independent component analysis, principal component analysis and rough sets in face recognition,” in Transactions on Rough Sets I (Springer, 2004), pp. 392–404.

Smeulders, A. W. M.

A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: an experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 1442–1468 (2014).
[Crossref]

Song, S.

B. Kumar, A. Mahalanobis, S. Song, S. Sims, and J. Epperson, “Minimum squared error synthetic discriminant functions,” Opt. Eng. 31, 915–922 (1992).
[Crossref]

Stanfill, S.

A. Mahalanobis, R. Muise, S. Stanfill, and A. Nevel, “Design and application of quadratic CFs for target detection,” IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004).

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

Sun, Y.

Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Advances in Neural Information Processing Systems (2014), pp. 1988–1996.

Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2892–2900.

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

Świniarski, R. W.

R. W. Świniarski and A. Skowron, “Independent component analysis, principal component analysis and rough sets in face recognition,” in Transactions on Rough Sets I (Springer, 2004), pp. 392–404.

Szegedy, C.

S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).

Szu, H. H.

X. Guan, H. H. Szu, and Z. Markowitz, “Local ICA for the most wanted face recognition,” Proc. SPIE 4056, 539–551 (2000).
[Crossref]

Tang, X.

Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Advances in Neural Information Processing Systems (2014), pp. 1988–1996.

Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2892–2900.

Tao, D.

Z. Chen, Z. Hong, and D. Tao, “An experimental survey on correlation filter-based tracking,” arXiv:1509.05520 (2015).

Thornton, J.

B. Kumar, M. Savvides, C. Xie, K. Venkataramani, J. Thornton, and A. Mahalanobis, “Biometric verification with composite filters,” Appl. Opt. 43, 391–402 (2004).
[Crossref]

J. Thornton, M. Savvides, and B. Kumar, “Linear shift-invariant maximum margin SVM correlation filter,” in Proceedings of the Intelligent Sensors, Sensor Networks and Information Processing Conference (IEEE, 2004), pp. 183–188.

Torr, P. H. S.

S. Hare, A. Saffari, and P. H. S. Torr, “Struck: structured output tracking with kernels,” in International Conference on Computer Vision (IEEE, 2011), pp. 263–270.

Toselli, I.

F. Wang, I. Toselli, and O. Korotkova, “Two spatial light modulator system for laboratory simulation of random beam propagation in random media,” Appl. Opt. 55, 1112–1117 (2016).
[Crossref]

van de Weijer, J.

M. Danelljan, F. Shahbaz Khan, M. Felsberg, and J. van de Weijer, “Adaptive color attributes for real-time visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1090–1097.

VanderLugt, A.

A. VanderLugt, “Signal detection by complex spatial filtering,” IEEE Trans. Inf. Theory 10, 139–145 (1964).
[Crossref]

Vapnik, V.

C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn. 20, 273–297 (1995).

B. Boser, I. Guyon, and V. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (1992), pp. 144–152.

Vedaldi, A.

M. Cimpoi, S. Maji, and A. Vedaldi, “Deep filter banks for texture recognition and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3828–3836.

K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: delving deep into convolutional nets,” arXiv:1405.3531 (2014).

Venkataramani, K.

B. Kumar, M. Savvides, C. Xie, K. Venkataramani, J. Thornton, and A. Mahalanobis, “Biometric verification with composite filters,” Appl. Opt. 43, 391–402 (2004).
[Crossref]

Venkatesh, S.

D. Psaltis, E. Paek, and S. Venkatesh, “Optical image correlation with binary spatial light modulator,” Opt. Eng. 23, 698–704 (1984).

Verney, A.

D. Benarab, T. Napoléon, A. Alfalou, A. Verney, and P. Hellard, “Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques,” Opt. Commun. 356, 256–268 (2015).
[Crossref]

Vetter, T.

V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1063–1074 (2003).
[Crossref]

Vijaya Kumar, B. V. K.

M. Savvides and B. V. K. Vijaya Kumar, “Quad phase minimum average correlation energy filters for reduced memory illumination tolerant face authentication,” in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), Surrey, UK, 2003.

Vijayakumar, B. V. K.

J. Heo, M. Savvides, and B. V. K. Vijayakumar, “Performance evaluation of face recognition using visual and thermal imagery with advanced correlation filters,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)-Workshops (IEEE, 2005).

Wall, M.

M. Wall, A. Rechtsteiner, and L. Rocha, “Singular value decomposition and principal component analysis,” in A Practical Approach to Microarray Data Analysis, D. P. Berrar, W. Dubitzky, and M. Granzow, eds. (Springer, 2003), pp. 91–109.

Wang, F.

F. Wang, I. Toselli, and O. Korotkova, “Two spatial light modulator system for laboratory simulation of random beam propagation in random media,” Appl. Opt. 55, 1112–1117 (2016).
[Crossref]

Wang, Q.

Q. Wang and S. Liu, “Morphological fringe-adjusted joint transform correlation,” Opt. Eng. 45, 087002 (2006).
[Crossref]

Wang, X.

Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2892–2900.

Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Advances in Neural Information Processing Systems (2014), pp. 1988–1996.

Watanabe, E.

E. Watanabe and K. Kodate, “Implementation of high-speed face recognition system using an optical parallel correlator,” Appl. Opt. 44, 666–676 (2005).
[Crossref]

E. Watanabe and K. Kodate, “High speed holographic optical correlator for face recognition,” in State of the Art in Face Recognition, J. Ponce and A. Karahoca, eds. (InTech, 2009).

Wayman, J.

A. Mansfield and J. Wayman, “Best practices in testing and reporting performance of biometric devices,” version 2.01 (National Physical Laboratory, 2002).

Weaver, C.

C. Weaver and J. Goodman, “A technique for optically convolving two functions,” Appl. Opt. 5, 1248–1249 (1966).
[Crossref]

Wechsler, H.

H. Lai, V. Ramanathan, and H. Wechsler, “Reliable face recognition using adaptive and robust CFs,” Comput. Vis. Image Underst. 111, 329–350 (2008).
[Crossref]

Wijaya, S.

S. Wijaya, M. Savvides, and B. Kumar, “Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced CFs for handheld devices,” Appl. Opt. 44, 655–665 (2005).
[Crossref]

Wu, D.

D. Wu, X. Zhou, B. Yao, R. Li, Y. Yang, T. Peng, M. Lei, D. Dan, and T. Ye, “Fast frame scanning camera system for light-sheet microscopy,” Appl. Opt. 54, 8632–8636 (2015).
[Crossref]

Wu, Y.

Y. Wu, J. Lim, and M. H. Yang, “Online object tracking: a benchmark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2411–2418.

Xie, C.

C. Xie, M. Savvides, and B. Kumar, “Kernel CF based redundant class-dependence feature analysis on FRGC2.0 data,” Lect. Notes Comput. Sci. 3723, 32–43 (2005).
[Crossref]

B. Kumar, M. Savvides, C. Xie, K. Venkataramani, J. Thornton, and A. Mahalanobis, “Biometric verification with composite filters,” Appl. Opt. 43, 391–402 (2004).
[Crossref]

C. Xie, M. Savvides, and B. Kumar, “Quaternion CF for face recognition in wavelet domain,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2005).

C. Xie and B. Kumar, “Face class code based feature extraction for face recognition,” in Fourth IEEE Workshop on Automatic Identification Advanced Technologies (2005).

C. Xie and B. Kumar, “Comparison of kernel class-dependence feature analysis (KCFA) with kernel discriminant analysis (KDA) for face recognition,” in First IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS) (IEEE, 2007).

Xing, J.

J. Gao, H. Ling, W. Hu, and J. Xing, “Transfer learning based visual tracking with Gaussian processes regression,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

Xu, Y.

C. Ma, Y. Xu, B. Ni, and X. Yang, “When correlation filters meet convolutional neural networks for visual tracking,” IEEE Signal Process. Lett. 23, 1454–1458 (2016).
[Crossref]

Yan, Y.

Y. Yan and Y. Zhang, “Tensor CF based class-dependence feature analysis for face recognition,” Neurocomputing 71, 3434–3438 (2008).
[Crossref]

Yang, M.

D. Ross, J. Lim, R. Lin, and M. Yang, “Incremental learning for robust visual tracking,” Int. J. Comput. Vis. 77, 125–141 (2008).
[Crossref]

Yang, M. H.

Y. Wu, J. Lim, and M. H. Yang, “Online object tracking: a benchmark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2411–2418.

B. Babenko, M. H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 983–990.

X. Jia, H. Lu, and M. H. Yang, “Visual tracking via adaptive structural local sparse appearance model,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 1822–1829.

K. Zhang, L. Zhang, and M. H. Yang, “Real-time compressive tracking,” in European Conference on Computer Vision (Springer, 2012), pp. 864–877.

Yang, M.-H.

B. Babenko, M.-H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in Conference on Computer Vision and Pattern Recognition (CVPR) (2009).

Yang, X.

C. Ma, Y. Xu, B. Ni, and X. Yang, “When correlation filters meet convolutional neural networks for visual tracking,” IEEE Signal Process. Lett. 23, 1454–1458 (2016).
[Crossref]

Yang, Y.

D. Wu, X. Zhou, B. Yao, R. Li, Y. Yang, T. Peng, M. Lei, D. Dan, and T. Ye, “Fast frame scanning camera system for light-sheet microscopy,” Appl. Opt. 54, 8632–8636 (2015).
[Crossref]

Yao, B.

D. Wu, X. Zhou, B. Yao, R. Li, Y. Yang, T. Peng, M. Lei, D. Dan, and T. Ye, “Fast frame scanning camera system for light-sheet microscopy,” Appl. Opt. 54, 8632–8636 (2015).
[Crossref]

Yatagai, T.

F. Lei, M. Iton, and T. Yatagai, “Adaptive binary joint transform correlator for image recognition,” Appl. Opt. 41, 7416–7421 (2002).
[Crossref]

Ye, T.

D. Wu, X. Zhou, B. Yao, R. Li, Y. Yang, T. Peng, M. Lei, D. Dan, and T. Ye, “Fast frame scanning camera system for light-sheet microscopy,” Appl. Opt. 54, 8632–8636 (2015).
[Crossref]

Yin, Q.

E. Zhou, Z. Cao, and Q. Yin, “Naive-deep face recognition: touching the limit of LFW benchmark or not?” arXiv:1501.04690 (2015).

Yu, Y.

M. D. Levine and Y. Yu, “Face recognition subject to variations in facial expression, illumination and pose using CFs,” Comput. Vis. Image Underst. 104, 1–15 (2006).
[Crossref]

Zhang, J.

J. Zhang, S. Ma, and S. Sclaroff, “MEEM: robust tracking via multiple experts using entropy minimization,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

Zhang, K.

K. Zhang, L. Zhang, and M. H. Yang, “Real-time compressive tracking,” in European Conference on Computer Vision (Springer, 2012), pp. 864–877.

Zhang, L.

K. Zhang, L. Zhang, and M. H. Yang, “Real-time compressive tracking,” in European Conference on Computer Vision (Springer, 2012), pp. 864–877.

Zhang, X.

X. Zhang, W. Hu, S. Maybank, and X. Li, “Graph based discriminative learning for robust and efficient object tracking,” in IEEE 11th International Conference on Computer Vision (ICCV) (IEEE, 2007).

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

Zhang, Y.

Y. Yan and Y. Zhang, “Tensor CF based class-dependence feature analysis for face recognition,” Neurocomputing 71, 3434–3438 (2008).
[Crossref]

Zhou, E.

E. Zhou, Z. Cao, and Q. Yin, “Naive-deep face recognition: touching the limit of LFW benchmark or not?” arXiv:1501.04690 (2015).

Zhou, X.

D. Wu, X. Zhou, B. Yao, R. Li, Y. Yang, T. Peng, M. Lei, D. Dan, and T. Ye, “Fast frame scanning camera system for light-sheet microscopy,” Appl. Opt. 54, 8632–8636 (2015).
[Crossref]

Zhu, J.

Y. Li and J. Zhu, “A scale adaptive kernel CF tracker with feature integration,” in Computer Vision–European Conference on Computer Vision (ECCV) Workshops (Springer, 2014), pp. 254–265.

Zisserman, A.

K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: delving deep into convolutional nets,” arXiv:1405.3531 (2014).

Appl. Opt. (21)

B. Kumar, “Tutorial survey of composite filter designs for optical correlators,” Appl. Opt. 31, 4773–4801 (1992).
[Crossref]

B. Kumar and L. Hassebrook, “Performance measures for correlation filters,” Appl. Opt. 29, 2997–3006 (1990).
[Crossref]

B. Kumar, M. Savvides, C. Xie, K. Venkataramani, J. Thornton, and A. Mahalanobis, “Biometric verification with composite filters,” Appl. Opt. 43, 391–402 (2004).
[Crossref]

D. Wu, X. Zhou, B. Yao, R. Li, Y. Yang, T. Peng, M. Lei, D. Dan, and T. Ye, “Fast frame scanning camera system for light-sheet microscopy,” Appl. Opt. 54, 8632–8636 (2015).
[Crossref]

F. Wang, I. Toselli, and O. Korotkova, “Two spatial light modulator system for laboratory simulation of random beam propagation in random media,” Appl. Opt. 55, 1112–1117 (2016).
[Crossref]

C. Weaver and J. Goodman, “A technique for optically convolving two functions,” Appl. Opt. 5, 1248–1249 (1966).
[Crossref]

M. Alam and M. Karim, “Fringe-adjusted joint transform correlation,” Appl. Opt. 32, 4344–4350 (1993).
[Crossref]

J. Horner and P. Gianino, “Phase-only matched filtering,” Appl. Opt. 23, 812–816 (1984).
[Crossref]

F. Lei, M. Iton, and T. Yatagai, “Adaptive binary joint transform correlator for image recognition,” Appl. Opt. 41, 7416–7421 (2002).
[Crossref]

Y. Hsu and H. Arsenault, “Optical character recognition using circular harmonic expansion,” Appl. Opt. 21, 4016–4019 (1982).
[Crossref]

C. Hester and D. Casasent, “Multivariant technique for multiclass pattern recognition,” Appl. Opt. 19, 1758–1761 (1980).
[Crossref]

A. Mahalanobis, B. Kumar, and D. Casasent, “Minimum average correlation energy filters,” Appl. Opt. 26, 3633–3640 (1987).
[Crossref]

G. Ravichandran and D. Casasent, “Minimum noise and correlation energy optical CF,” Appl. Opt. 31, 1823–1833 (1992).
[Crossref]

A. Mahalanobis, B. Kumar, S. R. F. Sims, and J. F. Epperson, “Unconstrained CFs,” Appl. Opt. 33, 3751–3759 (1994).
[Crossref]

S. Goyal, N. Nishchal, V. Beri, and A. Gupta, “Wavelet-modified maximum average correlation height filter for rotation invariance that uses chirp encoding in a hybrid digital-optical correlator,” Appl. Opt. 45, 4850–4857 (2006).
[Crossref]

R. Muise, A. Mahalanobis, R. Mohapatra, X. Li, D. Han, and W. Mikhael, “Constrained quadratic CFs for target detection,” Appl. Opt. 43, 304–314 (2004).
[Crossref]

A. Alfalou, G. Keryer, and J. L. de Bougrenet de la Tocnaye, “Optical implementation of segmented composite filtering,” Appl. Opt. 38, 6129–6135 (1999).
[Crossref]

R. Juday, “Optimal realizable filters and the minimum Euclidean distance principle,” Appl. Opt. 32, 5100–5111 (1993).
[Crossref]

M. Alkanhal and B. Kumar, “Polynomial distance classifier CF for pattern recognition,” Appl. Opt. 42, 4688–4708 (2003).
[Crossref]

S. Wijaya, M. Savvides, and B. Kumar, “Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced CFs for handheld devices,” Appl. Opt. 44, 655–665 (2005).
[Crossref]

E. Watanabe and K. Kodate, “Implementation of high-speed face recognition system using an optical parallel correlator,” Appl. Opt. 44, 666–676 (2005).
[Crossref]

Comput. Vis. Image Underst. (3)

K. W. Bowyer, K. Chang, and P. Flynn, “A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition,” Comput. Vis. Image Underst. 101, 1–15 (2006).
[Crossref]

M. D. Levine and Y. Yu, “Face recognition subject to variations in facial expression, illumination and pose using CFs,” Comput. Vis. Image Underst. 104, 1–15 (2006).
[Crossref]

H. Lai, V. Ramanathan, and H. Wechsler, “Reliable face recognition using adaptive and robust CFs,” Comput. Vis. Image Underst. 111, 329–350 (2008).
[Crossref]

IEEE Signal Process. Lett. (1)

C. Ma, Y. Xu, B. Ni, and X. Yang, “When correlation filters meet convolutional neural networks for visual tracking,” IEEE Signal Process. Lett. 23, 1454–1458 (2016).
[Crossref]

IEEE Trans. Aerosp. Electron. Syst. (1)

A. Mahalanobis, R. Muise, S. Stanfill, and A. Nevel, “Design and application of quadratic CFs for target detection,” IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004).

IEEE Trans. Image Process. (1)

A. Rodriguez, V. Boddeti, B. Kumar, and A. Mahalanobis, “Maximum margin CF: a new approach for localization and classification,” IEEE Trans. Image Process. 22, 631–643 (2013).
[Crossref]

IEEE Trans. Inf. Theory (1)

A. VanderLugt, “Signal detection by complex spatial filtering,” IEEE Trans. Inf. Theory 10, 139–145 (1964).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (5)

V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1063–1074 (2003).
[Crossref]

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 583–596 (2015).
[Crossref]

J. Fernandez, V. Boddeti, A. Rodriguez, and B. Kumar, “Zero-aliasing CFs for object recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 1702–1715 (2015).
[Crossref]

A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: an experimental survey,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 1442–1468 (2014).
[Crossref]

A. Jepson, D. Fleet, and T. El-Maraghi, “Robust online appearance models for visual tracking,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1296–1311 (2003).
[Crossref]

IEICE Trans. Inf. Syst. (1)

M. Kaneko and O. Hasegawa, “Processing of face images and its applications,” IEICE Trans. Inf. Syst. E82-D, 589–600 (1999).

Int. J. Comput. Intell. Stud. (1)

A. Seal, S. Ganguly, D. Bhattacharjee, M. Nasipuri, and D. K. Basu, “Automated thermal face recognition based on minutiae extraction,” Int. J. Comput. Intell. Stud. 2, 133–156 (2013).

Int. J. Comput. Vis. (1)

D. Ross, J. Lim, R. Lin, and M. Yang, “Incremental learning for robust visual tracking,” Int. J. Comput. Vis. 77, 125–141 (2008).
[Crossref]

J. Math. Imaging Vis. (1)

D. Rizo-Rodríguez, H. Méndez-Vázquez, and E. García-Reyes, “Illumination invariant face recognition using quaternion-based CFs,” J. Math. Imaging Vis. 45, 164–175 (2013).
[Crossref]

J. Opt. Soc. Am. (1)

L. Sirovich and M. Kirby, “Low-dimensional procedure for the characterization of human faces,” J. Opt. Soc. Am. 4, 519–524 (1987).
[Crossref]

J. Opt. Soc. Am. A (2)

Lect. Notes Comput. Sci. (2)

M. Savvides and B. Kumar, “Illumination normalization using logarithm transforms for face authentication,” Lect. Notes Comput. Sci. 2688, 549–556 (2003).
[Crossref]

C. Xie, M. Savvides, and B. Kumar, “Kernel CF based redundant class-dependence feature analysis on FRGC2.0 data,” Lect. Notes Comput. Sci. 3723, 32–43 (2005).
[Crossref]

Mach. Learn. (1)

C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn. 20, 273–297 (1995).

Neurocomputing (1)

Y. Yan and Y. Zhang, “Tensor CF based class-dependence feature analysis for face recognition,” Neurocomputing 71, 3434–3438 (2008).
[Crossref]

Opt. Commun. (3)

D. Benarab, T. Napoléon, A. Alfalou, A. Verney, and P. Hellard, “Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques,” Opt. Commun. 356, 256–268 (2015).
[Crossref]

Y. Ouerhani, M. Jridi, A. Alfalou, C. Brosseau, P. Katz, and M. S. Alam, “Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter,” Opt. Commun. 289, 33–44 (2013).
[Crossref]

D. Mendlovic, E. Marom, and N. Konforti, “Shift- and scale-invariant pattern recognition using Mellin radial harmonics,” Opt. Commun. 67, 172–176 (1988).
[Crossref]

Opt. Eng. (9)

D. Psaltis, E. Paek, and S. Venkatesh, “Optical image correlation with binary spatial light modulator,” Opt. Eng. 23, 698–704 (1984).

D. Roberge and Y. Sheng, “Optical composite wavelet-matched filters,” Opt. Eng. 33, 2290–2295 (1994).
[Crossref]

L. Guibert, G. Keryer, A. Servel, M. Attia, H. Mackenzie, P. Pellat-Finet, and J. L. de Bougrenet de la Tocnaye, “On-board optical joint transform correlator for real-time road sign recognition,” Opt. Eng. 34, 101–109 (1995).

A. Nevel and A. Mahalanobis, “Comparative study of maximum average correlation height filter variants using ladar imagery,” Opt. Eng. 42, 541–550 (2003).
[Crossref]

Q. Wang and S. Liu, “Morphological fringe-adjusted joint transform correlation,” Opt. Eng. 45, 087002 (2006).
[Crossref]

M. Alkanhal, B. Kumar, and A. Mahalanobis, “Improving the false alarm capabilities of the maximum average correlation height CF,” Opt. Eng. 39, 1133–1141 (2000).
[Crossref]

I. Leonard, A. Alfalou, M. S. Alam, and A. Arnold-Bos, “Adaptive nonlinear fringe-adjusted joint transform correlator,” Opt. Eng. 51, 098201 (2012).
[Crossref]

M. Elbouz, A. Alfalou, and C. Brosseau, “Fuzzy logic and optical correlation-based face recognition method for patient monitoring application in home video surveillance,” Opt. Eng. 50, 067003 (2011).
[Crossref]

B. Kumar, A. Mahalanobis, S. Song, S. Sims, and J. Epperson, “Minimum squared error synthetic discriminant functions,” Opt. Eng. 31, 915–922 (1992).
[Crossref]

Opt. Lett. (6)

Pattern Recogn. (3)

K. Jeong, W. Liu, S. Han, E. Hasanbelliu, and J. Principe, “The correntropy MACE filter,” Pattern Recogn. 42, 871–885 (2009).
[Crossref]

X. Liu, T. Chen, and B. Kumar, “Face authentication for multiple subjects using eigenflow,” Pattern Recogn. 36, 313–328 (2003).
[Crossref]

T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recogn. 29, 51–59 (1996).
[Crossref]

Proc. IEEE (1)

D. North, “An analysis of the factors which determine signal/noise discriminations in pulsed carrier systems,” Proc. IEEE 51, 1016–1027 (1963).
[Crossref]

Proc. SPIE (8)

O. Johnson, W. Edens, T. Lu, and T. Chao, “Optimization of OT-MACH filter generation for target recognition,” Proc. SPIE 7340, 734008 (2009).
[Crossref]

B. Kumar and M. Alkanhal, “Eigen-extended maximum average correlation height filters for automatic target recognition,” Proc. SPIE 4379, 424–431 (2001).
[Crossref]

Y. Ouerhani, M. Desthieux, and A. Alfalou, “Road sign recognition using Viapix module and correlation,” Proc. SPIE 9477, 94770H (2015).
[Crossref]

K. Al-Mashouq, B. Kumar, and M. Alkanhal, “Analysis of signal-to-noise ratio of polynomial CFs,” Proc. SPIE 3715, 407–413 (1999).
[Crossref]

A. Mahalanobis, B. Kumar, and S. R. F. Sims, “Distance classifier CFs for distortion tolerance, discrimination and clutter rejection,” Proc. SPIE 2026, 325–335 (1993).
[Crossref]

X. Guan, H. H. Szu, and Z. Markowitz, “Local ICA for the most wanted face recognition,” Proc. SPIE 4056, 539–551 (2000).
[Crossref]

A. Alfalou, C. Brosseau, and M. Alam, “Smart pattern recognition,” Proc. SPIE 8748, 874809 (2013).
[Crossref]

R. Patnaik and D. Casasent, “Illumination invariant face recognition and impostor rejection using different MINACE filter algorithms,” Proc. SPIE 5816, 94–104 (2005).
[Crossref]

Signal Process. (1)

P. Comon, “Independent component analysis, a new concept?” Signal Process. 36, 287–314 (1994).
[Crossref]

Stat. Probab. Lett. (1)

H. Cardot, F. Ferraty, and P. Sarda, “Linear functional model,” Stat. Probab. Lett. 45, 11–22 (1999).
[Crossref]

Other (67)

M. Wall, A. Rechtsteiner, and L. Rocha, “Singular value decomposition and principal component analysis,” in A Practical Approach to Microarray Data Analysis, D. P. Berrar, W. Dubitzky, and M. Granzow, eds. (Springer, 2003), pp. 91–109.

M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proceedings of the British Machine Vision Conference (BMVC) (2014).

Z. Chen, Z. Hong, and D. Tao, “An experimental survey on correlation filter-based tracking,” arXiv:1509.05520 (2015).

E. Zhou, Z. Cao, and Q. Yin, “Naive-deep face recognition: touching the limit of LFW benchmark or not?” arXiv:1501.04690 (2015).

Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2892–2900.

Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Advances in Neural Information Processing Systems (2014), pp. 1988–1996.

M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014).

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” arXiv:1502.03167 (2015).

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: a database for studying face recognition in unconstrained environments,” Technical Report 07-49 (University of Massachusetts, Amherst, 2007).

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66.

A. Alfalou, “Implementation of optical multichannel correlation: application to pattern recognition,” Ph.D. thesis (Université de Rennes, 1999).

P. Katz, A. Alfalou, C. Brosseau, and M. S. Alam, “Correlation and independent component analysis based approaches for biometric recognition,” in Face Recognition: Methods, Applications and Technology, A. Quaglia and C. M. Epifano, eds. (Nova Science, 2011).

C. Xie, M. Savvides, and B. Kumar, “Quaternion CF for face recognition in wavelet domain,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2005).

C. Xie and B. Kumar, “Comparison of kernel class-dependence feature analysis (KCFA) with kernel discriminant analysis (KDA) for face recognition,” in First IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS) (IEEE, 2007).

R. Abiantun, M. Savvides, and B. Kumar, “Generalized low dimensional feature subspace for robust face recognition on unseen datasets using kernel correlation feature analysis,” in International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2007).

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in Computer Vision–European Conference on Computer Vision (ECCV) (Springer, 2012), pp. 702–715.

Y. Li and J. Zhu, “A scale adaptive kernel CF tracker with feature integration,” in Computer Vision–European Conference on Computer Vision (ECCV) Workshops (Springer, 2014), pp. 254–265.

A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2006), Vol. 1.

B. Babenko, M.-H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in Conference on Computer Vision and Pattern Recognition (CVPR) (2009).

X. Zhang, W. Hu, S. Maybank, and X. Li, “Graph based discriminative learning for robust and efficient object tracking,” in IEEE 11th International Conference on Computer Vision (ICCV) (IEEE, 2007).

C. Xie and B. Kumar, “Face class code based feature extraction for face recognition,” in Fourth IEEE Workshop on Automatic Identification Advanced Technologies (2005).

J. Fernandez and B. Kumar, “Zero-aliasing CFs,” in International Symposium on Image and Signal Processing and Analysis (2013), pp. 101–106.

J. Heo, M. Savvides, and B. V. K. Vijayakumar, “Performance evaluation of face recognition using visual and thermal imagery with advanced correlation filters,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)-Workshops (IEEE, 2005).

M. K. Bhowmik, K. Saha, S. Majumder, G. Majumder, A. Saha, A. Nath Sarma, D. Bhattacharjee, D. K. Basu, and M. Nasipuri, “Thermal infrared face recognition—a biometric identification technique for robust security system,” in Reviews, Refinements and New Ideas in Face Recognition (2011), pp. 113–138.

A. Alfalou, M. Farhat, and A. Mansour, “Independent component analysis based approach to biometric recognition,” in 3rd International Conference on Information & Communication Technologies: From Theory to Applications (ICTTA), 7–11 April 2008, pp. 1–4.

K. Kodate, “Star power in Japan,” in SPIE Professional, July 2010, doi: 10.1117/2.4201007.13, https://spie.org/membership/spie-professional-magazine/spie-professional-archives-and-special-content/july2010-spie-professional/star-power-in-japan.
[Crossref]

Centre of Molecular Materials for Photonics and Electronics, Department of Engineering, University of Cambridge, “Optical information processing,” http://www-g.eng.cam.ac.uk/CMMPE/pattern.html.

N. C. Oza, “Online ensemble learning,” Ph.D. thesis (University of California, 2001).

M. Danelljan, F. Shahbaz Khan, M. Felsberg, and J. van de Weijer, “Adaptive color attributes for real-time visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1090–1097.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Coloring channel representations for visual tracking,” in Scandinavian Conference on Image Analysis (Springer, 2015), pp. 117–129.

M. Cimpoi, S. Maji, and A. Vedaldi, “Deep filter banks for texture recognition and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3828–3836.

M. Danelljan, G. Häger, F. Shahbaz Khan, and M. Felsberg, “Learning spatially regularized correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 4310–4318.

K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: delving deep into convolutional nets,” arXiv:1405.3531 (2014).

Y. Wu, J. Lim, and M. H. Yang, “Online object tracking: a benchmark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2411–2418.

J. Gao, H. Ling, W. Hu, and J. Xing, “Transfer learning based visual tracking with Gaussian processes regression,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

J. Zhang, S. Ma, and S. Sclaroff, “MEEM: robust tracking via multiple experts using entropy minimization,” in European Conference on Computer Vision (Springer, 2014), pp. 188–203.

S. Hare, A. Saffari, and P. H. S. Torr, “Struck: structured output tracking with kernels,” in International Conference on Computer Vision (IEEE, 2011), pp. 263–270.

H. Kiani Galoogahi, T. Sim, and S. Lucey, “Correlation filters with limited boundaries,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 4630–4638.

B. Babenko, M. H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 983–990.

K. Zhang, L. Zhang, and M. H. Yang, “Real-time compressive tracking,” in European Conference on Computer Vision (Springer, 2012), pp. 864–877.

Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning: bootstrapping binary classifiers by structural constraints,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 49–56.

L. Sevilla-Lara and E. Learned-Miller, “Distribution fields for tracking,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 1910–1917.

M. Felsberg, “Enhanced distribution field tracking using channel representations,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (2013), pp. 121–128.

X. Jia, H. Lu, and M. H. Yang, “Visual tracking via adaptive structural local sparse appearance model,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 1822–1829.

The Visual Object Tracking (VOT) Challenge, 2015, http://www.votchallenge.net.

J. Thornton, M. Savvides, and B. Kumar, “Linear shift-invariant maximum margin SVM correlation filter,” in Proceedings of the Intelligent Sensors, Sensor Networks and Information Processing Conference (IEEE, 2004), pp. 183–188.

M. Savvides, B. Kumar, and P. Khosla, “Face verification using correlation filters,” in 3rd IEEE Automatic Identification Advanced Technologies (2002), pp. 56–61.

M. Savvides and B. Kumar, “Efficient design of advanced correlation filters for robust distortion-tolerant face recognition,” in Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (2003).

M. Savvides, B. Kumar, and P. Khosla, “Corefaces—robust shift invariant PCA based correlation filter for illumination tolerant face recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004).

R. W. Świniarski and A. Skowron, “Independent component analysis, principal component analysis and rough sets in face recognition,” in Transactions on Rough Sets I (Springer, 2004), pp. 392–404.

I. Jolliffe, Principal Component Analysis (Wiley, 2002).

B. Boser, I. Guyon, and V. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (1992), pp. 144–152.

A. Mahalanobis and B. Kumar, “Polynomial filters for higher-order and multi-input information fusion,” in 11th Euro-American Opto-Electronic Information Processing Workshop (1999), pp. 221–231.

D. Bolme, J. Beveridge, B. Draper, and Y. Lui, “Visual object tracking using adaptive correlation filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2010), pp. 2544–2550.

D. Bolme, B. Draper, and J. Beveridge, “Average of synthetic exact filters,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 2105–2112.

M. Rodriguez, J. Ahmed, and M. Shah, “Action MACH: a spatio-temporal maximum average correlation height filter for action recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2008).

P. K. Banerjee and A. K. Datta, Techniques of Frequency Domain Correlation for Face Recognition and its Photonic Implementation, A. Quaglia and C. M. Epifano, eds. (NOVA, 2012), Chap. 9, pp. 165–186.

M. Savvides and B. V. K. Vijaya Kumar, “Quad phase minimum average correlation energy filters for reduced memory illumination tolerant face authentication,” in Proceedings of the 4th International Conference on Audio and Visual Biometrics based Person Authentication (AVBPA), Surrey, UK, 2003.

E. Watanabe and K. Kodate, High Speed Holographic Optical Correlator for Face Recognition, State of the Art in Face Recognition, J. Ponce and A. Karahoca, eds. (InTech, 2009).

A. Mansfield and J. Wayman, “Best practices in testing and reporting performance of biometric devices,” version 2.01 (Reproduced by Permission of the Controller of HMSO, 2002).

A. Quaglia and C. M. Epifano, eds., Face Recognition: Methods, Applications and Technology (Nova Science, 2012).


Figures (67)

Figure 1. ROC curve.

Figure 2. Principle of the VLC setup.

Figure 3. Typical images for one subject. Reprinted with permission from [11]. Copyright 2004 Optical Society of America.

Figure 4. PSR values generated from the MACE filter fabricated with subject 1. Reprinted with permission from [11]. Copyright 2004 Optical Society of America.

Figure 5. PSR values generated from the MACE filter fabricated with subject 2. Reprinted with permission from [11]. Copyright 2004 Optical Society of America.

Figure 6. Images for one subject obtained for the illumination variation tests. Reprinted with permission from [11]. Copyright 2004 Optical Society of America.

Figure 7. PSR values for subject 2: authentic subjects (top plots) and imposters (bottom plots). Reprinted with permission from [11]. Copyright 2004 Optical Society of America.

Figure 8. Images for subject 2 in the dataset. Reprinted with permission from [63]. Copyright 2005 Optical Society of America.

Figure 9. Training sets 1 (top) and 2 (bottom). Reprinted with permission from [63]. Copyright 2005 Optical Society of America.

Figure 10. ROC of the MACE filter obtained with training set 1 of subject 2. Reprinted with permission from [63]. Copyright 2005 Optical Society of America.

Figure 11. ROC of the MACE filter obtained with training set 2 of subject 2. Reprinted with permission from [63]. Copyright 2005 Optical Society of America.

Figure 12. Training images f1,2,3, specified correlation outputs g1,2,3, and exact filters h1,2,3 of ASEF [55].

Figure 13. Localization accuracy of several CFs when a priori approximate eye localization is unknown. © 2009 IEEE. Reprinted, with permission, from Bolme et al., IEEE Conference on Computer Vision and Pattern Recognition (2009), pp. 2105–2112 [55].

Figure 14. Comparison of the second frame PSR values of the MOSSE, ASEF, and UMACE filters. © 2010 IEEE. Reprinted, with permission, from Bolme et al., IEEE Conference on Computer Vision and Pattern Recognition (2010), pp. 2544–2550 [54].

Figure 15. Comparison of the second frame PSR values of the regulated CFs when different regulation parameters are added. © 2010 IEEE. Reprinted, with permission, from Bolme et al., IEEE Conference on Computer Vision and Pattern Recognition (2010), pp. 2544–2550 [54].

Figure 16. Performance of three filter-based trackers on three video sequences for face tracking. © 2010 IEEE. Reprinted, with permission, from Bolme et al., IEEE Conference on Computer Vision and Pattern Recognition (2010), pp. 2544–2550 [54].

Figure 17. Performance comparison when using different convolutional layers in the network. © 2015 IEEE. Reprinted, with permission, from Danelljan et al., Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66 [100].

Figure 18. Performance comparison of several feature representations in the DCF framework. © 2015 IEEE. Reprinted, with permission, from Danelljan et al., Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66 [100].

Figure 19. Success plot showing a comparison of the authors' trackers with state-of-the-art methods on the OTB dataset containing all 50 videos. © 2015 IEEE. Reprinted, with permission, from Danelljan et al., Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66 [100].

Figure 20. Attribute-based comparison of the authors' trackers with some state-of-the-art methods on the OTB-2013 dataset. © 2015 IEEE. Reprinted, with permission, from Danelljan et al., Proceedings of the IEEE International Conference on Computer Vision Workshops (2015), pp. 58–66 [100].

Figure 21. Test images with 25%, 50%, and 75% occlusion. © 2013 IEEE. Reprinted, with permission, from Rodriguez et al., IEEE Trans. Image Process. 22, 631–643 (2013) [62].

Figure 22. Multi-PIE database test images with illumination variations. © 2013 IEEE. Reprinted, with permission, from Rodriguez et al., IEEE Trans. Image Process. 22, 631–643 (2013) [62].

Figure 23. Accuracy loss in Test 7 for face recognition. © 2013 IEEE. Reprinted, with permission, from Rodriguez et al., IEEE Trans. Image Process. 22, 631–643 (2013) [62].

Figure 24. Accuracy loss in Test 8 for face recognition. © 2013 IEEE. Reprinted, with permission, from Rodriguez et al., IEEE Trans. Image Process. 22, 631–643 (2013) [62].

Figure 25. Basic LBP operator. Reprinted with permission from [46]. Copyright 2012 Optical Society of America.

Figure 26. Flowchart of the LBP-UMACE filter. Reprinted with permission from [46]. Copyright 2012 Optical Society of America.

Figure 27. Typical images of one subject. Reprinted with permission from [46]. Copyright 2012 Optical Society of America.

Figure 28. Comparison of recognition rates. Reprinted with permission from [46]. Copyright 2012 Optical Society of America.

Figure 29. Comparison of error rates. Reprinted with permission from [46]. Copyright 2012 Optical Society of America.

Figure 30. Schematics of the algorithm: (a) definition of the independent component learning base, (b) definition of the PCEs matrix, and (c) recognition procedure. Reprinted with permission from [59]. Copyright 2011 Optical Society of America.

Figure 31. Simulation results illustrating the ICA algorithm. Reprinted with permission from [59]. Copyright 2011 Optical Society of America.

Figure 32. Linear correlation (left) and circular correlation (right). © 2015 IEEE. Reprinted, with permission, from Fernandez et al., IEEE Trans. Pattern Anal. Mach. Intell. 37, 1702–1715 (2015) [86].

Figure 33. Conventional MACE (left) and ZAMACE (right). © 2015 IEEE. Reprinted, with permission, from Fernandez et al., IEEE Trans. Pattern Anal. Mach. Intell. 37, 1702–1715 (2015) [86].

Figure 34. Illustration of the denoised separation algorithm. Reprinted with permission from [87]. Copyright 2012 Optical Society of America.

Figure 35. ROC curves for performance comparison. Reprinted with permission from [87]. Copyright 2012 Optical Society of America.

Figure 36. Face reconstruction while the subspace analysis is performed over person-1 (PIE dataset). Reprinted from Face Detection and Recognition: Theory and Practice (Chapman & Hall/CRC Press, 2015) [5].

Figure 37. Detailed process of the face recognition method. Reprinted from Face Detection and Recognition: Theory and Practice (Chapman & Hall/CRC Press, 2015) [5].

Figure 38. Comparison of ROC plots for different training sets. Reprinted from Face Detection and Recognition: Theory and Practice (Chapman & Hall/CRC Press, 2015) [5].

Figure 39. Two-level decision tree learning approach: (a) first level, classification; (b) second level, identification [29].

Figure 40. GPU architecture [29].

Figure 41. GPU architecture [29].

Figure 42. Influence of the image size on the run time for four architectures [29].

Figure 43. All-optical correlation setup [137].

Figure 44. (a) Optical input image and (b) class domains of segmented filtering. Reprinted with permission from [52]. Copyright 1999 Optical Society of America.

Figure 45. PCE versus number of training images for fabrication of the composite filters. Reprinted with permission from [52]. Copyright 1999 Optical Society of America.

Figure 46. (a) Training image and (b) corresponding output; (c) non-training image and (d) corresponding output. Reprinted with permission from [52]. Copyright 1999 Optical Society of America.

Figure 47. Block diagram of the monitoring system [140].

Figure 48. Experimental results for head detection: (a) captured image, (b) H component thresholding, (c) S component thresholding, (d) head-detection results, and (e) result of the proposed method [140].

Figure 49. Example of the reference database used in the study [140].

Figure 50. Experimental platform developed [140].

Figure 51. User interface of the fuzzy logic and optical correlation based face-recognition method, developed using MATLAB software on an Intel Dual Core E5300 CPU (2.6 GHz, 2 GB RAM) [140].

Figure 52. Shooting environment: (a) Blackmagic 4K camera used for shooting the test videos, (b) example of a frame extracted from a video used in the tests [141].

Figure 53. Comparison among the NL-NZ-JTC correlator, color histograms, and dynamic fusion in terms of PCE, tested on a video sequence (360 frames) of a backstroke competition. A high PCE value implies greater confidence in the localization (sharp peak) [141].

Figure 54. Comparison among the NL-NZ-JTC correlator, color histogram, and dynamic fusion in terms of Local-STD, tested on a video sequence (360 frames) of a backstroke competition. A low Local-STD value implies greater confidence in the detection (less noisy plane) [141].

Figure 55. Image comparator based on the JTC [142]. Available from http://www-g.eng.cam.ac.uk/CMMPE/pattern.html.

Figure 56. Configuration of the image comparator studied in [142]. Available from http://www-g.eng.cam.ac.uk/CMMPE/pattern.html.

Figure 57. Correlation output of the image comparator based on the JTC [142]. Available from http://www-g.eng.cam.ac.uk/CMMPE/pattern.html.

Figure 58. Head tracker based on the JTC comparator [142]: the blue spots represent the cross-correlation peaks, and the red spot is the auto-correlation peak. Available from http://www-g.eng.cam.ac.uk/CMMPE/pattern.html.

Figure 59. Experimental results of the head tracker based on the JTC comparator [142]. Available from http://www-g.eng.cam.ac.uk/CMMPE/pattern.html.

Figure 60. Three target images showing different types of mine [137].

Figure 61. Results of the optimized nonlinear fringe-adjusted JTC [143].

Figure 62. VIAPIX acquisition [28].

Figure 63. Examples of images used to obtain panoramic images [28].

Figure 64. Panoramic image obtained using images from [28].

Figure 65. Red color segmentation using HSV [28].

Figure 66. Schematic diagram of the POF used in [28].

Figure 67. Panoramic image and identification results of the POF [28].

Tables (17)

Table 1. Average Verification Rate (at 0% FAR) Achieved by Use of Test Images Compressed to Various Bit Rates with MACE Filters Synthesized with the Two Training Schemes [63]

Table 2. Average Verification Rate (at 0% FAR) Achieved by Use of Test Images Compressed at Various Bit Rates with OTSDFs Synthesized with Different Amounts of Noise Tolerance [63]

Table 3. Average Verification Performance of 65 People (at 0% FAR) with Compressed Logarithm-Transformed Test Images at Various Bit Rates [63]

Table 4. Verification Rates (at 0% FAR) of Test Images with 7 dB Additive White Gaussian Noise Compressed at Various Bit Rates and Tested with the OTSDF at Different Amounts of Noise Tolerance [63]

Table 5. Results Generated by the VOT2015 Benchmark Toolkit [100]

Table 6. Classification Accuracy (%) [62]

Table 7. Computational Complexity (O, see box) and Measured Time [62]

Table 8. Recognition Results of UMACE Filters for the First Five Subjects [46]

Table 9. Recognition Data of LBP-UMACE Filters for the First Five Subjects [46]

Table 10. Comparison of Performance for the Baseline CFs and ZACFs Using the ORL Dataset [86]

Table 11. Comparison of Performance for the Baseline CFs and ZACFs Using the FRGC Dataset [86]

Table 12. PSR Value Comparison of Different Filters [5]

Table 13. Experimental Conditions for the Holographic Optical Disk Correlator [27]

Table 14. Correlation Speed of the Outermost Track of a Holographic Optical Disk [27]

Table 15. Experimental Error Rates of the Cell Phone Face Recognition System [27]

Table 16. Comparison among the NL-NZ-JTC, Color Histogram, and Dynamic Fusion Techniques in Terms of Tracking Percentage, PCE, and Local-STD [141]

Table 17. Robustness Results of the Different JTCs [143]

Equations (16)

\[ C = \mathrm{FFT}^{-1}\{ H^{*} \odot T \}, \]
\[ \mathrm{PSR} = \frac{y(0) - E\{y(\tau)\}}{\sqrt{\operatorname{var}\{y(\tau)\}}}, \]
\[ \mathrm{PCE} = \frac{|y(0)|^{2}}{E_{y}}, \]
\[ \mathrm{ASM} = \frac{1}{L} \sum_{i=1}^{L} \sum_{m,n} \left[ g_{i}(m,n) - \bar{g}(m,n) \right]^{2}, \]
\[ H_{i}^{*}(u,v) = \frac{G_{i}(u,v)}{F_{i}(u,v)}, \]
\[ H_{\mathrm{ASEF}}(u,v) = \frac{1}{N} \sum_{i=1}^{N} H_{i}^{*}(u,v). \]
\[ H^{*} = \frac{\sum_{i} G_{i} \odot F_{i}^{*}}{\sum_{i} F_{i} \odot F_{i}^{*}}, \]
\[ H_{i}^{*} = \frac{A_{i}}{B_{i}}, \quad A_{i} = \eta\, G_{i} \odot F_{i}^{*} + (1 - \eta) A_{i-1}, \quad B_{i} = \eta\, F_{i} \odot F_{i}^{*} + (1 - \eta) B_{i-1}, \]
\[ \varepsilon = \sum_{k=1}^{t} \alpha_{k} \left\| f_{t} \star x_{k} - y_{k} \right\|^{2} + \lambda \left\| f_{t} \right\|^{2}, \]
\[ \varepsilon = \sum_{k=1}^{t} \alpha_{k} \left\| f_{t} \star x_{k} - y_{k} \right\|^{2} + \sum_{l=1}^{d} \left\| w \cdot f_{t}^{\,l} \right\|^{2}, \]
\[ \min_{w,b} \left( h^{T} h + C \sum_{i=1}^{N} \xi_{i}, \;\; \sum_{i=1}^{N} \left\| h \star x_{i} - g_{i} \right\|_{2}^{2} \right) \quad \text{s.t.} \quad t_{i} \left( h^{T} x_{i} + b \right) \ge c_{i} - \xi_{i}, \]
\[ \varepsilon_{i} = \left| \mathrm{PCE}_{1} - \mathrm{PCE}_{i1} \right| + \left| \mathrm{PCE}_{2} - \mathrm{PCE}_{i2} \right| + \cdots + \left| \mathrm{PCE}_{n} - \mathrm{PCE}_{in} \right|, \quad i = 1, 2, \ldots, n. \]
\[ P_{c} = \sum_{i=1}^{M} \beta_{i} Y_{i} + R \quad \text{or} \quad P_{c} = Y \beta + R \;\; \text{(matrix form)}, \]
\[ Y_{\mathrm{peak}} = \left[ \mathrm{thinSVD}(\mathrm{peak}_{1}) \;\cdots\; \mathrm{thinSVD}(\mathrm{peak}_{k}) \right] = \left[ V_{1}^{t} \;\cdots\; V_{k}^{t} \right], \]
\[ Y_{\mathrm{noise}} = \left[ \mathrm{noise}_{1} \;\cdots\; \mathrm{noise}_{n} \right], \quad Y = \left[ Y_{\mathrm{peak}} \; Y_{\mathrm{noise}} \right]. \]
\[ C = \mathrm{FFT}^{-1}\{ H_{p}^{\varphi_{j}} \times H_{r}^{\varphi_{k}\,*} \}, \]
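The correlation output, the PSR and PCE metrics, and the ASEF/MOSSE filter formulas above can be sketched in NumPy. This is an illustrative sketch, not code from the paper: the 11 × 11 sidelobe-exclusion window in `psr`, the regularization constant `eps`, and the function names are assumptions made for the example.

```python
import numpy as np

def correlate(h_conj, image):
    """Apply a frequency-domain filter H* to an image; return the real correlation plane."""
    return np.real(np.fft.ifft2(h_conj * np.fft.fft2(image)))

def mosse_filter(images, target, eps=1e-5):
    """Batch MOSSE: H* = sum_i(G F_i*) / sum_i(F_i F_i*), all products element-wise."""
    G = np.fft.fft2(target)                 # desired output, e.g., a centered Gaussian
    A = np.zeros(G.shape, dtype=complex)
    B = np.zeros(G.shape, dtype=complex)
    for img in images:
        F = np.fft.fft2(img)
        A += G * np.conj(F)
        B += F * np.conj(F)
    return A / (B + eps)                    # eps regularizes near-zero spectral bins

def mosse_update(A_prev, B_prev, image, target, eta=0.125):
    """Running-average update: A_i = eta G F* + (1-eta) A_{i-1}, likewise for B_i."""
    F = np.fft.fft2(image)
    G = np.fft.fft2(target)
    A = eta * G * np.conj(F) + (1.0 - eta) * A_prev
    B = eta * F * np.conj(F) + (1.0 - eta) * B_prev
    return A, B

def psr(plane, exclude=5):
    """Peak-to-sidelobe ratio: (peak - sidelobe mean) / sidelobe standard deviation."""
    r0, c0 = np.unravel_index(np.argmax(plane), plane.shape)
    mask = np.ones(plane.shape, dtype=bool)
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False   # exclude region around peak
    side = plane[mask]
    return (plane[r0, c0] - side.mean()) / (side.std() + 1e-12)

def pce(plane):
    """Peak-to-correlation-energy: |peak|^2 divided by total correlation-plane energy."""
    return np.max(np.abs(plane)) ** 2 / np.sum(np.abs(plane) ** 2)
```

Correlating a MOSSE filter trained on an image with that same image reproduces (approximately) the specified output plane, so a sharp Gaussian target yields a high PSR at the target location; `mosse_update` with `eta = 1` and zero priors reduces to the single-image batch filter.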
