Optica Publishing Group

Real-time concealed-object detection and recognition with passive millimeter wave imaging

Open Access

Abstract

Millimeter wave (MMW) imaging is finding rapid adoption in security applications such as the detection of objects concealed under clothing. A passive MMW imaging system can operate as a stand-off sensor that scans people both indoors and outdoors. However, such systems often suffer from the diffraction limit and low signal levels, so intelligent image processing algorithms are required for automatic detection and recognition of concealed objects. This paper proposes real-time outdoor concealed-object detection and recognition with a radiometric imaging system. The concealed object region is extracted by multi-level segmentation. A novel approach is proposed to measure the similarity between two binary images. Principal component analysis (PCA) regularizes the shape in terms of translation and rotation, and a geometric feature vector composed of shape descriptors achieves scale and orientation invariance and distortion tolerance. The class is decided by the minimum Euclidean distance between normalized feature vectors. Experiments confirm that the proposed methods provide fast and reliable recognition of a concealed object carried by a moving human subject.

©2012 Optical Society of America

1. Introduction

Millimeter wave (MMW) imaging has been widely adopted in security and military applications since it can penetrate fabrics and clothing [1-5]. Moreover, passive MMW imaging can generate interpretable images even in low-visibility conditions such as fog, rain, dust, and smoke.

Unfortunately, the image quality in such applications is often degraded by the diffraction limit and a low signal-to-noise ratio (SNR) [6,7]. Furthermore, the system design conventionally involves a trade-off between aperture size and spatial resolution, and the temperature resolution is directly affected by the integration time and bandwidth of the system.

There has been considerable research on the automatic detection of concealed objects [8-14]. Multi-level expectation-maximization (EM) has been proposed to cluster pixels according to the Gaussian mixture model (GMM) [11,12]. Object identification has been performed by means of the radiation characteristics of the target at close range [13]. Studies of concealed-weapon identification and navigation performance were presented in [14]. Shape-based object matching and recognition can be found in [15].

In this paper, we propose real-time concealed-object detection and recognition with passive MMW imaging. Concealed object regions are extracted by the multi-level segmentation method [11,12]. A novel approach is proposed to measure the similarity between two binary images. Principal component analysis (PCA) provides a convenient way to normalize the object in terms of translation and rotation [16]. After PCA, a feature vector composed of several geometric descriptors is extracted; it is invariant to scale and orientation and tolerant to distortion. A decision rule, based on the Euclidean distance between normalized feature vectors, classifies an unknown object into one of the trained classes. The passive MMW imaging system generates images at a frame rate of 1 Hz, so the computational time available per frame is very limited if the process is to run in real time.

In the experiments, two different objects (a gun and a metal plate) are concealed under the clothing of a moving human subject and are detected and recognized by the proposed method. Our algorithms are implemented in both MATLAB and C++ environments on a standard PC. The performance is evaluated by the detection probability and the computational time.

The paper is organized as follows: In Sections 2 and 3, the passive MMW imaging system and the multilevel segmentation are described briefly. The concealed object recognition is presented in Section 4. In Section 5, the experimental and simulation results are illustrated, and the conclusion follows in Section 6.

2. Passive millimeter wave imaging system

The passive MMW imaging system, operating in the W band (around 94 GHz), consists of a high-density polyethylene lens with a diameter of 50 cm, a reflective mirror, a one-dimensional (1D) receiver array composed of 30 receiver channels, and a mechanical scanner that moves the receiver array. The incoming W-band electromagnetic wave is focused on the feed antennas of the 1D receiver array located at the focal plane of the lens, and images are formed by mechanically scanning the array [9]. The focal length and field of view (FOV) of the lens are 500 mm and 17° × 17°, respectively, so the 30-channel 1D receiver array corresponds to 30 pixels over the 17° range. The 500 mm-diameter lens provides a spatial resolution of 0.57°. The 1D receiver is of the direct-detection total-power type [9,10]. Each receiver channel is composed of a dielectric rod antenna, four MMIC (monolithic microwave integrated circuit) low-noise amplifiers, and a detector. The measured receiver gain is more than 45 dB over the desired frequency band (90-98 GHz). The detector is implemented with an unbiased Schottky barrier diode (SBD); its simulated return loss is less than −15 dB over the same band [10]. Three factors determine the signal sensitivity: the receiver bandwidth, the noise figure of the amplifier, and the integration time. The system achieves 10 mrad spatial resolution and 1.5 K temperature resolution, with an integration time of 10 ms per channel. Figures 1(a) and 1(b) show the passive MMW imaging system and a block diagram of the system, respectively.
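As a quick sanity check on the reported optics, the diffraction-limited angular resolution of a 50 cm aperture at 94 GHz can be estimated from the Rayleigh criterion. The numbers below are our own back-of-the-envelope figures, not taken from the system specification:

```python
import math

c = 3.0e8            # speed of light, m/s
f = 94e9             # center frequency, Hz
D = 0.5              # lens diameter, m

wavelength = c / f                 # carrier wavelength, ~3.19 mm
theta = 1.22 * wavelength / D      # Rayleigh criterion, radians

print(f"wavelength: {wavelength*1e3:.2f} mm")
print(f"diffraction-limited resolution: {theta*1e3:.2f} mrad "
      f"({math.degrees(theta):.2f} deg)")
```

The ideal limit (about 7.8 mrad, 0.45°) is slightly finer than the reported 0.57° (≈10 mrad), which is plausible for a practical system once receiver spacing and sampling are accounted for.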


Fig. 1 (a) passive MMW imaging system, (b) a block diagram of the system.


3. Concealed object segmentation

The multilevel segmentation comprises the global and local segmentation. The global segmentation separates the subject’s body area from the background area. During the local segmentation, only the inside of the body area is processed and the concealed object is segmented from the body area. Figure 2 shows the block diagram of concealed object segmentation.


Fig. 2 Block diagram of concealed object segmentation.


This multilevel segmentation adopts vector quantization (VQ), the EM algorithm for GMM parameter estimation, and a Bayesian decision rule. The VQ algorithm initializes the GMM parameters at each segmentation level; the EM process then estimates the GMM parameters iteratively, and the Bayesian decision rule assigns each pixel to a cluster. An alternative method, which uses only the VQ step during the global segmentation, is also developed for faster processing. The threshold values for EM are P0(G1) = 0.999 and P0(G2) = 0.001; they are chosen heuristically to produce the best results. Segmented areas of fewer than 4 pixels are considered clutter and discarded in the experiments. More detailed procedures of the segmentation process are presented in [10,11].
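To illustrate the clustering idea (this is a minimal sketch, not the authors' implementation), the code below fits a two-component 1-D GMM to pixel intensities with EM, using a crude median split in place of the VQ initialization, and assigns each pixel by the maximum-posterior (Bayesian) rule:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    # Crude stand-in for the VQ initialization: split at the median.
    med = np.median(x)
    mu = np.array([x[x <= med].mean(), x[x > med].mean()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each Gaussian for each pixel.
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    # Bayesian decision rule: assign each pixel to its most probable cluster.
    return r.argmax(axis=1), mu

rng = np.random.default_rng(1)
# Toy radiometric samples: cool background around 1.0, warm body around 5.0.
x = np.concatenate([rng.normal(1.0, 0.3, 200), rng.normal(5.0, 0.3, 200)])
labels, mu = em_gmm_1d(x)
```

The multi-level scheme in the paper runs this kind of clustering twice: once globally (body vs. background) and once locally inside the body region.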

4. Concealed object recognition

The recognition of the concealed object is composed of preprocessing for magnification, principal component analysis (PCA), size normalization, feature vector extraction, and a decision rule, as illustrated in Fig. 3. The magnification process reduces errors that occur when geometric shape features are extracted from the low-resolution binary image. In this paper, we enlarge the binary image five times before applying PCA.
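The five-fold enlargement can be realized as a simple nearest-neighbor upscaling of the binary mask; the `magnify` helper below is a hypothetical stand-in, since the paper does not state which interpolation it uses:

```python
import numpy as np

def magnify(mask, factor=5):
    """Nearest-neighbor upscaling: replicate every pixel into a
    factor x factor block via the Kronecker product."""
    return np.kron(mask, np.ones((factor, factor), dtype=mask.dtype))

obj = np.array([[0, 1],
                [1, 1]], dtype=np.uint8)
big = magnify(obj)   # each pixel becomes a 5x5 block, shape (10, 10)
```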


Fig. 3 Block diagram of the concealed object recognition.


4.1 Principal component analysis (PCA) and size normalization

PCA and size normalization provide a convenient way to normalize the object region in terms of size, translation, and rotation. The pixels in the object area can be treated as two-dimensional vectors $\mathbf{x}_j = [x_j, y_j]^t$, where $x_j$ and $y_j$ are the coordinates of a pixel along the x- and y-axes, respectively. All the pixels in the region constitute a 2-D vector population from which the covariance matrix $\Sigma_{xx}$ and mean vector $\mathbf{m}_x$ are computed:

$$\mathbf{m}_x = \frac{1}{n_o}\sum_{j=1}^{n_o}\mathbf{x}_j,$$
$$\Sigma_{xx} = \frac{1}{n_o}\sum_{j=1}^{n_o}(\mathbf{x}_j-\mathbf{m}_x)(\mathbf{x}_j-\mathbf{m}_x)^t,$$
where $n_o$ is the number of pixels of the object and the superscript $t$ denotes the transpose. The principal components transform is determined by the eigenvalue matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2)$, with $\lambda_1 > \lambda_2 > 0$, and the eigenvector matrix $E = [\boldsymbol{\mu}_1, \boldsymbol{\mu}_2]$ of the covariance matrix:

$$\Sigma_{xx}E = E\Lambda,$$
$$\mathbf{y}_j = E^t(\mathbf{x}_j-\mathbf{m}_x), \qquad j = 1, \ldots, n_o.$$

Each $\mathbf{y}_j$ is a translated and rotated version of $\mathbf{x}_j$, with the eigenvectors aligned to axes centered at the origin. This transform provides translation and rotation invariance. The larger eigenvalue ($\lambda_1$) is then normalized to $10^4$ by a resizing process, which provides scale invariance. Figure 4 shows the PCA and resizing processes applied to the binary object.
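The transform above maps directly to code. The sketch below is our own (with the eigenvalue target 10^4 taken from the text): it computes the mean, covariance, eigen-decomposition, and rescaling for a binary mask:

```python
import numpy as np

def pca_normalize(mask, target_lambda1=1.0e4):
    # Object pixels as 2-D coordinate vectors (a 2 x n_o array).
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys]).astype(float)
    m = pts.mean(axis=1, keepdims=True)           # mean vector m_x
    cov = (pts - m) @ (pts - m).T / pts.shape[1]  # covariance Sigma_xx
    lam, E = np.linalg.eigh(cov)                  # eigen-decomposition
    order = np.argsort(lam)[::-1]                 # sort eigenvalues descending
    lam, E = lam[order], E[:, order]
    y = E.T @ (pts - m)                           # translate and rotate
    # Rescale so the largest eigenvalue equals target_lambda1.
    return y * np.sqrt(target_lambda1 / lam[0]), lam

mask = np.zeros((20, 40), dtype=int)
mask[8:12, 5:35] = 1                              # toy elongated object
y, lam = pca_normalize(mask)
```

Because scaling coordinates by a factor s scales eigenvalues by s^2, multiplying by sqrt(target/lambda_1) makes the leading eigenvalue of the normalized shape exactly the target value.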


Fig. 4 (a) target image, (b) eigenvectors of the target, (c) transformed target by PCA, (d) size normalization.


4.2 Feature vector extraction

The geometric feature vector is composed of several shape descriptors: the object size, the major- and minor-axis lengths, the major and minor principal components, and the size of each quadrant. Figure 5 shows the shape descriptors. The perimeter (T) of the concealed object is marked with '□' in Fig. 5(a). The area (A) is the total number of pixels, marked by '×' in Fig. 5(a). The major (w) and minor (h) axes are the longer and shorter sides of the basic rectangle, respectively [16]. The proposed feature vector, composed of these descriptors, is listed in Table 1; it has six components (f1-f6). The feature components are based on shape descriptors and PCA in [16,17]. Some of the components are designed specifically to classify objects captured by a passive MMW imaging system [18].
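Some of these descriptors (area, perimeter, and the sides of the bounding rectangle) can be computed directly from the binary mask. The sketch below uses plausible definitions of our own; the exact formulas for f1-f6 are given only in the paper's Table 1:

```python
import numpy as np

def shape_features(mask):
    """Basic geometric descriptors of a binary object mask
    (illustrative definitions, not the paper's exact f1-f6)."""
    area = int(mask.sum())
    # Perimeter: object pixels with at least one 4-connected
    # background neighbor (boundary pixels).
    padded = np.pad(mask, 1)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
    perimeter = int(np.logical_and(mask == 1, neigh < 4).sum())
    # Sides of the axis-aligned bounding rectangle.
    ys, xs = np.nonzero(mask)
    w = int(xs.max() - xs.min() + 1)
    h = int(ys.max() - ys.min() + 1)
    if h > w:
        w, h = h, w          # w = major side, h = minor side
    return {"area": area, "perimeter": perimeter, "major": w, "minor": h}

mask = np.zeros((10, 10), dtype=int)
mask[3:7, 2:8] = 1           # 4 x 6 toy rectangle
f = shape_features(mask)
```

The paper additionally computes the principal-component lengths (after the PCA step of Section 4.1) and the pixel counts of the four quadrants around the centroid.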


Fig. 5 Geometric feature descriptors, (a) size number of pixels, lengths of major and minor axis, and the perimeter (number of boundary pixels), (b) size of each quadrant.



Table 1. Feature Vector Components

4.3 Decision rule

Our decision rule is based on the Euclidean distance between normalized feature vectors. An unknown object in the image is classified into the class $\hat{j}$ that minimizes the Euclidean distance:

$$\hat{j} = \underset{j=1,\ldots,n_c}{\arg\min}\; \sum_{k=1}^{c}\left(\frac{o_{\mathrm{test}}(k)}{x_{m|j}(k)} - 1\right)^2,$$
where $o_{\mathrm{test}}(k)$ is the $k$-th component of the feature vector of the target image, $x_{m|j}(k)$ is the $k$-th component of the conditional mean vector of the $j$-th class obtained from the training images, $c$ is the number of components of the feature vector, and $n_c$ is the number of classes. We define a probability of correct decision ($P_D$) to evaluate the recognition results:

$$P_D = \frac{\text{Number of decisions for class } j}{\text{Number of frames capturing class } j}$$
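A minimal sketch of this classifier and the $P_D$ statistic follows; the feature values are invented toy numbers, not those of the paper's Table 2:

```python
import numpy as np

def classify(o_test, class_means):
    """Nearest class under the normalized Euclidean distance: each
    feature component is divided by the class-conditional mean, so a
    perfect match gives a ratio of 1 in every component."""
    d = [np.sum((o_test / m - 1.0) ** 2) for m in class_means]
    return int(np.argmin(d))

# Hypothetical 3-component feature vectors for two classes.
mean_gun   = np.array([4.0, 2.0, 1.5])
mean_plate = np.array([3.0, 3.0, 1.0])
pred = classify(np.array([3.9, 2.1, 1.4]), [mean_gun, mean_plate])

# Probability of correct decision over a sequence of frames,
# here for a class-0 (gun) target classified per frame.
decisions = [0, 0, 0, 1, 0]
PD = decisions.count(0) / len(decisions)
```

Dividing by the class mean before measuring distance keeps large-valued components (such as area) from dominating small-valued ones (such as axis ratios).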

5. Experimental and simulation results

Passive MMW images of a human subject hiding a gun (Colt 45 - M1911A1) and a metal plate are captured for 10 seconds at a 1 Hz frame rate. The subject is imaged at 5 m outdoors. Figures 6(a) and 6(b) are pictures of the gun and metal plate, respectively. Figures 7(a) and 7(b) are the binary training images corresponding to Figs. 6(a) and 6(b), respectively. Figure 7(c) shows the training images enlarged three, five, and seven times from the original.


Fig. 6 Training target of real images, (a) gun (class 1, C1), (b) metal plate (class 2, C2).



Fig. 7 Binary image of training target, (a) gun, (b) metal plate, (c) enlarged image by 3, 5, and 7 times of Fig. 7(a) and 7(b), respectively.


Figures 8(a) and 8(d) are the passive MMW images of the human subject hiding a gun and a metal plate, respectively. Figures 8(b) and 8(e) are the concealed-object detection results obtained by the standard multi-level segmentation procedure. Figures 8(c) and 8(f) show the feature extraction from the binary images representing the concealed objects. The image quality depends on the SNR and temperature sensitivity, whereas the spatial resolution is the decisive factor for the size of objects that can be recognized.


Fig. 8 Feature extraction of the target objects, gun of, (a) passive MMW images, (b) multi-level segmentation, (c) feature extraction, metal plate of, (d) passive MMW images, (e) multi-level segmentation, (f) feature extraction.


Table 2 shows the feature vectors: the means of the training targets and the unknown target objects. Table 3 gives the recognition results for the concealed objects. All frames are correctly classified except for the 8th and 9th frames of the gun; in those frames the segmented shape is more similar to the metal plate. Figure 9 shows the total computational time of the concealed-object recognition process, implemented in both MATLAB and C++ environments on a standard computer.


Table 2. Feature Vectors of the Target Object


Table 3. Probabilities of Correct Decision


Fig. 9 Computational time of the concealed object recognition.


6. Conclusion

In this paper, we have addressed the detection and recognition of concealed objects. The imaging system generates one frame per second while scanning a human subject in an outdoor open space. Segmented binary images are analyzed with a geometric feature vector extracted from the target shape after the PCA transform. Further investigation of multiple objects captured from a flow of people remains for future study.

Acknowledgments

This work was supported by the Samsung Thales and Mid-career Researcher Program through NRF grant funded by the MEST (No. 2010-0027695).

References and links

1. L. Yujiri, M. Shoucri, and P. Moffa, “Passive millimeter-wave imaging,” IEEE Microw. Mag. 4(3), 39–50 (2003). [CrossRef]  

2. R. Appleby and R. N. Anderton, “Millimeter-wave and submillimeter-wave imaging for security and surveillance,” Proc. IEEE 95(8), 1683–1690 (2007). [CrossRef]  

3. H.-M. Chen, S. Lee, R. M. Rao, M.-A. Slamani, and P. K. Varshney, “Imaging for concealed weapon detection: a tutorial overview of development in imaging sensors and processing,” IEEE Signal Process. Mag. 22(2), 52–61 (2005). [CrossRef]  

4. National Research Council, Assessment of millimeter-wave and terahertz technology for detection and identification of concealed explosives and weapons (National Academies Press, Washington, D.C., 2007).

5. A. Denisov, A. Gorishnyak, S. Kuzmin, V. Miklashevich, V. Obolonskv, V. Radzikhovsky, B. Shevchuk, B. Yshenko, V. Uliz’ko, and J. Son, “Some experiments concerning resolution of 32 sensors passive 8 mm wave imaging system,” in Proceeding of the 20th International Symposium on Space Terahertz Technology, ISSTT 2009 (National Radio Astronomy Observatory, 2009), 227–229.

6. M. R. Fetterman, J. Grata, G. Jubic, W. L. Kiser Jr, and A. Visnansky, “Simulation, acquisition and analysis of passive millimeter-wave images in remote sensing applications,” Opt. Express 16(25), 20503–20515 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-25-20503. [CrossRef]   [PubMed]  

7. K. B. Cooper, R. J. Dengler, N. Llombart, T. Bryllert, G. Chattopadhyay, I. Mehdi, and P. H. Siegel, “An approach for sub-second imaging of concealed objects using terahertz (THz) radar,” J. Infrared, Millimeter Terahertz Waves 30, 1297–1307 (2009).

8. H. Sato, K. Sawaya, K. Mizuno, J. Uemura, M. Takeda, J. Takahashi, K. Yamada, K. Morichika, T. Hasegawa, H. Hirai, H. Niikura, T. Matsuzaki, S. Kato, and J. Nakada, “Passive millimeter-wave imaging for security and safety applications,” Proc. SPIE 7672, 76720V (2010).

9. M.-K. Jung, Y.-S. Chang, S.-H. Kim, W.-G. Kim, and Y.-H. Kim, “Development of passive millimeter wave imaging system at W-band,” in Proceedings of the 34th International Conference on Infrared, Millimeter, and Terahertz Waves (Busan, Korea, 2009), pp. 1–2.

10. H. Lee, W. Kim, J. Seong, D. Kim, K. Na, M. Jung, Y. Chang, S. Kim, and Y. Kim, “Test model of millimeter-wave imaging radiometer equipment (MIRAE),” presented at the International Symposium of Remote Sensing, Busan, South Korea, Oct. 23–Nov. 2, 2007.

11. S. Yeom, D.-S. Lee, J. Son, M. K. Jun, Y. Jang, S.-W. Jung, and S.-J. Lee, “Real-time outdoor concealed-object detection with passive millimeter wave imaging,” Opt. Express 19(3), 2530–2536 (2011). [CrossRef]   [PubMed]  

12. D.-S. Lee, S. Yeom, M.-K. Lee, S.-W. Jung, and Y. Chang, “Real-time computational processing and implementation for concealed object detection,” Opt. Eng. Computational Imaging Special Section, (to be published).

13. L. C. Li, J. Y. Yang, G. L. Cui, Z. M. Jiang, and X. Zheng, “Method of passive MMW image detection and identification for close target,” J. Infrared, Millimeter Terahertz Waves 32(1), 102–115 (2011). [CrossRef]  

14. E. L. Jacobs and O. Furxhi, “Target identification and navigation performance modeling of a passive millimeter wave imager,” Appl. Opt. 49(19), E94–E105 (2010). [CrossRef]   [PubMed]  

15. S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Trans. Pattern Anal. Mach. Intell. 24(4), 509–522 (2002). [CrossRef]  

16. R. C. Gonzalez, Digital Image Processing 2/E (Prentice-Hall Inc., 2003).

17. I. Pitas, Digital image processing algorithms and applications (John Wiley & Sons, Inc., 2000).

18. S. Yeom and D. Lee, “Radiometer automatic detection and image fusion algorithm development,” Samsung Thales Final Report, 2011.
