## Abstract

When imaging through layered media such as walls, the contents and thickness of the wall layers are generally not known *a priori*. Furthermore, compensating for their effects can be computationally intensive, as this generally requires modelling the transmission and reflection of complex fields through layered media. We propose a blind deconvolution method that does not require knowledge of the wall layers by directly estimating a circularly symmetric Green’s function that models the transmission through the wall layers, simultaneously addressing both problems. We experimentally demonstrate this technique by measuring the reflection through a multilayered structure of building materials at the K-band microwave frequencies, and using the blind deconvolution method to find the image of a reflective object behind the layers.

© 2017 Optical Society of America

## 1. INTRODUCTION

When imaging an object through an inhomogeneous medium, the medium often distorts the field passing through it, reducing the fidelity of the resulting image. In some cases where the structure of the medium is known, this structure can be included in the model of the imaging system so that its effects may be accounted for. However, in many practical situations, both the object to be imaged and the structure of the intervening medium are unknown. Unfortunately, the medium then adds confounding variables so that the scattered field sampled by the sensor may no longer be sufficient to determine both the medium and object structures. Layered planar media, however, are relatively simple, because these may be described by a one-dimensional function of the medium composition with depth, and because planar structures are symmetric about the axis normal to the surface planes. Therefore, the types of distortions that may be produced by planar media are highly restricted as compared to three-dimensional media. Artificial structures such as building walls are often built as layered structures, so that accounting for the propagation through these layers is likely to remove much of the distortion produced by the wall. We propose a blind deconvolution [1] method that simultaneously infers the distortion to the fields produced by a layered structure and the image of an object behind the structure. This method is demonstrated for through-wall imaging (TWI) by reconstructing the image of reflective metal targets behind several types of walls from measured backscattered microwave radiation.

Layered structures, despite their simplicity, can produce a number of complicated effects. These may be modeled elegantly with scattering matrices [2,3], which simplify the propagation of electromagnetic radiation through layers into a succession of reflection and transmission events at each material boundary and the propagation of the radiation between boundaries. Complex effects can occur in layered structures, including interference effects and Newton’s rings, total internal reflection and evanescent fields, and surface waves. Despite these complex effects, the response of a layered medium to plane wave radiation may be summarized as a function of the temporal frequency, the incident angle to the surface, and whether the radiation is polarized in the plane of incidence or perpendicular to it. As a layered medium is completely rotationally symmetric around the axis normal to the surface layers, so is its response to incoming radiation. This strongly constrains the types of distortions that can occur when propagating through layered media.
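
The succession of reflection, transmission, and propagation events described above can be sketched numerically with 2×2 transfer matrices, which chain the Fresnel reflection at each interface with the propagation phase between interfaces. The following is a minimal sketch for the perpendicular (s) polarization only; the function name and conventions are illustrative, not taken from [2,3].

```python
import numpy as np

def layer_reflection(n_list, d_list, wavelength, theta0=0.0):
    """Reflection coefficient of a dielectric stack for perpendicular (s)
    polarization, computed with 2x2 transfer matrices.

    n_list : refractive indices [n0, ..., nN]; the first and last media
             are semi-infinite.
    d_list : thicknesses of the interior layers, same units as wavelength.
    """
    k0 = 2.0 * np.pi / wavelength
    kx = k0 * n_list[0] * np.sin(theta0)   # transverse wavenumber, conserved
    kz = [np.sqrt((k0 * n) ** 2 - kx ** 2 + 0j) for n in n_list]
    M = np.eye(2, dtype=complex)
    for j in range(len(n_list) - 1):
        # Fresnel reflection at the interface between layers j and j+1.
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
        M = M @ (np.array([[1, r], [r, 1]], dtype=complex) / (1 + r))
        if j < len(d_list):
            # Propagation phase across interior layer j+1.
            phi = kz[j + 1] * d_list[j]
            M = M @ np.diag([np.exp(-1j * phi), np.exp(1j * phi)])
    return M[1, 0] / M[0, 0]
```

A single interface reproduces the Fresnel coefficient, and a quarter-wave matching layer with ${n}_{1}=\sqrt{{n}_{0}{n}_{2}}$ cancels the reflection entirely, an instance of the interference effects noted above.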

To preserve circular symmetry when irradiated by an antenna, one might consider the field that an electrically small radiation source placed on one side of the layered medium would produce on the other side; in other words, the Green’s function of a layered medium, as shown in Fig. 1. The Green’s function is translationally invariant along the planes of the medium, and therefore propagation through the layers is calculated as a convolution. The Green’s function is rotationally symmetric around the medium axis as well. However, only the perpendicular or parallel polarizations are rotationally symmetric, and so a small source that unequally excites the polarizations does not produce a rotationally symmetric field. For a point source, the only three rotationally symmetric polarization states with respect to the layered structure are the linear polarization pointing along the symmetry axis and the left and right circular polarizations in the plane perpendicular to the symmetry axis. Unfortunately, all three of these polarization states have disadvantages when measuring the scattered signal from a layered structure in a reflection geometry. The linearly polarized source along the symmetry axis does not radiate at all in the direction along the symmetry axis. The scattered field of the two circular polarizations reverses upon reflection, so that right-circular polarization becomes left-circular polarization and vice versa, and therefore an antenna radiating one polarization rejects the reflected orthogonal polarization.

A dipole linearly polarized along the layer planes does not produce a rotationally symmetric Green’s function, unlike an ideal isotropic point source, as it projects unequally onto parallel and perpendicular incident polarizations and does not radiate equally into all angles around the symmetry axis. However, as the transmission of the field near the axis is weakly dependent on the polarization, the Green’s function is often nearly rotationally symmetric. One may regard the true asymmetric Green’s function as the convolution of a “best fit” rotationally symmetric kernel and an asymmetric error kernel. Typically, most of the distortion is due to the rotationally symmetric kernel, and therefore compensation for this rotationally symmetric component removes most of the distortion. In particular, while the transmission and reflection coefficients at each interface depend on the incident polarization state, the propagating phase between the interfaces does not, and often much of the distortion depends on the interference effects between interfaces that are largely determined by the propagation phase.

Given that a suitable rotationally symmetric approximation to the layered-medium Green’s function exists, the object reflectivity may be estimated from the backscattered signal at a point dipole antenna by deconvolution. Blind deconvolution methods attempt to infer the convolution kernel directly from measured data. If the convolution kernel, or Green’s function, of the layered medium is inferred from the measurements, this obviates the need to know the medium composition. Without any constraints, the reconstruction of the object could be that produced by deconvolution by the Green’s function of any plausible layered medium. Many objects of interest are not translationally or rotationally invariant, and so the object can be assumed to be the minimally symmetric solution, with all of the symmetric broadening or blurring attributed to the layered medium. A sparsity constraint is placed on the object to achieve the minimally broadened solution. The algorithm selects a circularly symmetric Green’s function so that the image of the object is compact or confined, by minimizing the ${\ell}_{1}$-norm of the image. However, objects with repetitive annular structure may have their structure incorrectly attributed to the Green’s function. For the kinds of objects expected to be inside walls, such as metal pipes or conduit boxes, these situations are not likely to occur.

There are numerous approaches to imaging through walls using microwave radiation. Blind deconvolution has been used [4,5] to find the impulse response of multipath propagation through a wall in the time domain to compensate for multiple reflections in the wall. The effects of time delay variations between antenna channels are reduced [6] by adjusting delays to maximize constructive interference between the signals from different channels. Another method [7] uses ray tracing to identify the locations of targets from multiple sensor data. The effects of ambiguities in wall thickness and dielectric constant to the TWI problem have been explored [8]. A nonlinear optimization process used with time-reversal imaging [9] can find the permittivity and thickness of a wall, optimizing an entropy criterion. An ${\ell}_{1}$-based compressed sensing optimization can also be used to identify targets [10]. The performance of TWI using random or chaotic waveforms has been explored [11]. Another method identifies moving targets using ultrawideband noise radar [12]. The effects of different antennas on wideband pulses were explored with an X-band TWI radar system [13]. A method of using two different standoff distances from a wall to estimate ambiguities in the wall parameters is proposed [14]. Another method adapted from geophysical processing uses several different transmitter and receiver separations and common midpoint processing to find the thickness and dielectric constant of the wall layer [15]. A video-rate TWI system was developed for imaging targets on an urban battlefield [16].

An inverse scattering method [17] is analyzed using the weak scattering or first Born approximation with the Green’s function given by a single layered medium. The locations of reflections are ranged, compensating for the wall layer [18] to find the shape of the target. Delay-and-sum beamforming was used to form 3D images of targets behind a brick wall in simulation [19–21]. The CLEAN and RELAX algorithms were used to identify individual targets when the beamforming array had high sidelobes [22,23]. Using a differential signal that subtracts successive measurements removes the effects of uniform areas of a target, including a wall, allowing edge-like targets behind it to be enhanced [24]. Models of a target can be constructed using primitive elements such as plates, dihedrals, and trihedrals, to form a geometric estimate of a target shape [25]. A method using the uniform theory of diffraction to compensate for multiple scattering effects in conjunction with using scattering matrices to compensate for the layered medium of the wall can produce high-quality reconstructions [26].

We note that unlike most TWI radars, the radar presented here does not use temporal ranging to separate the scattering of an object and a wall, and instead relies on a short wavelength to obtain resolution. To separate objects 10 mm apart in range, a bandwidth of approximately 10 GHz is needed; a radar centered at 20 GHz, for example, would span 15 to 25 GHz. Radar transceivers with this bandwidth can be quite costly and difficult to manufacture, and they utilize more spectrum than regulatory agencies are likely to permit, as most allocations have a fractional bandwidth of less than 10%. Raising the center frequency to reduce the fractional bandwidth would, in turn, reduce the penetration of the radiation. A longer wavelength penetrates better but produces correspondingly lower-resolution images, so the choice of frequency is a compromise between resolution and penetration. This method is specifically designed for the narrow-bandwidth use that is a more likely scenario for practical commercial TWI instruments that may be approved by telecommunications regulatory agencies, for example, to operate in the industrial, scientific, and medical (ISM) band between 24 and 24.25 GHz. While the limitation of using a narrow bandwidth makes TWI more challenging, a narrowband solution is more likely to have a wider impact as a practical instrument.

The field produced in a layered medium due to a point source outside the medium is summarized in Section 2, showing that a nearly circularly symmetric convolution kernel is produced by the layered medium. The blind deconvolution method is described in Section 3. The algorithm is applied to both simulated and measured data from a wall phantom in Section 4. The results are summarized in Section 5.

## 2. GREEN’S FUNCTION IN A LAYERED MEDIUM

The excitation of a layered medium by a point source is derived in detail in [2], and it is briefly summarized here. A point source is polarized in the plane along the wall with a polarization state ${\epsilon}_{x}\widehat{\mathbf{x}}+{\epsilon}_{y}\widehat{\mathbf{y}}$. The field ${\mathbf{E}}_{\mathrm{inc}}$ incident on the wall can be represented using the Weyl plane wave expansion of a point source,

Using Eq. (2) with the appropriate transmission coefficients, the radiation produced by the point source transmitted through the dielectric stack may be calculated. However, for a backscatter imaging configuration, both the transmitted signal and the received signal use this radiation pattern. If the susceptibility of the target is given by a function $\chi (x,y)$, the measured backscattered power at the antenna is given by

## 3. BLIND DECONVOLUTION OF THE PSF

The measured backscattered power is related to the object susceptibility through a convolution with a nearly circularly symmetric PSF. Because this PSF is not known *a priori*, the blind deconvolution method presented here attempts to infer the PSF from the data. We note that in the Fourier domain, the convolution of Eq. (4) can be written as $\tilde{W}({k}_{x},{k}_{y})=\tilde{P}({k}_{x},{k}_{y})\tilde{\chi}({k}_{x},{k}_{y})$ with $\tilde{W}({k}_{x},{k}_{y})$, $\tilde{P}({k}_{x},{k}_{y})$, and $\tilde{\chi}({k}_{x},{k}_{y})$ being the Fourier transforms of $W(x,y)$, $P(x,y)$, and $\chi (x,y)$, respectively. If the PSF were known, the susceptibility could be estimated using a Wiener filter. Likewise, if the object $\chi (x,y)$ were known, the PSF could be estimated from the object using a Wiener filter as well. In the absence of any constraints, any two functions $P(x,y)$ and $\chi (x,y)$ that satisfy the relationship $\tilde{W}({k}_{x},{k}_{y})=\tilde{P}({k}_{x},{k}_{y})\tilde{\chi}({k}_{x},{k}_{y})$ are a solution. However, if $P(x,y)$ is circularly symmetric, then $\tilde{P}({k}_{x},{k}_{y})$ is as well, so that plausible solutions for $\chi (x,y)$ correspond to those that, after deconvolving the data $W(x,y)$ by $\chi (x,y)$, produce a circularly symmetric $P(x,y)$. Note that other common constraints on the PSF, such as enforcing positivity, do not apply because the PSF is complex-valued. There is still a space of circularly symmetric functions $P(x,y)$, each described by a one-dimensional radial profile, that all produce plausible $\chi (x,y)$, so additional constraints are needed.

An additional ambiguity arises because the magnitudes satisfy $|\tilde{W}({k}_{x},{k}_{y})|=|\tilde{P}({k}_{x},{k}_{y})|\,|\tilde{\chi}({k}_{x},{k}_{y})|$: if the data is bandlimited, it is not discernible whether the object is bandlimited by convolution with the PSF or whether the object itself is bandlimited. The trivial example of this is to assume the PSF is a delta function, and therefore $\chi (x,y)\propto W(x,y)$. As many objects of interest are likely to have sharp edges or relatively few features, this may be used as a constraint. To enforce objects that have sharp edges or few features, a sparsity constraint [27,28] is applied to $\chi (x,y)$ [29]. A typical approach to constraining the sparsity is to minimize the ${\ell}_{1}$ norm of the function,
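
A common way to reduce the ${\ell}_{1}$ norm of a complex-valued image is soft-thresholding, the proximal operator of the ${\ell}_{1}$ norm, which shrinks each pixel’s magnitude while preserving its phase. The sketch below is illustrative and may differ in normalization from the shrinkage of Eq. (6).

```python
import numpy as np

def soft_threshold(chi, alpha):
    """Complex soft-thresholding: shrink each magnitude by alpha (down to
    zero), keeping the phase unchanged."""
    mag = np.abs(chi)
    # Pixels with magnitude below alpha become exactly zero; the np.where
    # guard avoids dividing by zero at already-zero pixels.
    return chi * np.maximum(mag - alpha, 0.0) / np.where(mag > 0, mag, 1.0)
```

Pixels weaker than the threshold are zeroed, which is what drives the object estimate toward a sparse, compact solution.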

A further trivial constraint is that one may multiply $\chi (x,y)$ by a constant and divide $P(x,y)$ by the same constant and obtain another solution. To resolve this ambiguity, the ${\ell}_{2}$ norm of the PSF is normalized to one,

There are then three constraints on this blind deconvolution problem: the PSF is circularly symmetric, the object is sparse, and the PSF has unit ${\ell}_{2}$-norm. These can be combined in an iterative algorithm for blind deconvolution that performs three minimization steps. The flowchart for this algorithm is shown in Fig. 2. The algorithm starts with the original image data as given by $W(x,y)$ and an initial estimate of the object ${\chi}^{(0)}(x,y)=W(x,y)$. The cycle of the algorithm starts at the center upper block of Fig. 2, with each block labeled with its step number, and the steps are as follows:

- 1. The current image estimate ${\chi}^{(n)}(x,y)$ is soft-thresholded to form a new image ${\chi}_{T}^{(n)}(x,y)$ with reduced ${\ell}_{1}$-norm as given in Eq. (6).
- 2. The Fourier transform ${\tilde{\chi}}_{T}^{(n)}({k}_{x},{k}_{y})$ of the thresholded image is computed.
- 3. The Fourier transform of the PSF is estimated using the Wiener filter, which minimizes the squared error of the PSF with an ${\ell}_{2}$ regularization,$${\tilde{P}}^{(n)}({k}_{x},{k}_{y})=\frac{\tilde{W}({k}_{x},{k}_{y}){\tilde{\chi}}_{T}^{(n)}{({k}_{x},{k}_{y})}^{*}}{{|{\tilde{\chi}}_{T}^{(n)}({k}_{x},{k}_{y})|}^{2}+\lambda {\Vert {\tilde{\chi}}_{T}^{(n)}\Vert}_{2}^{2}},\qquad {\Vert {\tilde{\chi}}_{T}^{(n)}\Vert}_{2}^{2}={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{|{\tilde{\chi}}_{T}^{(n)}({k}_{x},{k}_{y})|}^{2}\,\mathrm{d}{k}_{x}\,\mathrm{d}{k}_{y}.$$The regularization constant $\lambda $ is scaled by the ${\ell}_{2}$-norm of ${\tilde{\chi}}_{T}^{(n)}({k}_{x},{k}_{y})$ so that the PSF does not contain frequencies for which the object does not have sufficient magnitude.
- 4. The ${\ell}_{2}$-norm of the PSF is normalized to one to enforce the condition of Eq. (7),
- 5. To enforce that the PSF is circularly symmetric, it is averaged in the angular direction. Two angularly averaged functions are formed, with the circularly symmetric PSF being a combination of the two,$${P}_{M}^{(n)}(r)={\left[\frac{1}{2\pi}{\int}_{-\pi}^{\pi}{|{P}^{\prime (n)}(r\,\mathrm{cos}\,\theta ,r\,\mathrm{sin}\,\theta )|}^{1/2}\mathrm{d}\theta \right]}^{2},\qquad {P}_{\varphi}^{(n)}(r)=\frac{1}{2\pi}{\int}_{-\pi}^{\pi}{P}^{\prime (n)}(r\,\mathrm{cos}\,\theta ,r\,\mathrm{sin}\,\theta )\mathrm{d}\theta ,\qquad {P}_{A}^{(n)}(r)={P}_{M}^{(n)}(r)\frac{{P}_{\varphi}^{(n)}(r)}{|{P}_{\varphi}^{(n)}(r)|}.$$${P}_{M}^{(n)}(r)$ is the average of the magnitude of the PSF in the angular direction. The average is weighted so that smaller values of magnitude influence the average more by taking the square root of the magnitude. ${P}_{\varphi}^{(n)}(r)$ is the simple average that is used to infer the phase. These two are combined together to find the angularly averaged PSF ${P}_{A}^{(n)}(r)$. If ${P}_{\varphi}^{(n)}(r)$ is used to infer both the magnitude and phase, the magnitude is often underestimated due to cancellation of phases.
- 6. The Wiener filter is used to infer a new estimate of the object from the PSF, which minimizes the squared error of the object with ${\ell}_{2}$ regularization,
- 7. The inverse Fourier transform of ${\tilde{\chi}}^{(n+1)}({k}_{x},{k}_{y})$ is taken to obtain the object estimate ${\chi}^{(n+1)}(x,y)$,
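
The seven-step cycle above can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: the discrete angular average on a pixel grid, the regularization of the object-update step, and all function and variable names are assumptions.

```python
import numpy as np

def soft_threshold(chi, alpha):
    # Step 1: shrink each pixel's magnitude by alpha, preserving phase.
    mag = np.abs(chi)
    return chi * np.maximum(mag - alpha, 0.0) / np.where(mag > 0, mag, 1.0)

def angular_average(P):
    # Step 5: angular average of a centered PSF on the sampling grid.
    # The magnitude uses the |P|^(1/2) weighting of the text; the phase
    # comes from the simple complex average.
    ny, nx = P.shape
    y, x = np.indices(P.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)   # radial bin per pixel
    counts = np.maximum(np.bincount(r.ravel()), 1)
    mag = (np.bincount(r.ravel(), np.sqrt(np.abs(P)).ravel()) / counts) ** 2
    avg = (np.bincount(r.ravel(), P.real.ravel())
           + 1j * np.bincount(r.ravel(), P.imag.ravel())) / counts
    phase = avg / np.where(np.abs(avg) > 0, np.abs(avg), 1.0)
    return (mag * phase)[r]          # broadcast the radial profile to 2D

def blind_deconvolve(W, alpha, lams):
    # lams: decreasing schedule of regularization constants, one per iteration.
    Wf = np.fft.fft2(W)
    chi = W.copy()
    for lam in lams:
        chi_t = soft_threshold(chi, alpha)                    # step 1
        chif = np.fft.fft2(chi_t)                             # step 2
        Pf = Wf * np.conj(chif) / (np.abs(chif) ** 2
                                   + lam * np.sum(np.abs(chif) ** 2))  # step 3
        P = np.fft.fftshift(np.fft.ifft2(Pf))
        P /= np.linalg.norm(P)                                # step 4
        P = angular_average(P)                                # step 5
        P /= np.linalg.norm(P)                                # keep unit l2-norm
        Pf = np.fft.fft2(np.fft.ifftshift(P))
        chif = Wf * np.conj(Pf) / (np.abs(Pf) ** 2 + lam)     # step 6
        chi = np.fft.ifft2(chif)                              # step 7
    return chi, P
```

Because the original data `Wf` enters both Wiener steps, each new object estimate stays consistent with the measurement, as discussed below.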

The regularization constant $\lambda $ is started at a larger value and is decreased as the iterations proceed. A small regularization constant $\lambda $ is needed so that the algorithm does not overly smooth the object and reject high frequencies, but it has the disadvantage of sharpening edges arising from diffraction effects, which can erroneously reconstruct an out-of-focus image with sharp edges. A large regularization constant is effective but removes the high frequencies of the object, reducing the resolution. By starting with a high regularization constant, the gross features are first identified and then sharpened as the regularization constant is decreased with each iteration. Because the algorithm consists of three successive minimization steps, an ${\ell}_{1}$ minimization to increase the sparsity of the object, a squared-error minimization to find the PSF, and a second squared-error minimization to estimate the object, the order of operations can strongly determine the rate of convergence, and adjusting the regularization constant during the iterations is a convenient way to accelerate convergence. Because the original spatial frequency data is applied at both squared-error minimization steps, the reconstructed object remains consistent with the data.

## 4. DEMONSTRATION OF TWI

To demonstrate TWI using the blind deconvolution algorithm, a radar scanner was constructed consisting of two linearly polarized short dipole antennas 4 cm apart, a diagram of which is shown in Fig. 1. These interrogated the object at frequencies between 17 and 26 GHz; however, only one frequency was used in the reconstructed image. A vector network analyzer transmitted 17 dBm of power from one dipole and sampled the power and relative phase of the backscattered signal at the other dipole. These two dipoles were placed on a two-axis translation stage, and the backscattered signal was sampled at 5 mm intervals over a rectangular area. While the system is translationally symmetric except for the object, the dipoles are separated slightly in the $x$ direction and are linearly polarized, slightly breaking the circular symmetry of the PSF. Nevertheless, satisfactory results were obtained.

Because one of the applications envisioned for TWI is the nondestructive imaging of the infrastructure inside the walls of buildings, the objects and layered media being tested are intended to be realistic examples for this application. Realistic walls, however, deviate from an ideal layered dielectric structure in several ways. Walls are not perfectly flat or planar, the layers may be skewed relative to each other, and most importantly, the materials are inhomogeneous. There can often be significant scattering from inside the layers as well as from the object behind the layers, and the algorithm may focus on the internal scattering of the wall layers rather than the object. This may be an advantage when the internal scattering features of the wall are of interest; however, in many cases these scatterers are clutter. As we shall see, certain materials have more internal scattering than others; for example, plywood has significant internal scattering, likely due to the alternating orientations of the plies. Other materials such as gypsum plasterboard or cement board tend to be somewhat more homogeneous.

For an ideal layered medium, the reflection from the layers would be the same at any translational position of the antennas relative to the layers, and so subtracting off the average reflected signal as a function of position would remove the reflection of the layers from the data. Because many realistic walls are of nonuniform thickness or spacing, the reflection from the wall slowly varies along the translation directions. Because this reflection signal is often much stronger than the object signal, the algorithm focuses on the reflections from the wall rather than the object behind the wall. To remove these reflections that slowly vary with position, a high-pass filter is applied to the data $W(x,y)$ to find ${W}_{H}(x,y)$,

Because the data is taken only over a finite area, the function $W(x,y)$ has finite support, and as the finite support presents hard edges in the data, the algorithm is likely to form an image of the hard edges of the support rather than the edges of the object. It is necessary to taper the magnitude of ${W}_{H}(x,y)$ towards the edge of its support to prevent this. If the support of $W(x,y)$ is confined to $-{L}_{X}/2\le x\le {L}_{X}/2$ and $-{L}_{Y}/2\le y\le {L}_{Y}/2$, the tapered version ${W}_{T}(x,y)$ of ${W}_{H}(x,y)$ is
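
These two preprocessing steps, the high-pass filter and the edge taper, can be sketched as follows. The exact filter of Eq. (14) and the exact taper are not reproduced here; the Gaussian high-pass and raised-cosine window below are assumed stand-ins, and all names are illustrative.

```python
import numpy as np

def highpass(W, dx, sigma):
    """Suppress the slowly varying wall reflection with a Gaussian
    high-pass filter (an assumed form; the published Eq. (14) may differ).
    dx is the sample spacing, sigma the filter width in rad/length."""
    ny, nx = W.shape
    KX, KY = np.meshgrid(2 * np.pi * np.fft.fftfreq(nx, d=dx),
                         2 * np.pi * np.fft.fftfreq(ny, d=dx))
    H = 1.0 - np.exp(-(KX ** 2 + KY ** 2) / (2 * sigma ** 2))
    return np.fft.ifft2(np.fft.fft2(W) * H)

def taper(WH, frac=0.2):
    """Taper W_H toward the edges of its support with a raised-cosine
    window over the outer `frac` of each axis (an assumed window)."""
    def window(n):
        t = np.linspace(0.0, 1.0, n)
        ramp = np.clip(np.minimum(t, 1.0 - t) / frac, 0.0, 1.0)
        return 0.5 * (1.0 - np.cos(np.pi * ramp))
    ny, nx = WH.shape
    return WH * np.outer(window(ny), window(nx))
```

The high-pass removes the DC and near-DC wall reflection entirely, while the taper rolls the data smoothly to zero at the scan boundary so the support edges do not masquerade as object edges.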

As a first test of the algorithm, a simulation was performed. An object was created based on the letters of “DUKE.” Each point in the image was multiplied by a random unit-magnitude complex number to produce a speckle effect in the object. The speckle effect ensures that the object contains all spatial frequencies, whereas an object with large areas of uniform susceptibility has mostly low frequencies. Noise was added to the object so that the signal-to-noise ratio was 20 dB. The object was imaged through two layers of relative permittivity 2 and 3, each 10 free-space wavelengths thick, with a gap of 3 wavelengths between the object and the interior layer. The algorithm was applied to the field scattered by the object through these layers, with the ${\ell}_{1}$-regularization shrinkage constant $\alpha $ of Eq. (6) equal to 0.2 times the maximum magnitude data point. The regularization constant $\lambda $ was started at 0.2 and decreased to 0.01 over the course of 60 iterations of the algorithm. The results are shown in Fig. 3. The magnitude of the PSF and its Fourier transform, the modulation transfer function (MTF), are shown in logarithmic scale as (a) and (b), with black corresponding to 0 dB, or the maximum value of the function, and white corresponding to $-30\text{\hspace{0.17em}}\mathrm{dB}$. After applying the algorithm to the original data, the estimated PSF and MTF are shown as parts (c) and (d). Finally, the magnitude of the original data is shown as (e), with the object shape clearly broadened by diffraction and propagation through the layers, and (f) shows the reconstructed object.
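
A speckled, noisy object of the kind described above can be generated in a few lines. This is a sketch; the exact noise model used in the simulation is an assumption, and the names are illustrative.

```python
import numpy as np

def speckled_object(mask, snr_db=20.0, seed=0):
    """Apply a random unit-magnitude phase to each pixel of a binary
    object mask (speckle), then add complex Gaussian noise at the
    requested signal-to-noise ratio."""
    rng = np.random.default_rng(seed)
    # Unit-magnitude random phase gives the object a full, flat spectrum.
    chi = mask * np.exp(2j * np.pi * rng.random(mask.shape))
    p_sig = np.mean(np.abs(chi) ** 2)
    p_noise = p_sig / 10.0 ** (snr_db / 10.0)
    noise = np.sqrt(p_noise / 2.0) * (rng.standard_normal(mask.shape)
                                      + 1j * rng.standard_normal(mask.shape))
    return chi + noise
```

Without the random phase, the masked letters would be dominated by low spatial frequencies and would not exercise the full passband of the simulated layers.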

However, the simulation is an idealized case, which, while it contains noise, does not contain the more realistic impediments of scattering within the layers or nonplanar layers. As a first test of a more realistic situation, a wall phantom was constructed to be similar to electrical ductwork in residential construction, a photograph of which is shown in Fig. 4. In all of the experimental demonstrations, the total scanned area coincides with the axes shown on the figures. A wall was constructed from common “two-by-four” pine studs with gypsum plasterboard screwed onto the studs. An electrical junction box was screwed onto a stud, and holes were bored through the studs through which an electrical conduit was routed. A Romex wire was routed from the electrical box vertically as well. The wall was imaged through the gypsum plasterboard, which was about 12 mm thick. The results are in Fig. 5. Part (a) shows the magnitude of the original data, in which the electrical junction box and the electrical conduit can be discerned as blurry objects. After applying the algorithm, with $\alpha $ being 0.05 times the maximum magnitude data point and $\lambda $ varying from 0.2 to 0.05 as the iterations proceed, the reconstruction of part (b) results with the electrical box, conduit, and Romex wire clearly discernible. In addition, a pine stud can be seen vertically on the right side of the junction box; however, the reflection is comparatively weak from the wood because of its low density. This image is reconstructed from the backscattered signal at a single frequency, 19.39 GHz.

A second object, consisting of the letters “DUKE” in aluminum foil taped to a stack of six layers of plywood, each 3 mm thick, is shown in Fig. 6. This was imaged in an identical manner to the wall phantom. The reconstruction algorithm was applied to the reflection data at 19.39 GHz with $\alpha $ equal to 0.2 times the maximum magnitude data point and $\lambda $ varying from 0.2 to 0.017. The reconstruction of Fig. 7 shows that the algorithm was successfully able to find the “DUKE” letters despite significant scattering from the plywood layers. The border of the plywood can also be seen in the image, but as the “DUKE” letters produce somewhat stronger scattering, the sparsity constraint optimizes the PSF to focus the letters.

Another challenging object, a cross-shaped target of two copper foil strips, was taped onto the rear surface of 80 mm of wood consisting of two 20 mm thick particleboard layers and two 20 mm thick melamine-laminated particleboard layers. The scattering from the inside of the particleboard layers is comparable to the scattering from the foil cross. Applying the algorithm with $\alpha $ being 0.2 times the maximum magnitude data point and $\lambda $ varying from 0.2 to 0.05 as the iterations proceed, the reconstruction of Fig. 8 is obtained. The reconstruction frequency is 17.59 GHz. The cross can be clearly discerned, despite the fact that it is unrecognizable in the raw data. The algorithm must apply the correct phase to numerically focus the cross as well as the amplitude variations to the MTF to correct for interference effects in the layers.

## 5. ANALYSIS AND CONCLUSION

There are some aspects of the operation of the algorithm worth noting. The algorithm can in many cases focus the target without being given the approximate distance from the antenna to the object, and in the foregoing demonstrations this information was not provided. However, when imaging through highly scattering walls, even an approximate focusing of the data before the algorithm is applied can greatly help it enhance the object of interest rather than scatterers inside the wall. A prefocused version ${W}_{P}(x,y)$ can be computed from the data $W(x,y)$ by propagating the data a distance $d$ in the Fourier domain, ${\tilde{W}}_{P}({k}_{x},{k}_{y})=\tilde{W}({k}_{x},{k}_{y})\mathrm{exp}\left[id\sqrt{4{k}_{0}^{2}-{k}_{x}^{2}-{k}_{y}^{2}}\right]$, in a manner analogous to the high-pass filter of Eq. (14). For thin walls, it is often the case that this focusing operation produces a somewhat recognizable image; for thick, highly scattering walls, however, the object may still appear highly obscured after refocusing, and the deconvolution algorithm improves the image. For practical use, the data may be rapidly refocused so that an operator may examine various refocused images to find the approximate object depth, and then the deconvolution algorithm may be used to enhance the image. Because refocusing, like any operation that maintains the circular symmetry of the convolution kernel, preserves the convolution model, the deconvolution algorithm may still be applied after refocusing.

Another aspect to note is that a smaller regularization constant $\lambda $ tends to enhance high spatial frequencies, but oversharpening may occur if $\lambda $ is too small. In practice, $\lambda $ should be selected on the basis of the required resolution. Because electrical conduits or plumbing tend to be large with lower spatial frequency features, it may be desirable to make the resolution more coarse to remove out-of-focus scattering from inside the wall. The degree of sparsity $\alpha $ is usually best set to the minimum value required to enable sharpening to occur.

While there are challenges to practically exploiting blind deconvolution algorithms, blind deconvolution may be an effective way to cope with multilayer structures of unknown composition that are likely to occur in practical imaging situations. Circular symmetry is a strong constraint that can be applied to effectively remove the many effects of propagation through multilayer structures. For this reason, we believe that this algorithm is likely to find use in imaging instruments used for construction, ground-penetrating radar, or seismology.

## Funding

Air Force Office of Scientific Research (AFOSR) (FA9550-12-1-0491).

## REFERENCES

**1. **A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding blind deconvolution algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. **33**, 2354–2367 (2011). [CrossRef]

**2. **W. Chew, *Waves and Fields in Inhomogeneous Media* (IEEE, 1990).

**3. **Z. Knittl, *Optics of Thin Films* (Wiley, 1976).

**4. **H. Mansour and U. S. Kamilov, “Multipath removal by online blind deconvolution in through-the-wall-imaging,” in *IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)* (2016), pp. 3106–3110.

**5. **H. Mansour, U. Kamilov, D. Liu, P. Orlik, P. Boufounos, K. Parsons, and A. Vetro, “Online blind deconvolution for sequential through-the-wall-radar-imaging,” in *4th International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa)* (2016), pp. 61–65.

**6. **A. Beeri and R. Daisy, “High-resolution through-wall imaging,” Proc. SPIE **6201**, 62010J (2006).

**7. **F. Aryanfar and K. Sarabandi, “Through wall imaging at microwave frequencies using space-time focusing,” in *IEEE Antennas and Propagation Society Symposium* (2004), Vol. 3, pp. 3063–3066.

**8. **F. Ahmad, M. G. Amin, and S. A. Kassam, “Synthetic aperture beamformer for imaging through a dielectric wall,” IEEE Trans. Aerosp. Electron. Syst. **41**, 271–283 (2005). [CrossRef]

**9. **L. Li, W. Zhang, and F. Li, “A novel autofocusing approach for real-time through-wall imaging under unknown wall characteristics,” IEEE Trans. Geosci. Remote Sens. **48**, 423–431 (2010). [CrossRef]

**10. **Q. Huang, L. Qu, B. Wu, and G. Fang, “UWB through-wall imaging based on compressive sensing,” IEEE Trans. Geosci. Remote Sens. **48**, 1408–1415 (2010). [CrossRef]

**11. **V. Venkatasubramanian and H. Leung, “A novel chaos-based high-resolution imaging technique and its application to through-the-wall imaging,” IEEE Signal Process. Lett. **12**, 528–531 (2005). [CrossRef]

**12. **H. Wang, R. M. Narayanan, and Z. O. Zhou, “Through-wall imaging of moving targets using UWB random noise radar,” IEEE Antennas Wireless Propag. Lett. **8**, 802–805 (2009). [CrossRef]

**13. **Y. Yang and A. E. Fathy, “See-through-wall imaging using ultra wideband short-pulse radar system,” in *IEEE Antennas and Propagation Society International Symposium* (2005), Vol. 3B, pp. 334–337.

**14. **G. Wang and M. G. Amin, “Imaging through unknown walls using different standoff distances,” IEEE Trans. Signal Process. **54**, 4015–4025 (2006). [CrossRef]

**15. **P. Protiva, J. Mrkvica, and J. Machac, “Estimation of wall parameters from time-delay-only through-wall radar measurements,” IEEE Trans. Antennas Propag. **59**, 4268–4278 (2011). [CrossRef]

**16. **T. S. Ralston, G. L. Charvat, and J. E. Peabody, “Real-time through-wall imaging using an ultrawideband multiple-input multiple-output (MIMO) phased array radar system,” in *IEEE International Symposium on Phased Array Systems and Technology* (2010), pp. 551–558.

**17. **F. Soldovieri and R. Solimene, “Through-wall imaging via a linear inverse scattering algorithm,” IEEE Geosci. Remote Sens. Lett. **4**, 513–517 (2007). [CrossRef]

**18. **S. Hantscher, A. Reisenzahn, and C. G. Diskus, “Through-wall imaging with a 3-D UWB SAR algorithm,” IEEE Signal Process. Lett. **15**, 269–272 (2008). [CrossRef]

**19. **F. Soldovieri, R. Solimene, and G. Prisco, “A multiarray tomographic approach for through-wall imaging,” IEEE Trans. Geosci. Remote Sens. **46**, 1192–1199 (2008). [CrossRef]

**20. **C. Debes, M. G. Amin, and A. M. Zoubir, “Target detection in single- and multiple-view through-the-wall radar imaging,” IEEE Trans. Geosci. Remote Sens. **47**, 1349–1361 (2009). [CrossRef]

**21. **M. G. Amin and F. Ahmad, “Wideband synthetic aperture beamforming for through-the-wall imaging [lecture notes],” IEEE Signal Process. Mag. **25**, 110–113 (2008). [CrossRef]

**22. **S. S. Ram and H. Ling, “Through-wall tracking of human movers using joint Doppler and array processing,” IEEE Geosci. Remote Sens. Lett. **5**, 537–541 (2008). [CrossRef]

**23. **P. C. Chang, R. J. Burkholder, and J. L. Volakis, “Adaptive clean with target refocusing for through-wall image improvement,” IEEE Trans. Antennas Propag. **58**, 155–162 (2010). [CrossRef]

**24. **M. Dehmollaian, M. Thiel, and K. Sarabandi, “Through-the-wall imaging using differential SAR,” IEEE Trans. Geosci. Remote Sens. **47**, 1289–1296 (2009). [CrossRef]

**25. **E. Ertin and R. L. Moses, “Through-the-wall SAR attributed scattering center feature estimation,” IEEE Trans. Geosci. Remote Sens. **47**, 1338–1348 (2009). [CrossRef]

**26. **P. C. Chang, R. J. Burkholder, J. L. Volakis, R. J. Marhefka, and Y. Bayram, “High-frequency EM characterization of through-wall building imaging,” IEEE Trans. Geosci. Remote Sens. **47**, 1375–1387 (2009). [CrossRef]

**27. **D. Krishnan, T. Tay, and R. Fergus, “Blind deconvolution using a normalized sparsity measure,” in *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)* (2011), pp. 233–240.

**28. **S. Choudhary and U. Mitra, “Sparse blind deconvolution: what cannot be done,” in *IEEE International Symposium on Information Theory* (2014), pp. 3002–3006.

**29. **J. M. Fadili and J.-L. Starck, “Sparse representation-based image deconvolution by iterative thresholding,” in *Astronomical Data Analysis (ADA)*, Marseille, France (2006).

**30. **S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Rev. **43**, 129–159 (2001). [CrossRef]

**31. **D. L. Donoho, “De-noising by soft-thresholding,” IEEE Trans. Inf. Theory **41**, 613–627 (1995). [CrossRef]

**32. **T. Blumensath and M. E. Davies, “Iterative thresholding for sparse approximations,” J. Fourier Anal. Appl. **14**, 629–654 (2008). [CrossRef]