Physics-Based Near-Field Microwave Imaging Algorithms for
Dense Layered Media
DISSERTATION
Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy
in the Graduate School of The Ohio State University
By
Kai Ren
Graduate Program in Electrical and Computer Engineering
The Ohio State University
2017
Dissertation Committee:
Professor Robert Burkholder, Advisor
Professor Fernando Teixeira
Professor Graeme Smith
ProQuest Number: 10901869
All rights reserved
INFORMATION TO ALL USERS
The quality of this reproduction is dependent upon the quality of the copy submitted.
In the unlikely event that the author did not send a complete manuscript
and there are missing pages, these will be noted. Also, if material had to be removed,
a note will indicate the deletion.
ProQuest 10901869
Published by ProQuest LLC (2018). Copyright of the Dissertation is held by the Author.
All rights reserved.
This work is protected against unauthorized copying under Title 17, United States Code
Microform Edition © ProQuest LLC.
ProQuest LLC.
789 East Eisenhower Parkway
P.O. Box 1346
Ann Arbor, MI 48106 - 1346
Copyrighted by
Kai Ren
2017
Abstract
Understanding the physics of electromagnetic (EM) waves as they propagate through stratified media, scatter back from buried irregularities, and are received by an antenna in the near field is of fundamental importance. To generate better images, the physics of these phenomena must be incorporated into the imaging algorithm: multiple reflections, refractions arising from the interfaces, and diffraction from embedded targets. A forward model is developed based on the spectral Green’s function of layered media weighted by the antenna gain pattern, satisfying the near-field condition and incorporating all refraction effects. Thereby, the weak scattering from deeper layers and from wide angles is compensated in a model-based imaging algorithm through the refraction coefficients and the gain pattern, respectively.
To form real-time continuous images of targets embedded in a layered structure, a
near-field uniform diffraction tomographic (UDT) imaging algorithm is developed.
Conventional diffraction tomography (DT) improperly applies the stationary phase
method for stratified environments to evaluate the innermost spectral integral. In DT the
large argument is assumed to be the depth, which is not appropriate for near-field
imaging. This results in amplitude discontinuities occurring at the interfaces between
adjacent layers. The correct dimensionless large argument is the product of the free space
wavenumber and the depth, as used in high-frequency asymptotic solutions. UDT
therefore yields uniformly continuous images across the interfaces. Like DT, UDT retains the fast Fourier transform (FFT) relation for generating images very efficiently. Both 2D and 3D cases are investigated to verify the efficacy of the proposed UDT algorithm.
To overcome the singularity problem caused by nulls in the antenna gain pattern in
DT and UDT, a fast back-projection (FBP) imaging algorithm is proposed to provide balanced monostatic and bistatic images, where both the stationary phase method and the FFT are implemented to achieve the same computational efficiency as DT and UDT. FBP is derived from the conventional back-projection (BP) imaging algorithm by finding the Fourier transform relation that emerges after applying the matched filter to the forward model. The comparison between UDT and FBP is investigated for the free-space case.
FBP is also demonstrated by a combined monostatic/bistatic synthetic aperture radar
(SAR) imaging system, where the time for data collection is reduced by half through the
appearance of virtual array elements. To achieve even faster data collection, a fixed
multi-static array of antennas is proposed to efficiently illuminate a given subsurface
volume using a simultaneous multiple-input multiple-output (MIMO) type signal.
After constructing a radar image, it is of interest to quantify buried target
parameters, such as the shape, location, orientation, and size. These parameters can be
estimated by the implementation of the spatial moment method. However, due to the lens
effect when EM waves propagate through a slab with a high dielectric constant,
embedded objects that are roughly symmetric, such as spheres, can look similar in size regardless of their actual size. To help overcome this issue, another unique feature of images in layered media is employed, namely, the presence of shadows. Due to the lens effect, the projection of an embedded object onto the next deeper interface tends to show better size information than its primary image. However, the shadow image can be fuzzy if a conventional imaging algorithm is applied, decreasing the accuracy of size estimation. Modification using a local spatial filter for improving shadow images is investigated.
Lastly, it has been shown that the interface between the external region and a dense
medium tends to focus the energy from the antenna straight into the medium due to
refraction, losing important wide-angle illumination of embedded objects. This
phenomenon is investigated further to exploit the Brewster angle effect in order to launch
energy at wide angles. Illumination with a dipole oriented normal to the layered medium is simulated because it has a null pointing straight into the medium and strong radiation in the Brewster-angle direction. When combined with a transverse source, it is shown that the illumination of targets buried in a dense dielectric medium is much improved.
Acknowledgments
I would like to express my earnest gratitude to my advisor and mentor Prof. Robert Burkholder for his support, guidance, and encouragement during my graduate study. As Albert Einstein said, ‘education is not the learning of facts, but the training of the mind to think.’ This is how he trained me to do research scientifically, to think independently, and to keep the big picture in mind, leading me to become a self-motivated researcher rather than just a student. I thank him for his endless help and the countless hours spent revising our papers, whether on a weekend or a holiday. Without him, it would have been impossible for me to know and love the wonderful world of microwave imaging.
I would like to thank my committee members, Prof. Fernando Teixeira and Prof. Graeme Smith, for their precious time and advice on my candidacy exam and defense. I would also like to thank my sponsor, PaneraTech. Without their support, it would have been impossible for me to finish this dissertation.
I would like to thank my dear friends and colleagues at ElectroScience Lab, OSU,
Dr. Shuai Shao, Jiaqing Lu, Dr. Weidong Li, Cagdas Gunes, Dr. Jue Wang, and Qi Wang for all the helpful discussions.
I would like to thank my parents Qun Hua and Qiming Ren for their unconditional
love throughout my life and the education they provided, guiding me to become the man I am today. I would like to thank my mother-in-law Cuilian Zhu for treating me as her son. I
am grateful to my wife Xiaowei Ren for her endless support, companionship, and understanding, which made my studies easier. I would like to thank my lovely sons Yihua Ren
and Yihao Ren for bringing me so much happiness.
Vita
December 20, 1987 ........................................Born, China
2010................................................................B.S. Electrical Engineering, Xidian
University
2013................................................................M.S. Electrical Engineering, Xidian
University
2013 to 2016 .................................................Graduate Research Associate, Electrical and
Computer Engineering, The Ohio State
University
2016 to present ..............................................Graduate Teaching Associate, Electrical and
Computer Engineering, The Ohio State
University
Publications
[1] K. Ren, J. Chen, and R. J. Burkholder, “A 3-D uniform diffraction tomographic
imaging algorithm for near-field microwave scanning through stratified media,” IEEE
Transactions on Antennas and Propagation, under review
[2] K. Ren and R. J. Burkholder, “A fast back-projection alternate to diffraction
tomography for near-field microwave imaging,” IEEE Geoscience and Remote Sensing
Letters, under review
[3] K. Ren and R. J. Burkholder, “A 3-D novel fast back projection imaging algorithm for
stratified media based on near-field monostatic and bistatic SAR,” IEEE Transactions on
Antennas and Propagation, under revision
[4] K. Ren and R. J. Burkholder, “Size and shape estimation of buried objects from
spatial moments analysis of near-field microwave images,” IEEE Transactions on
Geoscience and Remote Sensing, under revision
[5] K. Ren and R. J. Burkholder, “Shadow-projection based hidden object identification
in near-field microwave images,” IEEE Geoscience and Remote Sensing Letters, under
revision
[6] K. Ren and R. J. Burkholder, “Normal dipole based embedded object identification in
near-field microwave images,” IEEE Antennas and Wireless Propagation Letters, under
revision
[7] K. Ren and R. J. Burkholder, “A uniform diffraction tomography imaging algorithm
for near-field microwave scanning through stratified media,” IEEE Transactions on
Antennas and Propagation, vol. 64, no. 12, pp. 5198-5207, Dec. 2016
[8] K. Ren and R. J. Burkholder, “A fast uniform 3D near-field microwave imaging
method for layered media,” IEEE International Symposium on Antennas and
Propagation, San Diego, Jul. 9-14, 2017
[9] K. Ren and R. J. Burkholder, “A uniform diffraction tomographic algorithm for real-time portable microwave imaging,” IEEE International Symposium on Antennas and
Propagation, Fajardo, Puerto Rico, Jun. 26-Jul. 1, 2016
[10] J. Chen, K. Ren, and R. J. Burkholder, “3-D imaging for the detection of embedded
objects in layered medium,” IEEE International Symposium on Antennas and
Propagation, Vancouver, Canada, Jul. 19-25, 2015
[11] K. Ren, J. Chen, and R. J. Burkholder, “Investigation of gaps between blocks in
microwave images of multilayered walls,” IEEE International Symposium on Antennas
and Propagation, Vancouver, Canada, Jul. 19-25, 2015
Fields of Study
Major Field: Electrical and Computer Engineering
Table of Contents
Abstract ............................................................................................................................... ii
Acknowledgments............................................................................................................... v
Vita.................................................................................................................................... vii
List of Tables .................................................................................................................... xv
List of Figures .................................................................................................................. xvi
Chapter 1: Introduction ...................................................................................................... 1
1.1 The Problem and Its Significance ............................................................................. 1
1.2 Literature Review ...................................................................................................... 3
1.3 Organization of This Dissertation ............................................................................. 8
Chapter 2: Uniform Diffraction Tomography for Stratified Media .................................. 12
2.1 Introduction ............................................................................................................. 12
2.1.1 The Point Target Model ................................................................................. 14
2.1.2 The Volumetric Current Model ...................................................................... 14
2.1.3 The Surface Current Model ............................................................................ 17
2.2 2D UDT Formulation .............................................................................................. 18
2.2.1 2D Diffraction Tomography .......................................................................... 21
2.2.2 2D Stationary Phase Solution......................................................................... 21
2.2.3 2D UDT Inverse Scattering ............................................................................ 27
2.2.4 2D Numerical Imaging Results ...................................................... 29
2.2.5 Experimental Imaging Results ......................................................... 35
2.3 3D UDT Formulation .............................................................................................. 38
2.3.1 3D Diffraction Tomography .......................................................................... 40
2.3.2 3D Stationary Phase Solution......................................................................... 41
2.3.3 3D UDT Inverse Scattering ............................................................................ 47
2.3.4 3D Background Removal ............................................................................... 48
2.3.5 3D Numerical Imaging Results ...................................................................... 51
2.3.6 Experimental Imaging Results ......................................................... 54
2.4 Summary of Chapter 2 ............................................................................................ 58
Chapter 3: Multi-Input and Multi-Output Radar System for Fast Data Collection ......... 60
3.1 Introduction ............................................................................................................. 60
3.2 Comparison between DT and FBP .......................................................................... 61
3.2.1 Diffraction Tomography (DT) ....................................................................... 63
3.2.2 Fast Back-Projection (FBP) ........................................................................... 66
3.2.3 Numerical Imaging Results ............................................................................ 69
3.3 Monostatic and Bistatic Combined Radar System .................................................. 71
3.3.1 The Forward Model........................................................................................ 71
3.3.2 BP Inverse Scattering ..................................................................................... 73
3.3.3 Stationary Phase Solution............................................................................... 75
3.3.4 Experimental Setup ........................................................................................ 78
3.3.5 Experimental Imaging Results ....................................................................... 84
3.4 Multistatic Radar System ........................................................................................ 89
3.4.1 The Multistatic Array Design......................................................................... 89
3.4.2 Multistatic Imaging Function ......................................................................... 91
3.4.3 Theoretical Imaging Results .......................................................................... 91
3.5 Summary of Chapter 3 ............................................................................................ 95
Chapter 4: Imaging Through Periodic Structures ............................................................. 97
4.1 Introduction ............................................................................................................. 99
4.2 Analysis of the Periodic Structure......................................................... 100
4.2.1 Floquet Modes .............................................................................................. 100
4.2.2 Electric Field Distributions through the Grating .......................................... 105
4.3 Inversion by the Method of Least Squares ............................................................ 108
4.3.1 The Forward Model in the Matrix Form ...................................... 108
4.3.2 The Conjugate-Phase Matched Filter ........................................................... 110
4.3.3 The Method of Least Square Error ............................................................... 110
4.3.4 Tikhonov Regularization .............................................................................. 110
4.4 Numerical Simulations .......................................................................................... 111
4.5 Summary of Chapter 4 .......................................................................................... 114
Chapter 5: Target Size and Shape Estimation Based on Near-Field Radar Imaging...... 116
5.1 Introduction ........................................................................................................... 116
5.2 Physics behind EM Illumination in Dense Media................................................. 118
5.3 Spatial Moments Approach ................................................................................... 127
5.4 Numerical Imaging Results ................................................................................... 129
5.5 Target Size Estimation .......................................................................................... 134
5.5.1 Target Shape Identification .......................................................................... 137
5.5.2 Size Estimation of Spherical Voids .............................................................. 138
5.5.3 Size Estimation of Cylindrical Voids ........................................................... 140
5.6 The Shadow-Projection Approach and Normal Dipole Illumination.................... 141
5.6.1 The Shadow-Projection Approach ............................................................... 141
5.6.2 Normal Dipole Illumination ......................................................................... 148
5.7 Summary of Chapter 5 .......................................................................................... 151
Chapter 6: Contributions and Future Work .................................................................... 152
References ....................................................................................................................... 156
Appendix A: Evaluation of the Second Derivative of the Phase Function in the 2D Case
......................................................................................................................................... 161
Appendix B: 2D Inversion Scheme ................................................................................ 162
Appendix C: Evaluation of the Second Derivative of the Phase Function in the 3D Case
......................................................................................................................................... 164
Appendix D: Signature of the Second Partial Derivative Matrix ................................... 165
Appendix E: 3D Inversion Scheme................................................................................. 167
List of Tables
Table 2.1. Differences between 2D DT and UDT ............................................................ 24
Table 2.2. Differences between 3D DT and UDT ............................................................ 44
Table 4.1. Cut-off Frequencies of Different Floquet Modes for the 1D Periodic Structure
......................................................................................................................................... 102
Table 4.2. Cut-off Frequencies of Different Floquet Modes for the 2D Periodic Structure
......................................................................................................................................... 103
Table 5.1. Estimated Locations ....................................................................................... 135
Table 5.2. Scaled Root Mean Square Error .................................................................... 137
Table 5.3. Root Mean Square Error for Spheres ............................................................. 138
Table 5.4. Estimated Rotation Angle and Size for Cylinders ......................................... 139
List of Figures
Fig. 1.1. Real scenarios where near-field microwave imaging could be used .................... 2
Fig. 2.1. An embedded object in an opaque stratified structure ....................................... 13
Fig. 2.2. Comparison among exact, DT and UDT solutions of the inner integral based on
a 2D stratified structure with three dielectric layers εr = 2, 4, 10 ...................................... 25
Fig. 2.3. Comparison among exact, DT and UDT solutions of the inner integral based on
a 2D stratified structure with three dielectric layers εr = 2, 4, 10 ...................................... 26
Fig. 2.4. Targets in a 2D opaque layered environment ..................................................... 30
Fig. 2.5. The corresponding DT imaging results .............................................................. 32
Fig. 2.6. The corresponding UDT imaging results ......................................................... 33
Fig. 2.7. Objects in a 2D opaque layered environment, comparable to the experimental
arrangement....................................................................................................................... 34
Fig. 2.8. Experimental scenario ........................................................................................ 35
Fig. 2.9. DT images from the experiment ......................................................................... 37
Fig. 2.10. UDT images from the experiment .................................................................... 37
Fig. 2.11. Comparison among exact, DT and UDT solutions of the inner double integral
based on a 3D stratified structure...................................................................................... 46
Fig. 2.12. Spherical voids in a 3D opaque layered environment ..................................... 51
Fig. 2.13. The corresponding DT imaging results ............................................................ 52
Fig. 2.14. The corresponding UDT imaging results ........................................................ 53
Fig. 2.15. Experimental scenario with 2D scan ................................................................ 55
Fig. 2.16. 3D UDT and DT imaging results without the moving average subtraction ..... 56
Fig. 2.17. 3D UDT and DT imaging results with the moving average subtraction .......... 57
Fig. 3.1. An object in free space ....................................................................................... 62
Fig. 3.2. Comparison between 3-D images of a PEC cylinder based on both DT and FBP
algorithm ........................................................................................................................... 70
Fig. 3.3. A buried object in an opaque layered environment ............................................ 72
Fig. 3.4. The experiment scenario ..................................................................................... 78
Fig. 3.5. The downrange profile from the waterfall plot .................................................. 80
Fig. 3.6. Schema of the horn antenna illuminating the near-field layered medium .......... 81
Fig. 3.7. The horn antenna in HFSS.................................................................................. 83
Fig. 3.8. 3D imaging results at the second interface without the PSF .............................. 85
Fig. 3.9. 3D UDT imaging results at the second interface with considering the gain
pattern ............................................................................................................................... 85
Fig. 3.10. 3D imaging results with the PSF, the sensor step size is 2.54 cm along the x
axis and 1.27 cm along the y axis ..................................................................................... 87
Fig. 3.11. 3D imaging results with the PSF, the sensor step size is 2.54 cm along the x
axis and 1.27 cm along the y axis ..................................................................................... 88
Fig. 3.12. The proposed 13.2cm-by-13.2cm multistatic array .......................................... 90
Fig. 3.13. The 3D image with a point target in free space ................................................ 92
Fig. 3.14. 3D multistatic imaging results .......................................................................... 93
Fig. 3.15. 3D multistatic imaging results based on the interface model ........................... 94
Fig. 4.1. Periodic structures ............................................................................... 97
Fig. 4.2. 3D FBP monostatic imaging results ................................................................... 98
Fig. 4.3. The metal grating with a diamond shape unit cell ............................................ 100
Fig. 4.4. The periodic structure with the plane wave illumination ................................. 101
Fig. 4.5. HFSS simulation of the metal grating .............................................................. 104
Fig. 4.6. Electric field variation in free space ................................................................. 106
Fig. 4.7. The electric field comparison with and without the grating ............................. 107
Fig. 4.8. A 2D PEC cylinder behind a metal grating. ..................................................... 112
Fig. 4.9. 2D imaging results. ........................................................................................... 113
Fig. 5.1. An object embedded in a dense slab. ................................................................ 118
Fig. 5.2. Electric field distributions of a y-oriented Hertzian dipole in YZ plane with the
presence of a dense slab .................................................................................................. 120
Fig. 5.3. Electric field distributions of a y-oriented Hertzian dipole in XZ plane with the
presence of a dense slab .................................................................................................. 122
Fig. 5.4. Electric field distributions of a z-oriented Hertzian dipole in YZ plane with the
presence of a dense slab .................................................................................................. 123
Fig. 5.5. Electric field distributions of a z-oriented Hertzian dipole in XZ plane with the
presence of a dense slab .................................................................................................. 124
Fig. 5.6. Electric field distributions of a Brewster angle-oriented Hertzian dipole in YZ
plane with the presence of a dense slab .......................................................................... 125
Fig. 5.7. Top view of 3-D images for different spherical voids based on both x and y
polarizations .................................................................................................................... 131
Fig. 5.8. Top view of 3-D images for different cylindrical voids based on both x and y
polarizations .................................................................................................................... 132
Fig. 5.9. The size estimation algorithm........................................................................... 134
Fig. 5.10. Images of the buried spherical void with diameter 5.08 cm illuminated by a y-oriented dipole ................................................................ 143
Fig. 5.11. Images of different buried spherical voids illuminated by a y-oriented dipole
......................................................................................................................................... 144
Fig. 5.12. Images of different buried cylindrical voids illuminated by a y-oriented dipole
......................................................................................................................................... 146
Fig. 5.13. 3D UDT images of different buried spherical voids illuminated by a y-oriented
dipole............................................................................................................................... 149
Fig. 5.14. 3D UDT images of different buried spherical voids illuminated by a z-oriented
dipole............................................................................................................................... 150
Fig. 6.1. Fixed antenna array for generating 3D snapshot images of stratified media
......................................................................................................................................... 154
Chapter 1: Introduction
Nondestructive investigation of objects concealed by opaque layered environments
has been a high priority topic for many years, involving a wide range of applications such
as geophysical imaging, through-wall radar imaging and medical imaging [1]-[3]. The
through-wall scenario can be treated as a special air-dielectric-air case of general
stratified media, which may be applied to search and rescue operations in urban areas [4]-[7].
1.1 The Problem and Its Significance
While through-wall imaging can often be implemented using ray-tracing methods
because the target is in the far-field of the antenna probe [4], near-field imaging requires
a more accurate model of the local media. Thus, the properties of multiple layers should
be considered in order to detect gaps, voids, cracks, and erosions inside walls of old
buildings, pavement, furnace walls and bridges before structural failure occurs [8], as
shown in Figure 1.1. The existing systems are bulky, time-consuming or with poor
resolution. Therefore, it is of significance to develop a portable imaging system to
achieve real-time inspection of these real scenarios.
Fig. 1.1. Real scenarios where near-field microwave imaging could be used.
Compared with free-space, half-space, and air-dielectric-air (through-wall) situations, the presence of layered dielectric media gives rise to tremendous complexity in the formulation: EM waves propagate through each layer and scatter back from buried targets, subject to refraction, multiple reflections, and the physical and electrical properties of each layer. It is also important to obtain accurate estimates of the size and shape of voids and erosions in order to quantify target information, providing precise evaluation of wall or road weaknesses.
1.2 Literature Review
Before the development of diffraction tomography (DT), X-ray tomography was
prevalent because of the convenient implementation of the Fourier slice theorem based on
the X-ray’s property: no diffraction and direct ray paths from aligned transmitters to
receivers [9]. Because of radiation hazards (X-rays can cause cell mutations and may lead to cancer), it became important to introduce acoustic waves or microwaves as radiating sources, which in turn introduced the problem of diffraction from targets. DT was therefore proposed, based on the observation that the scattered field from a small object is a spatial Fourier transform of the object function. This relation in 3-D free space was first
developed by Wolf [10]-[11] in 1969. In the early 1980s, Devaney [12]-[14] investigated
the inverse scattering problem with the filtered backpropagation algorithm and simulation
studies, then applied it to geophysical cases. These papers are based on X-ray
transmission tomography, with the configuration of source and receiver arrays on
opposite sides of the object. In [15]-[16], DT was applied to 2-D shallow subsurface
geophysical applications with well-to-well tomography, requiring two boreholes to place
explosive sources and associated receivers, respectively, and is therefore a destructive
method.
To preserve the integrity of the medium under test, a monostatic backscatter
arrangement was first introduced in [17] with the co-located transmitter and receiver for
ground penetrating radar (GPR) applications, where the round-trip propagation must be
considered in the imaging function. Based on the assumption of a homogeneous
background, the stationary phase method was correctly implemented to obtain a linear
relation between the spectral object function and received signals in the 2-D air-ground
scalar model. Neglecting the layers' effect in the imaging algorithm, a vector E-field
formulation was proposed to solve the 2-D inverse problem in [18], and the 3-D formulation was
introduced in [19]. It is noted that all these papers ignore some physical and/or electrical
properties in stratified environments, leading to problems of defocusing and false
placement of targets.
In the GPR application, the air-ground interface was first considered in [20] to
generate the 3-D image of a pipeline underground. However, the stationary phase
method was improperly implemented there: it should be a high-frequency asymptotic
solution that uses all the fast-varying exponential terms. In obtaining the 2-D
imaging function, it is fallacious to reduce one dimension directly from the 3-D
imaging function, because the asymptotic evaluations of the 2-D and 3-D cases are
different [20]. The DT algorithm has been applied to GPR with lossy soil, through-wall
imaging (TWI), and in-wall detection applications [21]-[24], where linear relations
between the spectral object function and received signal were obtained based on the
inexact formulation in [20] associated with the dyadic Green’s function. It is interesting
to observe that the inexact derivation of DT has a similar phase variation compared with
the asymptotically exact integral solution, showing why DT still presents seemingly good
imaging results in most GPR and TWI scenarios. However, there are discontinuities in
the amplitude of the imaging kernel when applying DT’s stationary phase method in the
layered media. Therefore, uniform diffraction tomography (UDT) [25]-[27] is introduced
to overcome this weakness in DT and provide smoothly continuous images through
layered structures. UDT also resolves the discontinuities that DT produces for buried
targets near interfaces, presenting a better evaluation of stratified environments.
However, a singularity issue can be observed with
the inclusion of the antenna gain pattern, which may have nulls in the imaging volume.
With UDT, real-time image generation can be achieved based on a monostatic or
bistatic configuration. To realize a real-time imaging system, fast data collection is
required as well, where a multiple antenna arrangement should be considered. A system
with only one transmitter and multiple receivers is known as a single-input multi-output
(SIMO) configuration, and with multiple transmitters it is a multi-input
multi-output (MIMO) configuration. A numerical SIMO array was presented in [28], detecting one
point target in 2-D free space. In both [29] and [30], real-time data collection for targets
in the far field was achieved with a Vivaldi and dipole MIMO antenna array, respectively.
In [31], a near-field MIMO radar system was proposed to detect targets in free space,
where the curvature of the transmitted wavefront was considered in the imaging function.
However, there can still be a singularity problem when considering the antenna pattern
and the interference of wavefronts. Therefore, it is necessary to develop a novel fast
algorithm that overcomes the singularity problem: the fast back projection (FBP)
method [32]-[33].
Time-reversal (TR)-based imaging is another method to generate images
of buried objects, even in random media, where multipath and multiple reflections are
considered [34]-[40]. In [34], TR-based methods under non-ideal conditions were
investigated. In [35], singular vectors of the space-frequency multistatic data matrix for
backpropagation from TR antennas were applied. In [36], an overview of ultrawideband
(UWB) microwave sensing and imaging using TR methods was provided. In [37], to
identify, image, and track moving targets in clutter, two TR approaches were
investigated. In [38], the statistical stability of TR methods in continuous random media
was investigated. In [39], a Bayesian UWB algorithm was proposed to estimate the electrical
properties of breast tissue. In [40], to generate a statistically stable inversion, a
Bayesian compressive sensing-based UWB algorithm was developed to investigate
frequency decorrelation of UWB signals.
In GPR applications, to distinguish shallowly buried objects from the strong air-ground
interface reflection, space-frequency TR matrices were applied to enhance GPR signals
and reduce clutter [41]. When various GPR pre-processing approaches were applied, the
TR algorithm provided better buried-object detection [42]. In [43], the optimum antenna
arrangement for detecting subsurface objects was investigated based on TR methods.
Several papers [44]-[54] address the estimation of a hidden target's size and location
in GPR applications. Normally the GPR image is not focused, and a
hyperbolic object pattern appears in the image without consideration of the layer's
electric property. In [44], a neural network algorithm was proposed to extract pipe
signatures in the GPR image, where the binarization of the discrete image is required
based on choosing a proper threshold of the grey level histogram. In [45]-[47], a diameter
estimation of the simulated buried cylindrical pipe was performed by the method of curve
fitting, a generalized Hough transform, weighted least squares, a recursive Kalman filter,
and maximum likelihood. But the object location cannot be obtained accurately, because
the ground's dielectric property is ignored. In [48], the location and material type of the
buried object was estimated by a pattern-recognition approach based on the binary GPR
image. In [49], the Gaussian process approach was applied to estimate sizes of the buried
objects in the 2-D GPR image with different shapes: circular and square sections. In [50],
the pipe radius was estimated based on the GPR image, where the dielectric constant of
the ground is considered. In [51], the utility material type and radius were retrieved by
the implementation of the pattern recognition and curve fitting. In [52], the cylindrical
object location and relative permittivity were obtained by implementing the generalized
Hough transform and neural network methods. In [53], the orientation of the embedded
cylinder was estimated based on the shortest-path reflection assumption, where two 1-D
scans with a certain separation were required and there was no need to generate the GPR
image. In [54], three neural network algorithms were investigated to characterize the
shape, material, size, and depth of the buried object. These GPR-image-based size
estimation methods require a threshold to obtain the corresponding binary image. The
choice of threshold is not robust, since it depends on the antenna, frequency band, buried
object, and background noise. In GPR images, hyperbolae may intersect if there are
several embedded objects, making them difficult to isolate; in reality, these hyperbolae
may not be observed at all. The aforementioned approaches also require intensive
2-D simulations to build the training set, where the objects are 2-D circular, square, or
triangular sections. For 3-D scenarios, running such simulations becomes extremely
time-consuming.
In the open literature, there are few papers on estimating a buried target's size from
focused near-field microwave images, which present the outline and accurate position of
the hidden object, in contrast to a GPR image. In addition, there is no intersection
problem, and targets can be separated when the image resolution is fine enough. A radar
image differs from a conventional image (e.g., a photo or binary image): 1. radar images
are generated from microwave scattering, which can reveal buried objects; 2. refraction
and multiple reflections exist when waves propagate through layered media; 3. because
targets of different shape produce different scattering mechanisms, images might not
show their accurate sizes. Unlike ultrasound imaging in medical applications, which
requires experienced doctors to operate and interpret, it is of importance to develop a
quantification algorithm that evaluates focused radar images and outputs embedded
targets' information for inexperienced operators.
1.3 Organization of This Dissertation
In Chapter 2, both 2-D and 3-D UDT algorithms are formulated to present smoothly
continuous images through a stratified environment in real time. There are three methods
to obtain the forward model: 1. the point scatterer assumption, 2. Born approximation
based volumetric current model, and 3. Kirchhoff approximation based surface current
model. Various algorithms can be applied to solve this inverse problem, such as back
projection, range migration, diffraction tomography, uniform diffraction tomography, and
least squares methods. In this chapter, UDT and the comparison with DT are introduced,
where both algorithms seek a linear relation between the received signal and reflectivity
function in the spectral domain. The difference between DT and UDT is how to
implement the method of stationary phase to evaluate the inner spectral integral in closed
form. In DT, the inner integral is asymptotically evaluated based on a far-field
assumption, where the large parameter is the depth z. However, the stationary phase
method is a high-frequency asymptotic approximation and the far-field assumption is
invalid in the near-field region of the antenna, where the large parameter should be k_0z
and all the exponential terms in the inner integral are fast varying. Then, real-time image
generation is achieved with the UDT algorithm based on both numerical and
experimental data. However, it takes longer to acquire the measured data than to generate
the image due to the monostatic configuration.
In Chapter 3, to realize fast data collection, a multistatic radar system with a sparse
antenna array is proposed. It is of importance to consider the antenna gain pattern in the
corresponding imaging algorithm to obtain balanced images in the near field, due to the
difference between monostatic and bistatic signal levels. Based on the monostatic and
bistatic radar configuration, a novel uniform fast back projection (FBP) imaging
algorithm is developed to generate balanced monostatic and bistatic images, overcoming
the singularity problem in UDT caused by nulls in the antenna gain pattern. With the
implementation of the monostatic and bistatic radar system, the data collection time can
be reduced by half. To reduce the data collection time further, a 13.2 cm × 13.2 cm fixed
multi-static antenna array with nine real elements is proposed, which can provide an
additional 24 virtual elements. Compared with the fully populated 81-element monostatic
array with λ/2 spacing, this is a sparse arrangement.
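The virtual-element idea can be sketched numerically: under the usual far-field/small-bistatic-angle approximation, each transmit/receive pair behaves like a phantom monostatic element at the pair's midpoint, so a small real array multiplies its effective phase centers. The 3×3 grid and 6.6 cm spacing below are hypothetical placeholders for illustration only, not the dissertation's actual sparse layout (which yields the 24 additional virtual elements quoted above).

```python
import numpy as np
from itertools import combinations_with_replacement

# Hypothetical 3x3 real-element grid (spacing is an assumed value).
spacing = 6.6e-2
real = np.array([(ix * spacing, iy * spacing) for ix in range(3) for iy in range(3)])

# Each Tx/Rx pair acts approximately like a monostatic element at its midpoint;
# co-located pairs (Tx = Rx) reproduce the real element positions themselves.
midpoints = {tuple(np.round((p + q) / 2, 6))
             for p, q in combinations_with_replacement(real, 2)}
virtual_only = midpoints - {tuple(np.round(p, 6)) for p in real}

print(len(midpoints), "unique phase centers,", len(virtual_only), "strictly virtual")
```

For this assumed full 3×3 grid the midpoints fill a 5×5 grid (25 phase centers, 16 of them new); a deliberately sparse layout, as in the dissertation, yields more distinct virtual elements per real element.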
In Chapter 4, imaging through a periodic metallic structure is investigated, where the
propagation mechanism will be different from free space or planar layered media cases.
Floquet modes and their frequency responses should be considered, depending on the
periodicity and shape of the unit cell. The analytical analysis of Floquet modes is
investigated based on the periodicity. The frequency response of a periodic structure with
plane wave illumination is evaluated numerically by HFSS, where the resonance
frequency is in accordance with the analysis of Floquet modes. To obtain images through
a periodic structure, three methods (matched filter, least squares, and Tikhonov
regularization) are discussed, incorporating the numerically computed round-trip electric field
intensity through a periodic structure.
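As a sketch of the third option, Tikhonov regularization replaces the ill-posed least-squares problem with the damped normal equations (AᵀA + λI)x = Aᵀb. The matrix A, noise level, and λ below are hypothetical stand-ins for the HFSS-derived round-trip field operator, chosen only to illustrate the three inversions side by side.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
# Toy ill-conditioned operator with a rapidly decaying singular-value spectrum.
A = rng.standard_normal((40, n)) @ np.diag(np.logspace(0, -6, n))
x_true = np.zeros(n); x_true[[5, 17]] = 1.0
b = A @ x_true + 1e-6 * rng.standard_normal(40)

x_mf = A.T @ b                                  # matched filter: A^T b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)    # plain least squares
lam = 1e-4                                      # Tikhonov: (A^T A + lam I) x = A^T b
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

for name, x in (("matched filter", x_mf), ("least squares", x_ls), ("Tikhonov", x_tik)):
    print(name, np.linalg.norm(x - x_true))
```

The damping term λ trades resolution for noise robustness; in the imaging context it suppresses the amplification of weakly transmitted Floquet-mode components.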
Chapter 5 describes a novel target size estimation procedure based on the near-field
focused microwave image, which is important to provide precise evaluation of old
constructions before structural failure. A hybrid method of library based error analysis
and spatial moments is proposed to determine the shape, location, size, and orientation of
3-D buried objects in a dense slab. To distinguish embedded spheres and cylinders, a
quantitative comparison between the x- and y-polarized images is performed, because
otherwise they look the same in the image. In a dense slab, different buried spheres tend
to have a similar size in microwave images, due to refraction and limited areas of
illumination. The method of spatial moments will not work for spheres. Instead, a library
containing spherical voids with different radii is built for comparing to the sphere image
under test, where the smallest error indicates the maximum likelihood of its size. For
cylinders embedded parallel to the sensor aperture plane, the size difference can be
observed in radar images, and the central spatial moments approach is applied to obtain
the size, shape, and orientation. This method quantifies the object’s information to help
inexperienced operators understand these radar images. Due to the lens effect at the
boundary of the dense slab, two other methods are investigated to show the actual size: 1.
the shadow projection of the buried object onto the rear interface, 2. a combined image
based on the parallel and normal dipole configurations, where the normal dipole can
provide wide angle illumination into the slab.
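The central-moments step for cylinders can be illustrated on a synthetic image: the second-order central moments of the intensity map define an equivalent ellipse whose axes and tilt estimate the target's size, shape, and orientation. The image below is a hypothetical test pattern (a tilted filled ellipse), not measured radar data.

```python
import numpy as np

def equivalent_ellipse(img, dx=1.0):
    """Semi-axes and orientation of the equivalent ellipse of an intensity map,
    from its second-order central (spatial) moments."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]] * dx
    m00 = img.sum()
    xc, yc = (img * x).sum() / m00, (img * y).sum() / m00
    mu20 = (img * (x - xc) ** 2).sum() / m00
    mu02 = (img * (y - yc) ** 2).sum() / m00
    mu11 = (img * (x - xc) * (y - yc)).sum() / m00
    # Eigenvalues of the covariance matrix -> equivalent-ellipse semi-axes.
    mean = (mu20 + mu02) / 2
    diff = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    semi_major = 2 * np.sqrt(mean + diff)
    semi_minor = 2 * np.sqrt(mean - diff)
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # major-axis orientation
    return semi_major, semi_minor, theta

# Synthetic stand-in for a focused image: filled ellipse, semi-axes 40 and 15 px,
# tilted 30 degrees (all values are illustrative assumptions).
h = w = 201
grid = np.mgrid[0:h, 0:w] - 100.0
yy, xx = grid
c, s = np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))
u, v = c * xx + s * yy, -s * xx + c * yy
img = ((u / 40) ** 2 + (v / 15) ** 2 <= 1).astype(float)

a, b, th = equivalent_ellipse(img)
print(a, b, np.rad2deg(th))
```

Unlike threshold-based binary-image methods, the moments are computed directly on the image intensity, so no grey-level threshold has to be chosen.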
Chapter 6 summarizes this dissertation, showing major contributions and potential
research in multistatic array design, the corresponding real-time imaging algorithm
development for layered media, and imaging through periodic structures.
Chapter 2: Uniform Diffraction Tomography for Stratified Media
Any imaging algorithm is some type of solution for the inverse problem of the
forward model, which will be different when changing sensor types and configurations.
Typically, there are two types of sensors: passive and active. The passive sensor can be
an electrocardiogram electrode to detect bioelectric signals resulting from the electrical
activity of the heart cell [58]. It can also be an antenna which receives the electric field
generated by a Wi-Fi device to help locate people behind a wall [59]. An active sensor
can be an X-ray generator, ultrasound transducer, or transmitting antenna, for example.
The sensor arrangement can be monostatic, bistatic, or multistatic, resulting in different
forward models. In this chapter, monostatic microwave imaging through layered media is
investigated, where one antenna behaves as the transceiver sensor, transmitting and
receiving EM waves.
2.1 Introduction
To obtain a quantitative or qualitative microwave image, there are mainly three
methods to obtain the forward model: 1. the point target assumption [60], 2. the Born
approximation based volumetric current model for low dielectric contrast objects [61],
and 3. the Kirchhoff approximation (physical optics, PO) based surface current model
for metallic or high contrast dielectric targets [62].
[Fig. 2.1 graphic: a Tx/Rx antenna facing a stratified medium with layer permittivities
ε_r,0, ε_r,1, …, ε_r,N, layer thicknesses t_0, t_1, …, t_N, and coordinate axes x, y, z.]
Fig. 2.1. An embedded object in an opaque stratified structure. The antenna moves along the y axis for
the 2D case and in the xy plane for the 3D case, receiving the backscattered signal.
The configuration of a monostatic microwave imaging system is depicted in Fig.
2.1. One horn antenna behaves as the transceiver, which is close to the nonmagnetic
stratified media with variations of the electrical properties in the z direction only. It is
noted that dielectric loss and conductivity in each layer are ignored in the formulation,
which will not affect a qualitative image but could affect a quantitative inverse solution.
The antenna resides in the semi-infinite free space outside the material and moves along
the y-axis, forming a synthetic aperture at distance z_0 parallel to the first interface of the
layered structure. Each layer has thickness t_n and real relative permittivity ε_r,n. An
e^{jωt} harmonic time convention is assumed and suppressed in the following frequency-
domain formulation, where ω is the radian frequency. For simplicity, the 2D forward
model is derived in this section.
The 2-D total monostatic scattered signal is then recorded coherently by the receiver
and digitized, where noise is ignored:

S(y′, ω) = S_r(y′, ω) + S_σ(y′, ω)    (2.1)

where y′ is the sensor location along the y axis and S_r includes the direct reflections from
the layered media in the absence of any irregularities as well as the antenna port
reflection, both of which are independent of y′ and can easily be removed by subtracting
their low-pass (mean) component over y′. S_σ is the electric field signal scattered from
irregularities, which can be modeled by the following three methods.
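A minimal sketch of this background removal on synthetic data (array sizes and signal models below are illustrative assumptions): since S_r is constant over y′, subtracting the mean of the scan over y′ at each frequency removes it, at the cost of also removing the target response's own y′-average.

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nf = 64, 16
y = np.linspace(-0.3, 0.3, ny)[:, None]          # scan positions y' in meters
# S_r: strong layer/port reflections, independent of y' (one value per frequency).
S_r = (rng.standard_normal((1, nf)) + 1j * rng.standard_normal((1, nf))) * 10.0
# S_sigma: weak, position-dependent toy target response.
S_t = 0.1 * np.exp(-2j * np.pi * np.abs(y) / 0.05)
S = S_r + S_t                                     # eq. (2.1), noise ignored

# Subtract the y'-independent (DC) part of the scan at each frequency.
S_clean = S - S.mean(axis=0, keepdims=True)
print(S_clean.shape)
```

The same operation is what a low-pass/high-pass split over y′ accomplishes: the low-pass output estimates S_r, and the remainder retains the y′-varying scattering term.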
2.1.1 The Point Target Model
In [60], it was assumed that any object consists of several point scatterers. For one
point target at (y, z), the received electric field signal S_σ is given by

S_σ(y′, ω) = G²(y, z, y′, ω) σ_n(y, z)    (2.2)

where G² is the round-trip Green's function and σ_n is the unknown reflectivity of the
point target in the nth layer. In a 2D imaging region Ω of interest, the received signal
from objects can be treated as a superposition of signals from all point targets in that
region. Then, the 2D monostatic forward model is obtained:

S_σ(y′, ω) = ∫_Ω G²(y, z, y′, ω) σ_n(y, z) dΩ    (2.3)

2.1.2 The Volumetric Current Model
In [61], the volumetric current model was developed for dielectric objects, where
the induced current is inside the target. Let us start from the 3D formulation and then
simplify to the 2D case. In a background medium with permeability and permittivity
(μ, ε), the electric field at a given single frequency and observation point r′ (antenna
location) satisfies the vector wave equation obtained from Maxwell's equations:

∇ × ∇ × E(r′) − ω²με E(r′) = −jωμ J(r′)    (2.4)

where μ is a constant in nonmagnetic media, ε is a function of position in the
inhomogeneous region V, and J(r′) is the current source (antenna) at the position
r′(x, y, 0). Adding ω²με_o E(r′) to both sides of (2.4), where ε_o is the permittivity of the
buried object, (2.4) becomes

∇ × ∇ × E(r′) − ω²μ(ε − ε_o) E(r′) = −jωμ J(r′) + ω²με_o E(r′)    (2.5)

In order to find the electric field generated by the object, (2.5) can be rewritten as

∇ × ∇ × E(r′) − ω²με_o E(r′) = −jωμ J(r′) + ω²μ(ε − ε_o) E(r′)    (2.6)

The right-hand side of (2.6) behaves as effective current sources. The total
electric field can be solved from the following integral equation:

E(r′) = −jωμ ∫_V J(r) ∙ Ḡ(r′, r) dr + ω²μ ∫_V (ε − ε_o) Ḡ(r′, r) ∙ E(r) dr    (2.7)

where r′ is the observation location, r is the source location, E(r) is the total electric
field inside the object, Ḡ(r′, r) is the dyadic Green's function, and the first term in (2.7) is
the incident field. We can rewrite (2.7) as

E(r′) = E_i(r′) + ω²μ ∫_V (ε − ε_o) Ḡ(r′, r) ∙ E(r) dr    (2.8)

where E_i(r′) is generated by the source current and the second term, generated by the
induced polarization currents, is the scattered electric field E_s from embedded objects.
Rewriting the second term of (2.8), we have

ŝ ∙ E_s(r′) = S_σ(r′) = k_0² ∫_V σ_o(r) ŝ ∙ Ḡ(r′, r) ∙ E(r) dr    (2.9)

where ŝ is the unit vector of the receiving antenna polarization direction, k_0 is the
background wavenumber with k_0 = ω√(με), σ_o is the reflectivity or object contrast
function with σ_o(r) = 1 − ε_o/ε, and E(r) is the total electric field inside the object.

In (2.8), note that when ε − ε_o is small, the second term is small compared
with E_i(r′), so the approximation E(r) ≅ E_i(r) can be made, which is known as
the first-order Born approximation. Therefore, (2.9) can be rewritten as

ŝ ∙ E_s(r′) = k_0² ∫_V σ_o(r) ŝ ∙ Ḡ(r′, r) ∙ E_i(r) dr    (2.10)

With the multi-frequency monostatic configuration, the received signal is a function of
the antenna position and frequency. Then, (2.10) becomes

ŝ ∙ E_s(r′, ω) = k_0² ∫_V σ_o(r) ŝ ∙ Ḡ(r′, r, ω) ∙ E_i(r, r′, ω) dr    (2.11)

where the incident field can be expressed as E_i(r, r′, ω) = −jk_0 Z_0 Ḡ(r, r′, ω) ∙ ĵ, with
ĵ = ŝ, and Z_0 = 120π is the wave impedance of free space. Applying the reciprocity
theorem Ḡ(r′, r, ω) = Ḡᵀ(r, r′, ω), the 3D forward model (2.11) becomes

S_σ(r′, ω) = −jk_0³ Z_0 ∫_V σ_o(r) [Ḡ(r, r′, ω) ∙ ŝ] ∙ [Ḡ(r, r′, ω) ∙ ŝ] dr    (2.12)

For the 2D case, we consider one component of the dyadic Green's function
(the scalar case), and an expression similar to (2.3) is obtained:

S_σ(y′, ω) = −jk_0³ Z_0 ∫_Ω G²(y, z, y′, ω) σ_o(y, z) dΩ    (2.13)

Note that the only difference between (2.3) and (2.13) is the amplitude term −jk_0³Z_0.
2.1.3 The Surface Current Model
A limitation of the volumetric current model is that the target cannot be metallic,
because the electric current does not exist inside a metal. In [62], the surface current
model based on PO was proposed for both metallic and dielectric objects, where the
induced currents are on the surface of the target. For a perfectly electric conducting
(PEC) object, there is no magnetic current and only the electric current is on the surface.
For a dielectric object, both electric and magnetic currents exist. The following 2D
derivation focuses on a buried dielectric object with relative permittivity ε_r. The
scattered electric field is generated by the equivalent electric and magnetic surface
currents on the surface of the object:

E_s = −jk_0 Z_0 ∫_C J_s G dC − ∇ × ∫_C M_s G dC    (2.14)

where C is the illuminated surface of the object, G is the scalar Green's function, the
surface electric current is J_s = n̂ × H, the surface magnetic current is M_s = −n̂ × E,
n̂ is the outward surface normal, and E and H are the total electric and magnetic fields
in the presence of the object, which can be approximated by the PO assumption:

E ≅ (1 + Γ) E_i    (2.15a)

n̂ × H ≅ [(1 − Γ)/Z_0] E_i    (2.15b)

where Γ is the reflection coefficient at the reflection point on the surface of the target.
Substituting (2.15a) and (2.15b) into (2.14), using ∇ × (G M_s) = ∇ × (G n̂ × E) with
−(∇G ∙ n̂) ≅ −jk_0 G and E_i = −jk_0 Z_0 ŝ G, we have

ŝ ∙ E_s = S_σ = −jk_0² Z_0 ∫_C G² Γ dC    (2.16)

where k̂_0 is the unit vector along the incident propagation direction, which is
anti-parallel to n̂ with n̂ ∙ k̂_0 = −1 at the reflection point. (2.16) can be rewritten in
scalar form with a delta-like reflectivity function σ_o(r) = Γ(r′) δ(r − r′), where (y, z) is
the pixel point in the imaging region and r′ is the illuminated surface location of the
object. The integral over the surface then becomes an integral over the imaging region,
and the received signal is

S_σ(y′, ω) = −jk_0² Z_0 ∫_Ω G²(y, z, y′, ω) σ_o(y, z) dΩ    (2.17)
Comparing the three forward models (2.3), (2.13), and (2.17), whether based on the
point target, volumetric current, or surface current model, a linear relation between the
known received signal and the unknown reflectivity function can always be obtained,
and the only difference is the amplitude term, which is slow-varying and has little
influence on the imaging result, because all three models share the same exponential
term in the round-trip Green's function and the fast-varying phase term is dominant in
generating images. In the following section, both the 2D and 3D UDT algorithms are
derived to solve the forward model by seeking a linear relation between the spectral
received signal and the reflectivity function.
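Before the spectral machinery of the next section, the inversion of any of these linear models can be sketched with plain matched-filter backprojection applied to the point-target model (2.3) in free space, with G² reduced to a round-trip term exp(−j2kR)/R. The scan geometry and target location below are hypothetical.

```python
import numpy as np

c = 3.0e8
freqs = np.linspace(2e9, 10e9, 41)
k = 2 * np.pi * freqs / c                   # free-space wavenumbers
ys = np.linspace(-0.2, 0.2, 41)             # monostatic scan positions along y, z = 0
yt, zt = 0.05, 0.15                         # hypothetical point target (y, z) in meters

# Forward model (2.3) for a single point target: S = exp(-j 2 k R) / R.
R = np.hypot(ys - yt, zt)[:, None]          # (ny, 1) antenna-to-target distances
S = np.exp(-2j * R * k[None, :]) / R        # (ny, nf) received signal

# Matched-filter backprojection: for every pixel, remove the conjectured round-trip
# phase and sum coherently over scan positions and frequencies.
yg = np.linspace(-0.1, 0.1, 81)
zg = np.linspace(0.05, 0.30, 101)
img = np.empty((yg.size, zg.size))
for i, yp in enumerate(yg):
    Rp = np.hypot(ys[None, :] - yp, zg[:, None])            # (nz, ny)
    phase = np.exp(2j * Rp[:, :, None] * k[None, None, :])  # conjugate phase
    img[i] = np.abs((S[None, :, :] * phase).sum(axis=(1, 2)))

peak = np.unravel_index(np.argmax(img), img.shape)
print("image peak at y =", yg[peak[0]], "z =", zg[peak[1]])
```

The image peaks at the true target location; UDT reaches the same focusing result far faster by doing the correlation in the spectral domain and, crucially, through layered media rather than free space.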
2.2 2D UDT Formulation
In this section, the 2D UDT algorithm is formulated, where the round-trip Green's
function in (2.3), (2.13), and (2.17) can be replaced by the round-trip electric field
intensity E_n² of the incident EM wave with the spectral gain pattern taken into account.
Then, the forward model of (2.3) can be rewritten as

S_σ(y′, ω) = ∫_Ω E_n²(y, z, y′, ω) σ_n(y, z) dΩ    (2.18)
E_n requires finding the propagation from the transmitting antenna to any point in the
nth layer. Since the antenna may be very close to the first interface, a far-field
approximation or ray-optical solution is technically not valid and will lead to inaccurate
results. To overcome this, we use the spectral form of the radiated field weighted by the
directivity pattern of the antenna, which is known to be valid in the near-field of the
antenna. Each plane wave can be propagated through the layers according to Fresnel
transmission theory, easily incorporating refraction effects. Here we neglect the
reflections inside layers because we are interested in the direct first-order incident field.
The reflections will give rise to late-time shadows of the target of interest and are easily
identified in the image. However, in general the reflections will be much weaker than the
incident field if the dielectric contrast between layers is not great.
In the nth layer, the weighted plane wave spectral representation of the incident field
is given by [61]

E_n(y, z, y′, ω) = ∫_{−∞}^{∞} P(k_y, ω) Τ_n(k_y)
    × exp{−j[k_y(y − y′) + k_zn(z − Σ_{l=0}^{n−1} t_l) + Σ_{l=0}^{n−1} k_zl t_l]} dk_y    (2.19)
where P is the far-zone electric field radiation pattern of the antenna and the dispersion
relation in layer α is

k_zα = +√(k_α² − k_y²)    (2.20)

where k_α = √(ε_r,α) ω/c is the wavenumber in the layer with corresponding relative
permittivity ε_r,α, and α denotes n or l. The free-space wavenumber is k_0 = ω/c. The
positive root should be chosen in (2.20) to ensure that the plane waves propagate away
from the antenna in the positive z direction. Τ_n is given by

Τ_n(k_y) = Π_{l=0}^{n−1} T_l(k_y)    (2.21a)

T_l(k_y) = 2ε_r,l+1 k_zl / (ε_r,l+1 k_zl + ε_r,l k_z,l+1)    (2.21b)
where T_l is the transverse magnetic (TM) Fresnel transmission coefficient for the
interface between regions l and l+1. It should be mentioned that T_l has singularities on
the real k_y axis for lossless media. For this case we add a small amount of loss to move
the singularities off the axis and avoid them in the integration. Note that in (2.19) the
frequency dependence of the radiation pattern is included in P along with some constants.
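A minimal numerical sketch of (2.20)-(2.21): the cumulative transmission into layer n is the running product of the TM Fresnel coefficients of the interfaces above it, with a small assumed loss factor keeping the singularities off the real k_y axis as described above. The layer stack and loss value below are illustrative.

```python
import numpy as np

def cascade_tm_transmission(ky, f, eps_r, loss=1e-3):
    """Cumulative transmission T_n(ky) = prod_{l=0}^{n-1} T_l(ky) of (2.21a) into each
    layer, with the TM Fresnel coefficient (2.21b). A small assumed loss moves the
    lossless-media singularities off the real ky axis. eps_r[0] is the antenna region."""
    c = 3.0e8
    w = 2 * np.pi * f
    eps = np.asarray(eps_r, dtype=complex) * (1 - 1j * loss)
    kz = np.sqrt(eps * (w / c) ** 2 - ky ** 2)      # k_z,alpha of (2.20), complex sqrt
    T = np.ones(len(eps), dtype=complex)            # T[0] = 1 in the antenna region
    for l in range(len(eps) - 1):
        Tl = 2 * eps[l + 1] * kz[l] / (eps[l + 1] * kz[l] + eps[l] * kz[l + 1])
        T[l + 1] = T[l] * Tl
    return T, kz

T, kz = cascade_tm_transmission(ky=50.0, f=5e9, eps_r=[1.0, 2.0, 10.0, 4.0])
print(np.abs(T))
```

With the loss included, the principal branch of the complex square root automatically selects waves that propagate away from the antenna and decay in +z.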
Substituting (2.19) into (2.18), using E_n² = E_n ∙ E_n′ with primed variables for the
second integral, we obtain the received signal

S_σ(y′, ω) = ∫_Ω σ_n(y, z) ∬_{−∞}^{∞} P(k_y, ω) P(k_y′, ω) Τ_n(k_y) Τ_n(k_y′)
    × exp{−j[(k_y + k_y′)(y − y′) + (k_zn + k_zn′)(z − Σ_{l=0}^{n−1} t_l)
    + Σ_{l=0}^{n−1} (k_zl + k_zl′) t_l]} dk_y dk_y′ dΩ    (2.22)

Eq. (2.22) is the forward model used here for the signal scattered from any
irregularities in a stratified medium with homogeneous layers. The diffraction
tomography algorithm for extracting the reflectivity function σ_n(y, z) from the measured
signal is described next.
2.2.1 2D Diffraction Tomography
The first step in diffraction tomography (DT) is to change variables with k_y″ = k_y + k_y′
in (2.22), yielding

S_σ(y′, ω) = ∫_Ω σ_n(y, z) ∫_{−∞}^{∞} e^{−jk_y″(y − y′)} ∫_{−∞}^{∞} P(k_y, ω) P(k_y″ − k_y, ω)
    × Τ_n(k_y) Τ_n(k_y″ − k_y) exp{−j[(k_zn + k_zn′)(z − Σ_{l=0}^{n−1} t_l)
    + Σ_{l=0}^{n−1} (k_zl + k_zl′) t_l]} dk_y dk_y″ dΩ    (2.23)

where

k_zα′ = +√(k_α² − (k_y″ − k_y)²)    (2.24)

Interchanging the spatial integral with the spectral integral over k_y″ results in a
Fourier transform (FT) relationship defined by

S_σ(y′, ω) = (1/2π) ∫_{−∞}^{∞} S̃_σ(k_y″, ω) e^{jk_y″y′} dk_y″    (2.25a)

S̃_σ(k_y″, ω) = ∫ S_σ(y′, ω) e^{−jk_y″y′} dy′    (2.25b)

where

S̃_σ(k_y″, ω) = 2π ∫_Ω σ_n(y, z) e^{−jk_y″y} ∫_{−∞}^{∞} P(k_y, ω) P(k_y″ − k_y, ω)
    × Τ_n(k_y) Τ_n(k_y″ − k_y) exp{−j[(k_zn + k_zn′)(z − Σ_{l=0}^{n−1} t_l)
    + Σ_{l=0}^{n−1} (k_zl + k_zl′) t_l]} dk_y dΩ    (2.26)

2.2.2 2D Stationary Phase Solution
Before the inversion process we must simplify the inner integral over k_y in (2.26):

I_n(k_y″, z, ω) = ∫_{−∞}^{∞} P(k_y, ω) P(k_y″ − k_y, ω) Τ_n(k_y) Τ_n(k_y″ − k_y)
    × exp{−j[(k_zn + k_zn′)(z − Σ_{l=0}^{n−1} t_l) + Σ_{l=0}^{n−1} (k_zl + k_zl′) t_l]} dk_y    (2.27)
Similar to the approach used in [22], the stationary phase method is applied although
with some modifications. These modifications are unique to the UDT algorithm because
previous papers [20]-[24] have not properly addressed the underlying requirements of the
stationary phase approximation, where k_0z is the large parameter and all exponential
terms in (2.27) are fast-varying outside the vicinity of the stationary phase point.
The general form of the stationary phase solution for a single integral can be found
in [63]:

I(κ) = ∫_{−∞}^{+∞} F(u) e^{−jκφ(u)} du
    ≈ √(2π/(κ|φ″(u_0)|)) F(u_0) e^{−jκφ(u_0) − jπ·sgn[φ″(u_0)]/4}    (2.28)

sgn[φ″(u_0)] = { 1, φ″(u_0) > 0;  0, φ″(u_0) = 0;  −1, φ″(u_0) < 0 }    (2.29)

where u is a real variable, κ is real, positive, dimensionless, and large, φ(u) is a real,
continuous function with continuous derivatives within the integral limits, and the
stationary phase point u_0 is found by solving φ′(u_0) = 0. The large parameter κ causes
the exponential factor to be rapidly varying everywhere except in the vicinity of the
stationary phase point, hence the integral is well-approximated by considering only this
region.
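The approximation (2.28) can be checked numerically on a simple test case where every ingredient is known in closed form (F, φ, and κ below are chosen purely for illustration, not taken from the imaging problem): with F(u) = exp(−u²) and φ(u) = u²/2, the stationary point is u₀ = 0 with φ″(u₀) = 1.

```python
import numpy as np

kappa = 200.0
u = np.linspace(-6.0, 6.0, 20001)
du = u[1] - u[0]
# Direct numerical evaluation of I(kappa) = int F(u) exp(-j kappa phi(u)) du.
numeric = np.sum(np.exp(-u**2) * np.exp(-1j * kappa * u**2 / 2)) * du

# Stationary phase formula (2.28): u0 = 0, F(u0) = 1, phi(u0) = 0, sgn[phi''] = +1.
spa = np.sqrt(2 * np.pi / kappa) * np.exp(-1j * np.pi / 4)

rel_err = abs(numeric - spa) / abs(spa)
print(rel_err)
```

The relative error shrinks as κ grows, which is exactly the regime UDT relies on by taking κ = k_0z rather than the depth z alone.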
Previous papers [20]-[24] have applied the stationary phase method directly to the
inner integral in (2.26), making z the large argument by assuming that the depth z is in
the far field. This assumption is inappropriate for the following reasons. First, z is not in
the far field, because this is a near-field imaging problem. Second, z is not the large,
dimensionless argument in the phase function; the large argument is actually k_0z, as in
all high-frequency asymptotic solutions. If z is used as the large argument, then the
phase term in (2.27) due to propagation through the n−1 layers is not included in the
phase function of the stationary phase solution.
Comparing (2.28) and (2.27), one observes that u = k_y, κ = k_0z, and

F(k_y) = P(k_y, ω) P(k_y″ − k_y, ω) Τ_n(k_y) Τ_n(k_y″ − k_y)    (2.30)

φ(k_y) = (1/k_0)(k_zn + k_zn′)(1 − Σ_{l=0}^{n−1} t_l/z)
    + (1/(k_0 z)) Σ_{l=0}^{n−1} (k_zl + k_zl′) t_l    (2.31)

It is noted that the 1/(k_0z) factor in (2.31) normalizes all of the k_zα and depth variables.
Furthermore, if we had incorrectly assumed that z was the large argument, then all of the
phase terms involving t_l would have been neglected. Clearly, these terms contribute to
the rapidly varying exponential as much as z does.
Setting the first derivative with respect to k_y to zero yields

φ′(k_y) = (1/k_0)[−k_y/k_zn + (k_y″ − k_y)/k_zn′](1 − Σ_{l=0}^{n−1} t_l/z)
    + (1/(k_0 z)) Σ_{l=0}^{n−1} [−k_y/k_zl + (k_y″ − k_y)/k_zl′] t_l = 0    (2.32)

from which it may be shown that the stationary phase point is at k_y = k_y″/2. It also
follows that k_zn′ = k_zn and

k_zn = +√(k_n² − (k_y″/2)²)    (2.33)
Next, the second derivative of (2.31) at the stationary point is evaluated as (see
Appendix A)

φ″(k_y)|_{k_y = k_y″/2} = −(2k_n²/(k_0 k_zn³))(1 − Σ_{l=0}^{n−1} t_l/z)
    − Σ_{l=0}^{n−1} 2k_l² t_l/(k_0 z k_zl³)    (2.34)

and sgn[φ″(k_y = k_y″/2)] = −1. The stationary phase approximation of (2.27) is then
given by

I_n(k_y″, z, ω) = P²(k_y″/2, ω) Τ_n²(k_y″/2) √(2π/(k_0 z |φ″(k_y″/2)|))
    × exp{−j[k_zn″(z − Σ_{l=0}^{n−1} t_l) + Σ_{l=0}^{n−1} k_zl″ t_l] + jπ/4}    (2.35)

where we have defined the dispersion relation

k_zα″ = √(4k_α² − (k_y″)²)    (2.36)
Table 2.1 Differences between 2D DT [22] and UDT

Slow-varying term:
  DT:  P(k_y)P(k_y″ − k_y)Τ_n(k_y)Τ_n(k_y″ − k_y)
       × exp{−j[Σ_{l=0}^{n−1}(k_zl + k_zl′)t_l − (k_zn + k_zn′)Σ_{l=0}^{n−1}t_l]}
  UDT: P(k_y)P(k_y″ − k_y)Τ_n(k_y)Τ_n(k_y″ − k_y)

Fast-varying term:
  DT:  exp{−j(k_zn + k_zn′)z}
  UDT: exp{−j[(k_zn + k_zn′)(z − Σ_{l=0}^{n−1}t_l) + Σ_{l=0}^{n−1}(k_zl + k_zl′)t_l]}

Large parameter (κ):
  DT:  depth z
  UDT: k_0z

φ″(k_y″/2):
  DT:  −2k_n²/k_zn³
  UDT: −(2k_n²/(k_0 k_zn³))(1 − Σ_{l=0}^{n−1} t_l/z) − Σ_{l=0}^{n−1} 2k_l² t_l/(k_0 z k_zl³)
As presented in Table 2.1 (for 2D), there are two common mistakes made in
applying the stationary phase method in DT for the layered case [22]: 1) failure to
distinguish the slowly varying amplitude and fast-changing phase terms, and 2)
misunderstanding the high-frequency asymptotic approximation, which is not based on a
far-field assumption. It is of importance to realize that all the exponential terms are fast-
varying under the high-frequency condition, due to k_0z in the phase term rather than the
depth z.
A comparison among DT, UDT and the exact (numerical) evaluation of (2.27) is
depicted in Fig. 2.2, based on a stratified structure with three dielectric layers with
corresponding relative permittivities 2, 10, and 4. The amplitude and phase are plotted
for k_y″ = 0 at 2 and 10 GHz as a function of depth. As indicated in Figs. 2.2 (a) and (c), the
Fig. 2.2. Comparison among the exact, DT [20], and UDT solutions of the inner integral (2.27) for a 2D
stratified structure with a 2.54 cm distance from the antenna aperture to the 1st interface and three
dielectric layers (thicknesses 5.08 cm, 7.62 cm, and 7.62 cm with corresponding ε_r = 2, 10, 4): (a)
amplitude (dB) at 2 GHz, (b) phase (degrees) at 2 GHz, (c) amplitude (dB) at 10 GHz, (d) phase
(degrees) at 10 GHz.
magnitude of the DT solution is discontinuous at the interfaces whereas UDT agrees well
with the exact solution and is smoothly continuous, resulting in the name of uniform DT.
Fig. 2.3. Comparison among exact, DT [20] and UDT solutions of the inner integral (2.27) based on a 2D stratified structure with the distance from the antenna aperture to the 1st interface 2.54cm and three dielectric layers (thicknesses are 5.08cm, 7.62cm and 7.62cm with the corresponding $\varepsilon_r$ = 2, 4, 10), (a) amplitude (dB) at 2GHz, (b) phase (degree) at 2GHz, (c) amplitude (dB) at 10GHz, (d) phase (degree) at 10GHz.
With a high contrast in the relative permittivity (2 and 10) between the first two layers, the discontinuity at the 2nd interface leads to a 3dB jump in the DT results in Figs. 2.2 (a) and (c). In practice this is not critical and does not greatly affect 2D images unless targets are located close to an interface, where the image becomes discontinuous.
Figs. 2.2 (b) and (d) show that the phases of DT and UDT track the exact phase,
although there is a constant π/4 difference for DT. To summarize these results, DT is
expected to give reasonable images because the phase variation is correct, but
discontinuities in amplitude will be present near the interfaces. UDT overcomes these
discontinuities with the correct amplitude function in the stationary phase solution.
Fig. 2.3 shows similar results as Fig. 2.2, based on a different layered structure with
the relative permittivity 2, 4 and 10. A 5dB jump can be observed at the last interface in
Figs. 2.3 (a) and (c), as a result of the high dielectric contrast (10 and 1). Figs. 2.2 and 2.3
verify that our stationary phase solution is very accurate even at the lowest frequency of 2
GHz.
2.2.3 2D UDT Inverse Scattering
From (2.35), the spectral domain signal of (2.26) becomes,

$$\tilde s(k_y'',\omega) = 2\int_\Omega \rho(y,z)\,e^{-jk_y''y}\,a_n^2\!\left(\tfrac{k_y''}{2}\right)T_n^2\!\left(\tfrac{k_y''}{2}\right)\sqrt{\frac{2\pi}{k_0z\left|\varphi''(k_y''/2)\right|}}\; e^{-j\left[k_{z,n}''\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}k_{z,l}''d_l\right]+j\pi/4}\,d\Omega \qquad (2.37)$$

To simplify notation define,

$$D_n(k_y'',z,\omega) = \frac{k_0z}{2}\left|\varphi''\!\left(\tfrac{k_y''}{2}\right)\right| = \frac{k_n^2}{k_{z,n}^3}\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}\frac{k_l^2}{k_{z,l}^3}\,d_l \qquad (2.38)$$

After some rearrangement and replacing $d\Omega$ with $dy\,dz$, (2.37) becomes,

$$\tilde s(k_y'',\omega) = 2\sqrt{\pi}\,a_n^2\!\left(\tfrac{k_y''}{2}\right)T_n^2\!\left(\tfrac{k_y''}{2}\right)e^{\,j\pi/4}\,e^{\,j\left(k_{z,n}''\sum_{l=0}^{n-1}d_l-\sum_{l=0}^{n-1}k_{z,l}''d_l\right)} \iint \frac{\rho(y,z)}{\sqrt{D_n(k_y'',z,\omega)}}\,e^{-j\left(k_y''y+k_{z,n}''z\right)}\,dy\,dz \qquad (2.39)$$

We define the double integral in (2.39) as the modified spectral reflectivity function,

$$\hat\rho(k_y'',k_{z,n}'',\omega) = \iint \frac{\rho(y,z)}{\sqrt{D_n(k_y'',z,\omega)}}\,e^{-j\left(k_y''y+k_{z,n}''z\right)}\,dy\,dz \qquad (2.40)$$

which from (2.39) gives us a linear relation between the signal and the reflectivity density in the spectral domain,

$$\hat\rho(k_y'',k_{z,n}'',\omega) = \frac{\tilde s(k_y'',\omega)\;e^{-j\left(k_{z,n}''\sum_{l=0}^{n-1}d_l-\sum_{l=0}^{n-1}k_{z,l}''d_l\right)-j\pi/4}}{2\sqrt{\pi}\,a_n^2\!\left(\tfrac{k_y''}{2}\right)T_n^2\!\left(\tfrac{k_y''}{2}\right)} \qquad (2.41)$$

where the signal spectrum $\tilde s$ is computed from (2.25b). To avoid the singularity caused by the transmission coefficients in (2.41), a uniform, frequency-independent loss of $-j0.1$ rad/m is added to the wavenumber in all the layers.

To obtain the spatial reflectivity function, the inverse Fourier transform cannot be applied directly to (2.40) because $D_n$ is a function of $k_y''$ and $z$. Instead, a matched filter is implemented to solve this inverse problem following the procedure in Appendix B, with the dispersion relation $k_{z,n}'' = \sqrt{4k_n^2-(k_y'')^2}$ and changing variables from $k_{z,n}''$ to $\omega$. The final result is,

$$\rho_n(y,z) = \frac{\displaystyle\int_\omega \frac{k_n^2}{k_{z,n}''}\,P_n(y,z,\omega)\,d\omega}{\displaystyle\int_\omega\int_{-\infty}^{\infty}\frac{1}{\sqrt{D_n(k_y'',z,\omega)}}\,dk_y''\,d\omega} \qquad (2.42)$$

where the integral over $\omega$ is over all the measured frequencies and,

$$P_n(y,z,\omega) = \int_{-\infty}^{\infty}\hat\rho(k_y'',k_{z,n}'',\omega)\,e^{\,j\left(k_y''y+k_{z,n}''z\right)}\,dk_y'' \qquad (2.43)$$
The imaging function (2.42) can be implemented in real time with a 1D inverse FFT
(IFFT) slice-by-slice computation of (2.43). The algorithm goes as follows.
1. Precompute $\tilde s(k_y'',\omega)$ using (2.25b).
2. For each layer n:
   a. Compute $\hat\rho(k_y'',k_{z,n}'',\omega)$ from (2.41).
   b. For each z-slice:
      i. Compute the denominator of (2.42).
      ii. Compute $P_n(y,z,\omega)$ from (2.43) with the IFFT for each $\omega$.
      iii. Integrate over $\omega$ to obtain $\rho_n(y,z)$ from (2.42).
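For illustration, the loop above can be sketched in Python/NumPy. This is a minimal sketch under simplifying assumptions, not the full algorithm: a single homogeneous region is assumed (so the layer sums over $d_l$ vanish), and the amplitude weights $a_n$, $T_n$ and the $D_n$ normalization are set to unity, leaving only the phase-matched filtering of steps (b)i-iii. All function and variable names here are illustrative.

```python
import numpy as np

def udt_slice_image(s, dy, freqs, eps_r, z_axis, c=2.998e8):
    """Qualitative 2D image via the slice-by-slice spectral matched filter.

    s       : complex (Ny, Nf) array, received signal vs. scan position y'
              and frequency (monostatic).
    Returns : (Nz, Ny) magnitude image.
    """
    Ny = s.shape[0]
    s_tilde = np.fft.fft(s, axis=0)             # spatial spectrum over y'
    ky = 2 * np.pi * np.fft.fftfreq(Ny, d=dy)   # k''_y axis
    img = np.zeros((len(z_axis), Ny), dtype=complex)
    for iw, f in enumerate(freqs):
        k = 2 * np.pi * f * np.sqrt(eps_r) / c  # wavenumber in the medium
        kz2 = 4 * k**2 - ky**2                  # dispersion relation (2.36)
        ok = kz2 > 0                            # keep propagating spectrum only
        kzn = np.sqrt(np.where(ok, kz2, 0.0))
    # shift each z-slice into focus and integrate coherently over frequency
        for iz, z in enumerate(z_axis):
            spec = np.where(ok, s_tilde[:, iw] * np.exp(1j * kzn * z), 0.0)
            img[iz] += np.fft.ifft(spec)        # P_n slice, as in (2.43)
    return np.abs(img)
```

For a simulated monostatic point-target response $e^{-j2kR(y')}$ in a homogeneous medium, the image produced by this sketch focuses near the true target location.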
It is noted that due to the product of transmission coefficients in the denominator of (2.41), the amplitude of the imaging kernel grows in deeper regions. In theory this amplifies the weaker signals from deeper targets; in practice it amplifies clutter and noise as well, because the signal-to-clutter/noise ratio is fixed. Furthermore, the gain pattern in the denominator can become small at wide angles, causing an angle-dependent amplification effect. The gain pattern terms are therefore ignored in obtaining the following results, which does not affect the uniformity of the images. For the transmission coefficient, only the TM mode is considered. Since this is a qualitative imaging algorithm, the scales in the following reconstructed images are arbitrary and obtained directly from (2.42) without normalization.
2.2.4 2D Numerical Imaging Results
In this section 2D numerical data is obtained from the finite-difference time-domain
(FDTD) software GprMax [64] and employed to verify the efficacy of the proposed UDT
algorithm. The excitation is an x-directed line source. The FDTD time domain received
Fig. 2.4. Targets in a 2D opaque layered environment. (a) one dielectric target, (b) two dielectric targets, (c) one dielectric target, (d) two dielectric targets. (a) and (b) have the same layers' relative permittivities $\varepsilon_r$ = 2, 10, 4. (c) and (d) have the same layers' relative permittivities $\varepsilon_r$ = 2, 4, 10. (a) and (c) have the same target at the center of the 2nd interface (0, 7.62) with $\varepsilon_t$ = 7. (b) and (d) have the same two targets in the 1st and 2nd layers at (0, 7.4) and (2.54, 8.9) with $\varepsilon_t$ = 4, 7 respectively. All lengths are in cm.
signals are transformed into the frequency domain before applying the proposed imaging
function.
Fig. 2.4 shows the four different configurations considered in the 2D simulations.
Generally, it is a 5-layer structure, where both the first and last layers are semi-infinite
free space and the dielectric layers have thicknesses of 5.08cm, 7.62cm and 7.62cm along
z axis respectively. The line source moves along the y axis from -12.7cm to 12.7cm with
step size 1.27cm, behaving as a transceiver parallel to the 1st interface of the layered
media with standoff distance of 2.54cm. The frequency band of the line source is from
2GHz to 12GHz with step size 10MHz.
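For reference, the standard stepped-frequency relations (textbook rules of thumb, not derived in this section) connect these sweep parameters to the down-range resolution $c/(2B\sqrt{\varepsilon_r})$ and the unambiguous range $c/(2\Delta f\sqrt{\varepsilon_r})$ inside a dielectric. A quick check for the layer permittivities used here:

```python
# Back-of-envelope sweep-design check using the standard stepped-frequency
# relations (a rule of thumb, not a result from this section).
c = 2.998e8   # speed of light (m/s)
B = 10e9      # 2-12 GHz bandwidth
df = 10e6     # 10 MHz frequency step

for eps_r in (2.0, 10.0, 4.0):
    res_cm = 100 * c / (2 * B * eps_r**0.5)   # down-range resolution (cm)
    unamb_m = c / (2 * df * eps_r**0.5)       # unambiguous range (m)
    print(f"eps_r={eps_r:4.1f}: resolution {res_cm:.2f} cm, "
          f"unambiguous range {unamb_m:.1f} m")
```

The unambiguous range is several meters, far beyond the structure depth, and the sub-centimeter resolution is consistent with resolving targets near the interfaces.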
Figs. 2.4 (a) and (b) show the three dielectric layers with relative permittivities $\varepsilon_r$ = 2, 10, 4, which corresponds to the same configuration as used in Fig. 2.2. Figs. 2.4 (c) and (d) present three dielectric layers with $\varepsilon_r$ = 2, 4, 10, which corresponds to the configuration in Fig. 2.3. Figs. 2.4 (a) and (c) have a dielectric cylindrical target with $\varepsilon_t$ = 7 located at the center of the 2nd interface (0cm, 7.62cm). In Figs. 2.4 (b) and (d), two dielectric cylinders are embedded in the 1st and 2nd layers close to the 2nd interface with $\varepsilon_t$ = 4, 7 and locations of (0cm, 7.4cm) and (2.54cm, 8.9cm) respectively. The 2D cylinders are infinite along the x axis with a radius of 0.1cm.
The imaging results based on DT are presented in Fig. 2.5, and Fig. 2.6 presents the UDT results. A moving-average window subtraction (2.71), reduced to the 1D case, is applied to each image to mitigate the strong interface reflections. Windowing functions are also implemented across the synthetic aperture and frequency band of the processed signal to suppress sidelobe levels. Black lines indicate the locations of the interfaces.
As predicted in the previous section, the DT method leads to discontinuities at each interface, especially when a high dielectric contrast is present between adjacent layers. Note that the discontinuities along the interfaces in Figs. 2.5 (a)-(d) are not visible because the strong wall reflections have been removed. However, compared with Figs. 2.6 (a) and (b),
Fig. 2.5. The corresponding DT imaging results, (a) a dielectric target, (b) two dielectric targets, (c) a
dielectric target, (d) two dielectric targets. Black lines show locations of interfaces. Zoomed targets are
shown in the right top corner. The color scale is in dB with dynamic range 20dB.
the target discontinuity (3dB difference) on the two sides of the 2nd interface in Figs. 2.5
(a) and (b) is apparent when relative permittivities are 2 and 10 in the 1st and 2nd layers.
There is no discontinuity of the latter target inside the 2nd dielectric layer in Fig. 2.5 (b), which is further from the 2nd interface than the front cylinder. Because UDT possesses a smooth and continuous amplitude term with respect to depth, the same as the exact solution, targets remain smooth even close to or at interfaces.
Fig. 2.6. The corresponding UDT imaging results, (a) a dielectric target, (b) two dielectric targets, (c) a
dielectric target, (d) two dielectric targets. Black lines show locations of interfaces. Zoomed targets are
shown in the right top corner. The color scale is in dB with dynamic range 20dB.
In Figs. 2.5 (b) and (d), it is difficult to distinguish the target discontinuity at the 2nd
interface with the DT approach, resulting from the low contrast in relative permittivities
(2 and 4) of the two adjacent layers. This explains why DT produces seemingly correct
imaging results in most GPR and TWI applications. UDT always provides smooth and
continuous images regardless of the contrast between layers.
Fig. 2.7. Objects in a 2D opaque layered environment, comparable to the experimental arrangement. (a) Three dielectric layers' relative permittivities are $\varepsilon_r$ = 1.39, 7.07, 1.39 with thicknesses of 1.9, 4.45 and 1.9. Two x-oriented 2D PEC cylinders with a radius of 0.1 and an air gap of 0.1 width are buried. All lengths are in cm. (b) DT imaging result, (c) UDT imaging result. The red and yellow circles are the locations of the PEC cylinders and air gap, respectively. The color scale is in dB with dynamic range 20dB.
Fig. 2.7 (a) presents the three dielectric layers with thicknesses of 1.9, 4.45 and 1.9 cm and relative permittivities $\varepsilon_r$ = 1.39, 7.07, 1.39, corresponding to the following experimental configuration. Two x-oriented 2D PEC cylinders at (-10.16cm, 4.44cm) and (10.16cm, 8.89cm) and one air gap in the 2nd dielectric layer at the y location 0cm are embedded in this layered structure. After removing the strong interface
reflection with the moving average subtraction window, the targets are shown in Fig. 2.7 based on both the DT and UDT algorithms. UDT provides a smooth and continuous image of the targets at the 3rd interface. However, discontinuities can be observed in the DT result due to the amplitude jump at the interface, as seen in Fig. 2.2.
2.2.5 Experimental Imaging Results
Fig. 2.8. Experimental scenario, (a) a three-layer structure consisting of the plywood, concrete block
and plywood, (b) three targets: two metal disks, attached to the 2nd and 3rd interfaces respectively and a
vertical air gap with 1mm width. Antenna 1 behaves as the transceiver.
As shown in Fig. 2.8 (a), two horn antennas are connected to a vector network
analyzer (VNA), scanning over the frequency band from 2GHz to 12GHz and moving
along the y axis from -30.48cm to 30.48cm with a step size of 1.27cm, parallel to the 1st
interface of the layered medium at a distance of 2.54cm. There are three dielectric layers in the stratified medium, consisting of plywood (1.85cm thick with $\varepsilon_{r,1}$ = 1.39), concrete block (4.46cm thick with $\varepsilon_{r,2}$ = 7.07) and plywood (1.85cm thick with $\varepsilon_{r,3}$ = 1.39). The estimation of each layer's relative permittivity is found from the downrange profile by computing the expected phase delays in each layer for the given thicknesses.
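This estimation can be illustrated with a short script: the two-way time added by a layer of thickness d and relative permittivity $\varepsilon_r$ is $2d\sqrt{\varepsilon_r}/c$, so candidate permittivities can be checked against the interface spacings seen in the downrange profile. The layer values below are the experimental ones quoted above; the script itself is our illustration.

```python
# Two-way propagation delay added by each layer, tau = 2*d*sqrt(eps_r)/c,
# for the plywood / concrete block / plywood stack described above.
c = 2.998e8                                                # m/s
layers = [(0.0185, 1.39), (0.0446, 7.07), (0.0185, 1.39)]  # (thickness m, eps_r)

for d, eps_r in layers:
    tau = 2 * d * eps_r**0.5 / c
    print(f"d={100*d:.2f} cm, eps_r={eps_r:.2f}: two-way delay {tau*1e9:.3f} ns")
```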
Fig. 2.8 (b) shows the locations of two metal disks with a radius of 1.1cm placed between layers, and a 1mm vertical air gap between the edges of the concrete blocks. One of the metal disks is attached to the 2nd interface and the other to the 3rd interface. The following imaging results are based on the reflected signal (S11) received by antenna 1. Two 1D scans through the slice are collected: one is a free-space measurement, and the other contains the layered structure with the metal disks present. The corresponding x slice of the 3D image can be obtained after subtracting these two signals to eliminate the undesired reflections from the connectors and the horn antenna. The time cost is only 0.8s to generate each 241 × 51 pixel image.
Imaging results based on DT are presented in Fig. 2.9. Discontinuities are observed
at the interfaces and targets. As shown in Fig. 2.9 (a), without the removal of the strong
interface reflection, the targets can hardly be visualized and the last interface cannot be
observed with the DT approach. To have a better inspection of the targets, the moving
average window (2.71) is implemented to mitigate each interface reflection in Fig. 2.9 (b).
Because of the high dielectric contrast at the 2nd and 3rd interfaces, all three targets are
discontinuous across each interface, which supports the numerical results presented in Fig.
Fig. 2.9. DT images from the experiment, (a) without the interface reflection mitigation, (b) with the moving average window. The red and yellow circles are the locations of the metal disks and air gap, respectively. The color scale is in dB with dynamic range 20dB.
Fig. 2.10. UDT images from the experiment, (a) without the interface reflection mitigation, (b) with the
moving average window. The red and yellow circles are the locations of the metal disks and air gap,
respectively. The color scale is in dB with dynamic range 20dB.
2.7 (b). There is approximately a 6dB difference for the metal disk between the left and right sides of the 3rd interface, shown in the top right red circle in Fig. 2.9 (b).
However, UDT leads to smooth interface reflections and the last interface can be
detected in Fig. 2.10 (a). Comparing Figs. 2.10 (a) with (b), the edges of the 1mm air gap
can be visualized with the wall reflection removal. The images of the metal disks at the
2nd and 3rd interfaces are continuous, as expected.
2.3 3D UDT Formulation
The 2D forward model of section 2.1 is extended to the 3D case in this section.
Instead of scanning along the y axis, the sensor antenna can move in the xy plane to form
a rectangular synthetic aperture. From the 3D vector forward model (2.11), the scalar
received signal can be written as,
�  (′ , , )] ()
 (′ , ) = 02 ∫  (, ′ , ) ∙ [̂ ∙ 
(2.44)
where  =  . To acquire a simple forward model without the dyadic Green’s
function, we substitute 
j
0 0

�  (′, , ω) into (2.44), obtaining,
∙  ( ′ , , ω) = ̂ ∙ 
 (′ , ) =  0 ∫  (, ′ , ) ∙  ( ′ , , ω) ()
0
(2.45)
To evaluate the forward model of (2.45), $\mathbf E_n$ needs to be found for any point in layer n. Since the antenna may be extremely close to the first interface, a ray-optical solution or far-field approximation is invalid. To overcome this, we apply the spectral (plane wave) domain radiated field weighted by the directivity pattern of the antenna, which is valid in the near field as well as the far field. Furthermore, the vector form of the spectral electric field intensity is incorporated, involving all field components and providing polarimetric information of buried objects. Then, based on Fresnel transmission theory, each plane wave can propagate through the interfaces, easily accounting for refraction effects. Here, the reflections inside layers are ignored because the direct first-order incident field is dominant. These reflections result in late-time ghost images of the target and can be easily identified.
Instead of using the Hertzian dipole, an arbitrary antenna is applied and $\mathbf E_n$ in (2.45) becomes a weighted plane wave spectral expansion of the incident electric field for that antenna in the nth layer [48],

$$\mathbf E_n(\mathbf r,\mathbf r',\omega) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \mathbf a_n(k_x,k_y,\omega)\,F(k_x,k_y,\omega)\; e^{-j\left[k_x(x-x')+k_y(y-y')+k_{z,n}\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}k_{z,l}d_l\right]}\,dk_x\,dk_y \qquad (2.46)$$

With the implementation of the reciprocity relation $\mathbf E_n(\mathbf r',\mathbf r,\omega)=\mathbf E_n(\mathbf r,\mathbf r',\omega)$ [48] and using the prime notation for $\mathbf E_n(\mathbf r',\mathbf r,\omega)$, it becomes,

$$\mathbf E_n'(\mathbf r',\mathbf r,\omega) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \mathbf a_n(k_x',k_y',\omega)\,F(k_x',k_y',\omega)\; e^{-j\left[k_x'(x-x')+k_y'(y-y')+k_{z,n}'\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}k_{z,l}'d_l\right]}\,dk_x'\,dk_y' \qquad (2.47)$$

where F is the far-zone electric field radiation pattern of the antenna co-polarized with the Hertzian dipole. The vector spectral coefficient $\mathbf a_n$ and the transmission coefficients corresponding to the $\hat p$-polarized antenna are given by [48],

$$\mathbf a_n(k_x,k_y,\omega) = (\hat p\cdot\hat e_{\mathrm{TE}})\,\hat e_{\mathrm{TE}}\prod_{l=0}^{n-1}T_{l,l+1}^{\mathrm{TE}} + (\hat p\cdot\hat e_{\mathrm{TM},0})\,\hat e_{\mathrm{TM},n}\prod_{l=0}^{n-1}T_{l,l+1}^{\mathrm{TM}} \qquad (2.48a)$$

$$T_{l,l+1}^{\mathrm{TE}} = \frac{2k_{z,l}}{k_{z,l}+k_{z,l+1}} \qquad (2.48b)$$

$$T_{l,l+1}^{\mathrm{TM}} = \frac{2\varepsilon_{r,l+1}k_{z,l}}{\varepsilon_{r,l+1}k_{z,l}+\varepsilon_{r,l}k_{z,l+1}} \qquad (2.48c)$$

where the superscripts TE and TM represent the transverse (to z) electric and transverse magnetic fields, respectively. The dispersion relation in layer $\alpha$ (n or l above) is given by,

$$k_{z,\alpha} = \sqrt{k_\alpha^2-k_\rho^2} \qquad (2.49)$$

where $k_\rho^2 = k_x^2+k_y^2$ and $k_\alpha = \sqrt{\varepsilon_{r,\alpha}}\,k_0$ is the wavenumber in the layer with corresponding relative permittivity $\varepsilon_{r,\alpha}$, and $k_0=\omega/c$. Note that in (2.46), $\mathbf a_n$ absorbs a factor of $j/8\pi^2$.
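A small sketch of how (2.48b)-(2.48c) and (2.49) might be evaluated numerically follows. These are our own helper functions; in particular, the square-root branch choice (Im $k_z \le 0$, so that $e^{-jk_z z}$ decays with depth for evanescent spectral components) is an implementation assumption consistent with the time convention used here, not a statement from the text.

```python
import numpy as np

def kz_layer(k_alpha, kx, ky):
    """Dispersion relation (2.49); choose the root with Im(kz) <= 0 so that
    exp(-j*kz*z) decays with depth for evanescent spectral components."""
    kz = np.sqrt(np.asarray(k_alpha, dtype=complex)**2 - kx**2 - ky**2)
    return np.where(kz.imag > 0, -kz, kz)

def cascaded_transmission(kz_layers, eps_r, mode="TM"):
    """Product of interface transmission coefficients, eqs. (2.48b)-(2.48c).
    kz_layers[l] is k_{z,l} in layer l; eps_r[l] its relative permittivity."""
    T = 1.0 + 0.0j
    for l in range(len(kz_layers) - 1):
        if mode == "TE":
            T *= 2 * kz_layers[l] / (kz_layers[l] + kz_layers[l + 1])
        else:  # TM
            T *= (2 * eps_r[l + 1] * kz_layers[l]
                  / (eps_r[l + 1] * kz_layers[l] + eps_r[l] * kz_layers[l + 1]))
    return T
```

At normal incidence between identical media both modes give T = 1, and adding a small imaginary part to each layer wavenumber implements the loss mentioned in connection with (2.41).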
Substituting (2.46) and (2.47) into (2.45), we obtain the received signal,

$$s(\mathbf r',\omega) = \frac{jk_0^2}{\omega\mu_0}\int_V \rho(\mathbf r)\iint_{-\infty}^{\infty}\iint_{-\infty}^{\infty} F(k_x,k_y,\omega)F(k_x',k_y',\omega)\;\mathbf a_n(k_x,k_y,\omega)\cdot\mathbf a_n(k_x',k_y',\omega)\; e^{-j\left[(k_x+k_x')(x-x')+(k_y+k_y')(y-y')\right]}\; e^{-j\left[(k_{z,n}+k_{z,n}')\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}(k_{z,l}+k_{z,l}')d_l\right]}\;dk_x\,dk_y\,dk_x'\,dk_y'\,dV \qquad (2.50)$$

Eq. (2.50) is the forward model for the signal scattered from any targets in a stratified structure with homogeneous layers. The diffraction tomography algorithm for extracting the reflectivity density function $\rho(\mathbf r)$ from the measured data is presented next.
2.3.1 3D Diffraction Tomography
In DT, the first step is to change variables with $k_x''=k_x+k_x'$ and $k_y''=k_y+k_y'$ in (2.50), obtaining,

$$s(\mathbf r',\omega) = \frac{jk_0^2}{\omega\mu_0}\int_V \rho(\mathbf r)\iint_{-\infty}^{\infty} e^{-j\left[k_x''(x-x')+k_y''(y-y')\right]} \iint_{-\infty}^{\infty} F(k_x,k_y,\omega)F(k_x''-k_x,k_y''-k_y,\omega)\;\mathbf a_n(k_x,k_y,\omega)\cdot\mathbf a_n(k_x''-k_x,k_y''-k_y,\omega)\; e^{-j\left[(k_{z,n}+k_{z,n}')\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}(k_{z,l}+k_{z,l}')d_l\right]}\,dk_x\,dk_y\,dk_x''\,dk_y''\,dV \qquad (2.51)$$

where,

$$k_{z,\alpha}' = \sqrt{k_\alpha^2-(k_x''-k_x)^2-(k_y''-k_y)^2} \qquad (2.52)$$

Interchanging the spatial integral with the spectral integral over $k_x''$ and $k_y''$ gives rise to a 2D Fourier transform relationship defined by,

$$s(\mathbf r',\omega) = \frac{1}{4\pi^2}\iint_{-\infty}^{\infty}\tilde s(k_x'',k_y'',\omega)\,e^{\,j(k_x''x'+k_y''y')}\,dk_x''\,dk_y'' \qquad (2.53a)$$

$$\tilde s(k_x'',k_y'',\omega) = \iint s(\mathbf r',\omega)\,e^{-j(k_x''x'+k_y''y')}\,dx'\,dy' \qquad (2.53b)$$

where the spectral received signal is,

$$\tilde s(k_x'',k_y'',\omega) = \frac{j4\pi^2k_0^2}{\omega\mu_0}\int_V \rho(\mathbf r)\,e^{-j(k_x''x+k_y''y)} \iint_{-\infty}^{\infty} F(k_x,k_y,\omega)F(k_x''-k_x,k_y''-k_y,\omega)\;\mathbf a_n(k_x,k_y,\omega)\cdot\mathbf a_n(k_x''-k_x,k_y''-k_y,\omega)\; e^{-j\left[(k_{z,n}+k_{z,n}')\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}(k_{z,l}+k_{z,l}')d_l\right]}\,dk_x\,dk_y\,dV \qquad (2.54)$$

2.3.2 3D Stationary Phase Solution
To obtain the linear relation between the spectral received signal and the reflectivity function, we need to simplify the double integral over $k_x$ and $k_y$ in (2.54),

$$G_n(k_x'',k_y'',\omega) = \iint_{-\infty}^{\infty} F(k_x,k_y,\omega)F(k_x''-k_x,k_y''-k_y,\omega)\;\mathbf a_n(k_x,k_y,\omega)\cdot\mathbf a_n(k_x''-k_x,k_y''-k_y,\omega)\; e^{-j\left[(k_{z,n}+k_{z,n}')\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}(k_{z,l}+k_{z,l}')d_l\right]}\,dk_x\,dk_y \qquad (2.55)$$
Similar to the approach used in [20]-[23], the stationary phase approach is implemented with some modifications. This is unique to the UDT algorithm because previous papers [20]-[23] have not properly addressed the underlying requirements of the stationary phase method, where $k_0z$ is the large dimensionless parameter and all exponential terms in (2.55) are rapidly varying outside the vicinity of the stationary phase point.
A general double integral can be evaluated asymptotically by expansion about its stationary phase center $(u_0,v_0)$ [50],

$$I(k) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(u,v)\,e^{-jk\varphi(u,v)}\,du\,dv \approx \frac{2\pi\,g(u_0,v_0)}{k\sqrt{\left|\det\Phi(u_0,v_0)\right|}}\; e^{-jk\varphi(u_0,v_0)-j\pi\,\mathrm{sig}\left[\Phi(u_0,v_0)\right]/4} \qquad (2.56a)$$

$$\det\Phi(u_0,v_0) = \left[\frac{\partial^2\varphi}{\partial u^2}\,\frac{\partial^2\varphi}{\partial v^2}-\left(\frac{\partial^2\varphi}{\partial u\,\partial v}\right)^2\right]_{(u_0,v_0)} \qquad (2.56b)$$

where u and v are real variables, k is real, positive, dimensionless and large, and $\varphi(u,v)$ is a real, continuous function with continuous partial derivatives within the integration limits. $\mathrm{sig}[\Phi(u_0,v_0)]$ is the signature of the second partial derivative matrix at the phase center, i.e., the difference between the numbers of positive and negative eigenvalues. The phase of the integral is determined only by the stationary phase center, while both $\det\Phi$ and the phase center influence the amplitude term.
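As a sanity check on (2.56a)-(2.56b), the following illustrative example (our own choice of g, phi and k, not taken from the text) compares brute-force numerical integration with the asymptotic formula. For $\varphi=\sqrt{1+u^2+v^2}$ the stationary point is (0,0), with $\varphi(0,0)=1$, $\det\Phi=1$ and signature +2:

```python
import numpy as np

# Brute-force check of the 2D stationary-phase approximation (2.56a).
k = 50.0                                  # large dimensionless parameter
h = 0.005                                 # integration step
u = np.arange(-5, 5 + h, h)
U, V = np.meshgrid(u, u)
g = np.exp(-(U**2 + V**2) / 8)            # slowly varying amplitude
phi = np.sqrt(1 + U**2 + V**2)            # single stationary point at (0, 0)

I_num = np.sum(g * np.exp(-1j * k * phi)) * h * h
# (2.56a) with g(0,0) = 1, phi(0,0) = 1, det(Phi) = 1, sig(Phi) = +2:
I_asym = (2 * np.pi / k) * np.exp(-1j * k - 1j * np.pi / 2)
```

At k = 50 the two results agree to within a few percent in magnitude and a few hundredths of a radian in phase, and the agreement improves as the large parameter grows.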
It should be noted that the stationary phase method is a high-frequency asymptotic approximation; the large parameter is $k_0z$ rather than the depth z. Comparing (2.55) with (2.56a), one finds that $u=k_x$, $v=k_y$, $k=k_0z$, and

$$g(u,v) = F(k_x,k_y,\omega)F(k_x''-k_x,k_y''-k_y,\omega)\;\mathbf a_n(k_x,k_y,\omega)\cdot\mathbf a_n(k_x''-k_x,k_y''-k_y,\omega) \qquad (2.57a)$$

$$\varphi(k_x,k_y) = \frac{(k_{z,n}+k_{z,n}')\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}(k_{z,l}+k_{z,l}')d_l}{k_0z} \qquad (2.57b)$$

From (2.57b), the stationary phase center can be determined from the first derivatives with respect to $k_x$ and $k_y$,

$$\frac{\partial\varphi(k_x,k_y)}{\partial k_x} = \left(\frac{-k_x}{k_0z\,k_{z,n}}+\frac{k_x''-k_x}{k_0z\,k_{z,n}'}\right)\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}\left(\frac{-k_x}{k_0z\,k_{z,l}}+\frac{k_x''-k_x}{k_0z\,k_{z,l}'}\right)d_l = 0 \qquad (2.58a)$$

$$\frac{\partial\varphi(k_x,k_y)}{\partial k_y} = \left(\frac{-k_y}{k_0z\,k_{z,n}}+\frac{k_y''-k_y}{k_0z\,k_{z,n}'}\right)\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}\left(\frac{-k_y}{k_0z\,k_{z,l}}+\frac{k_y''-k_y}{k_0z\,k_{z,l}'}\right)d_l = 0 \qquad (2.58b)$$

By setting these partial derivatives to zero, it may be seen that the stationary phase center is at $(k_x''/2,\,k_y''/2)$. It then follows that the dispersion relations in (2.49) and (2.52) are identical, $k_{z,\alpha}'=k_{z,\alpha}$, and

$$k_{z,n} = \sqrt{k_n^2-(k_x''/2)^2-(k_y''/2)^2} \qquad (2.59)$$
Next, by evaluating the second partial derivatives of (2.58a) and (2.58b) at this stationary point, (2.56b) can be simplified as (see Appendix C),

$$\det\Phi_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2}\right) = \frac{4}{k_0^2z^2}\Bigg[\frac{k_n^2}{k_{z,n}^4}\Big(z-\sum_{l=0}^{n-1}d_l\Big)^2 + \sum_{l=0}^{n-1}\frac{k_l^2}{k_{z,l}^4}\,d_l^2 + \sum_{l=0}^{n-1}\frac{k_n^2k_{z,l}^2+k_l^2k_{z,n}^2}{k_{z,n}^3k_{z,l}^3}\Big(z-\sum_{m=0}^{n-1}d_m\Big)d_l + \sum_{s=0}^{n-1}\sum_{\substack{m=0\\ m\neq s}}^{n-1}\frac{k_s^2k_{z,m}^2+k_m^2k_{z,s}^2}{2\,k_{z,s}^3k_{z,m}^3}\,d_s\,d_m\Bigg] \qquad (2.60)$$

where all $k_{z,\alpha}$ are evaluated at $(k_x''/2,\,k_y''/2)$.
In (2.60), note that the first two terms are self-interactions of each layer and the last two terms are mutual interactions between different layers, indicating that the properties of all the layers, such as thickness and relative permittivity, influence the amplitude term of the double integral (2.55), rather than only the self-interaction of the last layer, as in the DT results in Table 2.2.
Table 2.2 Differences between 3D DT [20], [23] and UDT

- Slow-varying term. DT: $F(k_x,k_y,\omega)F(k_x''-k_x,k_y''-k_y,\omega)\,\mathbf a_n(k_x,k_y,\omega)\cdot\mathbf a_n(k_x''-k_x,k_y''-k_y,\omega)\;e^{-j\left[\sum_{l=0}^{n-1}(k_{z,l}+k_{z,l}')d_l-(k_{z,n}+k_{z,n}')\sum_{l=0}^{n-1}d_l\right]}$. UDT: $F(k_x,k_y,\omega)F(k_x''-k_x,k_y''-k_y,\omega)\,\mathbf a_n(k_x,k_y,\omega)\cdot\mathbf a_n(k_x''-k_x,k_y''-k_y,\omega)$.
- Fast-varying term. DT: $e^{-j(k_{z,n}+k_{z,n}')z}$. UDT: $e^{-j\left[(k_{z,n}+k_{z,n}')\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}(k_{z,l}+k_{z,l}')d_l\right]}$.
- Large parameter. DT: depth $z$. UDT: $k_0z$.
- Stationary phase solution $\det\Phi_n(k_x''/2,k_y''/2)$. DT: $4k_n^2/k_{z,n}^4$ (self-interaction of the last layer only). UDT: the full expression (2.60), containing the self- and mutual-interaction ($s\neq m$) terms of all the layers.
There are only two negative eigenvalues of the second partial derivative matrix $\Phi_n(u_0,v_0)$, so the signature of the matrix is $-2$ (see Appendix D). The stationary phase approximation of (2.55) is then given by,

$$G_n(k_x'',k_y'',\omega) = \frac{2\pi\,F^2\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\left[\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\cdot\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\right]}{k_0z\sqrt{\left|\det\Phi_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2}\right)\right|}}\; e^{-j\left[k_{z,n}''\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}k_{z,l}''d_l\right]+j\pi/2} \qquad (2.61)$$

where we have defined,

$$k_{z,n}'' = \sqrt{4k_n^2-(k_x'')^2-(k_y'')^2} \qquad (2.62)$$
As shown in Table 2.2, and similar to 2D UDT [25]-[26], two mistakes can be observed in the implementation of the stationary phase approach in DT [20], [23]: 1) failure to distinguish the fast-changing phase term, and 2) misunderstanding the essence of the stationary phase method, which is a high-frequency asymptotic approximation and not based on a far-field assumption. It is of significance to realize that all the exponential terms are fast varying under the high-frequency condition, due to the dimensionless $k_0z$ in the phase term of (2.55) rather than z.
In Fig. 2.11, a comparison among 3D DT, UDT and the numerical integration of (2.55) is presented, based on a planar layered structure with two dielectric layers of thicknesses 10.16cm and 7.62cm and relative permittivities 2 and 10 respectively. By plotting (2.55) and (2.61) with $k_x'' = k_y'' = 0$ at the lowest and highest frequencies of 2 and 10GHz, both amplitude and phase results are presented with respect to the depth z. As shown in Figs. 2.11 (a) and (c), the DT solution is discontinuous at each interface, whereas UDT matches the exact solution (2.55) well and is smoothly continuous, resulting in the name of uniform DT. Compared with the 2D case [26], in Figs. 2.11 (a) and (c), a larger amplitude jump (7dB) at the second interface (12.7 cm) with a high dielectric
Fig. 2.11. Comparison among exact, DT and UDT solutions of the inner double integral based on a 3D stratified structure with the distance from the antenna aperture to the first interface 2.54cm and two dielectric layers (thicknesses are 10.16cm and 7.62cm with the corresponding $\varepsilon_r$ = 2, 10), (a) amplitude (dB) at 2GHz, (b) phase (degree) at 2GHz, (c) amplitude (dB) at 10GHz, (d) phase (degree) at 10GHz.
contrast (2 and 10) can be observed, leading to a critical discontinuity in the 3-D imaging
results.
It is shown in Figs. 2.11 (b) and (d) that the phases of DT and UDT agree with the exact phase, which explains why DT presents reasonable images. However, magnitude discontinuities near the interfaces will cause discontinuities of targets close to them. With the consideration of all the exponential terms, UDT overcomes these discontinuities with the correct amplitude function in the stationary phase solution.
2.3.3 3D UDT Inverse Scattering
Substituting the stationary phase solution of (2.61) into (2.54), the spectral domain received signal becomes,

$$\tilde s(k_x'',k_y'',\omega) = -\frac{8\pi^3k_0}{\omega\mu_0}\int_V \rho(\mathbf r)\, \frac{F^2\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\left[\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\cdot\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\right]}{z\sqrt{\left|\det\Phi_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2}\right)\right|}}\; e^{-j\left(k_x''x+k_y''y\right)}\, e^{-j\left[k_{z,n}''\left(z-\sum_{l=0}^{n-1}d_l\right)+\sum_{l=0}^{n-1}k_{z,l}''d_l\right]}\,dV \qquad (2.63)$$

To simplify (2.63), we define,

$$D_n(k_x'',k_y'',z,\omega) = \frac{k_0^2z^2}{4}\left|\det\Phi_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2}\right)\right| \qquad (2.64)$$

which, from (2.60), is the bracketed sum of the layers' self- and mutual-interaction terms. After some rearrangement and replacing $dV$ with $dx\,dy\,dz$, (2.63) becomes,

$$\tilde s(k_x'',k_y'',\omega) = -\frac{4\pi^3k_0^2}{\omega\mu_0}\,F^2\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\left[\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\cdot\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\right] e^{\,j\left(k_{z,n}''\sum_{l=0}^{n-1}d_l-\sum_{l=0}^{n-1}k_{z,l}''d_l\right)} \iiint \frac{\rho(\mathbf r)}{\sqrt{D_n(k_x'',k_y'',z,\omega)}}\; e^{-j\left(k_x''x+k_y''y+k_{z,n}''z\right)}\,dx\,dy\,dz \qquad (2.65)$$
We define the integral over the volume as the 3-D modified spectral reflectivity
function,
$$\hat\rho(k_x'',k_y'',k_{z,n}'',\omega) = \iiint \frac{\rho(\mathbf r)}{\sqrt{D_n(k_x'',k_y'',z,\omega)}}\;e^{-j\left(k_x''x+k_y''y+k_{z,n}''z\right)}\,dx\,dy\,dz \qquad (2.66)$$

from which (2.65) leads to a linear relation between the received signal and the reflectivity density function in the spectral domain,

$$\hat\rho(k_x'',k_y'',k_{z,n}'',\omega) = \frac{-\,\omega\mu_0\;\tilde s(k_x'',k_y'',\omega)\;e^{-j\left(k_{z,n}''\sum_{l=0}^{n-1}d_l-\sum_{l=0}^{n-1}k_{z,l}''d_l\right)}}{4\pi^3k_0^2\;F^2\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\left[\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\cdot\mathbf a_n\!\left(\tfrac{k_x''}{2},\tfrac{k_y''}{2},\omega\right)\right]} \qquad (2.67)$$
where the spectral received signal is derived from (2.53b).
To obtain the spatial reflectivity function, the inverse Fourier transform cannot be applied directly to (2.66), because $D_n$ depends on $k_x''$, $k_y''$ and $z$ as well. Instead, a matched filter is implemented to solve this inverse problem with the dispersion relation $k_{z,n}''=\sqrt{4k_n^2-(k_x'')^2-(k_y'')^2}$ and a change of variables from $k_{z,n}''$ to $\omega$ (see Appendix E). The final reflectivity density function is,

$$\rho_n(\mathbf r) = \frac{\displaystyle\int_\omega \frac{k_n^2}{k_{z,n}''}\,P_n(x,y,\omega)\,d\omega}{\displaystyle\int_\omega\iint_{-\infty}^{\infty}\frac{1}{\sqrt{D_n(k_x'',k_y'',z,\omega)}}\,dk_x''\,dk_y''\,d\omega} \qquad (2.68)$$

where the integral over $\omega$ runs from $\omega_{\min}$ to $\omega_{\max}$, over all the measured radian frequencies, and

$$P_n(x,y,\omega) = \iint_{-\infty}^{\infty}\hat\rho(k_x'',k_y'',k_{z,n}'',\omega)\;e^{\,j\left(k_x''x+k_y''y+k_{z,n}''z\right)}\,dk_x''\,dk_y'' \qquad (2.69)$$
The above imaging function (2.68) can be achieved in real time with a 2D inverse
FFT slice-by-slice computation of (2.69). The algorithm is shown as follows.
Step 1. Apply a 2D spatial Fourier transform over $x'$ and $y'$ to obtain $\tilde s(k_x'',k_y'',\omega)$ from the collected data $s(x',y',\omega)$ using (2.53b).
Step 2. For each layer n, compute $\hat\rho(k_x'',k_y'',k_{z,n}'',\omega)$ from (2.67), where the vector spectral coefficient is determined by (2.48a)-(2.48c). Note that this spectral coefficient does not change within a given layer.
Step 3. For a given z-slice, compute the denominator of (2.68) first, then compute $P_n(x,y,\omega)$ from (2.69) with a 2D inverse spatial Fourier transform over $k_x''$ and $k_y''$, and last sum over the operating radian frequency band $\omega$ from (2.68) to obtain the slice image $\rho_n(\mathbf r)$.
Step 4. Repeat Steps 2 and 3 for the next depth slice $z+\Delta z$ until the end of the depth.
The 3-D image of the buried targets is generated from all the slices. To avoid singularities in the spectral vector coefficient, a small loss is added to each layer so that the integration paths along $k_x$ and $k_y$ are off the real axes. Note that the gain pattern in the denominator of (2.67) can become small and also cause a singularity; it is therefore ignored in obtaining the following numerical and experimental images, which has no influence on the uniformity of the results. Since this is a qualitative imaging algorithm, the scales in the following reconstructed images are arbitrary and obtained directly from (2.68) without normalization.
2.3.4 3D Background Removal
The scattered signals reflected from buried objects are very weak compared with the background signal, so it is essential to greatly reduce or eliminate the background. Since the background signal is expected to be relatively independent of antenna position if the scan plane is parallel to the interfaces, the classical method of background removal is subtraction of the average over the whole scanning area. However, in reality the medium of each layer is not perfectly homogeneous and its electromagnetic properties vary slightly with position on the measurement plane. Furthermore, the scan plane may not be perfectly parallel. Thus, subtracting the average may not lead to sufficient background reduction.
A 1D moving average filter has been widely applied to remove the background in GPR and through-the-wall radar [65]-[66]. In this work a 2D moving average filter is introduced. The receiver collects frequency samples of the coherent reflected signal and stores the data as a matrix denoted by $S(m,n,\omega)$, where m = 1,2,...,M and n = 1,2,...,N are antenna positions in the x and y directions in the scan plane. The desired scattered signal $S'(m,n,\omega)$ can be estimated from the received signal $S(m,n,\omega)$ with a 2D moving average filter:

$$S'(m,n,\omega) = S(m,n,\omega)-\tilde S(m,n,\omega) \qquad (2.70)$$

where $\tilde S(m,n,\omega)$ is the moving average of $S(m,n,\omega)$:

$$\tilde S(m,n,\omega) = \frac{\displaystyle\sum_{i=m-L_x}^{m+L_x}\;\sum_{j=n-L_y}^{n+L_y} S(i,j,\omega)}{(2L_x+1)(2L_y+1)} \qquad (2.71)$$
in which $2L_x+1$ and $2L_y+1$ are the integer lengths of the moving window in the x and y directions, respectively. The lengths are chosen based on the expected variations in the background and the size of the objects of interest. In the following numerical and experimental imaging results, both $L_x$ and $L_y$ are chosen as 2, so the integer lengths are 5.
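A direct implementation of (2.70)-(2.71) follows. The code is our own sketch; in particular, the handling of scan-edge positions, where the window is clipped to the valid antenna positions, is an assumption the text does not specify.

```python
import numpy as np

def remove_background(S, Lx=2, Ly=2):
    """2D moving-average background subtraction, eqs. (2.70)-(2.71).

    S : complex array of shape (M, N, Nw), the received signal at M x N
        antenna positions and Nw frequencies. The window is
        (2Lx+1) x (2Ly+1); at the scan edges it is clipped to the valid
        positions (an assumption).
    """
    M, N, _ = S.shape
    S_avg = np.empty_like(S)
    for m in range(M):
        for n in range(N):
            i0, i1 = max(0, m - Lx), min(M, m + Lx + 1)
            j0, j1 = max(0, n - Ly), min(N, n + Ly + 1)
            S_avg[m, n] = S[i0:i1, j0:j1].mean(axis=(0, 1))
    return S - S_avg   # eq. (2.70)
```

A background that is constant (or slowly varying) across the aperture is cancelled, while a localized target response survives with its level reduced by roughly $1/[(2L_x+1)(2L_y+1)]$.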
2.3.5 3D Numerical Imaging Results
In the following section, 3D numerical data is generated from the finite-difference time-domain (FDTD) software GprMax [51], transformed into the frequency domain and applied to validate the efficacy of the proposed 3D UDT algorithm. The source is a y-oriented Hertzian dipole, moving in a rectangular aperture plane.
Fig. 2.12. Spherical voids in a 3D opaque layered environment, (a) a spherical void located at the center of the second interface (0, 0, 12.7), (b) two spherical voids located inside the first and second dielectric layers at (0, 0, 7.62) and (2.54, 0, 15.24) respectively. Both (a) and (b) have the same layers' relative permittivities $\varepsilon_r$ = 2, 10 and thicknesses of 10.16 and 7.62 respectively. All spherical voids have the same diameter of 1cm. All lengths are in cm.
As shown in Fig. 2.12, there are two configurations in the 3D simulations. Generally,
it is a 4-layer structure, where both the first and last layers are semi-infinite free space.
The thicknesses of dielectric layers are 10.16cm and 7.62cm with the corresponding
relative permittivities 2 and 10. The y-directed Hertzian dipole is placed on the z=0 plane
Fig. 2.13. The corresponding DT imaging results, (a) 3D view of the spherical void, (b) y cut of the
spherical void at y=0 cm, (c) 3D view of two spherical voids, (d) y cut of two spherical voids at y=0 cm.
Zoomed targets are shown in the right top corner. The color scale is in dB with dynamic range 20dB.
with a standoff distance of 2.54 cm, moving from -12.7 cm to 12.7 cm with a step size of
1.27 cm in both directions. The frequency band of the source is from 2 GHz to 10 GHz
with a step size of 25 MHz.
In Fig. 2.12 (a), there is one spherical void at (0 cm, 0 cm, 12.7 cm), centered at the
second interface, while in Fig. 2.12 (b), two spherical voids are embedded in the first and
second dielectric layers, at (0 cm, 0 cm, 7.62 cm) and (2.54 cm, 0 cm, 15.24 cm), respectively.
All spheres have the same diameter of 1 cm.
Fig. 2.14. The corresponding UDT imaging results, (a) 3D view of the spherical void, (b) y cut of the
spherical void at y=0 cm, (c) 3D view of two spherical voids, (d) y cut of two spherical voids at y=0 cm.
Zoomed targets are shown in the right top corner. The color scale is in dB with dynamic range 20dB.
The imaging results based on DT are shown in Fig. 2.13, and Fig. 2.14 presents the
UDT results. A moving-average window subtraction (2.71) is applied to mitigate the
strong interface reflections. Chebyshev windowing functions are also applied
across the synthetic aperture and frequency band of the processed signal to suppress
sidelobe levels.
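This pre-processing can be sketched in a few lines. The uniform-filter implementation of the moving average and the 60 dB Chebyshev sidelobe level are illustrative assumptions, not the exact form of (2.71):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal.windows import chebwin

def moving_average_subtract(data, w=5):
    """Subtract a moving-average background along the two scan axes.
    data: complex cube (nx, ny, nf); w: assumed odd window length."""
    bg = (uniform_filter(data.real, size=(w, w, 1))
          + 1j * uniform_filter(data.imag, size=(w, w, 1)))
    return data - bg

def apply_windows(data, sidelobe_db=60):
    """Taper the aperture (x, y) and frequency axes with Chebyshev windows."""
    nx, ny, nf = data.shape
    wx = chebwin(nx, at=sidelobe_db)
    wy = chebwin(ny, at=sidelobe_db)
    wf = chebwin(nf, at=sidelobe_db)
    return data * wx[:, None, None] * wy[None, :, None] * wf[None, None, :]
```

A spatially constant interface return is removed exactly by the subtraction, while compact targets, which vary faster than the window, survive.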
As predicted in Fig. 2.11, the DT method causes discontinuities at the
interfaces, especially when a high dielectric contrast is present between adjacent layers. To
observe the target more clearly, y cuts of the 3D DT images at the target slice
y = 0 cm are shown in Figs. 2.13 (b) and (d), where the discontinuities at the second
interface are evident. In Fig. 2.13 (d), a discontinuity also occurs at the shadow of the
sphere in the first dielectric layer, projected onto the second interface. In contrast, it is
shown in Fig. 2.14 that UDT produces smoothly continuous images through all the layers.
Compared with Fig. 2.13 (d), the two spherical voids in Fig. 2.14 (d) have similar peak
values because UDT accounts for all layers' thicknesses and relative permittivities.
2.3.6 Experimental Imaging Results
The proposed imaging algorithm has been validated experimentally by detecting
metallic disks embedded in a three-layer medium. The experimental setup shown in Fig.
2.15 consists of a Vector Network Analyzer (VNA), Agilent 5230A, and a broadband horn
antenna, OBH20180B, mounted on a 2D raster scanner. The scanner moves the antenna
parallel to the three-layered structure with a standoff distance of 2.54 cm, synthesizing a
square aperture of 25.4×25.4 cm with a step size of 1.27 cm in both the x and y directions, for
a total of 441 positions. The aperture center is set to x=0, y=0, and z=0, where z=0
corresponds to the measurement plane. The three-layer medium comprises plywood, 4
abutting concrete blocks, and plywood. The layers have the following thicknesses,
starting with the layer closest to the antenna: 1.85 cm, 4.46 cm, and 1.85 cm. Four metallic
disks of 1.3 cm radius are located at (6.35, 9.65, 8.85) cm, (6.35, -4.83, 8.85) cm, (-7.87,
9.65, 8.85) cm, and (-8.38, -4.57, 8.85) cm. Using this VNA-based monostatic microwave
imaging system, the complex S-parameter S11 was recorded at each antenna position for
Fig. 2.15. Experimental scenario with a 2D scan: (a) a layered medium consisting of plywood,
concrete blocks, and plywood; (b) five targets: four metal disks attached to the third interface and a
1 mm-wide air gap. Antenna 1 behaves as the transceiver.
321 frequency steps from 2 to 10 GHz with a step size of 25 MHz. The recorded data is
pre-processed by subtracting the identical measurement in free space, which is important
to cancel out the reflections from the coaxial cable connections, the antenna, and the
raster scanner structure. Note that this is not the same as the background subtraction
discussed in Section 2.3.4 which aims to remove the strong reflections from all the
interfaces, although that approach would also automatically remove these undesired cable
and antenna reflections.
In the imaging algorithm, the relative permittivity of each layer is assumed to be known a
priori. For unknown lossless materials, the relative permittivities can be
estimated by replacing the layered structure with a flat metal plate and collecting
frequency data with the antenna at the center position. In this way, the dielectric constants of
the plywood and concrete block are estimated as 2.1 and 5.0, respectively.
Fig. 2.16. 3D UDT and DT imaging results without the moving average subtraction, (a) 3D UDT image
at the third interface, (b) y cut of the 3D UDT image at y=9.65 cm, (c) 3D DT image at the third
interface, (d) y cut of the 3-D DT image at y=9.65 cm. The color scale is in dB with dynamic range 20
dB.
Without the moving average subtraction, the 3D UDT and DT imaging results are
presented in Fig. 2.16, where all the interfaces, the four metallic disks, and the air gap can be
observed. The time cost for the UDT and DT images is the same, 6 seconds for an
image of 101×101×51 pixels. In the DT results shown in Figs. 2.16 (c) and (d), there are
discontinuities through all the layers, and the four disks are obscured by the last interface
reflection. In Figs. 2.16 (a) and (b), however, UDT gives smoothly continuous interfaces.
Fig. 2.17. 3D UDT and DT imaging results with the moving average subtraction, (a) 3D UDT image at
the third interface, (b) y cut of the 3D UDT image at y=9.65 cm, (c) 3D DT image at the third interface,
(d) y cut of the 3-D DT image at y=9.65 cm. The color scale is in dB with dynamic range 20 dB.
In Fig. 2.17, to observe the targets more clearly, the moving average subtraction
(2.71) is implemented to remove the strong interface reflections. Compared with Fig.
2.16 (a), the four disks and the air gap are clearly shown in Fig. 2.17 (a). Results similar to
Fig. 2.16 are obtained: Fig. 2.17 (b) shows smoothly continuous targets attached to
the third interface, whereas discontinuities in the targets and their shadows appear in Fig.
2.17 (d), which is based on the DT algorithm. It is therefore concluded that UDT
always provides smooth images through the layers, regardless of the dielectric contrast
between adjacent layers, overcoming the discontinuity caused by DT.
2.4 Summary of Chapter 2
In this chapter, both 2D and 3D uniform diffraction tomographic imaging algorithms
are developed to obtain images of layered media in the near field of the antenna
array, applicable to any homogeneous planar layered structure with no
limitation on layer thicknesses, relative permittivities, or target sizes. This work
is based on three basic mechanisms: 1) the point target, Born, or Kirchhoff assumption
leads to a simple convolution relation between the spatial received signal and the spatial
reflectivity function, 2) the scalar E-field is used in the formulation for 2D images and the
vector E-field is used for the 3D derivation, 3) the high frequency asymptotic solution
based on the stationary phase method enables the reduction of the integration complexity
to obtain a linear relation between the received signal and the reflectivity function in the
spectral domain. The conventional DT when applied to layered media makes an incorrect
assumption about the large argument in the stationary phase solution, resulting in
amplitude discontinuities in the image as seen in the imaging results. UDT on the other
hand presents a smooth and continuous amplitude variation, and is in very good
agreement with the exact integration result as well.
With the implementation of UDT, the FFT can be easily applied to achieve a real-time
imaging algorithm. However, it is still time-consuming to perform the raster scan to
obtain the monostatic backscattered data. Therefore, it is necessary to design a fixed-aperture
single-input multiple-output (SIMO) or multiple-input multiple-output (MIMO)
radar system to achieve fast data acquisition, and to develop the corresponding real-time
imaging algorithm. A solution to these requirements is presented in the next chapter.
Chapter 3: Multi-Input and Multi-Output (MIMO) Radar System
for Fast Data Collection
As discussed in Chapter 2, UDT generates an image in real time from a monostatic
configuration with one antenna mechanically scanning over the synthetic aperture
step-by-step, which is too time-consuming for data collection. UDT
corrects the discontinuity at each interface of the layered media that arises when applying
conventional DT, and the antenna gain pattern is considered in the formulation to
properly weight the plane waves in the spectral domain. In this chapter, the fast back-projection
(FBP) imaging algorithm is developed to solve the singularity issue in UDT and
to provide balanced images when combining monostatic and bistatic measurements.
Furthermore, a sparse multistatic antenna array is designed for fast data acquisition.
3.1 Introduction
A MIMO radar system is one of the best candidates to realize fast data collection,
where the minimum number of antennas can be achieved by utilizing both monostatic
and bistatic measurements, and the associated formation of virtual elements to fill in the
gaps between real elements. Therefore, it is of importance to include the antenna pattern
in the imaging algorithm to balance the monostatic and bistatic signals. However, when
seeking a linear relation between the spectral-domain received signal and the reflectivity
function, the gain pattern can cause a singularity problem, because it appears in the
denominator of the imaging function. To obtain a balanced image without singularities, FBP is
proposed in this chapter and verified with monostatic and bistatic SAR measurements,
achieving the same time cost in image generation and half the time cost in
data collection.
In [67], a fully populated monostatic antenna array was proposed to achieve fast data
acquisition. To reduce the total number of antenna elements, a MIMO array is designed
and validated by simulations, where only 9 antennas are used to generate a 13.2×13.2
cm² fixed sparse aperture rather than the fully populated 81 antennas of the monostatic
configuration. Further work is needed to verify the proposed antenna array
experimentally, where one antenna acts as the transmitter and the other corresponding
antennas act as receivers at any given time. It is also important to develop the
corresponding fast imaging algorithm, as is done in this chapter.
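The virtual-element idea mentioned above can be illustrated numerically: each Tx/Rx pair contributes an effective monostatic phase center at the pair midpoint, so a few real elements span many virtual positions. The element positions below are hypothetical, not the dissertation's actual 9-element design:

```python
import numpy as np

# Hypothetical sparse Tx and Rx positions along one axis of the aperture (cm)
tx = np.array([0.0, 6.6, 13.2])
rx = np.array([0.0, 2.2, 4.4, 8.8, 11.0, 13.2])

# The midpoint of each Tx/Rx pair acts as a virtual monostatic element,
# filling gaps between the real elements of the sparse MIMO array.
virtual = np.unique(np.round((tx[:, None] + rx[None, :]) / 2, 6))
# For this layout, 9 real elements yield 13 distinct virtual phase centers.
print(len(tx) + len(rx), "real elements ->", virtual.size, "virtual elements")
```

This midpoint (effective phase center) approximation is what lets the monostatic-style imaging formulation be reused on multistatic data.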
3.2 Comparison between DT and FBP
For simplicity, the comparison between DT and FBP is illustrated by considering an
object in free space with monostatic data collection. The configuration of the monostatic
microwave imaging system is depicted in Fig. 3.1, where one antenna behaves as the
transceiver, transmitting and receiving EM waves. In free space, data collection is
performed by scanning the antenna to synthesize a rectangular aperture in the plane z = 0.
Fig. 3.1. An object in free space. The transceiver antenna moves in the xy plane.
The forward model of the imaging problem is provided in Chapter 2.1, where the 2D
scalar case is shown in (2.13). Extending this expression to a 3D case and replacing the
Green's function with the incident electric field intensity, the 3D scalar forward model
can be obtained,

s(\mathbf{r}',\omega) = \int_V E^2(\mathbf{r},\mathbf{r}',\omega)\,\sigma(\mathbf{r})\,dV   (3.1)

where \mathbf{r}' = (x', y', 0) is the antenna location in the xy aperture plane, ω is the radian
frequency, \mathbf{r} = (x, y, z) is the location of any pixel point in the image, V is the 3-D imaging
region of interest, and E is the electric field intensity of the incident wave in free space,
which can be expressed as a summation of weighted plane waves [51],

E(\mathbf{r},\mathbf{r}',\omega) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} P(k_x,k_y,\omega)\, e^{-j[k_x(x-x') + k_y(y-y') + k_z z]}\, dk_x\, dk_y   (3.2)

where P is the far-field gain pattern of the antenna, k_0 = \omega/c is the wave number in free
space, and the corresponding dispersion relation is k_z = +\sqrt{k_0^2 - k_x^2 - k_y^2}. Note that, in free
space, the solution of UDT is the same as DT.
3.2.1 Diffraction Tomography (DT)
In this section, the singularity problem caused by the antenna gain pattern in the DT
algorithm is illustrated for the 3-D free-space case. The derivation is similar to
the layered case in the previous chapter. Substituting (3.2) into (3.1), using E^2 = E \cdot E'
with primed spectral variables for the second factor,

s(\mathbf{r}',\omega) = \int_V \sigma(\mathbf{r}) \iiiint_{-\infty}^{\infty} dk_x\, dk_y\, dk_x'\, dk_y'\; P(k_x,k_y,\omega)\, P(k_x',k_y',\omega)\, e^{-j[(k_x+k_x')(x-x') + (k_y+k_y')(y-y') + (k_z+k_z')z]}\, dV   (3.3)

Changing variables with k_x'' = k_x + k_x' and k_y'' = k_y + k_y', (3.3) becomes

s(\mathbf{r}',\omega) = \int_V \sigma(\mathbf{r}) \iint_{-\infty}^{\infty} e^{-j[k_x''(x-x') + k_y''(y-y')]} \iint_{-\infty}^{\infty} P(k_x,k_y,\omega)\, P(k_x''-k_x, k_y''-k_y, \omega)\, e^{-j(k_z+k_z')z}\, dk_x\, dk_y\, dk_x''\, dk_y''\, dV   (3.4)

where k_z' = \sqrt{k_0^2 - (k_x''-k_x)^2 - (k_y''-k_y)^2} is the dispersion relation.
Interchanging the spatial integral with the spectral integral over k_x'' and k_y'' leads to the
Fourier transform relationship shown in (2.53a) and (2.53b). Then, the spectral-domain
received signal can be expressed as

\tilde{s}(k_x'',k_y'',\omega) = 4\pi^2 \int_V \sigma(\mathbf{r})\, e^{-j(k_x'' x + k_y'' y)} \iint_{-\infty}^{\infty} P(k_x,k_y,\omega)\, P(k_x''-k_x, k_y''-k_y, \omega)\, e^{-j(k_z+k_z')z}\, dk_x\, dk_y\, dV   (3.5)

To obtain the linear relation between the spectral received signal and the reflectivity
function, it is necessary to simplify the inner double integral over k_x and k_y in (3.5),

I(k_x'',k_y'',z) = \iint_{-\infty}^{\infty} P(k_x,k_y,\omega)\, P(k_x''-k_x, k_y''-k_y, \omega)\, e^{-j(k_z+k_z')z}\, dk_x\, dk_y   (3.6)
The inner double integral in (3.5) over k_x and k_y can be evaluated asymptotically by
the stationary phase method [63], where the large dimensionless parameter is k_0 z and the
phase function is \varphi(k_x,k_y) = (k_z + k_z')/k_0. Setting the first derivatives of the phase
function with respect to k_x and k_y to zero, the stationary phase point is found at (k_x''/2, k_y''/2),
where the dispersion relations become k_z' = k_z = \sqrt{k_0^2 - (k_x''/2)^2 - (k_y''/2)^2}. As shown in the
previous chapter, evaluating the second partial derivatives of \varphi(k_x,k_y) at this
stationary point determines the amplitude term of the asymptotic solution,
|\det \Phi(k_x''/2, k_y''/2)| = 4/k_z^4. The stationary phase approximation of (3.6) is then given
by

I(k_x'',k_y'',z) = \frac{\pi (k_z'')^2\, P^2(k_x''/2, k_y''/2, \omega)}{4 j k_0 z}\, e^{-j k_z'' z}   (3.7)

where k_z'' = 2 k_z = \sqrt{4 k_0^2 - (k_x'')^2 - (k_y'')^2}. Substituting the stationary phase solution
(3.7) into (3.5), the spectral-domain received signal becomes

\tilde{s}(k_x'',k_y'',\omega) = \frac{\pi^3 (k_z'')^2\, P^2(k_x''/2, k_y''/2, \omega)}{j k_0} \int_V \frac{\sigma(\mathbf{r})}{z}\, e^{-j(k_x'' x + k_y'' y + k_z'' z)}\, dV   (3.8)
The Fourier transform relation between the spatial and spectral reflectivity functions
can now be identified,

\hat{\sigma}(k_x'',k_y'',k_z'') = \int_V \frac{\sigma(\mathbf{r})}{z}\, e^{-j(k_x'' x + k_y'' y + k_z'' z)}\, dV   (3.9a)

\frac{\sigma(\mathbf{r})}{z} = \frac{1}{8\pi^3} \iiint_{-\infty}^{\infty} \hat{\sigma}(k_x'',k_y'',k_z'')\, e^{j(k_x'' x + k_y'' y + k_z'' z)}\, dk_x''\, dk_y''\, dk_z''   (3.9b)

where the absolute value of (3.9b) yields the 3-D DT imaging function in free space,
I(\mathbf{r}) = |\sigma(\mathbf{r})|. The linear relation between the spectral received signal and the reflectivity
function is obtained from (3.8) and (3.9a), where the spectral reflectivity function is

\hat{\sigma}(k_x'',k_y'',k_z'') = \frac{j k_0\, \tilde{s}(k_x'',k_y'',\omega)}{\pi^3 (k_z'')^2\, P^2(k_x''/2, k_y''/2, \omega)}   (3.10)

Substituting (3.10) into (3.9b), the 3-D DT imaging function is obtained,

I(\mathbf{r}) = \left| \frac{z}{8\pi^6} \iiint_{-\infty}^{\infty} \frac{j k_0\, \tilde{s}(k_x'',k_y'',\omega)}{(k_z'')^2\, P^2(k_x''/2, k_y''/2, \omega)}\, e^{j(k_x'' x + k_y'' y + k_z'' z)}\, dk_x''\, dk_y''\, dk_z'' \right|   (3.11)

To obtain a better comparison between DT and fast BP, the next step is to map k_z''
back to the ω domain with the dispersion relation k_z'' = \sqrt{4 k_0^2 - (k_x'')^2 - (k_y'')^2}, where
k_0 = \omega/c, yielding

dk_z'' = \frac{4 k_0^2}{\omega\, k_z''}\, d\omega   (3.12)

and the imaging function becomes

I(\mathbf{r}) = \left| \frac{z}{2\pi^6} \int_\omega d\omega \iint_{-\infty}^{\infty} \frac{j k_0^3\, \tilde{s}(k_x'',k_y'',\omega)}{\omega\, (k_z'')^3\, P^2(k_x''/2, k_y''/2, \omega)}\, e^{j(k_x'' x + k_y'' y + k_z'' z)}\, dk_x''\, dk_y'' \right|   (3.13)
The image in (3.13) can be formed slice-by-slice by applying a 2-D inverse FFT
(IFFT) over the spectral domain k_x'' and k_y''. The antenna gain pattern P appears in the
denominator, which causes the singularity problem wherever P approaches zero. Therefore, it
is necessary to develop a novel fast algorithm, FBP, to solve this issue and obtain balanced
images.
3.2.2 Fast Back-Projection (FBP)
To solve the singularity problem, a FBP algorithm is proposed in this section.
Starting from the forward model (3.1), to obtain the unknown σ(\mathbf{r}) a testing function,
the complex conjugate of the round-trip electric field intensity at a testing point \mathbf{r}_t,
is applied to both sides of (3.1), integrating over the aperture and frequency domain,
which leads to

\int_\omega \int_{x'} \int_{y'} s(\mathbf{r}',\omega)\, [E^2(\mathbf{r}_t,\mathbf{r}',\omega)]^*\, dx'\, dy'\, d\omega = \int_V \sigma(\mathbf{r}) \int_\omega \int_{x'} \int_{y'} E^2(\mathbf{r},\mathbf{r}',\omega)\, [E^2(\mathbf{r}_t,\mathbf{r}',\omega)]^*\, dx'\, dy'\, d\omega\, dV   (3.14)

It is assumed that the point spread function (PSF) has non-zero values only when the
testing point matches the pixel point in the imaging region, i.e., it exhibits a delta-function-like
behavior,

PSF(\mathbf{r},\mathbf{r}_t) = \int_\omega \int_{x'} \int_{y'} E^2(\mathbf{r},\mathbf{r}',\omega)\, [E^2(\mathbf{r}_t,\mathbf{r}',\omega)]^*\, dx'\, dy'\, d\omega = \begin{cases} \int_\omega \int_{x'} \int_{y'} |E^2(\mathbf{r},\mathbf{r}',\omega)|^2\, dx'\, dy'\, d\omega, & \mathbf{r} = \mathbf{r}_t \\ 0, & \text{otherwise} \end{cases}   (3.15)

The right-hand side of (3.14) becomes the reflectivity function σ(\mathbf{r}) times the PSF. By
dividing the left-hand side by the non-zero PSF, the BP imaging function is achieved,

\sigma(\mathbf{r}) = \frac{\int_\omega \int_{x'} \int_{y'} s(\mathbf{r}',\omega)\, [E^2(\mathbf{r},\mathbf{r}',\omega)]^*\, dx'\, dy'\, d\omega}{PSF(\mathbf{r},\mathbf{r})}   (3.16a)

PSF(\mathbf{r},\mathbf{r}) = \int_\omega \int_{x'} \int_{y'} |E^2(\mathbf{r},\mathbf{r}',\omega)|^2\, dx'\, dy'\, d\omega   (3.16b)

Compared with the DT algorithm (3.13), the gain pattern in (3.16b) is contained inside
the denominator integral, so the singularity problem is avoided. The PSF is a
function of the pixel point only and is fixed over the entire image. Moreover, the PSF is a
slowly varying amplitude term that is independent of the measured data, so it can be
precomputed; it compensates the wide-angle effect and balances the near-field image by
accounting for the antenna gain pattern.
To achieve real-time image generation, the computation of the numerator in (3.16a)
must be accelerated. As in DT, the stationary phase method can be applied to reduce the
integration complexity and utilize the FFT. Substituting (3.2) into (3.16a) using
E^2 = E \cdot E', the numerator term can be rewritten as

N(\mathbf{r}) = \int_\omega d\omega \int_{x'} \int_{y'} s(\mathbf{r}',\omega)\, dx'\, dy' \iiiint_{-\infty}^{\infty} dk_x\, dk_y\, dk_x'\, dk_y'\; P(k_x,k_y,\omega)\, P(k_x',k_y',\omega)\, e^{j[(k_x+k_x')(x-x') + (k_y+k_y')(y-y') + (k_z+k_z')z]}   (3.17)

As in the DT formulation, a simplified expression can be obtained by changing
variables with k_x'' = k_x + k_x' and k_y'' = k_y + k_y' in (3.17) and utilizing the
Fourier transform relation over the sensor aperture,

N(\mathbf{r}) = \int_\omega d\omega \iint_{-\infty}^{\infty} dk_x''\, dk_y'' \left[ \iint_{-\infty}^{\infty} dk_x\, dk_y\; P(k_x,k_y,\omega)\, P(k_x''-k_x, k_y''-k_y, \omega)\, e^{j(k_z+k_z')z} \right] \tilde{s}(k_x'',k_y'',\omega)\, e^{j(k_x'' x + k_y'' y)}   (3.18)

where the transform pair between the spectral and spatial received signals is given in
(2.53b) with the same dispersion relation. The inner double integral over k_x and k_y in
(3.18) is

I_{BP}(k_x'',k_y'',z) = \iint_{-\infty}^{\infty} P(k_x,k_y,\omega)\, P(k_x''-k_x, k_y''-k_y, \omega)\, e^{j(k_z+k_z')z}\, dk_x\, dk_y   (3.19)

Compared with the inner double integral (3.6) in the DT algorithm, the difference is the
sign of the exponential term, which is positive in (3.19). As in DT,
(3.19) can be approximated by the method of stationary phase, where the large
dimensionless parameter is k_0 z and the phase function has the opposite sign,
\varphi(k_x,k_y) = -(k_z + k_z')/k_0. The same stationary phase point (k_x''/2, k_y''/2) is obtained,
with the same amplitude term of the asymptotic solution,
|\det \Phi(k_x''/2, k_y''/2)| = 4/k_z^4. The stationary phase approximation of (3.19) is given by

I_{BP}(k_x'',k_y'',z) = \frac{j \pi (k_z'')^2\, P^2(k_x''/2, k_y''/2, \omega)}{4 k_0 z}\, e^{j k_z'' z}   (3.20)

Comparing (3.7) and (3.20), they have the same dispersion relation; the only difference
between DT and fast BP is the sign of the exponential term when applying the stationary
phase method. Substituting (3.20) into (3.18), the simplified numerator term can be
obtained,

N(\mathbf{r}) = \frac{j \pi}{4 z} \int_\omega d\omega \iint_{-\infty}^{\infty} \frac{(k_z'')^2}{k_0}\, P^2(k_x''/2, k_y''/2, \omega)\, \tilde{s}(k_x'',k_y'',\omega)\, e^{j(k_x'' x + k_y'' y + k_z'' z)}\, dk_x''\, dk_y''   (3.21)

Comparing (3.13) and (3.21), DT and FBP achieve the same algorithmic efficiency: both
images are formed slice-by-slice with a 2-D IFFT over the spectral domain. Finally, the
BP imaging function is

I(\mathbf{r}) = \left| \frac{N(\mathbf{r})}{PSF(\mathbf{r},\mathbf{r})} \right|   (3.22)

where the denominator PSF can be precomputed, and does not influence the
computation of the numerator. To generate the following images, the antenna gain pattern
is taken as a simple cosine pattern, P = k_z''/k_0.

Both the UDT and FBP methods take advantage of the 2-D FFT and IFFT to achieve
real-time image formation. The difference is that FBP includes the PSF and thereby
solves the singularity problem caused by the antenna gain pattern in DT.
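The slice-by-slice FFT structure shared by (3.13) and (3.21) can be sketched numerically for the monostatic free-space case. The grid handling, the cosine-type gain pattern, and the crude scalar PSF accumulation are simplifying assumptions for illustration, not the dissertation's full implementation:

```python
import numpy as np

c = 3e8  # speed of light (m/s)

def fbp_slices(s, xs, ys, freqs, zs, eps=1e-3):
    """s: monostatic aperture data cube (nx, ny, nf); returns |image| (nx, ny, nz).
    Mirrors the slice-by-slice 2-D IFFT structure of (3.21)-(3.22)."""
    nx, ny, nf = s.shape
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    kxpp = 2 * np.pi * np.fft.fftfreq(nx, dx)        # k_x'' grid
    kypp = 2 * np.pi * np.fft.fftfreq(ny, dy)        # k_y'' grid
    KX, KY = np.meshgrid(kxpp, kypp, indexing='ij')
    img = np.zeros((nx, ny, len(zs)), dtype=complex)
    psf = np.zeros(len(zs))
    for fi, f in enumerate(freqs):
        k0 = 2 * np.pi * f / c
        kz2 = 4 * k0**2 - KX**2 - KY**2              # (k_z'')^2
        prop = kz2 > 0                               # keep propagating waves only
        kzpp = np.sqrt(np.where(prop, kz2, 0.0))
        P2 = (kzpp / (2 * k0))**2                    # assumed cosine gain pattern, squared
        S = np.fft.fft2(s[:, :, fi])                 # spatial -> spectral aperture data
        W = np.where(prop, kz2 / k0 * P2 * S, 0.0)   # spectral weighting of (3.21)
        for zi, z in enumerate(zs):
            img[:, :, zi] += np.fft.ifft2(W * np.exp(1j * kzpp * z))
            psf[zi] += np.sum(np.where(prop, P2**2, 0.0))   # crude PSF accumulation
    return np.abs(img) / (psf[None, None, :] + eps)
```

Because the pattern weight sits inside the accumulated PSF rather than in a pointwise denominator, no division by a vanishing P² ever occurs.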
3.2.3 Numerical Imaging Results
In this section, 3-D finite-difference time-domain numerical data [64] is used to
verify that the FBP algorithm has the same efficiency as DT. In the simulation, the
antenna is a y-oriented Hertzian dipole, moving in a rectangular aperture plane at z = 0 cm
from -6.35 cm to 6.35 cm with a step size of 1.27 cm in both the x and y directions. The
frequency band of the antenna is from 2 GHz to 10 GHz with a step size of 100 MHz. The
estimated cosine gain pattern is used in both imaging algorithms.

As shown in Fig. 3.2 (a), a perfect electrically conducting (PEC) cylinder is located
at (0, 0, 15.24) cm in free space, with a radius of 2.54 cm and a length of 7.62 cm.
The imaging region is 12.7 cm × 12.7 cm × 25.4 cm with 101×101×101 pixels.
Generating the images slice-by-slice in Figs. 3.2 (b) and (c), DT and FBP
have the same time cost, approximately 4 seconds.

Comparing Figs. 3.2 (b) and (c), the cylinder image constructed by DT is not
smooth, because of the singularity problem. To observe the cylinder more clearly,
x and y cut images at x = 0 and y = 0 cm are presented in Figs. 3.2 (d)-(g).
Fig. 3.2. Comparison between 3-D images of a PEC cylinder based on both the DT and FBP algorithms:
(a) a PEC cylinder located at (0, 0, 15.24) cm with a radius of 2.54 cm and length of 7.62 cm; (b) DT, 3-D
view; (c) FBP, 3-D view; (d) DT, x cut; (e) FBP, x cut; (f) DT, y cut; (g) FBP, y cut. All x cuts are at
x = 0 cm and all y cuts at y = 0 cm. The color scale is in dB with a dynamic range of 20 dB.
The circular cross-section of the cylinder, outlined by the black dashed line, is shown in Figs. 3.2 (d)
and (e), where its front surface closest to the synthetic aperture can be observed at (0, 0,
12.7) cm in the image. The rectangular cross-section of the cylinder is presented in Figs.
3.2 (f) and (g). It is observed in Figs. 3.2 (d) and (f) that there are defects in the DT
images, due to the singularity issue caused by the antenna gain pattern. In the layered
case, this problem tends to be worse and the interfaces become unsmooth. As derived in
(3.22), FBP exhibits no singularity problem, and smooth images are obtained in Figs. 3.2
(e) and (g).
3.3 Monostatic and Bistatic Combined Radar System
In Section 3.2, it was shown that FBP has the same computational efficiency as
DT and overcomes the singularity problem caused by the antenna gain pattern, making it
possible to balance monostatic and bistatic data for a multistatic antenna array. In this
section, the free-space case is extended to the layered case, and the FBP imaging algorithm
is verified with a near-field monostatic and bistatic SAR system. It is
demonstrated that well-balanced monostatic and bistatic images are obtained while the data
collection time is reduced by half.
3.3.1 The Forward Model
While the forward model in the previous section is based on the monostatic
configuration, a general forward model valid for both monostatic and bistatic systems is
provided. The configuration of the monostatic and bistatic microwave imaging system is
Fig. 3.3. A buried object in an opaque layered environment. The transmitter and receiver antennas
move in the xoy plane.
depicted in Fig. 3.3. There are two horn antennas: one behaves as the transceiver, the
other as a receiver, with separation distance D in the x direction. Both are close to a
nonmagnetic stratified medium whose electrical properties vary piecewise only
in the z direction. The conductivity of each layer is neglected in the
formulation. The transmitter and receiver reside in the semi-infinite free space at
distance d_0 from their aperture to the first interface. The electromagnetic wave propagates
through each layer of thickness d_i and relative permittivity \varepsilon_{r,i}, illuminating buried
objects (metals, voids, and cracks), which scatter back to the two separated antennas.
The recorded monostatic and bistatic signals (electric field intensities) are

s_\tau(\mathbf{r}_t,\omega) = \int_V E_\tau(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega)\, \sigma_n(\mathbf{r})\, dV   (3.23)

where \mathbf{r} = (x, y, z) is each pixel point, \mathbf{r}_t = (x_t, y_t, 0) and \mathbf{r}_r = (x_t + D, y_t, 0) are the transmitter
and receiver locations in the xy plane, D is the separation between the transmitter and
receiver along the x axis, ω is the radian frequency, \sigma_n is the reflectivity function in the
nth layer of the imaging region V of interest, τ denotes mono or bi for the monostatic or
bistatic case, and E_\tau is the round-trip electric field intensity,

E_{mono}(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega) = E_t^2(\mathbf{r},\mathbf{r}_t,\omega)   (3.24a)

E_{bi}(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega) = E_t(\mathbf{r},\mathbf{r}_t,\omega)\, E_r(\mathbf{r},\mathbf{r}_r,\omega)   (3.24b)

where E_t and E_r are the scalar transmitting and receiving electric field intensities in
the nth layer. The monostatic configuration can be treated as a special
case of the bistatic one with the transmitter and receiver co-located, so that D = 0
and E_r = E_t. E_t and E_r are functions of the sensor antenna location, pixel point, and
frequency, and can be obtained in the absence of targets [51],

E_t(\mathbf{r},\mathbf{r}_t,\omega) = \iint_{-\infty}^{\infty} P_t(k_x,k_y,\omega)\, e^{-j[k_x(x-x_t) + k_y(y-y_t) + k_{zn}(z - \sum_{i=0}^{n-1} d_i) + \sum_{i=0}^{n-1} k_{zi} d_i]}\, dk_x\, dk_y   (3.25a)

E_r(\mathbf{r},\mathbf{r}_r,\omega) = \iint_{-\infty}^{\infty} P_r(k_x,k_y,\omega)\, e^{-j[k_x(x-x_t-D) + k_y(y-y_t) + k_{zn}(z - \sum_{i=0}^{n-1} d_i) + \sum_{i=0}^{n-1} k_{zi} d_i]}\, dk_x\, dk_y   (3.25b)

where P_t and P_r are the far-field gain patterns of the antennas, k_i = 2\pi f \sqrt{\varepsilon_{r,i}}/c is the
wave number in the ith dielectric layer with relative permittivity \varepsilon_{r,i}, and the
corresponding dispersion relation is k_{zi} = +\sqrt{k_i^2 - k_x^2 - k_y^2}, with i denoting n or l.
3.3.2 BP Inverse Scattering
To solve the singularity problem in UDT, a FBP algorithm is proposed in this
section. In (3.23), (3.24a), and (3.24b), the known quantities are the received signal
s_\tau(\mathbf{r}_t, \omega) and the scalar electric field intensities E_t, E_r. To solve this inverse problem
and obtain the unknown \sigma_n(\mathbf{r}), a testing function, the complex conjugate of the
round-trip electric field intensity at a testing point \mathbf{r}' = (x', y', z'), is applied to both sides of
(3.23), integrating over the aperture and frequency domain, which leads to

\int_\omega \int_{x_t} \int_{y_t} s_\tau(\mathbf{r}_t,\omega)\, E_\tau^*(\mathbf{r}',\mathbf{r}_t,\mathbf{r}_r,\omega)\, dx_t\, dy_t\, d\omega = \int_V \sigma_n(\mathbf{r}) \int_\omega \int_{x_t} \int_{y_t} E_\tau(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega)\, E_\tau^*(\mathbf{r}',\mathbf{r}_t,\mathbf{r}_r,\omega)\, dx_t\, dy_t\, d\omega\, dV   (3.26)

It is assumed that the point spread function (PSF) has non-zero values only when the
testing point matches the pixel point in the imaging region, i.e., it exhibits a delta-function-like
behavior,

PSF_\tau(\mathbf{r},\mathbf{r}') = \int_\omega \int_{x_t} \int_{y_t} E_\tau(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega)\, E_\tau^*(\mathbf{r}',\mathbf{r}_t,\mathbf{r}_r,\omega)\, dx_t\, dy_t\, d\omega = \begin{cases} \int_\omega \int_{x_t} \int_{y_t} |E_\tau(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega)|^2\, dx_t\, dy_t\, d\omega, & \mathbf{r} = \mathbf{r}' \\ 0, & \text{otherwise} \end{cases}   (3.27)

The right-hand side of (3.26) becomes the reflectivity function \sigma_n(\mathbf{r}) times the PSF. By
dividing the left-hand side by the non-zero PSF, the BP imaging function is achieved,

\sigma_n(\mathbf{r}) = \frac{\int_\omega \int_{x_t} \int_{y_t} s_\tau(\mathbf{r}_t,\omega)\, E_\tau^*(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega)\, dx_t\, dy_t\, d\omega}{PSF_\tau(\mathbf{r},\mathbf{r})}   (3.28a)

PSF_\tau(\mathbf{r},\mathbf{r}) = \int_\omega \int_{x_t} \int_{y_t} |E_\tau(\mathbf{r},\mathbf{r}_t,\mathbf{r}_r,\omega)|^2\, dx_t\, dy_t\, d\omega   (3.28b)

Compared with the UDT imaging functions of [27] and [32], the gain pattern here is
contained inside the denominator integral, so the singularity problem is
avoided. Note that in (3.28a), if the object is a point scatterer, the reflectivity
function takes the value 1 at the location of the object.
The PSF is a slowly varying amplitude term, independent of the
measured data, that can be precomputed; it compensates the wide-angle effect and
balances the near-field monostatic and bistatic images by accounting for the
antenna gain pattern. The PSF differs between the monostatic and bistatic cases because
of the different antenna configurations.
3.3.3 Stationary Phase Solution
To achieve real-time image generation, we must accelerate the computation of the
numerator in (3.28a). As in UDT, the stationary phase method can be applied to reduce
the integration complexity and utilize the FFT. The monostatic configuration is a special case
of the bistatic one, so we focus on the derivation for the bistatic case.

Substituting (3.24)-(3.25) into (3.28a) using E_{bi} = E_t \cdot E_r, the numerator for the
bistatic case can be rewritten as

N_{bi}(\mathbf{r}) = \int_\omega d\omega \int_{x_t} \int_{y_t} s_{bi}(\mathbf{r}_t,\omega)\, dx_t\, dy_t \iiiint_{-\infty}^{\infty} dk_x\, dk_y\, dk_x'\, dk_y'\; P_t(k_x',k_y',\omega)\, P_r(k_x,k_y,\omega)\, e^{-j k_x D}\, e^{j[(k_x+k_x')(x-x_t) + (k_y+k_y')(y-y_t) + (k_{zn}+k_{zn}')(z - \sum_{i=0}^{n-1} d_i) + \sum_{i=0}^{n-1} (k_{zi}+k_{zi}') d_i]}   (3.29)

where D is the separation between the two antennas. If D = 0, (3.29) represents the
monostatic case.

Changing variables with k_x'' = k_x + k_x' and k_y'' = k_y + k_y' in (3.29) and utilizing the
Fourier transform relation over the sensor aperture, a simplified expression can be
obtained,

N_{bi}(\mathbf{r}) = \int_\omega d\omega \iint_{-\infty}^{\infty} dk_x''\, dk_y'' \left[ \iint_{-\infty}^{\infty} dk_x\, dk_y\; P_t(k_x''-k_x, k_y''-k_y, \omega)\, P_r(k_x,k_y,\omega)\, e^{-j k_x D}\, e^{j[(k_{zn}+k_{zn}')(z - \sum_{i=0}^{n-1} d_i) + \sum_{i=0}^{n-1} (k_{zi}+k_{zi}') d_i]} \right] \tilde{s}_{bi}(k_x'',k_y'',\omega)\, e^{j(k_x'' x + k_y'' y)}   (3.30a)

\tilde{s}_{bi}(k_x'',k_y'',\omega) = \int_{x_t} \int_{y_t} s_{bi}(\mathbf{r}_t,\omega)\, e^{-j(k_x'' x_t + k_y'' y_t)}\, dx_t\, dy_t   (3.30b)

where (3.30b) gives the transform pair between the spectral and spatial received
signals. The dispersion relation in (3.30a) is

k_{z\alpha}' = +\sqrt{k_\alpha^2 - (k_x''-k_x)^2 - (k_y''-k_y)^2}   (3.31)

where α denotes n or l, n is the last layer in the imaging region, and l is a layer above
n (l < n). The inner double integral over k_x and k_y in (3.30a) can be asymptotically evaluated
by the stationary phase method at the stationary phase point (k_x''/2, k_y''/2) [27],

I_n(k_x'',k_y'',z) = \frac{2\pi j\, P_t(k_x''/2, k_y''/2, \omega)\, P_r(k_x''/2, k_y''/2, \omega)\, e^{-j k_x'' D/2}}{\sqrt{|\det \tilde{\Phi}(k_x''/2, k_y''/2)|}}\, e^{j[k_{zn}''(z - \sum_{i=0}^{n-1} d_i) + \sum_{i=0}^{n-1} k_{zi}'' d_i]}   (3.32a)

where the amplitude term \det \tilde{\Phi} is determined by the second derivatives of the phase
function of the inner double integral, evaluated at the stationary point; it accounts for all
the layers' thicknesses and relative permittivities and provides a smoothly continuous
image through the interfaces. With \Delta_n = z - \sum_{i=0}^{n-1} d_i and the k_{zi} evaluated at
(k_x''/2, k_y''/2),

\det \tilde{\Phi} = \varphi_{xx}\varphi_{yy} - \varphi_{xy}^2, \qquad
\varphi_{xx} = -2\left[\sum_{i=0}^{n-1} \frac{(k_i^2 - (k_y''/2)^2)\, d_i}{k_{zi}^3} + \frac{(k_n^2 - (k_y''/2)^2)\, \Delta_n}{k_{zn}^3}\right], \qquad
\varphi_{yy} = -2\left[\sum_{i=0}^{n-1} \frac{(k_i^2 - (k_x''/2)^2)\, d_i}{k_{zi}^3} + \frac{(k_n^2 - (k_x''/2)^2)\, \Delta_n}{k_{zn}^3}\right], \qquad
\varphi_{xy} = -\frac{k_x'' k_y''}{2}\left[\sum_{i=0}^{n-1} \frac{d_i}{k_{zi}^3} + \frac{\Delta_n}{k_{zn}^3}\right]   (3.32b)

We define the dispersion relation

k_{zi}'' = \sqrt{4 k_i^2 - (k_x'')^2 - (k_y'')^2}   (3.33)

Substituting (3.32a) into (3.30a), a simplified expression can be obtained,

N_{bi}(\mathbf{r}) = \int_\omega d\omega \iint_{-\infty}^{\infty} dk_x''\, dk_y''\; \frac{2\pi j\, P_t(k_x''/2, k_y''/2, \omega)\, P_r(k_x''/2, k_y''/2, \omega)\, e^{-j k_x'' D/2}}{\sqrt{|\det \tilde{\Phi}(k_x''/2, k_y''/2)|}}\, e^{j[k_{zn}''(z - \sum_{i=0}^{n-1} d_i) + \sum_{i=0}^{n-1} k_{zi}'' d_i]}\; \tilde{s}_{bi}(k_x'',k_y'',\omega)\, e^{j(k_x'' x + k_y'' y)}   (3.34)

This numerator term (3.34) can be evaluated slice-by-slice in real time with the
implementation of the 2D IFFT. When the separation of the two antennas is D = 0, (3.34)
represents the monostatic case. Finally, the monostatic or bistatic imaging function, the
absolute value of the reflectivity function, is obtained from (3.28a) and (3.34),

I_\tau(\mathbf{r}) = \left| \frac{N_\tau(\mathbf{r})}{PSF_\tau(\mathbf{r},\mathbf{r})} \right|   (3.35)

where the denominator PSF depends only on the antenna arrangement and the layer
properties. The combination of well-balanced monostatic and bistatic images is then

I_{comb}(\mathbf{r}) = I_{mono}(\mathbf{r}) + I_{bi}(\mathbf{r})   (3.36)

where the absolute values of the monostatic and bistatic images are added. I_{bi}(\mathbf{r}) is
obtained directly from (3.35), and I_{mono}(\mathbf{r}) from (3.35) with the antenna separation set to
D = 0 cm in (3.34). The algorithm proceeds as follows.

1. Precompute \tilde{s}_\tau(k_x'', k_y'', \omega) with the FFT using (3.30b), and PSF_\tau(\mathbf{r},\mathbf{r}) using (3.28b).
2. For each z-slice:
a. Compute the integrand of (3.34) except the exponential term e^{j(k_x'' x + k_y'' y)}.
b. Compute N_\tau(x, y) from (3.34) with an IFFT over k_x'' and k_y'' for each ω.
c. Integrate over ω to obtain N_\tau(\mathbf{r}) from (3.34).
3. Combine the absolute values of the balanced monostatic and bistatic images.
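The three steps above can be sketched compactly for one configuration (monostatic when D = 0). The layer bookkeeping and grids are illustrative, and the stationary-phase amplitude term |det Φ̃|^(1/2) is replaced by 1 for brevity, so this is a structural sketch rather than the dissertation's full implementation:

```python
import numpy as np

c = 3e8  # speed of light (m/s)

def layered_fbp(s, dx, freqs, zs, d, eps_r, D=0.0):
    """FBP numerator sketch following steps 1-3.
    s: (nx, ny, nf) complex aperture data; dx: scan step (m);
    d, eps_r: layer thicknesses (m) and relative permittivities;
    D: Tx-Rx separation (m). Amplitude term |det Phi|^(1/2) set to 1."""
    nx, ny, nf = s.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    S = np.fft.fft2(s, axes=(0, 1))                  # step 1: spectral data (3.30b)
    iface = np.concatenate(([0.0], np.cumsum(d)))    # cumulative interface depths
    img = np.zeros((nx, ny, len(zs)), dtype=complex)
    for fi, f in enumerate(freqs):
        k = 2 * np.pi * f * np.sqrt(np.asarray(eps_r, float)) / c   # k_i per layer
        kz = [np.sqrt(np.maximum(4.0 * ki**2 - KX**2 - KY**2, 0.0)) for ki in k]
        for zi, z in enumerate(zs):
            n = min(max(np.searchsorted(iface, z) - 1, 0), len(d) - 1)
            phase = kz[n] * (z - iface[n])           # k_zn'' (z - sum d_i)
            for i in range(n):
                phase = phase + kz[i] * d[i]         # + sum k_zi'' d_i
            W = S[:, :, fi] * np.exp(1j * (phase - KX * D / 2))
            img[:, :, zi] += np.fft.ifft2(W)         # steps 2a-2c per slice
    return np.abs(img)                               # combined per step 3 by the caller
```

A full implementation would also divide by the precomputed PSF of (3.28b) and sum the monostatic (D = 0) and bistatic results per (3.36).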
3.3.4 Experimental Setup
Fig. 3.4. The experimental scenario: (a) the data collection device and slider controller; (b) the 5-layer
structure and two horn antennas (this photograph illustrates the scanning mechanism and is not the
measurement used in the following imaging results).
In Fig. 3.4 (a), the slider controller and vector network analyzer (VNA) are under
the control of a personal computer, which performs the mechanical scan and data collection.
The scanning frequency band is from 2.02 to 10 GHz with a step size of 30 MHz. As
shown in Fig. 3.4 (b), two broadband dual-ridged horn antennas (H-1498) are mounted
on the slider with a center-to-center separation D = 13.97 cm and vertical
polarization (perpendicular to the ground). The 38.1 cm × 17.78 cm scanning aperture, with a
step size of 1.27 cm, moves parallel to the first interface of the multilayer structure with a
standoff distance of 2.54 cm. The received monostatic and bistatic signals s_mono and s_bi are
matrices of size 31×15×267, which took 40 minutes to collect. The thicknesses
of the three dielectric layers are accurately measured as 5.08, 7.62, and 7.62 cm along
the z direction, respectively. A metal strip is attached to the second interface.
For the stratified structure, the layers' dielectric constants are normally unknown and may change with room temperature, pressure, and humidity because concrete and wood are porous materials; they therefore need to be estimated before image generation. To obtain well-balanced images from monostatic and bistatic data, the gain pattern of the horn antenna should be incorporated.
A. Estimation of the relative permittivity for each layer
The waterfall plot is the sequence of downrange profiles computed without knowledge of the relative permittivity of each layer, including only the round-trip effect. Fig. 3.5 presents one slice of the waterfall plot, showing four peaks corresponding to the four interfaces of the 5-layer structure, from which the propagation delays can be determined. If the dielectric constants were accounted for, each peak's location would match the real scenario. By measuring the distance between two adjacent peaks and knowing the thickness of each layer, the ith layer's relative permittivity ε_ri can be estimated with the following formula,

ε_ri = (Δz_i / d_i)²  (3.37)

where Δz_i is the distance between two adjacent peaks in Fig. 3.5 and d_i is the thickness of the ith layer. The estimated values of relative permittivity are 2.25, 4.55, and 6.58, respectively.
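As a quick check, (3.37) can be evaluated directly from the peak separations annotated in Fig. 3.5 and the measured thicknesses; with the two-decimal peak readings used here the last value rounds to 6.59 rather than the quoted 6.58.

```python
# Relative permittivity from the waterfall peak spacing, eq. (3.37):
# eps_ri = (dz_i / d_i)**2, where dz_i is the apparent separation of
# adjacent peaks and d_i is the known layer thickness.
peak_separation_cm = [7.62, 16.26, 19.56]  # read from Fig. 3.5
thickness_cm = [5.08, 7.62, 7.62]          # measured layer thicknesses

eps_r = [(dz / d) ** 2 for dz, d in zip(peak_separation_cm, thickness_cm)]
print([round(e, 2) for e in eps_r])  # -> [2.25, 4.55, 6.59]
```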
Fig. 3.5. The downrange profile from the waterfall plot; the annotated peak separations of 7.62, 16.26, and 19.56 cm yield ε_r1 = 2.25, ε_r2 = 4.55, and ε_r3 = 6.58.
B. Estimation of the horn antenna gain pattern
It is the wide-angle antenna pattern that results in unbalanced near-field monostatic and bistatic images. As shown in Fig. 3.6, a very large interface reflection is received from the broadside of the horn antenna in the monostatic configuration. In the bistatic case, however, the receiver picks up a lower electric field strength from the sidelobe of the antenna, resulting in different amplitudes in the monostatic and bistatic images. Compared with the shallow point A, the angle θ is smaller at the deep point B, so the wide-angle effect is weaker in the deeper region. As the standoff distance increases toward the far-field region, the angle θ becomes small enough to be treated as the broadside of the horn and the wide-angle effect vanishes. In this case, the bistatic
configuration is a special monostatic one with the virtual sensor located midway between the two antennas.
Fig. 3.6. Schema of the horn antenna illuminating the near-field layered medium.
In near-field imaging, one may attempt to balance the monostatic and bistatic images by normalizing each to its peak value. However, that approach has no physical basis, and the peak value may result from the interface reflection, leading to unbalanced field strengths on targets. Therefore, it is important to account for the gain pattern to obtain well-balanced images of the buried object.
It is assumed that the horn antenna has a cosine gain pattern. The same horns are
used as the transmitter and receiver. In the spectral domain, the gain pattern can be
expressed as,
G_T = G_R = cos θ = k_z0 / k_0  (3.38)

where k_0 is the wavenumber in free space, and the corresponding dispersion relation is k_z0 = √(k_0² − k_x² − k_y²).
Although it is not rigorous to assume a simple cosine pattern, the imaging results in the following section agree well with the theory. Furthermore, this approach may be improved with a cosine-based polynomial matched to the simulated or measured gain pattern at each frequency.
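The spectral gain model (3.38) is straightforward to evaluate. In the sketch below (the function name and the evanescent-cutoff handling are our own choices), the evanescent region where k_x² + k_y² > k_0², for which k_z0 would be imaginary, is simply zeroed.

```python
import numpy as np

def cosine_gain(kx, ky, k0):
    """Spectral-domain cosine gain pattern, eq. (3.38): G = k_z0 / k_0,
    with k_z0 = sqrt(k0^2 - kx^2 - ky^2). Evanescent components
    (kx^2 + ky^2 > k0^2) are set to zero in this model."""
    kz0_sq = k0 ** 2 - kx ** 2 - ky ** 2
    kz0 = np.sqrt(np.maximum(kz0_sq, 0.0))
    return np.where(kz0_sq > 0, kz0 / k0, 0.0)

print(float(cosine_gain(0.0, 0.0, 1.0)))                          # -> 1.0
print(round(float(cosine_gain(np.sin(np.pi / 3), 0.0, 1.0)), 6))  # -> 0.5
```

At broadside (k_x = k_y = 0) the gain is 1, and at a 60° cone angle it is cos 60° = 0.5, as the two printed values confirm.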
C. Estimation of the horn antenna phase center
The measured data tends to include reflections from the coaxial cable connectors
and the time delay in the cable. These reflections can be mitigated through the subtraction
of background measurement or spatial-averaged measurement. To remove time delay
effects, the reference plane can be moved from the VNA port to the sensor’s aperture.
However, the phase center of the dual-ridged horn antenna is located inside the horn.
Therefore, it is inappropriate to treat the center of the horn’s aperture as its phase center
in the near-field situation. Therefore, it is prudent to estimate the horn’s phase center.
In Fig. 3.7, the feed of the horn antenna is placed at the origin of the Cartesian coordinate system in the simulation. The distance from the feed to the aperture of the waveguide is 0.6 cm, and the depth of the horn is 9.9 cm. r′ and r are the distances from a far-field point to the phase center and to the origin, respectively, d is the distance from the phase center to the origin, and θ is the angle measured from the z axis. If referenced to the phase center, the far-field electric fields radiated by the horn are spherical waves with equiphase
Fig. 3.7. The horn antenna in HFSS.
φ_0. If referenced to the feed point, a phase variation of the electric field is observed, resulting from the separation d of the phase center,

E(r, θ) ~ (e^{−jk_0 r} / r) |f(θ)| e^{j(k_0 d cos θ + φ_0)}  (3.39)

where k_0 is the wavenumber in free space and f(θ) is the equiphase wave front with a constant phase φ_0, which is independent of location. Due to the phase center shift d, the phase varies as Φ(θ) = k_0 d cos θ + φ_0.
This phase term Φ(θ) can be obtained from the simulation by observing the phase of the E-field in the far field. Taking the first derivative of the phase term with respect to θ, we find that the distance d is

d = −(1 / (k_0 sin θ)) · dΦ(θ)/dθ  (3.40)
The distance d is estimated from the phase of the H-plane (φ = 0°) electric field, with the equiphase wave front observed from the broadside of the horn. Over the frequency band from 2.02 to 10 GHz, the distance from the phase center to the origin is stable at d = 4 cm. Accounting for the distance from the feed to the horn's inner aperture, the distance from the phase center to the horn's outer aperture is 6.6 cm. Therefore, the distance from the phase center to the first interface is 9.14 cm, which is used in obtaining the following images.
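The phase-center extraction in (3.40) amounts to a numerical derivative of the simulated far-field phase. The sketch below is illustrative (the names and the synthetic phase samples are not the HFSS export): it recovers the 4 cm offset from phase data generated with the model Φ(θ) = k_0 d cos θ + φ_0.

```python
import numpy as np

def phase_center_offset(theta, phi, k0):
    """Estimate d from eq. (3.40): d = -(dPhi/dtheta) / (k0 * sin(theta)).

    theta : far-field angles in radians (away from broadside, sin != 0)
    phi   : unwrapped phase of the far-zone E-field at those angles (rad)
    """
    dphi = np.gradient(phi, theta)      # numerical dPhi/dtheta
    d = -dphi / (k0 * np.sin(theta))
    return d.mean()

# Synthetic check against the model Phi(theta) = k0*d*cos(theta) + phi0:
k0 = 2 * np.pi * 5e9 / 3e8                       # 5 GHz
theta = np.linspace(0.2, 1.0, 200)
phi = k0 * 0.04 * np.cos(theta) + 1.2            # true d = 4 cm
print(round(phase_center_offset(theta, phi, k0) * 100, 2))  # -> 4.0
```

With measured or simulated phase data, `np.unwrap` should be applied first so the finite differences are not corrupted by 2π jumps.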
3.3.5 Experimental Imaging Results
In this section, 3-D measured data are employed to verify the efficacy of the proposed algorithm. To balance the monostatic and bistatic images, the estimated gain pattern of the horn antenna is incorporated into the imaging function, and the singularity problem is avoided. To reduce the data collection time by half (to 20 minutes), a larger sensor step size of 2.54 cm is applied along the x direction, while the step size along the y direction remains 1.27 cm. The imaging region is 38.1 cm×17.78 cm×25.4 cm with 121×57×41 pixels.
Due to the wide-angle effect of the horn antenna in the near-field imaging region, the peak strength in the bistatic image is predicted to be lower than that in the monostatic one. In Fig. 3.8, the monostatic and bistatic images are generated with the imaging function without the antenna gain pattern and PSF. Each image takes 5 seconds to obtain. Both configurations show the same location of the buried metal strip, attached to the second interface. The peak value of the bistatic
Fig. 3.8. 3-D imaging results at the second interface without the PSF, (a) monostatic, (b) bistatic. Color
scale is in dB with the dynamic range 20dB.
image is approximately 8 dB lower than that of the monostatic one, demonstrating the
prediction in the previous section.
In Fig. 3.9, imaging results are obtained by implementing the 3-D UDT algorithm [33]
with the consideration of the antenna gain pattern.
Fig. 3.9. 3-D UDT imaging results at the second interface with considering the gain pattern, (a) monostatic, (b) bistatic. Color scale is in dB with the dynamic range 20dB.
Unlike the proposed FBP
algorithm, the UDT formulation seeks the linear relation between the received signal and
reflectivity function in the spectral domain, where the gain pattern will be on the
denominator. Due to the weak illumination at wide angles of the antenna, the antenna pattern causes a singularity problem. Compared with Figs. 3.10 (b) and (d), the monostatic and bistatic images in Fig. 3.9 are imbalanced and not smooth, with a 3 dB difference in amplitude. Theoretically, with the gain pattern considered, UDT should form balanced images; due to the singularity, however, the imaging results cannot be balanced. It is therefore necessary to apply the proposed FBP algorithm to obtain balanced images without the singularity problem.
Because of the difference in amplitude, the monostatic and bistatic images cannot be combined directly. Implementing (3.35), balanced images are achieved with the gain pattern and PSF taken into account, as shown in Fig. 3.10, where the singularity problem is resolved. As seen in Figs. 3.10 (b) and (d), the monostatic and bistatic images are well balanced at the buried target, presenting the same scattering strength. The combined images are shown in Figs. 3.10 (e) and (f), where the smooth interface and hidden metal strip can be observed. The data collection time is 40 minutes with an antenna step size of 1.27 cm on both the x and y axes, while the image generation takes only 5 seconds.
With the implementation of the monostatic and bistatic radar system, our goal is to achieve a large sensor step size and reduce the scanning time. Twice the original step size, 2.54 cm along the x axis, is applied in obtaining Fig. 3.11, where the data acquisition time decreases to 20 minutes. Applying the same imaging algorithm in (3.35), the images in Fig. 3.11 are obtained in 3 seconds. In Figs. 3.11 (b) and (f), a smooth and well-focused object
Fig. 3.10. 3-D imaging results with the PSF, the sensor step size is 1.27 cm along both x and y axes (a)
monostatic, first interface, (b) monostatic, second interface, (c) bistatic, first interface, (d) bistatic,
second interface, (e) combined, first interface, (f) combined, second interface. Color scale is in dB with
the dynamic range 20dB.
Fig. 3.11. 3-D imaging results with the PSF, the sensor step size is 2.54 cm along the x axis and 1.27
cm along the y axis, (a) monostatic, first interface, (b) monostatic, second interface, (c) bistatic, first
interface, (d) bistatic, second interface, (e) combined, first interface, (f) combined, second interface.
Color scale is in dB with the dynamic range 20dB.
is obtained. Because of the larger sensor step size, the first interface of the layered medium is not smooth in the bistatic image, shown in Fig. 3.11 (c), and the fourth interface is not smooth in the monostatic image, shown in Fig. 3.11 (a). By combining the monostatic and bistatic images of Figs. 3.11 (a) and (c), smooth first and fourth interfaces are obtained in Fig. 3.11 (e), similar to the combined image in Fig. 3.10 (e). Compared with Fig. 3.10 (d), the target in Fig. 3.11 (d) is slightly defocused, which is resolved in the combined image of Fig. 3.11 (f).
3.4 Multistatic Radar System
In the previous sections, it was verified that balanced monostatic and bistatic images can be achieved by FBP with no singularity problem, reducing the data collection time by half. To realize fast data acquisition, a sparse multistatic antenna array is proposed in this section to utilize both monostatic and bistatic signals, where balancing all received signals is an important consideration. The different amplitudes of these signals are caused by the different look angles of each transmitting and receiving antenna pair, weighted by the gain pattern. With the proper implementation of the antenna pattern, balanced multistatic images can be obtained.
3.4.1 The Multistatic Array Design
In this design, a small horn antenna with an aperture size of 2.54 cm by 2.54 cm is applied. Its frequency band is from 4 to 9 GHz, where the shortest wavelength is about 3.33 cm.
(a)
Fig. 3.12. The proposed 13.2cm-by-13.2cm multistatic array, (a) Schema, (b) HFSS simulation with
the horn antenna.
To avoid grating lobes, the spacing between each element is chosen as 1.65cm
(approximately half the wavelength). Then, as shown in Fig. 3.12, the center-to-center
separation of two real elements is 3.3cm, which is larger than the antenna aperture size of
2.54 cm. Because of their physical size, the real elements cannot be spaced at ½ wavelength. The virtual elements fill in the space between the real elements, located exactly at the center of each pair of real elements. With the arrangement in Fig. 3.12(a), the
aperture is fully populated with real and virtual elements in the square defined by the real
elements at the corners. There will be 9 real elements in red circles, providing 24 virtual
elements for a total of 33 elements in the array. Compared with the fully populated array
with 81 (9×9) real elements, this sparse antenna array only needs 9 real elements.
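The virtual elements are simply the midpoints of transmit/receive pairs, so the layout bookkeeping can be checked in a few lines. The regular 3×3 grid below is only an illustration, not the staggered Fig. 3.12 layout; it yields 16 virtual elements rather than the 24 provided by the proposed arrangement, but the midpoint computation is the same.

```python
from itertools import combinations

def virtual_elements(real_xy):
    """Unique midpoints of all pairs of real elements that do not coincide
    with a real element: the virtual, monostatic-equivalent sensor
    positions of a multistatic array."""
    real = set(real_xy)
    mids = {((ax + bx) / 2, (ay + by) / 2)
            for (ax, ay), (bx, by) in combinations(real_xy, 2)}
    return sorted(mids - real)

# Illustrative 3x3 layout on a 3.3 cm grid (NOT the exact Fig. 3.12 layout):
real = [(3.3 * i, 3.3 * j) for i in range(3) for j in range(3)]
print(len(real), len(virtual_elements(real)))  # -> 9 16
```

This kind of check makes it easy to verify that a candidate layout fills the aperture on the half-wavelength (1.65 cm) grid before committing to hardware.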
3.4.2 Multistatic Imaging Function
The forward model of the multistatic antenna array system is similar to (3.23),

S(p, q, ω) = ∫ E_T(r, r_p, ω) E_R(r, r_q, ω) σ(r) dr  (3.41)

where p is the index of the transmitting antenna at location r_p = (x_p, y_p, 0) and q is the index of the receiving antenna at location r_q = (x_q, y_q, 0), representing each transmitter and receiver pair. E_T and E_R are the electric field intensities of the transmitting and receiving antennas in the absence of targets [51],

E_T(r, r_p, ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} G_T(k_x, k_y, ω) e^{−j[k_x(x−x_p) + k_y(y−y_p) + k_zN(z−Σ_{i=0}^{N−1} d_i) + Σ_{i=0}^{N−1} k_zi d_i]} dk_x dk_y  (3.42a)

E_R(r, r_q, ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} G_R(k_x, k_y, ω) e^{−j[k_x(x−x_q) + k_y(y−y_q) + k_zN(z−Σ_{i=0}^{N−1} d_i) + Σ_{i=0}^{N−1} k_zi d_i]} dk_x dk_y  (3.42b)
To solve the forward model (3.41), the BP imaging algorithm shown in the previous section is applied. The imaging function is

I(r) = [∫ Σ_p Σ_q S(p, q, ω) [E_T(r, r_p, ω) E_R(r, r_q, ω)]* dω] / PSF(r)  (3.43a)

PSF(r) = ∫ Σ_p Σ_q |E_T(r, r_p, ω) E_R(r, r_q, ω)|² dω  (3.43b)
Note that (3.43a) is not a real-time algorithm. The FFT cannot be directly
implemented because the transmitter and receiver positions are not on a regular grid.
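A direct evaluation of (3.43a) and (3.43b) can be sketched as below; the triple loop over transmitters, receivers, and frequencies is exactly why this form is not real-time. Array names and shapes are illustrative assumptions.

```python
import numpy as np

def multistatic_bp(S, E_T, E_R):
    """Direct back-projection over all (p, q) pairs, eq. (3.43).

    S   : received data, shape (P, Q, W)        -- Tx index, Rx index, freq
    E_T : incident fields, shape (P, W, *grid)  -- per Tx antenna and freq
    E_R : incident fields, shape (Q, W, *grid)  -- per Rx antenna and freq
    Returns the PSF-normalized image on the grid.
    """
    num = np.zeros(E_T.shape[2:], dtype=complex)
    psf = np.zeros(E_T.shape[2:])
    for p in range(S.shape[0]):
        for q in range(S.shape[1]):
            for w in range(S.shape[2]):
                G = E_T[p, w] * E_R[q, w]    # round-trip field product
                num += S[p, q, w] * np.conj(G)
                psf += np.abs(G) ** 2        # (3.43b), accumulated in place
    return num / psf
```

Dividing by (3.43b) balances the contribution of each transmitter/receiver pair; for a noise-free point target the normalized image equals 1 at the target pixel by construction.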
3.4.3 Theoretical Imaging Results
The proposed multistatic array is demonstrated and compared with the equivalent
33-element monostatic configuration. In free space, there is an ideal point target located
at (1.3”, 1.3”, 2.5”). The aperture size of the fixed antenna array is 13.2 cm×13.2 cm (5.2”
×5.2”). In Fig. 3.13, the imaging region is 15.24 cm×15.24 cm×12.7 cm (6” × 6” × 5“).
Fig. 3.13. The 3D image with a point target in free space, (a) Monostatic image with 33 real elements,
(b) Equivalent multistatic image.
It is observed that there are no grating lobes and the images of the monostatic and
equivalent multistatic cases are similar.
As shown in Fig. 3.14(a), an ideal point target is embedded in a layered structure at (0, 0, 12.7) cm. The proposed fixed antenna array is 2.54 cm away from the first interface, and the frequency band is from 4 to 9 GHz with a step size of 0.1 GHz. The multistatic received signal is generated from the layered Green's function for the point target and the interfaces separately. The image generation takes about 50 minutes.
In Fig. 3.14(b), the correct location of the point target can be observed. In Figs. 3.14
(c) to (f), there are empty corner regions on interfaces due to the sparse antenna
Fig. 3.14. 3-D multistatic imaging results, (a) the 5-layer structure (dielectric layers with ε_r = 2, 4, and 7), (b) 3D view at the target slice, (c) the first interface, (d) the second interface, (e) the third interface, (f) the fourth interface. Color scale is in dB with the dynamic range 20dB.
Fig. 3.15. 3-D multistatic imaging results based on the interface model, (a) the first interface, (b) the second interface, (c) the third interface, (d) the fourth interface. Color scale is in dB with the dynamic range 20dB.
arrangement. In order to obtain uniform interface reflections similar to monostatic
interface images, the moving average window proposed in (2.71) is applied to smooth the
multistatic data. Then, the 3D UDT algorithm is implemented to achieve uniform
interfaces in real time.
As presented in Fig. 3.15, uniform interfaces are observed and the empty corner regions are avoided by applying the moving average window; however, the point target no longer appears in these images.
Since there are no antennas at the corners of the fixed multistatic array, the interface images lose the corner information. To obtain uniform interface reflections, a moving average window is implemented, but the ideal point target is smoothed out by the window as well and cannot be observed in Fig. 3.15. Therefore, for this sparse multistatic imaging system, it is necessary to form a composite image to analyze embedded objects and interfaces individually.
To complete the design of this multistatic imaging system, the proposed antenna array should be verified with the commercial software ANSYS HFSS [70], which is based on the finite element method [71]. Then, we can move forward to build such a system and demonstrate it experimentally.
3.5 Summary of Chapter 3
To overcome the singularity issue caused by the antenna pattern, the FBP algorithm is proposed and compared with DT in free space, where FBP has a more straightforward formulation. To realize a real-time imaging system, both fast data collection and real-time image generation are required. Well-balanced images with the same peak pixel value are obtained from the monostatic and bistatic signals, reducing the data collection time by half, and the corresponding real-time imaging algorithm is developed. A multistatic antenna array is then proposed to achieve fast data acquisition. To balance each transmitter and receiver pair, the antenna gain pattern is considered in the imaging function. However, image generation is time-consuming because the corresponding fast multistatic imaging algorithm still needs to be developed. This antenna array will be validated by simulations and experiments in the future.
Chapter 4: Imaging Through the Periodic Structure
Fig. 4.1. Periodic structures, (a) Rebar in concrete, (b) Metal grating outside a masonry wall.
In the previous chapters, imaging through layered structures was investigated, and the UDT and FBP imaging algorithms were developed to realize real-time image generation. In reality, as shown in Fig. 4.1, a periodic structure may lie within a layer (rebar in reinforced concrete) or be attached in front of it (a metal grating outside a masonry wall), making the propagation mechanism entirely different from the case without the periodic structure.
Fig. 4.2. 3D FBP monostatic imaging results, where (a), (c), and (e) have no grating while (b), (d), and (f) have the grating: (a), (b) the first interface, (c), (d) the second interface, (e), (f) the third interface. Each image is normalized to its own peak value. Color scale is in dB with the dynamic range 20dB.
4.1 Introduction
Our questions are: will UDT or FBP work in the presence of a periodic structure, and can we obtain an image similar to the case without the periodic structure, showing buried objects clearly?
Fig. 4.2 is obtained by UDT from measurements of a layered medium with 3 dielectric layers. Figs. 4.2 (a), (c), and (e) present the case without the metal grating (shown in Fig. 4.1 (b)), where the first three smooth interfaces are imaged and two air gaps can be observed at the third interface. With the metal grating placed on the outer interface, in Figs. 4.2 (b), (d), and (f), a strong periodic pattern is seen at the first interface and the following interfaces are much weaker than in the case without the grating; the two gaps on the third interface can hardly be observed in Fig. 4.2 (f).
It is concluded that the conventional imaging algorithms, whether DT, BP, UDT, or FBP, will not work in the presence of the periodic structure. To solve this problem and obtain information on buried objects, it is necessary to develop an alternative imaging algorithm, which can incorporate a numerical Green's function or a periodic Green's function based on Floquet modes with FBP/UDT. The grating shown in Fig. 4.1 (b) is analyzed in the following, first finding the Floquet modes and then comparing the electric field distribution through the grating with the free-space case. Finally, an imaging algorithm based on regularized least squares is applied to solve the inverse problem.
4.2 Analysis of the Periodic Structure
4.2.1 Floquet Modes
Fig. 4.3. The metal grating with a diamond-shaped unit cell; the periodicity is 13 cm and 3.3 cm along the x and y axes, respectively.
As shown in Fig. 4.3, a metal grating with a diamond-shaped unit cell is attached to a masonry wall, shown in Fig. 4.1 (b). It is assumed that the unit cells are identical, with periodicity P_x = 13 cm along the x axis and P_y = 3.3 cm along the y axis.
In Fig. 4.4 (a), a 1D periodic structure, infinite along the y axis, is placed along the x axis with periodicity P_x. A plane wave is assumed to illuminate this structure. Then, the reflected and transmitted electric field intensities E^r and E^t can be expressed as,
100
∞
(4.1a)
∞
(4.1b)
  (, ) = ∫−∞ ( ) ∙  −( − ) 
x
  (, ) = ∫−∞ ( ) ∙  −( + ) 
x


z
z

y
(a)
(b)
Fig. 4.4. The periodic structure with the plane wave illumination, (a) 1D, (b) 2D.
By enforcing the periodic boundary condition E^t(x, z) = E^t(x + P_x, z) [68], one finds k_xm = k_x0 + 2πm/P_x, where k_x0 = k_0 sin θ_0, k_0 is the free-space wavenumber, θ_0 is the incidence angle of the plane wave, and m is the discrete Floquet mode index, which can be any integer. The dispersion relation is k_zm = √(k_0² − k_xm²). For a propagating mode, k_zm must be real, i.e., k_0² ≥ k_xm², leading to the cutoff frequency

f_c = |m| · c / [P_x (1 − sin θ_0)]  (4.2)
where c is the speed of light. For a normally incident wave, θ_0 = 0; with the periodicity P_x = 13 cm, the dominant Floquet modes in the frequency band from 2 to 10 GHz are obtained,
Table 4.1 Cut-off Frequencies of Different Floquet Modes for the 1D Periodic Structure

m        ±1    ±2    ±3    ±4
f (GHz)  2.3   4.6   6.9   9.2
These modes will have better penetration through the periodic structure, which
should be considered in the imaging problem.
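Table 4.1 follows directly from (4.2) at normal incidence; a quick check (assuming c ≈ 3×10⁸ m/s):

```python
# Cut-off frequency of the 1D Floquet mode m at normal incidence,
# eq. (4.2) with theta_0 = 0: f_c = |m| * c / P_x.
c = 3e8      # speed of light, m/s
Px = 0.13    # grating periodicity along x, m

fc_GHz = [abs(m) * c / Px / 1e9 for m in (1, 2, 3, 4)]
print([round(f, 1) for f in fc_GHz])  # -> [2.3, 4.6, 6.9, 9.2]
```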
As shown in Fig. 4.4 (b), for a 2D planar periodic structure with periodicity P_x along the x axis and P_y along the y axis, the reflected and transmitted electric field intensities E^r and E^t are given by,
E^r(x, y, z) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} A^r(k_x, k_y) e^{−j(k_x x + k_y y − k_z z)} dk_x dk_y  (4.3a)

E^t(x, y, z) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} A^t(k_x, k_y) e^{−j(k_x x + k_y y + k_z z)} dk_x dk_y  (4.3b)
where k_xm = k_x0 + 2πm/P_x, k_yn = k_y0 + 2πn/P_y, and (m, n) indexes each discrete Floquet mode. With the dispersion relation k_zmn = √(k_0² − k_xm² − k_yn²), k_x0 = k_0 sin θ_0 cos φ_0, and k_y0 = k_0 sin θ_0 sin φ_0, for a normally incident wave with θ_0 = 0, P_x = 13 cm, and P_y = 3.3 cm, the cutoff frequency is

f_c = c √(m²/P_x² + n²/P_y²)  (4.4)
The corresponding Floquet mode cutoff frequencies for a normally incident plane wave in the frequency band from 2 to 10 GHz are shown in Table 4.2 (in GHz).
Table 4.2 Cut-off Frequencies of Different Floquet Modes for the 2D Periodic Structure (GHz)

m \ n    0      ±1
0        0      9.09
±1       2.31   9.38
±2       4.62   10.2
±3       6.92   11.43
±4       9.23   12.96
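Table 4.2 is likewise a direct evaluation of (4.4); the few lines below reproduce it (rows m = 0…4, columns n = 0, ±1, with c ≈ 3×10⁸ m/s assumed):

```python
import math

# 2D Floquet-mode cut-off, eq. (4.4): f_c = c*sqrt(m^2/Px^2 + n^2/Py^2).
c, Px, Py = 3e8, 0.13, 0.033

def fc_GHz(m, n):
    return c * math.sqrt((m / Px) ** 2 + (n / Py) ** 2) / 1e9

table = [[round(fc_GHz(m, n), 2) for n in (0, 1)] for m in range(5)]
print(table)
# -> [[0.0, 9.09], [2.31, 9.38], [4.62, 10.2], [6.92, 11.43], [9.23, 12.96]]
```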
Based on the analytical cutoff-frequency analysis of the Floquet modes, we obtain the dominant modes in the frequency band of interest. However, the transmitted or reflected electric field distribution (4.3) of the corresponding mode cannot be derived in closed form, requiring a numerical approach (such as the periodic method of moments). Unfortunately, existing commercial numerical software (FEKO, CST, and HFSS) does not give the frequency response of each Floquet mode, but computes the total frequency response of all modes. Nevertheless, numerical simulation can still give insight into the reflection and transmission properties of this metal grating, in particular which polarization of the wave provides better penetration.
The grating with its diamond-shaped unit cell is simulated in HFSS, as shown in Fig. 4.5 (a), where a normally incident plane wave impinges on the infinite periodic structure. The frequency band from 0.125 to 10 GHz is considered. Fig. 4.5 (b) is obtained based
Fig. 4.5. HFSS simulation of the metal grating, (a) the grating in HFSS, (b) transmission and reflection
coefficients based on x pol., (c) transmission and reflection coefficients based on y pol..
on an x-polarized incident plane wave, where the red curve shows the reflection coefficient and the blue curve the transmission coefficient. Similarly, Fig. 4.5(c) plots the same curves for a y-polarized incident plane wave. Comparing Figs. 4.5 (b) and (c), it is observed that y-polarized EM waves propagate better through the grating. Both Figs. 4.5 (b) and (c) show that the transmission coefficient tends to drop to zero as the frequency approaches zero, because the wavelength becomes much larger than the cell size and the EM waves are shorted out. Consistent with the cutoff frequencies in Table 4.2, the resonant points in Figs. 4.5 (b) and (c) match the Floquet modes at 4.62, 6.92, and 9.23 GHz.
Therefore, the dominant Floquet modes might be applied in the forward model with FBP or UDT to realize imaging through a periodic structure; this requires a numerical analysis of the metal grating to find the frequency response of each Floquet mode, which in turn requires a specialized periodic solver. Instead, we can examine the electric field distributions with and without the grating, comparing both amplitude and phase. Then, FBP or UDT may be able to generate clear images with an empirical amplitude or phase compensation.
4.2.2 Electric Field Distributions through the Grating
To obtain the electric field distribution, the FDTD solver GprMax [64] is again
applied. As shown in Fig. 4.6 (a), one fixed transmitter antenna is located at (-12.7, 0) cm and four fixed receiver antennas are located at (-6.35, 6.35), (-5.08, 6.35), (-3.81, 6.35), and (-2.54, 6.35) cm in free space. In Fig. 4.6 (c), a PEC grating with periodicity 13 cm is located 2.54 cm away from the transmitter, with the transmitter and receiver locations the same as in Fig. 4.6 (a).
Fig. 4.6 (b) shows the electric field frequency response at these four test points,
where the uniform pattern with different amplitudes can be observed in free space from 3
to 10 GHz. However, in Fig. 4.6 (d), with the presence of the grating, the electric field
Fig. 4.6. Electric field variation in free space, (a) without the grating, (b) amplitude of the electric field
at four different locations without the grating, (c) with the grating, (d) amplitude of the electric field at
four different locations with the grating.
pattern changes at different locations. Comparing the blue curves in Figs. 4.6 (b) and (d) at (-6.35, 6.35) cm, the electric field amplitude through the grating is lower than that in free space.
To have a better comparison, both amplitude and phase variations should be
investigated. As presented in Fig. 4.7, both the amplitude and phase of the electric field
Fig. 4.7. The electric field comparison with and without the grating, (a) amplitude at (-6.35,6.35) cm,
(b) amplitude at (-5.08,6.35) cm, (c) phase at (-6.35,6.35) cm, (d) phase at (-5.08,6.35) cm.
are compared at the two receiver locations (-6.35, 6.35) and (-5.08, 6.35) cm. In Figs. 4.7 (a) and (b), the electric field amplitudes with and without the grating differ, meaning that amplitude compensation is needed in the imaging function. In Figs. 4.7 (c) and (d), however, the phase variations with and without the grating are very similar, indicating that there is no need to change the phase of the round-trip Green's function in the imaging algorithm. This greatly simplifies a numerical Green's function based approach, because the phase variation is usually the most critical part of the imaging algorithm.
4.3 Inversion by the Method of Least Squares
The method of least squares is a common imaging approach that solves the inverse problem via matrix operations. The forward model can be formulated in matrix form as σ = A f + n, where σ is an M×1 column vector representing the measured data, A is an M×N matrix denoting the round-trip Green's function, f is an N×1 column vector representing the unknown pixel intensities, and n is an M×1 column vector approximating the noise and model error in the measured data. Different methods can solve the forward model in matrix form, depending on the conditioning of the matrix. Considered here are: 1. the conjugate-phase matched filter; 2. the method of least square error; 3. Tikhonov regularization, which minimizes ‖σ − A f‖² + λ‖f‖², where λ is a real parameter. First, we need to obtain the forward model in matrix form.
4.3.1 The Forward Model in the Matrix Form
Here, we consider the 2D case with the forward model of (2.18). To find the unknown reflectivity f, we define an error minimization problem that minimizes the 2-norm of the residual r,

‖r(y′, ω)‖² = |σ(y′, ω) − ∫ E²(x, z, y′, ω) f(x, z) dx dz|²  (4.5)

which is a linear least squares problem in continuous space; the f that gives the minimum error is the imaging result. ‖r(y′, ω)‖² is defined by r^H r. Discretizing the unknown
reflectivity function f(x, z) = Σ_n f_n b_n(x, z), where n is the pixel index in the imaging region, (4.5) can be rewritten as

‖r(m)‖² = ‖σ(m) − Σ_n f_n A(m, n)‖²  (4.6a)

A(m, n) = E²(m, n) Δx Δz  (4.6b)

where m indexes (y′, ω), the mth measured data point with sensor location y′ and radian frequency ω. E is the incident field intensity inside the imaging region without any target, obtained numerically with the grating present, and σ is the received signal with the target. f_n is the discretized reflectivity coefficient of the nth pixel and b_n is the basis function of the corresponding pixel, defined by

b_n(x, z) = { 1, (x, z) inside the nth pixel; 0, elsewhere }  (4.7)
To obtain the minimum of (4.6a), the derivative is taken with respect to an arbitrary pixel reflectivity coefficient f_p (a complex number), using the operator (∂/∂a_p + j ∂/∂b_p) with f_p = a_p + j b_p, and set to zero,

(∂/∂a_p + j ∂/∂b_p) Σ_{m=1}^{M} [σ(m) − Σ_n f_n A(m, n)]* [σ(m) − Σ_n f_n A(m, n)]
= −2 [Σ_{m=1}^{M} A*(m, p) σ(m) − Σ_{m=1}^{M} Σ_{n=1}^{N} f_n A*(m, p) A(m, n)] = 0  (4.8)
Collecting the N such equations results in the linear system

A^H_{N×M} A_{M×N} f_{N×1} = A^H_{N×M} σ_{M×1}  (4.9)

where A is the known forward-model matrix describing the round-trip electric field intensity, f is the column vector of unknown pixel values, and σ is the column vector of the known received signal. M and N are the total numbers of measured data and pixel
points, respectively. The superscript H represents the Hermitian of the complex matrix,
which is the conjugate transpose.
Then, any of the following three methods can be applied to solve for f in (4.9).
4.3.2 The Conjugate-Phase Matched Filter [72]
As in the BP imaging algorithm, it is assumed that A^H_{N×M} A_{M×N} in (4.9) is a constant diagonal matrix, based on the assumption that A*(m, p) · A(m, n) is zero if the test point p does not match the pixel point n,

A*(m, p) · A(m, n) = [E²(m, p)]* E²(m, n) (Δx Δz)² = { C, p = n; 0, elsewhere }  (4.10)

Ignoring the constant diagonal of A^H_{N×M} A_{M×N}, (4.9) reduces to

f_{N×1} = A^H_{N×M} σ_{M×1}  (4.11)

which is the matched-filter solution of the forward model (4.9).
4.3.3 The Method of Least Square Error [73]
A direct solution of (4.9) gives the least-square-error estimate of f, obtained by taking the inverse of the square matrix A^H_{N×M} A_{M×N},

f_{N×1} = (A^H_{N×M} A_{M×N})^{−1} A^H_{N×M} σ_{M×1}  (4.12)

where A^H_{N×M} A_{M×N} in general is not a diagonal matrix. This norm-minimization approach can be treated as the special case of Tikhonov regularization with the real parameter λ = 0.
4.3.4 Tikhonov Regularization [72]
Because of noise in the received signal, model error, and poor conditioning of the matrix A^H A, Tikhonov regularization can be applied to improve the image and suppress spurious effects. In Tikhonov regularization, the value of σ is found by minimizing the error norm

$$\|r(m)\|^{2} = \Big\| E_r(m) - \sum_{n} \sigma_n E_n(m) \Big\|^{2} + \lambda \sum_{n} |\sigma_n|^{2} \qquad (4.13)$$

As in solving (4.6a), the derivative is taken with respect to an arbitrary pixel reflectivity coefficient σ_p (a complex number) and set to zero. Collecting N such equations yields the linear system

$$\left(\mathbf{A}^{H}\mathbf{A} + \lambda\,\mathbf{I}_{N\times N}\right)\boldsymbol{\sigma}_{N\times 1} = \mathbf{A}^{H}\,\mathbf{b}_{M\times 1} \qquad (4.14)$$

where λ is a real-valued regularization parameter and I is the identity matrix. The direct solution for the reflectivity vector follows from (4.14):

$$\boldsymbol{\sigma}_{N\times 1} = \left(\mathbf{A}^{H}\mathbf{A} + \lambda\,\mathbf{I}\right)^{-1}\mathbf{A}^{H}\,\mathbf{b} \qquad (4.15)$$
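As a concrete illustration, the three solvers (4.11), (4.12), and (4.15) can be compared on a small synthetic forward model. This is only a sketch: the matrix A, the data vector b, and the problem sizes are illustrative placeholders, not the dissertation's simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative forward model: M measurements, N pixels.
M, N = 60, 20
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
sigma_true = np.zeros(N, dtype=complex)
sigma_true[7] = 1.0                  # a single bright pixel
b = A @ sigma_true                   # noiseless received data

Ah = A.conj().T                      # Hermitian (conjugate transpose)

# (4.11) conjugate-phase matched filter: sigma = A^H b
sigma_mf = Ah @ b

# (4.12) least squares: sigma = (A^H A)^{-1} A^H b
sigma_ls = np.linalg.solve(Ah @ A, Ah @ b)

# (4.15) Tikhonov regularization: sigma = (A^H A + lambda*I)^{-1} A^H b
lam = 0.3
sigma_tik = np.linalg.solve(Ah @ A + lam * np.eye(N), Ah @ b)

# The matched filter only localizes the bright pixel; least squares
# recovers the reflectivities exactly for this well-posed toy problem.
print(int(np.argmax(np.abs(sigma_mf))))
print(np.allclose(sigma_ls, sigma_true))
```

For a well-conditioned A^H A, least squares recovers the reflectivities exactly, while the matched filter only localizes the bright pixel; Tikhonov trades a slight bias for stability when A^H A is poorly conditioned or the data are noisy.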
4.4 Numerical Simulations
As shown in Fig. 4.8, an infinite PEC grating with a periodicity of 13 cm along the x axis is placed 2.54 cm away from the antenna aperture, and a PEC cylinder of radius 0.254 cm, infinite along the y axis, is located at (0, 12.7) cm. The imaging region, shown as the blue box around the PEC cylinder, is 12.7 cm × 12.7 cm with a pixel size of 0.508 cm; it extends from −6.35 to 6.35 cm along the x axis and from 6.35 to 19.05 cm along the z axis. The simulated received signal is obtained from GprMax [64], where a Hertzian dipole is moved from −6.35 to 6.35 cm with a step size of 1.27 cm, and the frequency band is from 3 to 10 GHz with a step size of 20 MHz. Therefore, there are 11 sample points on the synthetic aperture, 351 points in frequency, and 676 pixels in the imaging region, so the total number of received-signal data points is M = 11 × 351 = 3861 and the total number of pixels is N = 676.

Fig. 4.8. A 2D PEC cylinder behind a metal grating.
To generate the image through the grating, (4.11), (4.12), and (4.15) are applied, where A_{M×N} is obtained numerically by GprMax [64] with the grating present and the PEC cylinder absent, so the Floquet modes are automatically accounted for in the simulation instead of obtaining the frequency response of each propagating Floquet mode.
As shown in Fig. 4.9, the PEC cylinder can be observed at (0, 12.7) cm with all three methods: matched filter, least squares, and Tikhonov regularization. In Fig. 4.9(a), the matched filter is applied. Comparing Figs. 4.9(a), (b), and (c), the image resolution in Fig. 4.9(a) is not as good as in the other two images. To investigate the Tikhonov regularization approach, different values of λ are considered; the imaging results are shown in Figs. 4.9(c), (d), (e), and (f), where there is nearly no difference among the images. In Fig. 4.9, other bright spots appear in the imaging region as well. None of the three methods yields a clean image. A more rigorous method based on propagating Floquet modes in FBP/UDT needs to be investigated in the future.

Fig. 4.9. 2D imaging results, (a) matched filter, (b) least-squares approach, (c) Tikhonov regularization, λ = 0.3, (d) Tikhonov regularization, λ = 1, (e) Tikhonov regularization, λ = 2, (f) Tikhonov regularization, λ = 3.
4.5 Summary of Chapter 4
In this chapter, imaging through a periodic structure has been investigated, where
the conventional imaging method fails to present clean and clear images of buried objects
due to ignoring the Floquet modes in the round-trip electric field intensity (or Green’s
function).
With the ideal point-scatterer analysis of the special metal grating attached to a masonry wall, the cut-off frequencies of the different Floquet modes in the frequency band of interest are obtained. From the numerical analysis of the grating in HFSS with x- and y-polarized normally incident plane waves, the total frequency response based on all the discrete Floquet modes is obtained. To achieve better penetration through the grating, the y-polarized antenna should be used in the measurement.
Instead of using the Floquet modes directly in the imaging function, the round-trip electric field intensity is generated numerically with the grating present in the FDTD simulation. This is used to obtain a matrix form of the forward model. By solving the linear least-squares problem, the unknown imaging function can be obtained. However, the image is not clean and clear, due to the interaction between the grating and the target. In the future, rigorous Floquet modes incorporated into FBP/UDT should be investigated.
Chapter 5: Target Size Estimation Based on Near-Field Radar
Imaging
In previous chapters, we have developed different methods, such as UDT, FBP, and least squares, to construct near-field microwave images through layered media or periodic structures and locate buried objects. An experienced human operator is required to analyze these images to evaluate the condition of the construction under examination. The question is: is it possible to achieve quantitative image analysis with an image-processing algorithm that outputs the pertinent information of the buried object without human intervention?
5.1 Introduction
The answer is yes. Several papers have addressed finding the size and shape of buried pipelines in GPR applications using neural networks, curve fitting, the generalized Hough transform, weighted least squares, recursive Kalman filtering, maximum likelihood, pattern recognition, and Gaussian process approaches [34]-[44]. Normally the GPR image is not focused, presenting a hyperbolic object signature without consideration of the layer's electrical properties. These GPR-image-based size estimation methods require a threshold to obtain a corresponding binary image. Choosing this threshold is not robust, as it depends on the antenna, frequency band, buried object, material properties, and background noise. Furthermore, in GPR images, hyperbolae may intersect if there are several embedded objects, making it difficult to isolate them. The aforementioned approaches also require intensive 2-D simulations to form the training data set, where objects have 2-D circular, square, or triangular cross-sections. For 3-D scenarios, it becomes extremely time-consuming to run the simulations needed to build the training data set.
In the open literature, there are few papers about buried target size estimation based
on focused near-field microwave images, which can present the outline and accurate
position of a hidden object compared with a GPR image. Furthermore, there will be no
intersection problem and targets can be separated when the image resolution is fine
enough. Unlike ultrasound images in medical applications, which require experienced doctors to acquire and interpret, it is important to develop a quantification algorithm that evaluates radar images and outputs the embedded-target information for inexperienced operators.
This chapter begins with a discussion of the physics behind EM illumination in
dense media for near-field imaging. The physics determines the best approaches for
designing the EM sources and for size and shape estimation algorithms.
5.2 Physics behind EM Illumination in Dense Media
When EM waves illuminate dense media, a lensing effect occurs at the high-dielectric-contrast interface, focusing the antenna beam toward the normal direction due to refraction. Fig. 5.1 shows an object embedded in a dense slab with thickness d and relative permittivity ε_r. In this monostatic radar system, one antenna, residing in the semi-infinite free space, moves in a rectangular xy plane, transmitting and receiving electromagnetic (EM) waves to detect buried objects.
Fig. 5.1. An object embedded in a dense slab. The transceiver antenna moves in the xy plane. (The sketch marks transceiver positions A1-A3, the interface reflection, the shadow projection on the rear interface, and a deflected ray that is not received.)
Fig. 5.1 illustrates the scattering phenomenology for a buried sphere. Consider the
transceiver at location A1. The red arrow represents waves that refract into the medium,
reflect from the surface of the sphere at point a, and follow the same path back to the
receiver. The orange arrow is a ray that also refracts into the medium but reflects from
the sphere at point b into a different direction and is not received. The purple ray is the
wave that is launched normally into the medium, grazes the sphere at point c, and reflects
from the rear interface back to the receiver. Because of the refraction or lensing effect,
rays that hit the sphere between points a and c are deflected and never return to the
receiver. As the transceiver moves to locations A2 and A3, the red arrows fill in the space
between points a and d on the surface of the sphere. Hence, only that portion of the
surface will contribute to the image. On the other hand, the shadow projection on the rear
interface should indicate correct size and shape information of the object, which is
normally smaller than the marked size due to diffraction. At A3, the Brewster angle can occur for the incident wave (shown as the solid black arrow) with parallel polarization, where the EM wave is totally transmitted into the slab. For incidence from free space, the Brewster angle is defined by θ_B = tan⁻¹(√ε_r). The presence of a dense slab leads to a large Brewster angle, far off the antenna broadside. The Brewster-angle effect may not be utilized by applying a parallel dipole, but a normal dipole tends to perform better in a dense slab.
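As a quick numerical check (the function name is ours), the Brewster angle for incidence from free space onto a dielectric of relative permittivity ε_r is tan⁻¹(√ε_r):

```python
import math

def brewster_angle_deg(eps_r: float) -> float:
    """Brewster angle for incidence from free space onto a dielectric
    of relative permittivity eps_r: theta_B = arctan(sqrt(eps_r))."""
    return math.degrees(math.atan(math.sqrt(eps_r)))

# For the dense slab considered in this chapter (eps_r = 10), the
# Brewster angle lies far off the antenna broadside.
print(round(brewster_angle_deg(10.0), 2))   # 72.45
```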
To better understand the lensing effect, electric field distributions of a y-directed Hertzian dipole in the YZ plane in the presence of a slab at frequencies of 2, 6, and 10 GHz are presented in Fig. 5.2, obtained from the finite-difference time-domain (FDTD) based numerical software GprMax [64]. Two black lines show the
location of the slab with the thickness of 25.4 cm and relative permittivity of 10. Black
arrows indicate the dipole beamwidth in the dense slab. To have a better observation of
EM waves propagating into the slab, the strong electric field in the vicinity of the dipole
is removed manually. In Figs. 5.2 (a), (c), and (e), the Hertzian dipole is at (0, 2.54) cm,
Fig. 5.2. Electric field distributions of a y-oriented Hertzian dipole in the YZ plane in the presence of a dense slab: (a) attached, 2 GHz; (b) 2.54 cm away, 2 GHz; (c) attached, 6 GHz; (d) 2.54 cm away, 6 GHz; (e) attached, 10 GHz; (f) 2.54 cm away, 10 GHz. The color scale is in dB with a 50 dB dynamic range.
directly attached onto the first interface of the slab where there will be no refraction and
lensing effects. In Figs. 5.2 (b), (d), and (f), the Hertzian dipole is located at (0, 0) cm,
2.54 cm away from the slab. Comparing Figs. 5.2 (a) and (b), it is noted that when the dipole is 2.54 cm away the beamwidth (shown by the black arrows) is much narrower and less energy penetrates into the slab, the beam focusing toward the z direction; this is the lensing effect. In practice, due to the rough surfaces of the ground and building walls, there is usually a standoff distance from the antenna to the first interface. Therefore, it is of significance to consider the lensing effect when developing algorithms to obtain the size and shape information of the buried object.
Fig. 5.3 shows electric field distributions of a y-directed Hertzian dipole in the other principal plane, XZ. The lensing effect can be observed as well when the dipole is 2.54 cm away from the slab. Compared with Fig. 5.2, there is a wider illumination angle in Fig. 5.3, showing that the refraction effect is polarization dependent.
Because of the lensing effect in dense media, wide-angle illumination information is lost when detecting buried objects. A normal (z-directed) dipole can therefore be applied to compensate for the limited illumination angle. In Figs. 5.4 and 5.5, the electric
field distributions of a z-directed Hertzian dipole in both principal planes with the
presence of a slab at different frequencies 2, 6, and 10 GHz are shown. Due to the
symmetry of the normal dipole, radiation patterns in XZ and YZ planes are identical. In
Figs. 5.4 and 5.5 (a), (c), and (e), the Hertzian dipole is at (0, 2.54) cm, directly attached
onto the first interface of the slab where there will be no refraction and lensing effects. In
Fig. 5.3. Electric field distributions of a y-oriented Hertzian dipole in the XZ plane in the presence of a dense slab: (a) attached, 2 GHz; (b) 2.54 cm away, 2 GHz; (c) attached, 6 GHz; (d) 2.54 cm away, 6 GHz; (e) attached, 10 GHz; (f) 2.54 cm away, 10 GHz. The color scale is in dB with a 50 dB dynamic range.
Fig. 5.4. Electric field distributions of a z-oriented Hertzian dipole in the YZ plane in the presence of a dense slab: (a) attached, 2 GHz; (b) 2.54 cm away, 2 GHz; (c) attached, 6 GHz; (d) 2.54 cm away, 6 GHz; (e) attached, 10 GHz; (f) 2.54 cm away, 10 GHz. The color scale is in dB with a 50 dB dynamic range.
Fig. 5.5. Electric field distributions of a z-oriented Hertzian dipole in the XZ plane in the presence of a dense slab: (a) attached, 2 GHz; (b) 2.54 cm away, 2 GHz; (c) attached, 6 GHz; (d) 2.54 cm away, 6 GHz; (e) attached, 10 GHz; (f) 2.54 cm away, 10 GHz. The color scale is in dB with a 50 dB dynamic range.
Fig. 5.6. Electric field distributions of a Brewster-angle-oriented Hertzian dipole in the YZ plane in the presence of a dense slab with ε_r = 10: (a) attached, 2 GHz; (b) 2.54 cm away, 2 GHz; (c) attached, 6 GHz; (d) 2.54 cm away, 6 GHz; (e) attached, 10 GHz; (f) 2.54 cm away, 10 GHz. The color scale is in dB with a 50 dB dynamic range.
Figs. 5.4 and 5.5 (b), (d), and (f), the Hertzian dipole is located at (0, 0) cm, 2.54 cm away from the slab. Comparing Figs. 5.4 (a) and (b), less energy penetrates the dense slab with the dipole placed away, but a wide range of illumination into the slab is achieved due to refraction. Comparing Figs. 5.2 (b) and 5.4 (b), the z-directed normal dipole covers the wide-angle region while the y-directed parallel dipole loses the wide-angle information.
Fig. 5.6 presents electric field distributions of a Hertzian dipole oriented toward the Brewster angle in the YZ plane. The relative permittivity of the slab is assumed to be 10, so the Brewster angle is 72.45 degrees. Compared with the z-oriented dipole results in Fig. 5.4, there is nearly no difference in the electric field distributions when applying the Brewster-angle-oriented dipole. Thus, it is convenient to implement a z-oriented dipole without knowing the relative permittivity of the slab.
To detect buried objects, it is desirable to provide wider illumination in dense media. As shown in Figs. 5.2 to 5.5, an attached parallel dipole and a normal dipole 2.54 cm away are good candidates. However, due to the roughness of realistic structure surfaces, a normal dipole 2.54 cm away from the interface is the better choice.
5.3 Spatial Moments Approach
The 3-D smooth and continuous focused microwave images through layered media can be obtained by the implementation of the UDT algorithm in Chapter 2, where the location, shape, orientation, and size of the buried target can be estimated by the method of spatial moments [69]. It is assumed that there is one target in the imaging region of interest. If multiple targets are present, segmentation should be performed to divide the image into several regions, each containing only one object.
For a general 3-D discrete image f(x, y, z), it is convenient to apply the method of spatial moments [69] to quantify the image information. The (p, q, r)th-order spatial moment M is defined by

$$M(p,q,r) = \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{l=1}^{L} x_j^{\,p}\, y_k^{\,q}\, z_l^{\,r}\, f(x_j, y_k, z_l) \qquad (5.1)$$

where f(x_j, y_k, z_l) is the pixel value in the image and J, K, L are the total numbers of pixels along the x, y, and z axes. The image centroid, or target location, (x̃, ỹ, z̃) is defined by the zero-order moment M(0,0,0) and the first-order moments M(1,0,0), M(0,1,0), and M(0,0,1):

$$\tilde{x} = \frac{M(1,0,0)}{M(0,0,0)} \qquad (5.2a)$$
$$\tilde{y} = \frac{M(0,1,0)}{M(0,0,0)} \qquad (5.2b)$$
$$\tilde{z} = \frac{M(0,0,1)}{M(0,0,0)} \qquad (5.2c)$$
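A direct implementation of (5.1) and (5.2) is straightforward; the sketch below (with an illustrative test image on a unit-spaced pixel grid) computes the centroid of a synthetic 3-D image:

```python
import numpy as np

def spatial_moment(f, p, q, r):
    """(p,q,r)th spatial moment of a 3-D image f on a unit-spaced grid,
    M(p,q,r) = sum_j sum_k sum_l x^p y^q z^r f(x,y,z)  -- eq. (5.1)."""
    J, K, L = f.shape
    x = np.arange(J, dtype=float)[:, None, None]
    y = np.arange(K, dtype=float)[None, :, None]
    z = np.arange(L, dtype=float)[None, None, :]
    return float(np.sum(x**p * y**q * z**r * f))

def centroid(f):
    """Image centroid (target location) from eq. (5.2)."""
    m000 = spatial_moment(f, 0, 0, 0)
    return (spatial_moment(f, 1, 0, 0) / m000,
            spatial_moment(f, 0, 1, 0) / m000,
            spatial_moment(f, 0, 0, 1) / m000)

# Illustrative image: a small cube of unit reflectivity whose true
# center sits at grid point (5, 6, 7).
img = np.zeros((12, 12, 12))
img[4:7, 5:8, 6:9] = 1.0
print(centroid(img))   # (5.0, 6.0, 7.0)
```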
Due to the limitation of the one-sided scanning mechanism, the downrange extent (dimension in the z direction) of the buried object cannot be discerned in the image. Since the cross-range image shows the details of the target, the rotation angle and axial ratio can be determined from its largest z-slice image. The (p, q)th-order scaled spatial central moment U is given by

$$U(p,q) = \frac{\sum_{j=1}^{J}\sum_{k=1}^{K} (x_j - \tilde{x})^{p}\,(y_k - \tilde{y})^{q}\, f(x_j, y_k)}{M(0,0)} \qquad (5.3)$$

The moment-of-inertia covariance matrix C can be obtained from the second-order scaled central moments: the row U(2,0), column U(0,2), and row-column U(1,1) moments of inertia,

$$\mathbf{C} = \begin{bmatrix} U(2,0) & U(1,1) \\ U(1,1) & U(0,2) \end{bmatrix} \qquad (5.4)$$

The rotation angle of the target can then be determined from the largest eigenvalue λ_max of (5.4) and the moments of inertia,

$$\theta = \arctan\!\left(\frac{\lambda_{\max} - U(0,2)}{U(1,1)}\right) \qquad (5.5)$$

The axial ratio, indicating the shape of the target, is obtained from the two eigenvalues of (5.4),

$$\mathrm{AR} = \frac{\lambda_{\max}}{\lambda_{\min}} \qquad (5.6)$$
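The orientation and axial-ratio computation of (5.3)-(5.6) can be sketched as follows; the rectangle image is an illustrative stand-in for a z-slice radar image, and normalizing the central moments by the zero-order moment is our reading of (5.3):

```python
import numpy as np

def scaled_central_moments(f):
    """Second-order scaled central moments U(2,0), U(0,2), U(1,1) of a
    2-D image f (eq. 5.3; here normalized by the zero-order moment)."""
    J, K = f.shape
    x = np.arange(J, dtype=float)[:, None]
    y = np.arange(K, dtype=float)[None, :]
    m = f.sum()
    xc, yc = (x * f).sum() / m, (y * f).sum() / m
    u20 = ((x - xc) ** 2 * f).sum() / m
    u02 = ((y - yc) ** 2 * f).sum() / m
    u11 = ((x - xc) * (y - yc) * f).sum() / m
    return u20, u02, u11

def orientation_and_axial_ratio(f):
    """Rotation angle (eq. 5.5) and axial ratio (eq. 5.6) from the
    eigenvalues of the inertia covariance matrix C (eq. 5.4)."""
    u20, u02, u11 = scaled_central_moments(f)
    C = np.array([[u20, u11], [u11, u02]])
    lam = np.linalg.eigvalsh(C)            # eigenvalues, ascending
    theta = np.degrees(np.arctan((lam[1] - u02) / u11)) if u11 else 0.0
    return theta, lam[1] / lam[0]

# Illustrative target: an elongated rectangle aligned with the x axis.
img = np.zeros((40, 40))
img[10:30, 17:23] = 1.0
theta, ar = orientation_and_axial_ratio(img)
print(theta, round(float(ar), 2))
```

For this axis-aligned rectangle the cross moment U(1,1) vanishes, so the rotation angle is zero and the axial ratio reflects the 20:6 aspect of the support.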
To estimate the size of the target in the discrete image f(x, y, z̃) at the depth z̃, the scaled standard deviation (SSD) can be applied, which is similar to the square root of the second-order scaled spatial central moments:

$$\tilde{\sigma}_x = \sqrt{\frac{\sum_{j=1}^{J}\sum_{k=1}^{K} (x_j - \tilde{x})^{2}\, f(x_j, y_k)}{\sum_{j=1}^{J}\sum_{k=1}^{K} f(x_j, y_k)}} \qquad (5.7a)$$

$$\tilde{\sigma}_y = \sqrt{\frac{\sum_{j=1}^{J}\sum_{k=1}^{K} (y_k - \tilde{y})^{2}\, f(x_j, y_k)}{\sum_{j=1}^{J}\sum_{k=1}^{K} f(x_j, y_k)}} \qquad (5.7b)$$

For a 1-D binary image centered at the origin with length 2a, the SSD is

$$\tilde{\sigma} = \sqrt{\frac{\int_{-a}^{a} x^{2}\, dx}{\int_{-a}^{a} dx}} = \frac{a}{\sqrt{3}}$$

so the object length is 2√3 σ̃, showing that the SSD is not equivalent to the actual size; a tuning factor λ should be chosen based on the training data set to obtain the estimated size (λσ̃_x, λσ̃_y). In microwave images, this tuning factor is normally in the range 2 < λ < 2√3.
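The 1-D binary example above can be checked numerically (the interval half-length and the sampling density are arbitrary):

```python
import numpy as np

# Numerical check of the 1-D binary example: an image of length 2a
# centered at the origin has SSD a/sqrt(3), so length = 2*sqrt(3)*SSD.
a = 4.0
x = np.linspace(-a, a, 100001)      # dense sampling of [-a, a]
f = np.ones_like(x)                 # binary image, value 1 everywhere
ssd = np.sqrt(np.sum(x**2 * f) / np.sum(f))
print(round(float(ssd), 4))                   # a/sqrt(3) ≈ 2.3094
print(round(float(2 * np.sqrt(3) * ssd), 3))  # recovers 2a = 8.0
```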
5.4 Numerical Imaging Results
In this section, the top view of 3-D images is formed with the UDT algorithm, and the monostatic received signal is obtained from the FDTD simulation [64]. As shown in Fig. 5.7 (a), the transceiver is a Hertzian dipole moving in a 12.7 cm × 12.7 cm aperture plane with a step size of 1.27 cm. The frequency band is from 2 to 10 GHz with a step size of 100 MHz. The cross-range resolution is defined by λ_c R/(2L), where R is the range from the antenna aperture to the pixel, λ_c is the wavelength at the center frequency, and L is the antenna aperture length. In a focused near-field image, the minimum cross-range resolution is λ_c/2, which is 2.5 cm. The down-range resolution is the speed of light over twice the frequency bandwidth (c/2B), which is 1.88 cm. The standoff distance is 2.54 cm. A spherical or cylindrical void is located in a single slab with thickness 27.94 cm and relative permittivity 10. The size of the imaging region is 12.7 cm × 12.7 cm × 38.1 cm with a pixel grid of 101 × 101 × 151. The time cost for the image generation is about 7 seconds.
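The quoted resolution figures follow directly from the system parameters; the sketch below is a free-space sanity check rather than part of the algorithm:

```python
# Free-space sanity check of the resolution figures quoted above.
c = 3e8                            # speed of light, m/s
f_lo, f_hi = 2e9, 10e9             # band: 2 to 10 GHz
f_c = 0.5 * (f_lo + f_hi)          # 6 GHz center frequency
B = f_hi - f_lo                    # 8 GHz bandwidth

lambda_c = c / f_c                 # center-frequency wavelength (5 cm)
cross_range_min = lambda_c / 2     # best-case cross-range resolution
down_range = c / (2 * B)           # down-range resolution

print(round(cross_range_min * 100, 3))   # 2.5 (cm)
print(round(down_range * 100, 3))        # 1.875 (cm), quoted as 1.88
```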
In Figs. 5.7 and 5.8, these top view images are taken from the object surface close to
the sensor aperture, which is the center of gravity slice in z direction determined by the
first-order spatial moments (5.2c). Both x and y polarized sources are applied to
investigate embedded spherical and cylindrical voids. It is observed that the same target
looks different based on x and y polarizations, showing that the radar image is
polarization dependent and sensitive to the antenna pattern.
In Fig. 5.7, spherical voids with radii of 1.524cm, 2.54cm, and 3.81cm are presented
with the center located at (0, 0, 16.51)cm. Compared among Figs. 5.7 (b), (d), and (f)
based on x pol., different size spheres with similar image size can be observed, so the
method of spatial moments will not present accurate size and shape estimations. This is
caused by the lensing effect on the boundary of two adjacent layers with a high dielectric
contrast. Since the relative permittivity of the slab is 10, the largest refraction angle at the
first interface is only 18.43°, resulting in a focused antenna beam and areas of low
illuminations. The only significant difference is the peak magnitude of these spherical
voids. There is approximately 4dB difference between spheres with radii of 1.52 cm and
2.54 cm in Figs. 5.7 (b) and (d). The size estimation for spheres can be realized by
utilizing the amplitude difference. In Figs. 5.7 (b) and (c), the sphere is not symmetric in
the radar image due to the difference of antenna gain pattern in both principal planes.
However, applying a 90° rotation with respect to the center of the object to the ypolarized image in Fig. 5.7 (c), this rotated image will be the same as the x-polarized
image in Fig. 5.7 (b).
In Fig. 5.8, cylindrical voids with different sizes and rotation angles are presented
with the center located at (0, 0, 16.51) cm. Differences can be observed in these radar
images allowing the method of spatial moments to be applied to quantify cylinders. Note
Fig. 5.7. Top view of 3-D images for different spherical voids based on both x and y polarizations, (a) a spherical void in a dense slab (ε_r,0 = 1, ε_r,1 = 10, ε_r,2 = 1, d_1 = 27.94 cm), (b) radius 1.52 cm, x pol., (c) radius 1.52 cm, y pol., (d) radius 2.54 cm, x pol., (e) radius 2.54 cm, y pol., (f) radius 3.81 cm, x pol., (g) radius 3.81 cm, y pol. The color scale is in dB with a 20 dB dynamic range.
Fig. 5.8. Top view of 3-D images for different cylindrical voids based on both x and y polarizations, (a) a cylindrical void in a dense slab (ε_r,0 = 1, ε_r,1 = 10, ε_r,2 = 1, d_1 = 27.94 cm), (b) radius 1.78 cm, length 5.08 cm, x pol., (c) radius 1.78 cm, length 5.08 cm, y pol., (d) radius 2.54 cm, length 7.62 cm, x pol., (e) radius 2.54 cm, length 7.62 cm, y pol., (f) radius 1.78 cm, length 7.11 cm, 45° rotation, x pol., (g) radius 1.78 cm, length 7.11 cm, 45° rotation, y pol. The color scale is in dB with a 20 dB dynamic range.
that we only consider cylinders rotated in the xy plane, because the object size
information in the cross-range directions can be captured, but not the downrange. The
image difference associated with different polarizations can be observed in Figs. 5.8 (b),
(c), (d), and (e). There is no difference between Figs. 5.8 (f) and (g), due to the symmetry
with respect to x and y polarizations after 45° rotation of the cylinder.
5.5 Target Size Estimation
In this section, it is assumed that there is only one embedded object in the dense slab:
a spherical or cylindrical void. If more than one target is in the imaging region,
segmentation should be performed to separate these targets into several images with each
containing one of them before applying the size estimation algorithm. Theoretically, to
obtain accurate target information, spatial moments can be applied to any kind of image
(photo, binary image, ultrasound image, or radar image). The more precise the image
generated, the closer the estimation to the real size of the object.
However, radar images are not perfect, influenced by various factors: antennas (gain
pattern and polarization), the operating frequency bandwidth, the synthetic aperture size,
the standoff distance, the property of the background medium, and the type of the target
(shape, size, and material). In free space, the radar system can capture the curvature of a
spherical target in focused images. But in a dense medium, it is difficult to obtain its
surface curvature due to the lensing effect caused by refraction. If the relative permittivity
of a single slab is 10, the largest refraction angle is only 18.43°, resulting in areas of low illumination into the medium. The illumination region will be even narrower because of the limited beamwidth of the sensor antenna.

The algorithm of Fig. 5.9 proceeds as follows. Both x- and y-polarized 3-D images are generated and the buried object is extracted. The first-order spatial moments (5.2) determine the object location (x̃, ỹ, z̃). The z slice of the embedded object at the depth z̃ is extracted, and the error analysis of (5.8) is performed between the x-polarized and 90°-rotated y-polarized images. If the error is small, the target is taken to be a sphere: the library-based error analysis (5.9) is applied to the x-polarized image, and the location, shape, and maximum likelihood of the size are output. If the error is large, the target is not a sphere: the second-order central spatial moments (5.3)-(5.7) are applied to the x-polarized image, and the location, shape, size, and rotation angle are output.

Fig. 5.9. The size estimation algorithm.
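The refraction limit quoted here follows from Snell's law; a minimal check (the function name is ours):

```python
import math

def max_refraction_angle_deg(eps_r: float) -> float:
    """Largest transmission angle inside a dielectric for incidence
    from free space (Snell's law with a grazing incident ray):
    sin(theta_t_max) = 1/sqrt(eps_r)."""
    return math.degrees(math.asin(1.0 / math.sqrt(eps_r)))

print(round(max_refraction_angle_deg(10.0), 2))   # 18.43
```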
In Fig. 5.7, it is shown that different spheres look similar in size, so the method of
spatial moments will not work in these radar images. As shown in Fig. 5.8, cylindrical
images present outlines that look like cylinders.
The size estimation algorithm is described in Fig. 5.9. First, both x- and y-polarized images are formed with the 3-D UDT algorithm. Knowing the standoff distance and the thickness of the slab, it is easy to remove the front and rear interface reflections and obtain the radar image containing only the hidden object. Without the influence of these strong reflections, the first-order spatial moments (5.2a)-(5.2c) can be applied to estimate the target position (x̃, ỹ, z̃), as shown in Table 5.1, where S, C, R, and L denote spherical void, cylindrical void, radius, and length, respectively; all units are in cm. The front surface of the hidden object closest to the sensor aperture is captured and imaged in Figs. 5.7 and 5.8. Note that the locations presented in Table 5.1 are the centers of the buried targets' front surfaces rather than the centers of the targets. As shown in the last column of Table 5.1, the largest cross-range error is only 0.24 cm and the largest down-range error is 0.51 cm, which are acceptable and less than the cross-range resolution of 2.5 cm and the down-range resolution of 1.88 cm.

Table 5.1. Estimated Locations (cm)

Buried Object                | Front Surface Location | Estimated Location   | Location Error
S, R1.27                     | (0, 0, 15.24)          | (0.13, 0.24, 15.75)  | (0.13, 0.24, 0.51)
S, R1.52                     | (0, 0, 14.99)          | (0.13, 0.24, 15.49)  | (0.13, 0.24, 0.5)
S, R2.54                     | (0, 0, 13.97)          | (0.13, 0.24, 14.48)  | (0.13, 0.24, 0.51)
S, R2.54                     | (0, 0, 5.08)           | (0.13, 0.21, 5.59)   | (0.13, 0.21, 0.51)
S, R2.54                     | (2.54, 2.54, 13.97)    | (2.42, 2.38, 14.48)  | (0.12, 0.16, 0.51)
S, R3.81                     | (0, 0, 12.7)           | (0.13, 0.24, 12.95)  | (0.13, 0.24, 0.25)
C, R1.78, L5.08, Angle 0°    | (0, 0, 14.73)          | (0, 0.13, 14.73)     | (0, 0.13, 0)
C, R2.54, L7.62, Angle 0°    | (0, 0, 13.97)          | (0, 0.13, 13.97)     | (0, 0.13, 0)
C, R1.78, L7.11, Angle 45°   | (0, 0, 14.73)          | (-0.06, 0.07, 14.73) | (0.06, 0.07, 0)
C, R2.54, L10.16, Angle 60°  | (0, 0, 13.97)          | (0.02, 0.13, 13.97)  | (0.02, 0.13, 0)
Having obtained the location estimate of the buried object and extracted the z-cut image, three steps achieve its size estimation:
1. Distinguish the spherical and cylindrical voids,
2. Apply the library-based error analysis to the spherical voids,
3. Implement the method of central spatial moments for the cylindrical voids.
Buried objects with shapes other than spheres and cylinders can also be estimated by the method of spatial moments. An ellipse can be characterized by its axial ratio, of which a circular cross-section is the special case with axial ratio one.
5.5.1 Target Shape Identification
Due to the different gain patterns of the Hertzian dipole in the E and H planes, both x- and y-polarized images can be applied to identify the hidden object. Comparing the spherical images in Figs. 5.7 (b) and (c), it is observed that the x-polarized image is the same as the 90°-rotated y-polarized image. For cylinders, however, the x-polarized and 90°-rotated y-polarized images are totally different. An error analysis between the x-polarized image I_x and the 90°-rotated y-polarized image I_y^rot can therefore be performed to distinguish the spherical and cylindrical voids.
The normalized root mean square error (NRMSE) is defined by

$$\mathrm{NRMSE} = \sqrt{\frac{\sum_x\sum_y\sum_z\big[I_x(x,y,z) - I_y^{\mathrm{rot}}(x,y,z)\big]^{2}}{\sum_x\sum_y\sum_z I_x^{2}(x,y,z)}} \qquad (5.8)$$
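The shape-discrimination step built on (5.8) can be sketched as below. The Gaussian test images, the rotation convention, and the decision threshold are illustrative placeholders, not the dissertation's data:

```python
import numpy as np

def nrmse(img_x, img_y_rot):
    """Normalized RMSE between the x-pol image and the 90°-rotated
    y-pol image -- eq. (5.8)."""
    return np.sqrt(np.sum((img_x - img_y_rot) ** 2) /
                   np.sum(img_x ** 2))

def classify_target(img_x, img_y, threshold=1e-3):
    """Shape discrimination of Fig. 5.9: a small NRMSE between the
    x-pol image and the rotated y-pol image indicates a sphere."""
    img_y_rot = np.rot90(img_y)   # 90° rotation about the image center
    return "sphere" if nrmse(img_x, img_y_rot) < threshold else "cylinder"

# Illustrative stand-ins for radar images: a rotationally symmetric
# blob (sphere-like) and an elongated blob (cylinder-like).
xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
blob  = np.exp(-(xx**2 + yy**2) / 0.1)
stick = np.exp(-(xx**2 / 0.5 + yy**2 / 0.02))
print(classify_target(blob, blob))     # sphere
print(classify_target(stick, stick))   # cylinder
```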
The 3-D images in Figs. 5.7 and 5.8 are evaluated with (5.8). Table 5.2 shows the NRMSE results; radius and length are in cm and the NRMSE is dimensionless. The NRMSE for spheres is on the order of 10⁻⁶, while for cylinders it is on the order of 1, which is much larger. Therefore, a small error indicates that the object is most likely a sphere. After discriminating the spherical and cylindrical voids, the library-based error analysis and the method of spatial moments are implemented to estimate the size of the sphere and cylinder, respectively.
Table 5.2. Normalized Root Mean Square Error

Buried Object               | NRMSE
S, R1.52                    | 1.16 × 10⁻⁶
S, R2.54                    | 3.63 × 10⁻⁶
S, R3.81                    | 2.29 × 10⁻⁶
C, R1.78, L5.08             | 1.03
C, R2.54, L7.62             | 1.22
C (rotated), R1.78, L7.11   | 0.29
5.5.2 Size Estimation of Spherical Voids
As presented in Fig. 5.7, different spheres look similar in size in the radar images and the unique difference is the scattering strength. To estimate their sizes, a library containing sphere images can be built from numerical simulation, which is a function only of the radius R. Note that only x-polarized images are stored in the library for the error analysis. For the same spherical void at different locations in a slab, there will be a small difference in the peak amplitude of the sphere, due to the slow variation of the point spread function. This difference will not influence the accuracy of the error analysis, because all spheres in the library share the same location and the peak scattering strength is not sensitive to position.
With the implementation of the first-order spatial moments, the depth z̃ of the front of the spherical surface is obtained from (5.2c). The z-slice image I_sphere(x, y, z̃) is then compared with each library image I_lib at the same depth z̃. The root mean square error (RMSE) is defined by

$$\mathrm{RMSE}(R) = \sqrt{\sum_x\sum_y\Big[\big|I_{\mathrm{sphere}}(x,y,\tilde{z})\big| - \big|I_{\mathrm{lib}}(x,y,\tilde{z})\big|\Big]^{2}} \qquad (5.9)$$
In Table 5.3, the first row shows the library of x-polarized images with the same sphere center location (0, 0, 16.51) cm and different radii, resulting in different front-surface positions. The first column lists the spheres under test with their front-surface locations. Radii and locations are in cm; errors are dimensionless. In each row, the error is a function of the library radius. The smallest error in each row indicates the maximum likelihood of the sphere, which is in accordance with the reference spherical void.
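The library search itself is a one-line minimization; the sketch below uses a hypothetical amplitude-scaling library in which image amplitude grows with sphere radius, the dominant effect reported for the spheres of Fig. 5.7:

```python
import numpy as np

def rmse(img_test, img_lib):
    """Root mean square error between the test z-slice and a library
    z-slice at the same depth -- eq. (5.9)."""
    return np.sqrt(np.sum((np.abs(img_test) - np.abs(img_lib)) ** 2))

def estimate_radius(img_test, library):
    """Maximum-likelihood radius: the library entry of smallest RMSE."""
    return min(library, key=lambda R: rmse(img_test, library[R]))

# Hypothetical library in which the image amplitude scales with the
# sphere radius -- the dominant difference reported for Fig. 5.7.
g = np.exp(-np.linspace(-2, 2, 32) ** 2)
template = g[:, None] * g[None, :]
library = {R: R * template for R in (1.27, 1.524, 2.54, 3.81)}
print(estimate_radius(2.54 * template, library))   # 2.54
```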
The library-based error analysis method is sensitive enough to distinguish small spheres with similar radii, such as 1.27 and 1.52 cm. In the second row of Table 5.3, the sphere with radius 1.27 cm is tested against the library. The error between the spheres with radii 1.27 and 1.52 cm is 0.58, which is large enough to tell them apart.

Table 5.3. Root Mean Square Error for Spheres

Buried Sphere               | Library R1.27 | Library R1.524 | Library R2.54 | Library R3.81
R1.27 at (0,0,15.24)        | 0             | 0.58           | 2.12          | 4.38
R1.524 at (0,0,14.99)       | 0.58          | 0              | 1.54          | 3.81
R2.54 at (0,0,5.08)         | 2.17          | 1.62           | 0.7           | 2.43
R2.54 at (0,0,13.97)        | 2.12          | 1.54           | 0             | 2.33
R2.54 at (2.54,2.54,13.97)  | 1.84          | 1.31           | 0.77          | 2.72
R3.81 at (0,0,12.7)         | 4.38          | 3.81           | 2.33          | 0
In the fifth row of Table 5.3, a sphere centered at (0, 0, 16.51) cm with radius 2.54 cm is under test. The library includes spheres at the same center location (0, 0, 16.51) cm with four different radii. The smallest error is 0, giving the maximum likelihood of the sphere. In the fourth and sixth rows, where the same sphere is at different locations, the smallest errors, 0.7 and 0.77, still indicate the correct estimate, although the sphere in the library is at a different position. Because all spheres in the library have the same location, there is a uniform position difference between the sphere image under test and the library.
5.5.3 Size Estimation of Cylindrical Voids
In the previous section, the size estimate of the spherical void was obtained from the library-based error analysis, which could be applied to cylindrical voids as well. However, unlike spheres, cylinders involve two more factors: the length and the rotation angle. The RMSE for cylinders would be a function of the radius, length, and rotation angle, requiring tremendous time for simulations and image generation, and a huge library. Therefore, it is necessary to find another approach for cylinder size estimation, namely the method of spatial moments.
Table 5.4. Estimated Rotation Angle and Size for Cylinders

| Buried Cylinder          | Estimated Angle | Estimated Size | Angle Error | Size Error   |
|--------------------------|-----------------|----------------|-------------|--------------|
| R1.78, L5.08, Angle 0°   | 0°              | R2.4, L5.74    | 0°          | R0.62, L0.66 |
| R2.54, L7.62, Angle 0°   | 0°              | R2.58, L7.16   | 0°          | R0.04, L0.46 |
| R1.78, L7.11, Angle 45°  | 44.5°           | R2.31, L7.32   | 0.5°        | R0.53, L0.21 |
| R2.54, L10.16, Angle 60° | 56.5°           | R2.44, L9.04   | 3.5°        | R0.01, L1.12 |
In Fig. 5.8, differences among the cylinders can be observed, indicating the feasibility of the central-spatial-moments method. The estimated rotation angle is obtained from (5.4) and the size from (5.7) with a tuning factor; the results are shown in Table 5.4, with all lengths in cm. The tuning factor is obtained from the radii and lengths of these four cylinders, which serve as the training set: averaging the eight ratios of the real sizes to the corresponding estimated sizes gives λ = 1.54. A more accurate tuning factor could be obtained by simulating and imaging a larger training set with more varied cylinders.
As shown in the fourth column of Table 5.4, the largest rotation-angle error is only 3.5°, which is acceptable, a 5.8% error relative to the true angle. The last column shows the size error; the largest is 1.12 cm, which is large compared with the true length of 10.16 cm but smaller than the cross-range resolution of 2.5 cm. This error is therefore reasonable, given the limitation of the system resolution. With the method of spatial moments, whether an error is small should be judged against the cross-range resolution rather than as a percentage of the true size.
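A minimal sketch of the spatial-moments idea is given below, using the standard second-order central-moment formulas for the principal-axis angle and axis lengths. The exact definitions (5.4) and (5.7) and the tuning-factor calibration are as described in the text; the function name and the default λ = 1.54 are illustrative assumptions.

```python
import numpy as np

def moment_angle_size(image, lam=1.54):
    """Estimate rotation angle and size from second-order central moments.

    A sketch of the spatial-moments approach: the principal-axis angle
    comes from the central moments mu11, mu20, mu02, and the axis lengths
    from the eigenvalues of the moment (covariance) matrix, scaled by the
    tuning factor lam obtained from the training set.
    """
    img = np.abs(image)
    m00 = img.sum()
    ys, xs = np.indices(img.shape)
    # image centroid (first-order moments)
    xc, yc = (img * xs).sum() / m00, (img * ys).sum() / m00
    # second-order central moments
    mu20 = (img * (xs - xc) ** 2).sum() / m00
    mu02 = (img * (ys - yc) ** 2).sum() / m00
    mu11 = (img * (xs - xc) * (ys - yc)).sum() / m00
    # principal-axis (rotation) angle in the image plane
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    # semi-axis lengths from the eigenvalues of the covariance matrix
    common = np.sqrt(4.0 * mu11**2 + (mu20 - mu02) ** 2)
    major = lam * np.sqrt((mu20 + mu02 + common) / 2.0)
    minor = lam * np.sqrt((mu20 + mu02 - common) / 2.0)
    return np.degrees(theta), major, minor
```

For a focused image of a rotated cylinder, the major axis tracks the cylinder length, the minor axis its diameter, and θ its rotation angle in the xy plane.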
Unlike approaches that fit the hyperbola in a GPR image, the proposed size estimation algorithm operates on the focused near-field microwave image and provides the location, shape, size, and rotation angle of the buried object. From the images constructed from the FDTD simulations, it is observed that buried spheres have similar sizes in the radar images due to the lensing effect caused by the high dielectric contrast at the boundary of the dense medium. However, the magnitude of the image depends on the size of the sphere and can be used for discrimination.
5.6 The Shadow-Projection Approach and Normal Dipole Illumination
As shown in section 5.2, due to the lensing effect, a better projection of the buried object can be observed at the rear interface, which provides better size and shape information. To gain wide-angle illumination into the dense medium and show the front curvature of the embedded target, a normal dipole can be chosen as the transceiver.
5.6.1 The Shadow-Projection Approach
To illustrate the idea of the shadow-projection method, the object is first assumed to be a perfectly absorbing material, so that all illuminating EM waves are absorbed and there is no diffraction or penetration. In this case, only waves that do not illuminate the object can propagate to the rear interface of the slab, leaving a shadow region of the same size as the object. Under this assumption, the 3-D UDT imaging function of Chapter 2 can be applied directly to generate the z-slice shadow image at the rear interface using all the collected data.
However, a real hidden target is not perfectly absorbing: reflection and diffraction occur under EM illumination. Guided by the lensing effect, the object is instead assumed to be perfectly absorbing locally, so each pixel should be generated from a small synthetic aperture around it rather than from the whole scanning aperture.
Thus, the shadow image at the rear interface, I(x, y, z_s), can be obtained from the 3-D UDT imaging function (2.68); the only difference is that the spectral received signal Ẽ is computed locally,

Ẽ(k_x″, k_y″, ω) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} W(x′ − x, y′ − y) E_s(x′, y′, ω) e^{−j(k_x″ x′ + k_y″ y′)} dx′ dy′    (5.10)

which is achieved by implementing a moving rectangular window W centered at each pixel point (x, y, z_s), with real size 2L_x × 2L_y in the x and y directions. Here z_s is the depth of the shadow projection, E_s is the spatial received signal, and (x′, y′) is the antenna position in the synthetic-aperture plane. With this windowing function, only the received signal in the vicinity of a pixel point, rather than all the collected data, is used to form the image at that pixel. The window center then moves to the next pixel point, selecting a new set of local data within the window and generating the corresponding image value at the new pixel. In this shadow-projection method, the spectral received signal in (5.10) differs from (2.53b) in that the local received signal is used to construct the image.
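The local spectral signal of (5.10) amounts to masking the collected data with a rectangular window centered on the pixel before taking the 2-D spatial Fourier transform. A minimal numpy sketch, with illustrative names and a simple binary window, is:

```python
import numpy as np

def windowed_spectrum(E_s, xp, yp, x0, y0, Lx, Ly):
    """Locally windowed spectral received signal, a sketch of (5.10).

    E_s      : 2-D array E_s(y', x') of the received signal at one frequency,
               sampled at antenna positions xp (columns) and yp (rows).
    (x0, y0) : pixel point at which the shadow image is being formed.
    Lx, Ly   : half-widths of the moving rectangular window W.
    Returns the 2-D spatial spectrum of the windowed data.
    """
    # rectangular window W centered on the pixel point (x0, y0)
    W = (np.abs(xp[None, :] - x0) <= Lx) & (np.abs(yp[:, None] - y0) <= Ly)
    # only data collected in the vicinity of the pixel contributes
    return np.fft.fft2(E_s * W)
```

In the full algorithm this windowed spectrum replaces the global spectrum in the 3-D UDT imaging function, and the window is re-centered pixel by pixel across the projection plane.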
In this section, the 3-D numerical time-domain received signal is obtained from GprMax [64], and the frequency-domain received signal is obtained via the Fourier transform. In the simulation, the antenna is a y-oriented Hertzian dipole scanning a rectangular aperture plane at z = 0 cm, from -12.7 cm to 12.7 cm with a step size of 1.27 cm in both the x and y directions. The frequency band of the antenna is from 2 GHz to 10 GHz with a step size of 100 MHz. The received signal therefore has the size of
Fig. 5.10. Images of the buried spherical void with diameter 7.62 cm illuminated by a y-oriented dipole: (a) a spherical void in a dense slab (ε_r0 = 1, ε_r1 = 10, ε_r2 = 1, slab thickness d_1 = 27.94 cm, cross-range aperture ±12.7 cm), (b) x cut of the 3D UDT image at x = 0 cm, (c) sphere projection in the 3D UDT image at z = 30.48 cm.
21×21×81. A dense slab of thickness 27.94 cm and relative permittivity 10 is placed 2.54 cm away from the aperture plane, with a spherical or cylindrical void of the same center location (0, 0, 16.51) cm and various sizes embedded in it. A moving average window (a function of the antenna location) is applied to the frequency-domain received signal to mitigate the strong interface reflections; this is distinct from the moving window W (a function of the pixel location) proposed here. The 3D imaging
Fig. 5.11. Images of different buried spherical voids illuminated by a y-oriented dipole; (a), (c), and (e) are the object slices in the 3D UDT image, while (b), (d), and (f) are the object projections at z = 30.48 cm based on the shadow-projection algorithm. (a) diameter 2.54 cm, (b) diameter 2.54 cm, (c) diameter 5.08 cm, (d) diameter 5.08 cm, (e) diameter 7.62 cm, (f) diameter 7.62 cm. Each image is normalized to its peak value.
region has the size of 25.4×25.4×38.1 cm³ with a step size of 0.254 cm on all three axes, corresponding to 101×101×151 pixels. The moving window is centered at each pixel point and has a size of 10.16×10.16 cm² in the projection plane.
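The moving average window applied to the frequency-domain data, which suppresses the nearly antenna-position-independent interface reflections, can be sketched as a local background subtraction over antenna positions. The window width and function name below are illustrative assumptions, not the dissertation's exact implementation.

```python
import numpy as np

def remove_interface_reflection(E, win=5):
    """Mitigate strong interface reflections, a minimal sketch.

    E : 3-D array E(y_antenna, x_antenna, frequency) of received data.
    For each frequency, a moving average over nearby antenna positions
    estimates the (nearly position-independent) interface reflection,
    which is then subtracted, leaving the target scattering.
    """
    pad = win // 2
    out = np.empty_like(E)
    ny, nx, _ = E.shape
    for iy in range(ny):
        for ix in range(nx):
            # local average over a win x win patch of antenna positions
            y0, y1 = max(0, iy - pad), min(ny, iy + pad + 1)
            x0, x1 = max(0, ix - pad), min(nx, ix + pad + 1)
            out[iy, ix] = E[iy, ix] - E[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```

Because the interface response varies slowly with antenna position while the target response is localized, the subtraction removes the former while largely preserving the latter.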
Fig. 5.10 considers a spherical void buried in the dense slab; diameters of 2.54, 5.08, and 7.62 cm are studied. All images are normalized to their peak values, and black dashed circles mark the sizes of the different spheres. Figs. 5.10 (b) and (c) are obtained with the 3D UDT algorithm (not using the windowing approach of (5.10)), while Figs. 5.11 (b), (d), and (f) are obtained with the proposed shadow-projection method.
In Fig. 5.10 (b), there is no strong interface reflection, thanks to the moving average window, which makes the buried object and its shadow easier to observe. The figure shows the x cut of the 3D UDT image at x = 0 cm, where the primary image at a depth of approximately 15 cm is only a small front portion of the sphere with diameter 7.62 cm. This is caused by the lensing effect explained in the previous section. The shadow image at depth 30.48 cm, also produced by the lensing effect, presents the correct size of the spherical void, demonstrating that the shadow image rather than the primary image reveals the size of the buried object. Fig. 5.10 (c) shows the shadow on the rear interface of the slab in the 3D UDT image. The size of the shadow is close to that of the actual sphere (diameter 7.62 cm), but it has a ring shape due to transmission through the void and diffraction at its edges.
Fig. 5.12. Images of different buried cylindrical voids illuminated by a y-oriented dipole; (b), (d), and (f) are the object slices in the 3D UDT image, while (c), (e), and (g) are the object projections at z = 30.48 cm based on the shadow-projection algorithm. (a) a cylindrical void in a dense slab (ε_r0 = 1, ε_r1 = 10, ε_r2 = 1, slab thickness d_1 = 27.94 cm), (b) diameter 3.56 cm, length 5.08 cm, (c) diameter 3.56 cm, length 5.08 cm, (d) diameter 5.08 cm, length 7.62 cm, (e) diameter 5.08 cm, length 7.62 cm, (f) diameter 3.56 cm, length 7.11 cm, 45° rotation, (g) diameter 3.56 cm, length 7.11 cm, 45° rotation. Each image is normalized to its peak value.
Figs. 5.11 (a), (c), and (e) show the top-view primary images of three embedded spherical voids with diameters 2.54, 5.08, and 7.62 cm. These spheres appear to have similar sizes in the images and differ only in strength, showing that the primary image generated by the 3D UDT algorithm fails to distinguish the sizes of different buried objects; this motivates the shadow-projection algorithm.
Figs. 5.11 (b), (d), and (f), constructed by the proposed shadow-projection approach, match the actual sizes of the spheres marked by the black dashed lines. Compared with Figs. 5.11 (a), (c), and (e), the shadow-projection method clearly distinguishes the buried spheres by size.
As illustrated in Fig. 5.12 (a), a cylindrical void is embedded in the same dense slab. Figs. 5.12 (b), (d), and (f) present the top-view primary images obtained from the 3D UDT algorithm, where the cylinder in (b) has a diameter of 3.56 cm and length of 5.08 cm, the cylinder in (d) has a diameter of 5.08 cm and length of 7.62 cm, and the cylinder in (f) has a diameter of 3.56 cm, length of 7.11 cm, and rotation angle of 45° in the xy plane. The size differences in the rectangular cross-sections of these cylinders can be observed and their lengths are well presented, in contrast to the circular cross-sections of the spheres in Figs. 5.11 (a), (c), and (e). However, in Figs. 5.12 (b), (d), and (f), the diameter of the cylindrical voids cannot be retrieved, due to the limited illumination on the front surface of the cylinder's circular cross-section. In Figs. 5.12 (c), (e), and (g), the shadow-projection method recovers the diameters of the cylinders more accurately than the primary image.
5.6.2 Normal Dipole Illumination
In section 5.2, it was shown that parallel dipole illumination provides a focused beam into the dense medium, resulting in a good projection of the buried object's shadow on the next interface that shows its size and shape. For the primary image, however, the limited illuminating angle makes different spherical voids appear similar in size in the near-field radar image. To overcome this problem and show better frontal outlines of these objects, a normal dipole is applied to provide wide-angle illumination.
The same object geometry as shown in Fig. 5.10 (a) is used, with embedded spherical voids and the same y-directed (parallel) Hertzian dipole. Three spherical voids with diameters 5.08, 7.62, and 10.16 cm are buried in the dense slab. The same 3D UDT imaging algorithm used to obtain Figs. 5.10 (b) and (c) produces the images in Fig. 5.13, showing that the different spheres have similar sizes but different peak amplitudes. Figs. 5.13 (b), (d), and (f) are the primary images at the front-surface slices.
Changing the orientation of the dipole from y-directed (parallel) to z-directed (normal) and applying the same 3D UDT imaging algorithm, a larger frontal outline is observed for the larger spheres, as shown in Figs. 5.14 (a), (c), and (e). There is nearly no shadow projection with a normal dipole, because the entire rear interface is illuminated. In Fig. 5.14 (a), there is a bright spot on the front of the sphere with radius 2.54 cm and two smaller spots on its sides. As the sphere becomes larger, in Fig. 5.14 (e), the frontal outline is continuous but still brightest at the front. Comparing Figs. 5.13 (c) and 5.14 (c), the z-directed dipole better reveals the size of the buried
Fig. 5.13. 3D UDT images of different buried spherical voids illuminated by a y-oriented dipole; (a), (c), and (e) are x-cut images at x = 0 cm, and (b), (d), and (f) are z-cut images. (a) diameter 5.08 cm, x-cut, (b) diameter 5.08 cm, z-cut, (c) diameter 7.62 cm, x-cut, (d) diameter 7.62 cm, z-cut, (e) diameter 10.16 cm, x-cut, (f) diameter 10.16 cm, z-cut. There is no normalization; the color scale is in dB with a dynamic range of 20 dB.
Fig. 5.14. 3D UDT images of different buried spherical voids illuminated by a z-oriented dipole; (a), (c), and (e) are x-cut images at x = 0 cm, and (b), (d), and (f) are z-cut images. (a) diameter 5.08 cm, x-cut, (b) diameter 5.08 cm, z-cut, (c) diameter 7.62 cm, x-cut, (d) diameter 7.62 cm, z-cut, (e) diameter 10.16 cm, x-cut, (f) diameter 10.16 cm, z-cut. There is no normalization; the color scale is in dB with a dynamic range of 20 dB.
sphere in a dense slab, due to the wide-angle illumination. Compared with the shadow-projection approach shown in Fig. 5.11, the normal-dipole-based Figs. 5.14 (b), (d), and (f) cannot present accurate size information for the different spheres, although differences in size can still be observed.
5.7 Summary of Chapter 5
In this chapter, building on an understanding of the physics behind EM wave propagation into dense media, the approach of spatial moments is proposed to quantify the location, rotation angle, size, and shape of buried targets based on numerical simulations. To better observe hidden objects, the lensing effect is exploited to generate shadow-projection images rather than primary images. The shadow-projection imaging algorithm assumes that buried objects are perfectly absorbing, so their projections on the rear interface of the slab show better size and shape information. To realize this algorithm, a windowing function centered at each pixel point is implemented and moved pixel by pixel, constructing the image at the central pixel from the data collected within the window. To show a larger frontal outline of a buried object and overcome the limited illumination into the dense slab, a normal dipole is used for illumination. The electric field distributions of a Brewster-angle-oriented dipole in the presence of a dense slab are also investigated; compared with the z-oriented dipole case, there is nearly no difference in the electric field distributions.
Chapter 6: Contributions and Future Work
To develop a real-time near-field microwave imaging system, it is required to achieve both real-time image generation and fast data collection. At the beginning of this work, two novel real-time imaging algorithms (UDT and FBP) are proposed and verified with numerical and experimental data. UDT seeks the linear relation between the spectral-domain received signal and the reflectivity function and takes advantage of the FFT. It is critical to include all layers' information in the phase function when applying the method of stationary phase; doing so overcomes the amplitude discontinuity at each interface seen in DT and provides a uniformly continuous image. When the antenna gain pattern is included in the imaging function, a singularity problem can arise in both the DT and UDT formulations due to nulls in the pattern. FBP solves this singularity issue, since the nulls are eliminated by a non-singular spectral integral in the denominator, and provides balanced monostatic and bistatic images. The gain pattern is incorporated into the PSF, which is especially essential for balancing near-field multistatic images.
In order to determine the size and shape of buried objects, the approach of spatial moments is developed to provide quantitative target information without the guidance of an experienced operator. The spatial moments method is found to work well for voids with cylindrical shapes, but it fails to distinguish different spherical voids, due to the lensing effect at the boundary between two adjacent layers with a high dielectric contrast. Physically, it is the refraction effect that bends the antenna beam toward its broadside. Although different spheres have similar sizes in the microwave images, their scattering strengths differ, so a library-based matching method between the generated image and the images in the library is applied to determine the diameter of the sphere by minimizing the error. Note that, unlike for cylinders, the library of sphere images is a function of the diameter only, and the image is not sensitive to the sphere's location, so intensive simulations are not needed to build the library.
Another approach to the problem that different spheres appear similar in size in these microwave images is to exploit the lensing effect. The shadow-projection approach is developed to present the accurate size and shape of buried objects and to distinguish the diameters of different embedded spherical voids by examining the shadow cast on the interface behind the object. The algorithm assumes that the buried target is perfectly absorbing, and local measured data are applied to generate the image pixel by pixel at the rear interface of the dense slab. To obtain better images of the frontal outline of buried objects, normal dipole excitation is investigated; it provides wide-angle illumination and overcomes the limited illuminating region into the slab of a parallel dipole source due to the lensing effect. The resulting primary images show better size information than images based on parallel dipole illumination alone.
To achieve fast data acquisition, a fixed sparse multistatic antenna array is proposed and demonstrated with the analytical solution of an ideal point scatterer and the conventional BP imaging algorithm. The problem of imaging through a periodic structure is also investigated, and the method of least squares is applied with the periodic structure incorporated into the numerical round-trip electric field intensity function, but a clean and clear image of an object behind the metal grating has not been obtained. It is recommended that future work be focused on three problems: 1. fabrication and experimental verification of the multistatic array, 2. development of the corresponding fast multistatic imaging algorithm, and 3. a solution to imaging through a periodic structure.
Fig. 6.1. Fixed antenna array for generating 3D snapshot images of stratified media.
There is still no commercially available portable real-time microwave imaging device for detecting voids, cracks, and erosion in layered media. With such a device, the regular inspection and maintenance of old buildings, pavement, bridges, and furnace walls could be performed in real time to extend the life span of these structures, improve safety, and avoid structural failure. Therefore, further study is needed to demonstrate the proposed multistatic array by numerical simulation and experimental verification, together with the implementation of the corresponding fast imaging algorithm. A field-programmable gate array (FPGA) can be used to control each transmitting and receiving antenna pair and acquire the scattering data.
It is also of significance to solve the problem of imaging through a periodic structure, because such a structure may be attached to or embedded in the stratified environment, obscuring objects in images produced by a conventional imaging algorithm. The corresponding Floquet modes and frequency responses can be investigated with numerical methods such as MoM. A novel imaging algorithm that exploits the dominant Floquet modes should therefore be developed to focus hidden objects behind a periodic structure.
References
[1] J. Daniels, Ground Penetrating Radar, 2nd ed. London, United Kingdom: The Institution of Electrical
Engineers, 2004.
[2] M. G. Amin, Through-the-Wall Radar Imaging, Boca Raton, FL: CRC Press, 2011.
[3] Z. H. Cho, J. P. Jones, and M. Singh, Foundations of Medical Imaging, New York: Wiley-Interscience, 1993.
[4] P. C. Chang, R. J. Burkholder, R. J. Marhefka, Y. Bayram, and J. L. Volakis, “High-Frequency EM
Characterization of Through-Wall Building Imaging,” IEEE Trans. on Geoscience and Remote
Sensing, Special Issue on Remote Sensing of Building Interior, 47(5): 1375-1387, May 2009.
[5] P. C. Chang, R. J. Burkholder, and J. L. Volakis, “Adaptive CLEAN with target refocusing for through-wall image improvement,” IEEE Transactions on Antennas and Propagation, vol. 58, no. 1, pp. 155-162, Jan. 2010.
[6] K. E. Browne, R. J. Burkholder, and J. L. Volakis, “Through-wall opportunistic sensing system
utilizing a low-cost flat-panel array,” IEEE Transactions on Antennas and Propagation, vol. 59, no. 3,
pp.895-868, March 2011.
[7] M. Amin and K. Sarabandi, Guest Editors, IEEE Trans. on Geoscience and Remote Sensing, Special
Issue on Remote Sensing of Building Interior, 47(5), May 2009.
[8] K. Ren, J. Chen, and R. J. Burkholder, “Investigation of gaps between blocks in microwave images of
multilayered walls,” IEEE International Symposium on Antennas and Propagation, Vancouver,
Canada, Jul. 19-25, 2015.
[9] S. X. Pan and A. C. Kak, “A computational study of reconstruction algorithms for diffraction
tomography: interpolation versus filtered backpropagation,” IEEE Transactions on Acoustics, Speech
and Signal Processing, vol. ASSP-31, pp. 1262-1275, Oct. 1983.
[10] E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic
data,” Optical Communications, vol. 1, pp. 153–156, Sept. /Oct. 1969.
[11] E. Wolf, “Determination of the amplitude and the phase of scattered fields by holography,” Journal of
Optical Society of America, vol. 60, pp. 18–20, Jan. 1970.
[12] A. J. Devaney, “Inverse-scattering theory within the Rytov approximation,” Optics letters, vol. 6, pp.
374–376, Aug. 1981.
[13] A. J. Devaney, “A filtered backpropagation algorithm for diffraction tomography,” Ultrasonic imaging,
vol. 4, pp. 336–350, 1982.
[14] A. J. Devaney, “A computer simulation study of diffraction tomography,” IEEE Transactions on
Biomedical Engineering, vol. BME-30, pp. 377–386, July 1983.
[15] A. J. Devaney, “Geophysical diffraction tomography,” IEEE Transactions on Geoscience and Remote
Sensing, vol. GE-22, pp. 3–13, Jan. 1984.
[16] A. J. Witten, and E. Long, “Shallow applications of geophysical diffraction tomography,” IEEE
Transactions on Geoscience and Remote Sensing, vol. GE-24, pp. 654–662, Sep. 1986.
[17] J. E. Molyneux and A. J. Witten, “Diffraction tomographic imaging in a monostatic measurement
geometry,” IEEE Transactions on Geoscience and Remote Sensing, vol. 31, pp. 507–511, Mar. 1993.
[18] A. J. Witten, J. E. Molyneux, and J. E. Nyquist, “Ground penetrating radar tomography: algorithms
and case studies,” IEEE Transactions on Geoscience and Remote Sensing, vol. 32, pp. 461–467, Mar.
1994.
[19] R. W. Deming and A. J. Devaney, “Diffraction tomography for multi-monostatic ground penetrating
radar imaging,” Inverse Problems, vol. 13, pp. 29-45, 1997.
[20] T. B. Hansen and P. M. Johansen, “Inversion scheme for ground penetrating radar that takes into
account the planar air-soil interface,” IEEE Transactions on Geoscience and Remote Sensing, vol. 38,
no. 1, pp.496- 506, Jan. 2000.
[21] P. Meincke, “Linear GPR inversion for lossy soil and a planar air-soil interface,” IEEE Transactions
on Geoscience and Remote Sensing, vol. 39, no. 12, pp.2713- 2721, Dec. 2001.
[22] W. J. Zhang and A. Hoorfar, “Two-dimensional through-the-wall radar imaging with diffraction
tomographic algorithm,” IEEE International Conference on Microwave Technology and
Computational Electromagnetics, pp. 96-99, 2011.
[23] W. J. Zhang and A. Hoorfar, “ Three-dimensional real-time through-the-wall radar imaging with
diffraction tomographic algorithm,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51,
no. 7, pp. 4155-4163, Jul. 2013.
[24] J. Chen, K. Ren, and R. J. Burkholder, “3-D imaging for the detection of embedded objects in layered
medium,” IEEE International Symposium on Antennas and Propagation, Vancouver, Canada, Jul. 19-25, 2015.
[25] K. Ren and R. J. Burkholder, “A uniform diffraction tomographic algorithm for real-time portable
microwave imaging,” IEEE International Symposium on Antennas and Propagation, Fajardo, Puerto
Rico, Jun. 26-Jul. 1, 2016.
[26] K. Ren and R. J. Burkholder, “A uniform diffraction tomographic imaging algorithm for near-field
microwave scanning through stratified media,” IEEE Transactions on Antennas and Propagation, vol.
64, no. 12, pp. 5198-5207, Dec. 2016.
[27] K. Ren and R. J. Burkholder, “A 3-D uniform diffraction tomographic imaging algorithm for near-field
microwave scanning through stratified media,” IEEE Transactions on Antennas and Propagation,
under review.
[28] P. C. Gong, J. Q. Zhou, Z. H. Shao, L. L. Cui, and L. Wang, “A near-field imaging algorithm based on
SIMO-SAR system”, IEEE International Conference on Computational Problem-Solving, pp.678-681,
2011.
[29] T. S. Ralston, G. L. Charvat, and J. E. Peabody, “Real-time through-wall imaging using an
ultrawideband multiple-input multiple-output (MIMO) phased array radar system”, IEEE International
Symposium on Phased Array Systems and Technology, pp.551-558, 2010.
[30] K. E. Browne, R. J. Burkholder, and J. L. Volakis, “Through-wall opportunistic sensing system
utilizing a low-cost flat-panel array,” IEEE Transactions on Antennas and Propagation, vol. 59, no. 3,
pp.895-868, March 2011.
[31] X. Zhuge and A. Yarovoy, “Three-dimensional near-field MIMO array imaging using range migration
techniques,” IEEE Transaction on Image Processing, vol. 21, no. 6, pp. 3026-3033, June 2012.
[32] K. Ren and R. J. Burkholder, “A comparison between 3-D diffraction tomography and novel fast back
projection algorithm based on the near-field monostatic SAR,” IEEE Geoscience and Remote Sensing
Letters, under review.
[33] K. Ren and R. J. Burkholder, “A 3-D novel fast back projection imaging algorithm for stratified media
based on the near-field monostatic and bistatic SAR,” IEEE Transactions on Antennas and
Propagation, under review.
[34] M. E. Yavuz and F. L. Teixeira, “On the sensitivity of time-reversal imaging techniques to model
perturbations,” IEEE Trans. Antennas Prop., vol. 56, no. 3, pp. 834-843, 2008.
[35] M. E. Yavuz and F. L. Teixeira, “Space–frequency ultrawideband time-reversal imaging,” IEEE Trans.
Geosci. Remote Sens., vol. 46, no. 4, pp. 1115-1124, 2008.
[36] M. E. Yavuz and F. L. Teixeira, “Ultrawideband microwave remote sensing and imaging using time-reversal techniques: A review,” Remote Sens., vol. 1, no. 3, pp. 466-495, 2009.
[37] A. E. Fouda and F. L. Teixeira, “Imaging and tracking of targets in clutter using differential time-reversal techniques,” Waves Random Complex Media, vol. 22, no. 1, pp. 66-108, 2012.
[38] A. E. Fouda and F. L. Teixeira, “Statistical stability of ultrawideband time-reversal imaging in random
media,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 2, pp. 870-879, 2014.
[39] A. E. Fouda and F. L. Teixeira, “Ultra-wideband microwave imaging of breast cancer tumors via
Bayesian inverse scattering,” J. Appl. Phys., vol. 115, no. 6, 064701, 2014.
[40] A. E. Fouda and F. L. Teixeira, “Bayesian compressive sensing for ultrawideband inverse scattering in
random media,” Inverse Prob., vol. 30, no. 11, 114017, 2014
[41] M. E. Yavuz, A. E. Fouda, and F. L. Teixeira, “GPR signal enhancement using sliding-window space-frequency matrices,” Prog. Electromagn. Res., vol. 145, pp. 1-10, 2014.
[42] V. R. N. Santos and F. L. Teixeira, “Application of time-reversal-based processing techniques to
enhance detection of GPR targets,” J. Appl. Geophys., vol. 146, pp. 80-94, 2017.
[43] V. R. N. Santos and F. L. Teixeira, “Study of time-reversal-based signal processing applied to
polarimetric GPR detection of elongated targets,” J. Appl. Geophys., vol. 139, pp. 257-268, 2017.
[44] P. Gamba and S. Lossani, “Neural detection of pipe signatures in ground penetrating radar images,”
IEEE Transactions on Geoscience and. Remote Sensing, vol. 38, no. 2, pp. 790-797, Mar. 2000.
[45] S. Shihab, W. Al-Nuaimy, and A. Eriksen, “Radius estimation for subsurface cylindrical objects
detected by ground penetrating radar,” Tenth International Conference on Ground Penetrating Radar,
Delft, The Netherlands, pp. 319-322, Jun. 21-24, 2004.
[46] C. G. Windsor, L. Capineri, and P. Falorni, “The estimation of buried pipe diameter by generalized
Hough transform of radar data,” Progress In Electromagnetics Research Symposium, Hangzhou,
China, pp. 345-349, Aug. 22-26, 2005.
[47] A. Dolgiy, A. A. Dolgiy, and V. Zolotarev, “Estimation of subsurface pipe radius using ground
penetrating radar data,” Near Surface, Helsinki, Finland, pp. 1-5, Sep. 4-6, 2006.
[48] E. Pasolli, F. Melgani, and M. Donelli, “Automatic analysis of GPR images: A pattern recognition
approach,” IEEE Transactions on Geoscience and. Remote Sensing, vol. 47, no. 7, pp. 2206-2217, Jul.
2009.
[49] E. Pasolli, F. Melgani, and M. Donelli, “Gaussian process approach to buried object size estimation in
GPR images,” IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 1, pp. 141-145, Jan. 2010.
[50] M. B. Alhassanat and W. M. A. Wan Hussin, “A new algorithm to estimate the size of an underground
utility via specific antenna,” Progress In Electromagnetics Research Symposium Proceedings,
Marrakesh, Morocco, pp. 1868-1870, Mar. 20-23, 2011.
[51] M. Hashim, S. W. Jaw, and M. Marghany, “Ground penetrating radar data processing for retrieval of
utility material types and radius estimation,” IEEE International RF and Microwave Conference,
Seremban, Malaysia, pp. 191-196, Dec. 12-14, 2011.
[52] Y. Nomura, Y. Naganuma, H. Kotaki, N. Kato, and Y. Sudo, “Estimation method of the radius, depth
and direction of buried pipes with ground penetrating radar,” WSEAS International Conference on
Sensors, Signals, Visualization, Imaging and Simulation, Sliema, Malta, pp. 137-141, Sep. 7-9, 2012.
[53] W. Li, H. Zhou, and X. Wan, “Generalized Hough transform and ANN for subsurface cylindrical
object location and parameters inversion from GPR data,” Proceedings of 14th International
Conference on Ground Penetrating Radar, Shanghai, China, pp. 281-285, Jun. 4-8, 2012.
[54] Y. Zhang, D. Huston, and T. Xia, “Underground object characterization based on neural networks for
ground penetrating radar data,” Proceedings of SPIE, pp. 1-9, 2016.
[55] K. Ren and R. J. Burkholder, “Size and shape estimation of buried objects from spatial moments
analysis of near-field microwave images,” IEEE Transactions on Geoscience and. Remote Sensing,
under review.
[56] K. Ren and R. J. Burkholder, “Shadow-projection based hidden object identification in near-field
microwave images,” IEEE Geoscience and Remote Sensing Letters, under review.
[57] K. Ren and R. J. Burkholder, “Normal dipole based embedded object identification in near-field
microwave images,” IEEE Geoscience and Remote Sensing Letters, under review.
[58] D. B. Geselowitz, “On the theory of the electrocardiogram,” Proceedings of the IEEE, vol. 77, no. 6,
pp. 857-876, June 1989.
[59] K. Chetty, G. E. Smith, and K. Woodbridge, “Through-the-wall sensing of personnel using bistatic
WiFi radar at standoff distances,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no.
4, pp. 1218-1226, April 2012.
[60] M. Fallahpour, J. T. Case, M. T. Ghasr, and R. Zoughi, “Piecewise and Wiener filter-based SAR
techniques for monostatic microwave imaging of layered structures,” IEEE Transactions on Antennas
and Propagation, vol. 62, no. 1, pp. 282-294, Jan. 2014.
[61] W. C. Chew, Waves and Fields in Inhomogeneous Media, New York: IEEE Press, 1995.
[62] A. Brancaccio and G. Leone, “Multimonostatic shape reconstruction of two-dimensional dielectric
cylinders by a Kirchhoff-based approach,” IEEE Transactions on Geoscience and Remote Sensing, vol.
48, no. 8, pp. 3152-3161, Aug. 2010.
[63] N. Bleistein and R. A. Handelsman, Asymptotic Expansions of Integrals. New York, NY: Holt,
Rinehart and Winston, 1975.
[64] A. Giannopoulos (2005). GprMax (Ver. 2.0) [Online]. Available: http://www.gprmax.com/
[65] M. G. Amin and F. Ahmad, "Through-the-wall radar imaging: theory and applications," in Academic
Press Library in Signal Processing, Vol. 2: Communications and Radar Signal Processing. Oxford,
UK: Academic Press, 2013.
[66] R. Persico, Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing.
Hoboken, NJ: Wiley-IEEE Press, 2014.
[67] M. T. Ghasr, M. J. Horst, M. R. Dvorksy, and R. Zoughi, “Wideband microwave camera for real-time
3-D imaging,” IEEE Transactions on Antennas and Propagation, vol. 65, no. 1, pp. 258-268, Jan.
2017.
[68] S. Celozzi, R. Araneo, and G. Lovat, Electromagnetic Shielding, New York: IEEE Press, 2008.
[69] W. K. Pratt, Digital Image Processing, 3rd ed. New York: John Wiley & Sons, 2001.
[70] HFSS V10.0 User Guide.
[71] J. L. Volakis, A. Chatterjee, and L. C. Kempel, Finite Element Method for Electromagnetics, Wiley-IEEE Press, 1998.
[72] F. Soldovieri, R. Solimene, L. Lo Monte, M. Bavusi, and A. Loperte, “Sparse reconstruction from GPR
data with applications to rebar detection,” IEEE Transactions on Instrumentation and Measurement,
vol. 60, no. 3, pp. 1070-1079, 2011.
[73] P. C. Chang, R. J. Burkholder, and J. L. Volakis, “Model-corrected microwave imaging through
periodic wall structures”, International Journal of Antennas and Propagation, pp. 1-7, 2012.
Appendix A: Evaluation of the Second Derivative of the Phase Function in the 2D Case

Taking the derivative of (2.32) with respect to $k_x$, one can obtain the second derivative of (2.31), which is

$$\varphi''(k_x) = -\left[\frac{R_N^2 - x_N^2}{R_N^3} + \frac{R_N'^2 - x_N'^2}{R_N'^3}\right]\frac{k_N}{k_0}\left(z - \sum_{n=0}^{N-1} d_n\right) - \sum_{n=0}^{N-1}\left[\frac{R_n^2 - x_n^2}{R_n^3} + \frac{R_n'^2 - x_n'^2}{R_n'^3}\right]\frac{k_n}{k_0}\, d_n \qquad \text{(A-1)}$$

With the relations $R_n^2 = x_n^2 + d_n^2$ and $R_n'^2 = x_n'^2 + (z'' - z_n)^2$, (A-1) can be simplified as

$$\varphi''(k_x) = -\left(\frac{d_N^2}{R_N^3} + \frac{(z'' - z_N)^2}{R_N'^3}\right)\frac{k_N}{k_0}\left(z - \sum_{n=0}^{N-1} d_n\right) - \sum_{n=0}^{N-1}\left(\frac{d_n^2}{R_n^3} + \frac{(z'' - z_n)^2}{R_n'^3}\right)\frac{k_n}{k_0}\, d_n \qquad \text{(A-2)}$$

Thus, (2.34) is the result of evaluating (A-2) at the stationary phase point $k_x = k_x''/2$, where $R_n = R_n'$.
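The $d_n^2/R_n^3$ structure appearing in (A-2) can be spot-checked numerically in the simplest special case. The sketch below is purely illustrative and not part of the derivation: it verifies by central finite differences that, for a single free-space path $R(x) = \sqrt{x^2 + d^2}$, the second derivative of the phase $kR(x)$ equals $k\,d^2/R^3$. The values of $k$, $d$, and $x$ are arbitrary choices.

```python
import numpy as np

# Illustrative check (not part of the derivation): for a single free-space
# path R(x) = sqrt(x^2 + d^2), the phase phi(x) = k*R(x) has second
# derivative k*d^2/R^3 -- the same d^2/R^3 structure that appears in (A-2).
k = 2 * np.pi / 0.03          # wavenumber at 10 GHz in air, rad/m (arbitrary)
d = 0.5                       # depth of the path, m (arbitrary)
x, h = 0.2, 1e-4              # evaluation point and finite-difference step

phi = lambda t: k * np.sqrt(t**2 + d**2)
fd = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2   # numerical phi''
analytic = k * d**2 / np.sqrt(x**2 + d**2)**3        # k d^2 / R^3
print(fd, analytic)           # the two values agree closely
```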
Appendix B: 2D Inversion Scheme

To obtain the spatial reflectivity function from the modified spectral reflectivity function, we apply a matched filter with testing point $(x', z')$ to (2.40) and interchange the order of integration,

$$\iint \tilde{F}\left(k_x'', k_z''\right) e^{\,j\left(k_x'' x' + k_z'' z'\right)}\, dk_x''\, dk_z'' = \iint\!\!\iint f(x,z)\, \tilde{A}\left(k_z'', x, z\right) e^{-j k_x''\left(x - x'\right) - j k_z''\left(z - z'\right)}\, dk_x''\, dk_z''\, dx\, dz \qquad \text{(B-1)}$$

The inner double integral on the RHS is nonzero only when the testing point matches the pixel point, $x = x'$ and $z = z'$. Then (B-1) reduces to

$$\iint \tilde{F}\left(k_x'', k_z''\right) e^{\,j\left(k_x'' x' + k_z'' z'\right)}\, dk_x''\, dk_z'' = f(x', z') \iint \tilde{A}\left(k_z'', x', z'\right) dk_x''\, dk_z'' \qquad \text{(B-2)}$$

or

$$f(x,z) = \left[\iint \tilde{A}\left(k_z'', x, z\right) dk_x''\, dk_z''\right]^{-1} \iint \tilde{F}\left(k_x'', k_z''\right) e^{\,j\left(k_x'' x + k_z'' z\right)}\, dk_x''\, dk_z'' \qquad \text{(B-3)}$$

The next step is to map $k_z''$ back to the $\omega$ domain with the dispersion relation $k_z'' = \sqrt{4k^2 - (k_x'')^2}$, where $k = \omega\sqrt{\varepsilon_{r,n}}/c$, yielding

$$dk_z'' = \frac{4k}{k_z''}\, dk \qquad \text{(B-4)}$$

and

$$f(x,z) = \frac{\displaystyle\iint \tilde{F}\left(k_x'', k\right) e^{\,j\left(k_x'' x + k_z'' z\right)} \frac{4k}{k_z''}\, dk_x''\, dk}{\displaystyle\iint \tilde{A}\left(k_z'', x, z\right) \frac{4k}{k_z''}\, dk_x''\, dk} \qquad \text{(B-5)}$$
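A discrete illustration of the matched-filter steps (B-1)-(B-3) is sketched below for a single point scatterer. The spectral weight `A` is only a crude placeholder standing in for the antenna-weighted Green's function term of Chapter 2, and the grids, location, and values are all arbitrary choices; the point is simply that the normalized matched filter peaks at the true pixel.

```python
import numpy as np

# Sketch of the matched-filter inversion (B-1)-(B-3) for one point scatterer.
# A is a placeholder spectral weight, standing in for the antenna-weighted
# Green's function term; grids and values are arbitrary.
kx = np.linspace(-50, 50, 64)               # k_x'' samples, rad/m
kz = np.linspace(20, 120, 64)               # k_z'' samples, rad/m
KX, KZ = np.meshgrid(kx, kz, indexing="ij")
x0, z0 = 0.1, 0.3                           # true scatterer location, m

A = 1.0 / KZ                                # placeholder weight A(kz'')
F = A * np.exp(-1j * (KX * x0 + KZ * z0))   # spectrum of a unit point target

# Matched filter of (B-3) over a grid of testing points (x', z')
xs = np.linspace(0.0, 0.2, 21)
zs = np.linspace(0.2, 0.4, 21)
img = np.array([[abs(np.sum(F * np.exp(1j * (KX * xt + KZ * zt))) / np.sum(A))
                 for zt in zs] for xt in xs])
ix, iz = np.unravel_index(img.argmax(), img.shape)
print(xs[ix], zs[iz])                       # peak at the true (x0, z0)
```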
Appendix C: Evaluation of the Second Derivative of the Phase Function in the 3D Case

Taking the partial derivatives of (2.58a) and (2.58b) with respect to $k_x$ and $k_y$, respectively, and evaluating at the stationary point $(k_x''/2, k_y''/2)$, the second partial derivatives of (2.57b) can be obtained, which are

$$\frac{\partial^2 \varphi_m\left(k_x, k_y\right)}{\partial k_x^2}\bigg|_{\left(k_x''/2,\, k_y''/2\right)} = -\frac{4k_N^2 - k_y''^2}{2k_0 k_{z,N}^3}\left(z - \sum_{n=0}^{N-1} d_n\right) - \sum_{n=0}^{N-1} \frac{4k_n^2 - k_y''^2}{2k_0 k_{z,n}^3}\, d_n \qquad \text{(C-1a)}$$

$$\frac{\partial^2 \varphi_m\left(k_x, k_y\right)}{\partial k_x \partial k_y}\bigg|_{\left(k_x''/2,\, k_y''/2\right)} = -\frac{k_x'' k_y''}{2k_0}\left[\frac{1}{k_{z,N}^3}\left(z - \sum_{n=0}^{N-1} d_n\right) + \sum_{n=0}^{N-1} \frac{d_n}{k_{z,n}^3}\right] \qquad \text{(C-1b)}$$

$$\frac{\partial^2 \varphi_m\left(k_x, k_y\right)}{\partial k_y^2}\bigg|_{\left(k_x''/2,\, k_y''/2\right)} = -\frac{4k_N^2 - k_x''^2}{2k_0 k_{z,N}^3}\left(z - \sum_{n=0}^{N-1} d_n\right) - \sum_{n=0}^{N-1} \frac{4k_n^2 - k_x''^2}{2k_0 k_{z,n}^3}\, d_n \qquad \text{(C-1c)}$$

Substituting (C-1a)-(C-1c) into (2.56b) with some rearrangement, one can obtain

$$\ddot{\varphi}_n\left(k_x''/2, k_y''/2\right) = \frac{k_N^2 k_{z,N}''^2}{k_0^2 k_{z,N}^6}\left(z - \sum_{n=0}^{N-1} d_n\right)^2 + \sum_{m=0}^{N-1} \frac{k_m^2 k_{z,m}''^2}{k_0^2 k_{z,m}^6}\, d_m^2 + \frac{1}{2k_0^2}\sum_{s=0}^{N-1}\sum_{m=0}^{N-1} \frac{8k_s^2 k_m^2 - \left(k_s^2 + k_m^2\right)\left(k_x''^2 + k_y''^2\right)}{k_{z,s}^3 k_{z,m}^3}\, d_s d_m + \frac{1}{k_0^2}\sum_{s=0}^{N-1} \frac{8k_s^2 k_N^2 - \left(k_s^2 + k_N^2\right)\left(k_x''^2 + k_y''^2\right)}{k_{z,s}^3 k_{z,N}^3}\, d_s \left(z - \sum_{n=0}^{N-1} d_n\right) \quad (s \neq m) \qquad \text{(C-2)}$$

With the dispersion relation (2.59), the third and fourth mutual interaction terms in (C-2) can be simplified, resulting in (2.60).
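The single-segment (diagonal, $s = m$) terms of (C-2) rest on the algebraic identity $(4k^2 - k_y''^2)(4k^2 - k_x''^2) - (k_x'' k_y'')^2 = 4k^2(4k^2 - k_x''^2 - k_y''^2) = 4k^2 k_z''^2$. The following is an illustrative numerical spot-check of that identity only, with arbitrary sample values:

```python
import numpy as np

# Spot-check of the identity behind the diagonal terms of (C-2):
# (4k^2 - ky''^2)(4k^2 - kx''^2) - (kx'' ky'')^2 = 4 k^2 (4k^2 - kx''^2 - ky''^2)
rng = np.random.default_rng(0)
k, kxp, kyp = rng.uniform(1.0, 5.0, 3)      # arbitrary positive samples

lhs = (4 * k**2 - kyp**2) * (4 * k**2 - kxp**2) - (kxp * kyp)**2
rhs = 4 * k**2 * (4 * k**2 - kxp**2 - kyp**2)
print(lhs, rhs)                             # equal up to rounding
```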
Appendix D: Signature of the Second Partial Derivative Matrix

To determine the phase compensation term in (2.56a), the signature of the second partial derivative matrix should be investigated, which is the difference between the number of positive and negative eigenvalues of the matrix at the stationary phase point. The second partial derivative matrix is denoted as

$$\bar{\bar{\varphi}}\left(k_x''/2, k_y''/2\right) = \begin{bmatrix} \dfrac{\partial^2 \varphi}{\partial k_x^2} & \dfrac{\partial^2 \varphi}{\partial k_x \partial k_y} \\ \dfrac{\partial^2 \varphi}{\partial k_x \partial k_y} & \dfrac{\partial^2 \varphi}{\partial k_y^2} \end{bmatrix}_{\left(k_x''/2,\, k_y''/2\right)} \qquad \text{(D-1)}$$

The two eigenvalues $\lambda_1$ and $\lambda_2$ of (D-1) can be obtained by solving the following quadratic equation,

$$\lambda^2 + \frac{1}{2k_0}\left[\frac{4k_N^2 + k_{z,N}''^2}{k_{z,N}^3}\left(z - \sum_{n=0}^{N-1} d_n\right) + \sum_{n=0}^{N-1} \frac{4k_n^2 + k_{z,n}''^2}{k_{z,n}^3}\, d_n\right]\lambda + \ddot{\varphi}_n\left(k_x''/2, k_y''/2\right) = 0 \qquad \text{(D-2)}$$

Based on Vieta's formulas for the quadratic equation, one can relate the coefficients to the sum and product of the roots,

$$\lambda_1 + \lambda_2 = -\frac{1}{2k_0}\left[\frac{4k_N^2 + k_{z,N}''^2}{k_{z,N}^3}\left(z - \sum_{n=0}^{N-1} d_n\right) + \sum_{n=0}^{N-1} \frac{4k_n^2 + k_{z,n}''^2}{k_{z,n}^3}\, d_n\right] < 0 \qquad \text{(D-3a)}$$

$$\lambda_1 \lambda_2 = \ddot{\varphi}_n\left(k_x''/2, k_y''/2\right) > 0 \qquad \text{(D-3b)}$$

From (D-3a) and (D-3b), both eigenvalues $\lambda_1$ and $\lambda_2$ must be negative. Thus, the signature of the matrix is $\mathrm{sig}\,\bar{\bar{\varphi}}\left(k_x''/2, k_y''/2\right) = -2$, and the phase compensation term in (2.61) is obtained, which is $-\pi/2$.
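The sign argument above can be illustrated numerically: any symmetric 2×2 matrix with negative trace and positive determinant has two negative eigenvalues, and hence signature −2. The matrix below is an arbitrary numerical stand-in for (D-1), not a Hessian computed from the dissertation's phase function.

```python
import numpy as np

# Arbitrary symmetric stand-in for (D-1): negative trace and positive
# determinant imply both eigenvalues are negative, so the signature is -2.
H = np.array([[-3.0, -1.0],
              [-1.0, -2.0]])

lam = np.linalg.eigvalsh(H)                       # eigenvalues, ascending
assert np.trace(H) < 0 and np.linalg.det(H) > 0   # Vieta: sum < 0, product > 0
signature = int(np.sum(lam > 0)) - int(np.sum(lam < 0))
print(lam, signature)                             # both eigenvalues negative; -2
```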
Appendix E: 3D Inversion Scheme

To obtain the spatial reflectivity function from the modified spectral reflectivity function, a matched filter can be applied with the testing point $(x', y', z')$ to (2.66), interchanging the order of integration,

$$\iiint \tilde{F}\left(k_x'', k_y'', k_z''\right) e^{\,j\left(k_x'' x' + k_y'' y' + k_z'' z'\right)}\, dk_x''\, dk_y''\, dk_z'' = \iiint\!\!\iiint f(x,y,z)\, \tilde{A}\left(k_z'', x, y, z\right) e^{-j k_x''\left(x - x'\right) - j k_y''\left(y - y'\right) - j k_z''\left(z - z'\right)}\, dk_x''\, dk_y''\, dk_z''\, dx\, dy\, dz \qquad \text{(E-1)}$$

The inner three-fold integral over the volume on the RHS is nonzero only when the testing point matches the pixel point, $x = x'$, $y = y'$ and $z = z'$. Then, (E-1) is simplified to

$$\iiint \tilde{F}\left(k_x'', k_y'', k_z''\right) e^{\,j\left(k_x'' x' + k_y'' y' + k_z'' z'\right)}\, dk_x''\, dk_y''\, dk_z'' = f(x', y', z') \iiint \tilde{A}\left(k_z'', x', y', z'\right) dk_x''\, dk_y''\, dk_z'' \qquad \text{(E-2)}$$

or

$$f(x,y,z) = \left[\iiint \tilde{A}\left(k_z'', x, y, z\right) dk_x''\, dk_y''\, dk_z''\right]^{-1} \iiint \tilde{F}\left(k_x'', k_y'', k_z''\right) e^{\,j\left(k_x'' x + k_y'' y + k_z'' z\right)}\, dk_x''\, dk_y''\, dk_z'' \qquad \text{(E-3)}$$

The next step is to map $k_z''$ back to the $\omega$ domain with the dispersion relation $k_z'' = \sqrt{4k^2 - (k_x'')^2 - (k_y'')^2}$, where $k = \omega\sqrt{\varepsilon_{r,n}}/c$, yielding

$$dk_z'' = \frac{4k}{k_z''}\, dk \qquad \text{(E-4)}$$

and

$$f(x,y,z) = \frac{\displaystyle\iiint \tilde{F}\left(k_x'', k_y'', k\right) e^{\,j\left(k_x'' x + k_y'' y + k_z'' z\right)} \frac{4k}{k_z''}\, dk_x''\, dk_y''\, dk}{\displaystyle\iiint \tilde{A}\left(k_z'', x, y, z\right) \frac{4k}{k_z''}\, dk_x''\, dk_y''\, dk} \qquad \text{(E-5)}$$
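The change of variables in (E-4) (and its 2D counterpart (B-4)) can be verified numerically: differentiating $k_z'' = \sqrt{4k^2 - (k_x'')^2 - (k_y'')^2}$ with respect to $k$ gives the Jacobian $4k/k_z''$. The sketch below checks this by finite differences; the spectral samples and wavenumber sweep are arbitrary choices.

```python
import numpy as np

# Check of the dispersion-relation Jacobian (E-4): with
# kz'' = sqrt(4k^2 - kx''^2 - ky''^2), one has d(kz'')/dk = 4k/kz''.
kxp, kyp = 30.0, 20.0                  # fixed spectral samples, rad/m (arbitrary)
k = np.linspace(25.0, 60.0, 200)       # one-way wavenumber sweep, rad/m
kzp = np.sqrt(4 * k**2 - kxp**2 - kyp**2)

jac_fd = np.gradient(kzp, k)           # finite-difference d(kz'')/dk
jac_an = 4 * k / kzp                   # analytic Jacobian 4k/kz''
err = np.max(np.abs(jac_fd[1:-1] - jac_an[1:-1]) / jac_an[1:-1])
print(err)                             # small interior error
```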