Physics of Automatic Target Recognition
Firooz Sadjadi, Bahram Javidi

This book examines the roles of sensors, physics-based attributes, classification methods, and performance evaluation in automatic target recognition. It details target classification across scales, from small mine-like objects to large tactical vehicles. Also explored are invariants of sensor and transmission transformations, which are crucial to the development of low-latency, computationally manageable automatic target recognition systems.
Contents
Theory of Invariant Algebra and Its Use | 25
Automatic Recognition of Underground Targets | 41
A Weighted Zak Transform: Its Properties | 57
Using Polarization Features of Visible Light | 73
The Physics of Polarization-Sensitive Optical Imaging | 91
Dispersion: Its Effects and Compensation | 105
Multisensor Target Recognition in Image Response | 126
Popular passages
Page 3 - The idea in kernel-based techniques is to obtain a nonlinear version of an algorithm defined in the input space by implicitly redefining it in the feature space and then converting it in terms of dot products. The kernel trick is then used to implicitly compute the dot products in F without mapping the input vectors into F; therefore, in the kernel methods, the mapping Φ does not need to be identified. The kernel representation for the dot products in F is expressed as k(x_i, x_j) = ⟨Φ(x_i), Φ(x_j)⟩, (2) where k is a kernel...
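The excerpt above describes the kernel trick: evaluating dot products in a feature space F without ever constructing the mapping Φ explicitly. As a minimal illustrative sketch (not from the book), the homogeneous degree-2 polynomial kernel k(x, y) = (x·y)² on 2-D inputs can be checked against its explicit feature map Φ(x) = (x₁², √2·x₁x₂, x₂²); the function names `phi` and `poly_kernel` are ours:

```python
import numpy as np

def phi(x):
    """Explicit quadratic feature map for a 2-D vector:
    phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly_kernel(x, y):
    """Homogeneous polynomial kernel of degree 2: k(x, y) = (x . y)^2."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

explicit = np.dot(phi(x), phi(y))  # dot product computed in feature space F
implicit = poly_kernel(x, y)       # same value, without ever forming phi

print(np.isclose(explicit, implicit))  # True
```

Both routes give the same number (here 16), which is exactly why the mapping Φ "does not need to be identified": the kernel alone suffices for any algorithm rewritten in terms of dot products.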