A three-dimensional electromagnetic model (3-D EM model)–based scattering center matching method is developed for synthetic aperture radar (SAR) automatic target recognition (ATR). The 3-D EM model provides a concise and physically relevant description of the target's electromagnetic scattering behavior through its scattering centers, which makes it an ideal candidate for ATR. In our method, the scatterers of the 3-D EM model are projected onto the two-dimensional measurement plane to predict their locations and scattering intensities. The same information is then extracted for the scatterers in the measured data. A two-stage iterative operation matches the model-predicted scatterers to the scatterers extracted from the measured data by combining spatial and attribute information. From the matching of the two scatterer sets, a similarity measure between model and measured data is obtained and the recognition decision is made. Meanwhile, the target's configuration is inferred with the 3-D EM model serving as a reference. Finally, data simulated by electromagnetic computation verify the validity of the method.
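As a minimal sketch of the projection step described above, the snippet below maps 3-D scatterer positions onto a (range, cross-range) slant plane defined by a given azimuth and elevation; the function name, angle conventions, and coordinate frame are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def project_scatterers(positions, azimuth_deg, elevation_deg):
    """Project 3-D scattering-center positions onto the 2-D slant
    (range, cross-range) plane for a given radar aspect.

    positions : (N, 3) array of scatterer coordinates in the target frame.
    Returns an (N, 2) array of (range, cross-range) coordinates.
    """
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    # Unit vector along the radar line of sight (range direction).
    r_hat = np.array([np.cos(el) * np.cos(az),
                      np.cos(el) * np.sin(az),
                      np.sin(el)])
    # Cross-range direction lies in the slant plane, orthogonal to r_hat.
    c_hat = np.array([-np.sin(az), np.cos(az), 0.0])
    return np.column_stack((positions @ r_hat, positions @ c_hat))
```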
This paper proposes a robust method for matching attributed scattering centers (ASCs) with application to synthetic aperture radar (SAR) automatic target recognition (ATR). For the test image to be classified, ASCs are extracted and matched with those predicted by templates. First, the Hungarian algorithm is employed to obtain an initial match between the two ASC sets. Then, a precise matching is carried out through a threshold method. Point similarity and structure similarity are calculated and fused to evaluate the overall similarity of the two ASC sets based on the Dempster–Shafer theory of evidence. Finally, the target type is determined by such similarities between the test image and the various target types. Experiments on the moving and stationary target acquisition and recognition (MSTAR) data verify the validity of the proposed method.
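For illustration, the sketch below uses SciPy's linear_sum_assignment for the initial Hungarian assignment and a simple distance threshold for the precise stage; the plain Euclidean cost between ASC positions is an assumption of this sketch, since the paper combines point and structure information:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_asc_sets(test_asc, template_asc, threshold):
    """Initial one-to-one matching of two ASC sets by the Hungarian
    algorithm, followed by a simple threshold-based refinement.

    test_asc, template_asc : (N, 2) and (M, 2) arrays of ASC positions.
    threshold : maximum allowed distance for a matched pair to be kept.
    Returns a list of (test_index, template_index) pairs.
    """
    cost = cdist(test_asc, template_asc)       # pairwise distances
    rows, cols = linear_sum_assignment(cost)   # Hungarian assignment
    # Precise matching: discard pairs whose residual exceeds the threshold.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= threshold]
```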
KEYWORDS: 3D acquisition, Synthetic aperture radar, Scattering, 3D image processing, 3D modeling, Feature extraction, Data modeling, Data centers, Automatic target recognition, Error analysis
Additional information provided by three-dimensional (3-D) scattering centers (SCs) is useful in automatic target recognition (ATR). An approach is proposed for 3-D SC extraction from multiple-resolution synthetic aperture radar (SAR) measurements at arbitrary azimuths and elevations. This approach consists of a feature-level extraction and a signal-level optimization. In the feature-level extraction, two-dimensional (2-D) SCs are first extracted at each aspect, and 3-D SCs are then coarsely generated from these 2-D SCs by a clustering method. This clustering method contains a particular distance equation and a clustering strategy developed from basic properties of scattering physics and the geometric transformation between 3-D SCs and 2-D SCs. Exploiting the sparsity of SCs in the feature domain, the method efficiently extracts 3-D SCs. In the signal-level optimization, the 3-D SC parameters are directly re-estimated using the measurement data, which improves their precision and provides reliable reconstructions. Finally, experimental results on data generated by the GTD model and a high-frequency electromagnetic code demonstrate the effectiveness of the proposed approach. In addition, we apply our approach to multipass circular SAR, where the reconstructed 3-D SCs accurately depict the shape of the target.
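The paper's particular distance equation and clustering strategy are not reproduced here; the sketch below only illustrates the geometric relationship they exploit, estimating one 3-D SC position by least squares from the 2-D (range, cross-range) observations of a single cluster across aspects (function name and angle conventions are assumptions):

```python
import numpy as np

def triangulate_sc(aspects, observations):
    """Least-squares estimate of a 3-D scattering-center position from
    its 2-D (range, cross-range) observations at several aspects.

    aspects : list of (azimuth_deg, elevation_deg) pairs.
    observations : (K, 2) array of (range, cross-range) measurements,
        one row per aspect, assumed to belong to the same 3-D SC.
    """
    rows, rhs = [], []
    for (az_deg, el_deg), (rng, crng) in zip(aspects, observations):
        az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
        r_hat = np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])
        c_hat = np.array([-np.sin(az), np.cos(az), 0.0])
        # Each aspect contributes two linear constraints on the position.
        rows += [r_hat, c_hat]
        rhs += [rng, crng]
    position, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return position
```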
An image similarity measure based on mathematical morphology is proposed. The measure ignores the effect of slight distortion or noise on image similarity while retaining the influence of significant distortion or loss. Experimental results show that, while the similarity between matching images is not reduced, the measure lowers the similarity between non-matching images, thereby increasing the decision distance. This helps to improve the performance of image matching and recognition methods.
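A minimal sketch of a morphology-based tolerance to slight displacement is given below, assuming binary images and a square structuring element; this is an illustrative interpretation of such a measure, not the paper's exact definition:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def morphological_similarity(img_a, img_b, tolerance=1):
    """Similarity between two binary images that tolerates small
    displacements (within `tolerance` pixels) via morphological dilation,
    while genuine distortion or loss still lowers the score.
    """
    struct = np.ones((2 * tolerance + 1,) * 2, dtype=bool)
    a, b = img_a.astype(bool), img_b.astype(bool)
    # A pixel counts as matched if the other image has a set pixel
    # anywhere within the tolerance neighbourhood.
    hits_a = (a & binary_dilation(b, structure=struct)).sum()
    hits_b = (b & binary_dilation(a, structure=struct)).sum()
    return 0.5 * (hits_a / max(a.sum(), 1) + hits_b / max(b.sum(), 1))
```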
KEYWORDS: Line scan cameras, Cameras, Calibration, 3D modeling, Motion models, Line scan image sensors, 3D image processing, Optical engineering, Data modeling, Image processing
A direct linear transformation (DLT) model is derived to describe the scan imagery of a line scan camera undergoing uniform rectilinear motion. When more than five points on the scan image and their corresponding three-dimensional space points are substituted into the DLT model, the 11 coefficients are determined directly and linearly without any approximation. The 11 physically meaningful line scan camera parameters are then recovered from the 11 DLT coefficients through a group of analytical operations. The performance is tested and verified by both a simulated experiment and a demonstration with a real line scan camera.
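The sketch below shows the standard linear least-squares solution of the 11 DLT coefficients from at least six point correspondences; recovering the physical line scan camera parameters from these coefficients follows the paper's analytical operations and is not reproduced here (the function name is illustrative):

```python
import numpy as np

def solve_dlt(image_pts, space_pts):
    """Linear least-squares solution of the 11 DLT coefficients from
    at least six correspondences between image points (u, v) and
    3-D space points (X, Y, Z).
    """
    A, b = [], []
    for (u, v), (X, Y, Z) in zip(image_pts, space_pts):
        # u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
        # v = (L5 X + L6 Y + L7 Z + L8) / (L9 X + L10 Y + L11 Z + 1)
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b += [u, v]
    L, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return L  # the 11 DLT coefficients L1..L11
```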