Synthetic Aperture Radar (SAR) images are commonly used in military applications for automatic target recognition (ATR). Machine learning (ML) methods such as Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs) are frequently applied to identify ground-based objects, including battle tanks, personnel carriers, and missile launchers. Determining the vehicle class, such as the BRDM2, BMP2, BTR60, or BTR70, is crucial, as it can help establish whether the target is an ally or an enemy. While the ML algorithm reports the recognized target, the final decision rests with the commanding officers, so providing detailed information alongside the identified target can significantly influence their actions. This information includes the SAR image features that contributed to the classification, the classification confidence, and the probability of the identified object belonging to a different object type or class. We propose a GNN-based ATR framework that outputs both the final classified class and the detailed information above. To our knowledge, this is the first study to provide such a detailed analysis of the classification result, making final decisions more straightforward. Moreover, our GNN framework achieves an overall accuracy of 99.2% on the MSTAR dataset, improving over previous state-of-the-art GNN methods.
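The per-class detail described above (top class, its confidence, and the probability of the object being a different class) can be derived from a classifier's output logits via a softmax. A minimal sketch follows; the class names are MSTAR vehicle classes, but the logit values and the `report` helper are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

CLASSES = ["BRDM2", "BMP2", "BTR60", "BTR70"]  # subset of MSTAR vehicle classes

def softmax(logits):
    # numerically stable softmax: subtract the max before exponentiating
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def report(logits, classes=CLASSES):
    """Return the top class, its confidence, and the probabilities of
    the identified object being each alternative class."""
    p = softmax(np.asarray(logits, dtype=float))
    order = np.argsort(p)[::-1]           # classes sorted by probability
    top = order[0]
    return {
        "class": classes[top],
        "confidence": float(p[top]),
        "alternatives": {classes[i]: float(p[i]) for i in order[1:]},
    }

# hypothetical logits from a GNN classification head
r = report([4.1, 1.2, 0.3, -0.5])
```

A decision-maker would then see not only the predicted class but also how much probability mass falls on the competing classes, which is the kind of detail the abstract argues should accompany the identification.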
Synthetic Aperture Radar (SAR) automatic target recognition (ATR) is a key technique for SAR image analysis in military activities, and accurate SAR ATR can support command and decision-making. In this work, we propose a novel SAR ATR framework with a human in the loop. The framework consists of a Reinforcement Learning (RL) agent followed by a Graph Neural Network (GNN)-based classifier. The RL agent learns from human feedback to identify the region of target (RoT) in the SAR image; the RoT is then used to construct the input graph for the GNN classifier to perform target classification. By learning from human feedback, the RL agent can focus on the RoT and filter out irrelevant and distracting signals in the input SAR images. We evaluate the proposed framework on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset. The results show that incorporating human feedback improves classification accuracy, and visualizations confirm that the RL agent effectively suppresses irrelevant SAR signals after learning from human feedback.
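The core idea of the RL agent learning a region of target from human feedback can be illustrated with a toy bandit formulation: candidate regions are actions, and the human's approval is the reward. This is a deliberately simplified sketch, not the paper's agent; the region count, episode budget, and reward function are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def human_feedback(region, true_region):
    # toy reward model: the human approves (reward 1) only when the
    # agent's chosen region matches the human-marked RoT
    return 1.0 if region == true_region else 0.0

def train_agent(n_regions=4, true_region=2, episodes=500, eps=0.1):
    """Epsilon-greedy bandit: learn which candidate region is the RoT
    from binary human feedback."""
    q = np.zeros(n_regions)       # value estimate per candidate region
    counts = np.zeros(n_regions)  # times each region was chosen
    for _ in range(episodes):
        if rng.random() < eps:                # explore
            a = int(rng.integers(n_regions))
        else:                                 # exploit current estimate
            a = int(np.argmax(q))
        r = human_feedback(a, true_region)
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]        # incremental mean update
    return int(np.argmax(q))

learned = train_agent()
```

In the actual framework the action space would be regions of a SAR image and the classifier would operate on a graph built from the learned RoT; the sketch only shows how repeated human feedback steers the agent toward the relevant region and away from distracting background returns.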