The Defense Advanced Research Projects Agency (DARPA) Video Verification of Identity (VIVID) program has
as its goal the development of a video tracker that substantially outperforms existing systems. This goal is pursued through a philosophy of on-the-fly
target modeling and the use of three distinct modules: a multiple-target tracker, a confirmatory identification
module, and a collateral damage avoidance/moving target detection module. Over the two years of VIVID
Phase I, progress appraisal of the ATR-like confirmatory identification module was provided to DARPA by the
Air Force Research Laboratory Comprehensive Performance Assessment of Sensor Exploitation (COMPASE)
Center through regular evaluations. This document begins with an overview of the VIVID system and its
approach to solving the multiple-target tracking problem. The data collected under VIVID auspices and their use in
the evaluation are then surveyed, along with the operating conditions relevant to confirmatory
identification. Finally, the evaluation structure is presented in detail, including metrics, experiment design,
experiment construction techniques, and support tools.
KEYWORDS: Automatic target recognition, Performance modeling, Data modeling, Data fusion, Sensors, Mahalanobis distance, Machine learning, Detection and tracking algorithms, Systems modeling, Analytical research
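The evaluation metrics themselves are detailed in the body of the paper; as a simple illustration of the kind of scoring applied to an ATR-like confirmatory identification module, the sketch below tallies a confusion matrix and a probability of correct identification (Pid) from truth/declaration pairs. The class names, data, and function are hypothetical and are not the COMPASE Center's actual evaluation code.

```python
import numpy as np

def confusion_and_pid(truth, declared, labels):
    """Tally a confusion matrix from (truth, declared) label pairs and
    return it with the overall probability of correct identification (Pid).
    Hypothetical scoring sketch, not the actual VIVID evaluation code."""
    index = {label: i for i, label in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for t, d in zip(truth, declared):
        cm[index[t], index[d]] += 1     # rows = truth, columns = declaration
    pid = np.trace(cm) / cm.sum() if cm.sum() else 0.0
    return cm, pid

# Made-up example with three target classes
labels = ["T72", "BMP2", "BTR70"]
truth = ["T72", "T72", "BMP2", "BTR70", "BMP2"]
declared = ["T72", "BMP2", "BMP2", "BTR70", "BMP2"]
cm, pid = confusion_and_pid(truth, declared, labels)
print(cm)
print(f"Pid = {pid:.2f}")   # 4 of 5 correct -> 0.80
```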
The US Air Force Research Laboratory (AFRL) is exploring the decision-level fusion (DLF) trade space in the Fusion
for Identifying Targets Experiment (FITE) program. FITE is surveying past DLF approaches and experiments. This
paper reports preliminary findings from that survey, which will ultimately place the various studies in a common
framework, identify trends, and make recommendations on the additional studies that would best inform the trade space
of how to fuse ATR products and how ATR products should be improved to support fusion. We tentatively conclude
that DLF is better at rejecting incorrect decisions than at adding correct decisions, a larger ATR library is better (for a
constant Pid), a better source ATR has many mild attractors rather than a few large attractors, and fusion will be more
beneficial when there are no dominant sources. Dependencies between the sources diminish performance, even when
those dependencies are well modeled. However, poor models of dependencies do not significantly further diminish
performance. Distributed fusion is not driven by performance, so centralized fusion is an appropriate focus for FITE.
For multi-ATR fusion, the degree of improvement may depend on the participating ATRs having different operating
condition (OC) sensitivities. The machine learning literature is an especially rich source on the impact of imperfect
(learned, in that literature's case) models. Finally, and perhaps most significantly, even with perfect models and independence, the DLF gain may be
quite modest and it may be fairly easy to check whether the best possible performance is good enough for a given
application.
KEYWORDS: Sensors, Automatic target recognition, Data fusion, Data modeling, Statistical modeling, Detection and tracking algorithms, Performance modeling, Systems modeling, Information fusion, Analytical research
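As a concrete illustration of the decision-level fusion considered in this survey, the sketch below combines the declarations of two hypothetical ATR sources under the independence assumption discussed above, using made-up confusion matrices as source models. The fused posterior is a normalized product of per-source likelihoods, which is precisely where modeled (or mis-modeled) dependencies between sources would enter.

```python
import numpy as np

# Hypothetical source models: rows = true class, columns = declared class,
# entries = P(declaration | true class).
source_a = np.array([[0.80, 0.15, 0.05],
                     [0.10, 0.80, 0.10],
                     [0.10, 0.20, 0.70]])
source_b = np.array([[0.70, 0.20, 0.10],
                     [0.15, 0.75, 0.10],
                     [0.05, 0.15, 0.80]])
prior = np.full(3, 1.0 / 3)          # uniform prior over the target library

def fuse_declarations(decl_a, decl_b):
    """Independence-assuming (naive Bayes) decision-level fusion of two
    declared class indices; returns the fused posterior over true classes."""
    likelihood = source_a[:, decl_a] * source_b[:, decl_b]
    posterior = prior * likelihood
    return posterior / posterior.sum()

print(fuse_declarations(0, 0))   # agreement sharpens the posterior on class 0
print(fuse_declarations(0, 2))   # disagreement is tempered rather than reinforced
```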
Decision-level fusion is an appealing extension to automatic/assisted target recognition (ATR) as it is a low-bandwidth
technique, bolstered by a strong theoretical foundation, that requires no modification of the source
algorithms. Despite the relative simplicity of decision-level fusion, there are many options for fusion application
and fusion algorithm specifications. This paper describes a tool that allows trade studies and optimizations
across these many options, by feeding an actual fusion algorithm via models of the system environment. Models
and fusion algorithms can be specified and then exercised many times, with accumulated results used to compute
performance metrics such as probability of correct identification. Performance differences between the best of
the contributing sources and the fused result constitute examples of "gain." The tool, constructed as part of the
Fusion for Identifying Targets Experiment (FITE) within the Air Force Research Laboratory (AFRL) Sensors
Directorate ATR Thrust, finds its main use in examining the relationships among conditions affecting the target,
prior information, fusion algorithm complexity, and fusion gain. ATR, as an unsolved problem, poses the main
challenges to fusion: the high cost and relative scarcity of training data, variability across applications, the
inability to produce truly random samples, and sensitivity to context. This paper summarizes the mathematics
underlying decision-level fusion in the ATR domain and describes a MATLAB-based architecture for exploring
the trade space thus defined. Specific dimensions within this trade space are delineated, providing the raw
material necessary to define experiments suitable for multi-look and multi-sensor ATR systems.
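The tool described here is MATLAB-based; the Python sketch below imitates its core loop under simplifying assumptions (two sources modeled by confusion matrices, conditionally independent declarations, a uniform prior) to show how fusion gain can be estimated: draw a true class, draw each source's declaration, fuse, and compare the fused probability of correct identification with that of the best contributing source. The models and numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source models: P(declared class | true class).
sources = [np.array([[0.80, 0.15, 0.05],
                     [0.10, 0.80, 0.10],
                     [0.10, 0.20, 0.70]]),
           np.array([[0.70, 0.20, 0.10],
                     [0.15, 0.75, 0.10],
                     [0.05, 0.15, 0.80]])]
prior = np.full(3, 1.0 / 3)

def run_trials(n_trials=20000):
    """Monte Carlo estimate of per-source and fused probability of correct
    identification (Pid); gain is fused Pid minus the best source Pid."""
    correct_single = np.zeros(len(sources))
    correct_fused = 0
    for _ in range(n_trials):
        truth = rng.choice(3, p=prior)                        # draw a true class
        decls = [rng.choice(3, p=s[truth]) for s in sources]  # per-source declarations
        correct_single += [d == truth for d in decls]
        # Independence-assuming decision-level fusion of the declarations
        post = prior * np.prod([s[:, d] for s, d in zip(sources, decls)], axis=0)
        correct_fused += (post.argmax() == truth)
    pid_single = correct_single / n_trials
    pid_fused = correct_fused / n_trials
    return pid_fused, pid_single, pid_fused - pid_single.max()

pid_fused, pid_single, gain = run_trials()
print(f"source Pids = {pid_single}, fused Pid = {pid_fused:.3f}, gain = {gain:.3f}")
```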
Efficient MPEG encoding begins with the selection of each frame's coding type. Depending on the constraints of a given application, the options available for individual frame types differ. These options are further limited by the encoding environment, in particular the frame buffer size and computational power available. In this work we assume an off-line encoding environment, with its implicit unlimited frame buffer and arbitrarily fast computations. Such an environment enables us to examine ideal frame types for a variety of situations, with changing emphasis between encoded bits and motion estimation error. We find that the optimal frame types vary dramatically as focus shifts from bit allocation to motion estimation error, even for "natural" video sequences. When additional video effects such as fades and scene cuts are introduced, the frame types become even more interesting, deviating significantly from the M and N fixed-spacing scheme used by many real-time MPEG encoders.
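For reference, the fixed M and N spacing mentioned above corresponds to a repeating group-of-pictures (GOP) pattern; the short sketch below generates that pattern under the usual interpretation (N frames per GOP, anchor I or P frames every M frames, B-frames in between), which is the baseline the adaptive frame-type choices in this work depart from.

```python
def fixed_gop_pattern(num_frames, n=15, m=3):
    """Frame types (display order) for a fixed-spacing MPEG GOP:
    n = GOP length (I-frame period), m = distance between anchor
    (I or P) frames, so each anchor is followed by m-1 B-frames."""
    types = []
    for i in range(num_frames):
        pos = i % n
        if pos == 0:
            types.append("I")
        elif pos % m == 0:
            types.append("P")
        else:
            types.append("B")
    return "".join(types)

print(fixed_gop_pattern(30))   # IBBPBBPBBPBBPBB repeated for n=15, m=3
```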