In many situations, the difference between success and failure comes down to taking the right actions quickly. The myriad of electronic sensors available today can provide data quickly, but that flood of data can overload the operator; only a contextualized, centralized display of information and an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, the operator must be able to understand all of the data describing the surrounding environment. In this paper we present a novel approach to contextualizing multi-sensor data on a real-time, full-motion-video, 360-degree imaging display. The system described could function as a primary display system for command and control in security, military, and observation posts. It can process, and enable interactive control of, multiple other sensor systems, and it enhances the value of those sensors by overlaying their information on a panorama of the surroundings. It can also interface with other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).
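As a rough illustration of what such an overlay involves, the sketch below maps a sensor contact reported in azimuth and elevation to pixel coordinates on an equirectangular 360-degree panorama. The contact format, vertical field of view, and function names are assumptions made for this example; the paper does not specify the overlay geometry.

    # Minimal sketch: projecting a sensor contact (azimuth/elevation) onto an
    # equirectangular 360-degree panorama for overlay. The panorama geometry and
    # contact format below are illustrative assumptions, not the paper's design.

    from dataclasses import dataclass

    @dataclass
    class Contact:
        label: str            # e.g. "HFI detection", "RWS aim point"
        azimuth_deg: float    # 0..360, measured clockwise from platform heading
        elevation_deg: float  # negative below the horizon

    def contact_to_pixel(c, pano_w, pano_h, vfov_deg=40.0):
        """Map a contact's bearing to panorama pixel coordinates.

        Assumes the panorama spans a full 360 degrees horizontally and
        vfov_deg vertically, centered on the horizon.
        """
        x = int((c.azimuth_deg % 360.0) / 360.0 * pano_w)
        y = int((0.5 - c.elevation_deg / vfov_deg) * pano_h)
        return x, max(0, min(pano_h - 1, y))

    # Usage: place a hostile-fire symbol on a 7680x960 panorama strip.
    hfi = Contact("hostile fire", azimuth_deg=212.5, elevation_deg=-1.2)
    print(contact_to_pixel(hfi, pano_w=7680, pano_h=960))

A lookup of this kind lets every external sensor report, whatever its native format, be drawn at the correct bearing on the shared panoramic picture.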
Maintaining tactical superiority in a complex battlespace with asymmetric threats dictates the need for a real-time, high-throughput imaging system that provides the user with an intuitive and effective means of achieving imaging situational awareness quickly and continuously. As systems and sensors grow in both number and intricacy, situational awareness requires presenting large amounts of data to the operator in an easily understandable manner to avoid fatigue and overload. This paper discusses how we achieved real-time, 360° imaging awareness and presents a demonstrated prototype. The imaging system architecture is described, with emphasis on the dedicated high-speed image processor that provides real-time panoramic stitching, digital zoom, and image stabilization. The image processor is designed around a high-speed, state-of-the-art commercial-off-the-shelf (COTS) FPGA architecture that executes algorithms on these high-speed digital video streams with less than 100 ms of latency. Field test results are presented from recent at-sea and ground-vehicle data collections. Planned future expansions to the system, such as higher resolution, infrared capability, and platform integration efforts, are outlined. Other important lessons learned from this development are also presented throughout the paper.
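In the system described, the stitching runs in FPGA logic to stay within the latency budget; purely as a conceptual illustration of that operation, the sketch below feather-blends adjacent camera strips that are assumed to be pre-warped onto a common cylindrical surface with a fixed pixel overlap. The overlap width, frame sizes, and function names are assumptions for this example, not details from the prototype.

    # Conceptual sketch of the panoramic stitching step: feather-blend adjacent
    # camera strips already warped onto a common cylindrical surface with a
    # known pixel overlap. The real system performs this (plus stabilization
    # and zoom) in FPGA hardware; the fixed 64-pixel overlap and camera count
    # below are assumptions for illustration only.

    import numpy as np

    def stitch_pair(left, right, overlap):
        """Blend two HxWx3 frames whose last/first `overlap` columns coincide."""
        alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]   # 1 -> 0 across the seam
        seam = alpha * left[:, -overlap:].astype(np.float32) \
             + (1.0 - alpha) * right[:, :overlap].astype(np.float32)
        return np.concatenate(
            [left[:, :-overlap], seam.astype(left.dtype), right[:, overlap:]], axis=1)

    def stitch_ring(frames, overlap):
        """Chain-stitch an ordered ring of camera frames into one panorama strip."""
        pano = frames[0]
        for f in frames[1:]:
            pano = stitch_pair(pano, f, overlap)
        return pano

    # Usage: six synthetic 960x1280 cameras with a 64-pixel overlap.
    cams = [np.random.randint(0, 255, (960, 1280, 3), dtype=np.uint8) for _ in range(6)]
    print(stitch_ring(cams, overlap=64).shape)   # (960, 7360, 3)

In hardware, each of these per-column blend operations can be pipelined across the incoming video streams, which is what makes an end-to-end latency below 100 ms plausible.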