In this paper, we present the test results of a flight-grade 13 μm pixel pitch, 6000-element, 1.7 μm cutoff InGaAs linear array in a hermetic package, designed and developed for space remote sensing and imaging applications. The array consists of a single 13 μm pixel pitch 6000-element InGaAs linear array and a custom single digital 2.0 Me− capacitive trans-impedance amplifier (CTIA) readout integrated circuit (ROIC) with four gains. We have achieved greater than 80% peak quantum efficiency and a signal-to-noise ratio (SNR) higher than 1100 at 90% well fill. The focal plane array is housed in a vacuum hermetically sealed package with an anti-reflective (AR)-coated sapphire window and 29 pins, including four for low voltage differential signaling (LVDS) outputs.
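A quick shot-noise-limited check (back-of-the-envelope arithmetic, not from the paper itself, assuming the 2.0 Me− figure is the usable full-well capacity) shows that the reported SNR at 90% well fill is plausible:

```python
import math

# Assumed full-well capacity implied by the 2.0 Me- CTIA ROIC (hypothetical reading).
full_well_electrons = 2.0e6
well_fill = 0.90

signal = well_fill * full_well_electrons   # collected photoelectrons
shot_noise = math.sqrt(signal)             # Poisson (shot) noise in electrons
snr_shot_limited = signal / shot_noise     # upper bound, ignoring read and dark noise

print(f"signal = {signal:.2e} e-, shot-noise-limited SNR ~ {snr_shot_limited:.0f}")
# ~1342, so an SNR above 1100 once read noise is included is consistent.
```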
Increased vehicle autonomy, survivability, and utility can have an unprecedented impact on mission success and are among the most desirable improvements for modern autonomous vehicles. We propose a general architecture for intelligent resource allocation, reconfigurable control, and system restructuring for autonomous vehicles. The architecture is based on fault-tolerant control and lifetime prediction principles; it provides improved vehicle survivability, extended service intervals, and greater operational autonomy through a lower rate of time-critical mission failures and reduced dependence on supplies and maintenance. The architecture enables mission distribution, adaptation, and execution constrained by vehicle and payload faults and the desired lifetime. The proposed architecture allows missions to be managed more efficiently by weighing vehicle capabilities against mission objectives and replacing the vehicle only when necessary.
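As a loose illustration of the kind of decision the architecture describes, weighing vehicle capability and predicted lifetime against mission objectives before committing or replacing a vehicle, consider the sketch below; it is not the paper's algorithm, and every name and threshold in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    capability: dict            # e.g. {"endurance_h": 4.0, "sensor": "EO/IR"}
    predicted_lifetime_h: float # output of a lifetime-prediction model (assumed given)
    faults: set                 # currently active vehicle/payload faults

@dataclass
class Mission:
    required: dict              # e.g. {"endurance_h": 3.0, "sensor": "EO/IR"}
    duration_h: float
    critical_faults: set        # faults that make the mission infeasible

def can_assign(v: VehicleState, m: Mission, lifetime_margin: float = 1.5) -> bool:
    """Assign the mission only if capabilities, faults, and predicted lifetime allow it."""
    if v.faults & m.critical_faults:
        return False            # reconfigure or replace the vehicle instead
    if v.capability.get("endurance_h", 0.0) < m.required.get("endurance_h", 0.0):
        return False
    if v.capability.get("sensor") != m.required.get("sensor"):
        return False
    return v.predicted_lifetime_h >= lifetime_margin * m.duration_h

v = VehicleState({"endurance_h": 4.0, "sensor": "EO/IR"}, predicted_lifetime_h=10.0, faults=set())
m = Mission({"endurance_h": 3.0, "sensor": "EO/IR"}, duration_h=3.0, critical_faults={"gps_loss"})
print(can_assign(v, m))         # True: keep the vehicle rather than replacing it
```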
In airborne UAV operations, it is desirable to have multiple UAVs operate in a cooperative mode to maximize the use of resources, as well as to take advantage of the increased views from multiple positions separated in time and space. This capability requires image registration across potentially widely different views taken at different places or times. The resulting images also vary in their noise content, resolution, and projective deformation. In this paper, we characterize the performance enhancement between a single view from a single UAV and multiple views from multiple UAVs using ROC curves. In addition, we propose a computational framework to facilitate efficient and accurate registration across images with varied resolutions, noise levels, and projective deformation. We implement various stages of this framework and demonstrate promising results using low- to medium-resolution images with synthetically generated flight paths and camera poses.
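As a rough sketch of the kind of registration step such a framework requires (not the authors' implementation; it uses OpenCV's ORB features and RANSAC homography estimation as stand-ins):

```python
import cv2
import numpy as np

def register_views(img_a, img_b, min_matches=10):
    """Estimate a homography aligning img_b to img_a despite resolution/noise differences."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None                                   # registration failed

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatches caused by noise and projective deformation.
    H, inliers = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, ransacReprojThreshold=5.0)
    return H

# Usage: warped = cv2.warpPerspective(img_b, H, (img_a.shape[1], img_a.shape[0]))
```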
Unmanned Air Vehicles (UAVs) are expected to dramatically alter the way future battles are fought. Autonomous
collaborative operation of teams of UAVs is a key enabler for efficient and effective deployment of large numbers of
UAVs under the U. S. Army's vision for Force Transformation. Autonomous Collaborative Mission Systems (ACMS)
is an extensible architecture and collaborative behavior planning approach to achieve multi-UAV autonomous
collaborative capability. Under this architecture, a rich set of autonomous collaborative behaviors can be developed to
accomplish a wide range of missions. In this article, we present our simulation results in applying various autonomous
collaborative behaviors developed in the ACMS to an integrated convoy protection scenario using a heterogeneous team
of UAVs.
UAVs are critical to the U. S. Army's Force Transformation. Large numbers of UAVs will be employed per Future
Combat System (FCS) Unit of Action (UoA). To relieve the burden of controlling and coordinating multiple UAVs in a
given UoA, UAVs must operate autonomously and collaboratively while engaging in reconnaissance, surveillance, and target acquisition (RSTA) and other missions.
Rockwell Scientific is developing Autonomous Collaborative Mission Systems (ACMS), an extensible architecture and
behavior planning/collaboration approach, to enable groups of UAVs to operate autonomously in a collaborative
environment. The architecture is modular, and the modules may be run in different locations/platforms to accommodate
the constraints of available hardware, processing resources and mission needs. The modules and uniform interfaces
provide a consistent and platform-independent baseline mission collaboration mechanism and signaling protocol across
different platforms. Further, the modular design allows for the flexible and convenient addition of new autonomous
collaborative behaviors to the ACMS. In this article, we first discuss our observations in implementing autonomous
collaborative behaviors in general and under ACMS. Second, we present the results of our implementation of two such
behaviors in the ACMS as examples.
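A minimal sketch of what a uniform interface shared across ACMS-style modules could look like (purely illustrative; the module names follow the abstracts, the methods and message format are assumptions):

```python
from abc import ABC, abstractmethod

class ACMSModule(ABC):
    """Hypothetical uniform interface shared by all modules, wherever they are hosted."""

    @abstractmethod
    def handle_message(self, msg: dict) -> dict:
        """Process one platform-independent collaboration/signaling message."""

    @abstractmethod
    def step(self, dt: float) -> None:
        """Advance the module's internal state by dt seconds."""

class EntityExecutionModule(ACMSModule):
    def handle_message(self, msg: dict) -> dict:
        if msg.get("type") == "task":
            return {"type": "ack", "task_id": msg["task_id"]}
        return {"type": "ignored"}

    def step(self, dt: float) -> None:
        pass  # monitor task execution, platform navigation, and sensor control here
```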
UAVs are a key element of the Army’s vision for Force Transformation, and are expected to be employed in large numbers per FCS Unit of Action (UoA). This necessitates a multi-UAV level of autonomous collaborative behavior capability that meets the RSTA and other mission needs of FCS UoAs. Autonomous Collaborative Mission Systems (ACMS) is a scalable architecture and behavior planning/collaboration approach to achieve this level of capability. The architecture is modular, and the modules may be run in different locations/platforms to accommodate the constraints of available hardware, processing resources and mission needs. The Mission Management Module determines the role of member autonomous entities by employing collaboration mechanisms (e.g., market-based, etc.); the individual Entity Management Modules work with the Mission Manager in determining the role and task of the entity; the individual Entity Execution Modules monitor task execution, platform navigation and sensor control; and the World Model Module hosts local and global versions of the environment and the Common Operating Picture (COP). The modules and uniform interfaces provide a consistent and platform-independent baseline mission collaboration mechanism and signaling protocol across different platforms. Further, the modular design allows flexible and convenient addition of new autonomous collaborative behaviors to the ACMS through: adding new behavioral templates in the Mission Planner component, adding new components in appropriate ACMS modules to provide new mission-specific functionality, adding or modifying constraints or parameters of the existing components, or any combination of these. In this report, we describe the ACMS architecture, its main features, current development status, and future plans for simulations.
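Since the abstract mentions market-based collaboration mechanisms, here is a loose, hypothetical sketch of how a mission manager might auction a task among member entities (a single-round lowest-bid auction; not the ACMS implementation, and all function and field names are invented):

```python
def auction_task(task, entities):
    """Assign `task` to the entity that bids the lowest cost; returns (winner, bid)."""
    bids = {e["id"]: e["bid_fn"](task) for e in entities}     # each entity prices the task
    feasible = {eid: cost for eid, cost in bids.items() if cost is not None}
    if not feasible:
        return None, None                                     # nobody can take the task
    winner = min(feasible, key=feasible.get)
    return winner, feasible[winner]

# Hypothetical bid: cost grows with distance to the task; None means infeasible.
def make_bid_fn(position):
    def bid(task):
        dx, dy = task["x"] - position[0], task["y"] - position[1]
        dist = (dx * dx + dy * dy) ** 0.5
        return dist if dist < 50.0 else None
    return bid

entities = [{"id": "uav1", "bid_fn": make_bid_fn((0, 0))},
            {"id": "uav2", "bid_fn": make_bid_fn((10, 10))}]
print(auction_task({"x": 12, "y": 9}, entities))   # ('uav2', ...)
```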
We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
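A minimal sketch of the client side of such a socket-based modality server (illustrative only; the line-delimited JSON message format, port, and method names are assumptions, not the paper's protocol):

```python
import json
import socket

class ModalityClient:
    """Thin high-level wrapper that hides the raw socket protocol from the application."""

    def __init__(self, host="localhost", port=9000):
        self.sock = socket.create_connection((host, port))
        self.buf = b""

    def request(self, payload: dict) -> dict:
        self.sock.sendall((json.dumps(payload) + "\n").encode())  # one JSON message per line
        while b"\n" not in self.buf:
            self.buf += self.sock.recv(4096)
        line, self.buf = self.buf.split(b"\n", 1)
        return json.loads(line)

# e.g. hypothesis = ModalityClient(port=9001).request({"cmd": "get_hypothesis"})
# Swapping the recognizer behind the server does not change this client code.
```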
Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that supply vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed in bad-weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions which would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUDs). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images, which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used to obtain correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual cues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc-second digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (at ground level), the system has been implemented on a wearable computer.
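As a rough sketch of the registration step described above, projecting a geo-referenced point into the video overlay from the GPS position and sensed heading; the pinhole camera model, the level-camera simplification (zero pitch/roll), and the local ENU coordinates are simplifying assumptions, not the demonstrator's actual code:

```python
import numpy as np

def project_point(point_enu, cam_enu, heading_rad, f_px, cx, cy):
    """Project a geo-referenced point into overlay pixels for a level camera facing `heading`.

    Assumptions: pinhole camera, zero pitch/roll, local ENU coordinates in metres.
    """
    d = np.asarray(point_enu, float) - np.asarray(cam_enu, float)   # camera -> point
    # Camera frame: z forward (along heading), x right, y down.
    fwd = np.array([np.sin(heading_rad), np.cos(heading_rad), 0.0])
    right = np.array([np.cos(heading_rad), -np.sin(heading_rad), 0.0])
    down = np.array([0.0, 0.0, -1.0])
    z, x, y = d @ fwd, d @ right, d @ down
    if z <= 0:
        return None                      # behind the camera: nothing to draw
    u = cx + f_px * x / z                # standard pinhole projection
    v = cy + f_px * y / z
    return u, v

# e.g. a runway threshold 1 km north and 30 m below, camera heading due north:
print(project_point((0, 1000, -30), (0, 0, 0), 0.0, f_px=800, cx=320, cy=240))
```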
Qualitative vision may offer an important pathway to solving several computer vision problems. Progress in qualitative vision has been very limited due to the difficulties in modeling and analyzing qualitativeness. In this paper, we consider the issue of representing shape in the qualitative sense. A robust representation is important to enable the fusion of qualitative information obtained from different sources. We begin with the simple scheme of storing relative positions in space. This representation is compact and can be updated easily, and probabilistic, relaxation-based schemes for fusion are possible. However, we show that this representation is not unique. In particular, we show that two objects with different qualitative shapes could have the same representation. We indicate how the representation can be augmented to overcome this difficulty, and point out the need to identify minimum information requirements for representation and other tasks.
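A toy sketch of the "relative positions" scheme the abstract starts from (purely illustrative; the sign-based encoding below is a generic qualitative relation, not necessarily the one used in the paper). It is compact and easy to update, but two metrically different shapes can share the same code, a weaker cousin of the non-uniqueness the paper establishes for qualitative shape:

```python
from itertools import combinations

def sign(x, eps=1e-9):
    return 0 if abs(x) < eps else (1 if x > 0 else -1)

def qualitative_relpos(points):
    """Map each point pair to the signs of (dx, dy): a compact relative-position code."""
    return {(i, j): (sign(points[j][0] - points[i][0]),
                     sign(points[j][1] - points[i][1]))
            for i, j in combinations(range(len(points)), 2)}

# Two metrically different triangles collapse to the same qualitative code:
shape_a = [(0, 0), (1, 0), (1, 2)]
shape_b = [(0, 0), (3, 0), (3, 5)]
print(qualitative_relpos(shape_a) == qualitative_relpos(shape_b))   # True
```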