The growth of modern air traffic, the need for economical and cost-effective operation, and increasingly stringent safety requirements call for the weather- and daylight-independent operation of aircraft. There is a need for imaging-sensor support, especially for landing and taxiing. In addition, military transport aircraft need sensor support for navigation, air delivery and landing during extended military tasks and operations. Compared to other sensors and systems, the radar sensor described here, embedded in an Enhanced Vision System, offers the most promising features and performance. This paper reports on the development and flight testing of an FM-CW radar sensor operating in the mm-wave band at EADS Deutschland GmbH. In close co-operation with the DLR (German Aerospace Center), multiple flight and ground tests have been performed successfully with the demonstrator. The results presented prove the effectiveness of the radar sensor in support of Enhanced Vision. The following features of the radar sensor and its signal processing are presented: frequency scanning, high information renewal rate, high resolution, image presentation and automatic runway detection.
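The range-measurement principle behind such an FM-CW sensor can be sketched in a few lines: the linear frequency sweep delays the echo by the round-trip time, producing a beat frequency proportional to range. The sweep parameters below are illustrative, not the actual parameters of the sensor described here.

```python
def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s, c=3.0e8):
    """Target range from the measured beat frequency of a linear
    FM-CW sweep: the echo is delayed by 2R/c, which appears as a
    beat frequency f_b = (B / T) * (2R / c)."""
    return c * f_beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# Illustrative sweep: 100 MHz bandwidth over 1 ms; a 667 kHz beat
# frequency then corresponds to a target at roughly 1 km.
range_m = fmcw_range(f_beat_hz=667e3, sweep_bw_hz=100e6, sweep_time_s=1e-3)
```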
Airport traffic delays continue to increase, without any relief in sight. A major contributing factor is the lack of low-visibility capability for both aircraft landing and movement about an airport surface. Only 39 runways in the USA can operate under CAT III (700 ft. visibility to land) conditions, and just 19 additional runways are in the planning stage. Each will cost $1 billion and take about ten years to place in operation. Ultraviolet (UV) technology may offer a solution. Runways that would normally decrease their traffic throughput, or close, as visibility degrades can maintain their visual tempo and safety norms through the application of UV fog-penetration techniques. These techniques can be applied on a selective and incremental basis such that some relief can be expected within two years and major decreases in delays can be realized a year thereafter. Three progressive steps are involved.
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or out-pacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS and LAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following current trends.
As air traffic increases, the probability of encountering 'surveillance' alerts during flight also increases. In order to ensure safety, new on-board systems need to be developed to provide the crew with better 'situation awareness' (SA) of its external environment and potential hazards. In addition, the means to manage the data generated by these new systems needs to be built up. Despite the tremendous amount of information, crew workload must not increase. This is where the ISAWARE project comes in with the Integrated Situation Awareness System (ISAS) concept. ISAWARE (Increasing Safety through collision Avoidance WARning intEgration) is a project partly funded by the European Community, executed by a well-balanced consortium of several European aerospace companies (airframers, a helicopter manufacturer, avionics suppliers, airlines), one research laboratory and one university. The overall objective of the ISAWARE project is to conduct research into the potential improvements to flight safety that can be achieved by providing the pilot with complete predictive situation awareness during all phases of flight. The Integrated Situation Awareness System (ISAS) merges data from different safety systems concerning terrain, traffic, weather and other hazards. The system ensures the consistency of alerts, prioritises them, and anticipates threats along a predicted trajectory earlier than current systems can. The second main axis of the research is the development of synthetic vision displays (PFD, ND and HUD) to enhance the Human-Machine Interface (HMI). The key focus of the project is the development of a ground-based demonstrator rig interfaced to a dynamic flight simulator. This rig is used for the evaluation of the ISAWARE concept by a representative range of active airline crews.
In Germany, as in numerous other countries, the air rescue system has been extended significantly since the first operation of the rescue helicopter Christoph 1. The primary purpose of the air rescue system was to guarantee fast and efficient emergency medical services for victims of accidents. Over the years, the scope of helicopter operations has been extended not only to other types of emergency medical services, but also to secondary medical services such as the transfer of patients from hospitals to specialized hospitals. While patient transfers are generally operated from well-known, registered helipads, the primary rescue service currently has to rely on the available on-board systems only. Those operations are risky and challenging for the pilots because of time pressure and the danger of obstacles in the environment of the helicopter. In addition, reduced visibility due to fog, rainfall or low light levels can further increase the risks or can make the services unavailable altogether. Almost a decade ago, Eurocopter started investigating technologies and systems that could help pilots perform their tasks with reduced workload and risk, and allow for 24-hour operation of helicopters irrespective of weather conditions. After a number of preliminary studies, the research program 'All-weather helicopter' was started in 1995 as a joint effort of Eurocopter and the supplier industry in Europe. The first phase of the program was successfully completed in 1999 and the second phase is currently in progress.
Typically, Enhanced Vision (EV) systems consist of two main parts: sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the underlying database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, which can cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper is the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway.
The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
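The registration step described above, matching radar-extracted objects against database and data-link objects to label them as known or unknown, can be illustrated with a minimal nearest-neighbour gating sketch. The gate size, coordinate frame and all names are illustrative; the actual system is certainly more elaborate.

```python
import math

def classify_objects(radar_objs, db_objs, gate_m=50.0):
    """Label each radar-extracted object 'known' if a database object
    lies within a gating distance, else 'unknown' (a potential obstacle
    or a database/navigation integrity problem). Positions are (x, y)
    in metres in a common frame."""
    labels = []
    for rx, ry in radar_objs:
        d = min((math.hypot(rx - dx, ry - dy) for dx, dy in db_objs),
                default=float("inf"))
        labels.append("known" if d <= gate_m else "unknown")
    return labels
```

A high fraction of "unknown" labels would then flag either an obstacle or an integrity problem for the downstream analysis.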
DLR has set up a number of projects to increase flight safety and the economics of aviation. Within these activities, one field of interest is the development and validation of systems for pilot assistance in order to increase the situation awareness of the aircrew. All flight phases ('gate-to-gate') are taken into account, but since approach, landing and taxiing are the most critical tasks in civil aviation, special emphasis is given to these operations. As presented in previous contributions to SPIE's Enhanced and Synthetic Vision Conferences, DLR's Institute of Flight Guidance has developed an Enhanced Vision System (EVS) as a tool assisting approach and landing in particular by improving the aircrew's situational awareness. The combination of forward-looking imaging sensors (such as EADS's HiVision millimeter wave radar), terrain data stored in on-board databases, plus information transmitted from the ground or other aircraft via data link is used to help pilots handle these phases of flight, especially under adverse weather conditions. A second pilot assistance module being developed at DLR is the Taxi And Ramp Management And Control - Airborne System (TARMAC-AS), which is part of an Advanced Surface Movement Guidance and Control System (A-SMGCS). By means of on-board terrain databases and navigation data, a map display is generated which helps the pilot perform taxi operations. In addition to the pure map function, taxi instructions and other traffic can be displayed as the aircraft is connected to the TARMAC planning, communication, navigation and surveillance modules on the ground via data link. Recent experiments with airline pilots have shown that the capabilities of taxi assistance can be extended significantly by integrating EVS and TARMAC-AS functionalities. In particular, the extended obstacle detection and warning provided by the Enhanced Vision System increases the safety of ground operations.
This paper gives an overview of these two assistance systems and discusses possible concepts and the potential of an integrated system with respect to taxi guidance operations.
Runway incursions have been declared the nation's foremost aviation safety issue by the National Transportation Safety Board and the Federal Aviation Administration in testimony before congressional aviation committees. Technology solutions to date have been disappointing. After 12 years of development, the frequency of runway incursions shows no sign of abating, even as the cost of such systems continues to rise beyond $9 million per airport. Application of ultraviolet technology offers incremental, low-cost, near-term improvements in runway incursion prevention and other enhancements to aviation safety, as well as increases in airport throughput capability, i.e., a reduction in delays.
Helicopters often strike obstacles such as power lines. To reduce such collisions, we are developing an obstacle detection and warning system for helicopters. This paper describes techniques for detecting wire-like obstacles in infrared (IR) images. Measurements were conducted to gather IR images and to investigate sensor performance in detecting obstacles in different environments. The IR images proved that the use of IR cameras can greatly increase the likelihood of detecting obstacles that could never be found by the naked eye. The ability of the 8-12 μm IR camera to suppress sunlight noise was also demonstrated. However, the target-to-background contrast of the original IR images was not sufficient as an advisory by which a pilot maneuvers his helicopter. There were cases in which even IR cameras failed to detect obstacles under adverse weather and background conditions. Image processing techniques are therefore proposed to enhance the contrast of IR images and to improve coverage in adverse conditions. An experimental millimeter-wave (MMW) radar is now being developed to improve detection performance and to add distance information to the enhanced images. The configuration of the MMW radar and the results of preliminary measurements with the radar are presented.
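The paper does not specify which contrast-enhancement technique was applied; as a generic stand-in, global histogram equalization of the kind often used on low-contrast IR imagery can be sketched as follows.

```python
def equalize(image, levels=256):
    """Global histogram equalization for a grey-level image given as a
    list of rows of integer pixel values: spread the cumulative grey-
    level distribution over the full range to raise contrast."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution, mapped back onto the grey-level range.
    cdf, total, acc = [], len(flat), 0
    for h in hist:
        acc += h
        cdf.append(acc / total)
    return [[round(cdf[p] * (levels - 1)) for p in row] for row in image]
```

Real wire-detection pipelines would follow this with line-oriented filtering, but the equalization step alone shows how a dim wire against a bright sky gains contrast.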
The requirements for a terrain and obstacle database for various applications in guidance and navigation for low-level flight and landing will be derived. Applications include displays for flight guidance and situation awareness, and algorithms for ground collision avoidance and terrain-referenced navigation. The different methods used for data acquisition and processing will be discussed with respect to these requirements. The properties of these methods will be demonstrated by the generation and validation of an example database.
Synthetic vision systems render artificial images of the world based on a database and position/attitude information of the aircraft. Due to both its static nature and inherent modelling errors, the database introduces anomalies in the synthetic imagery. Since it reflects at best a nominal state of the environment, it often requires updating via online measurements. The latter can vary from correction of pose and geometry to more complex operations such as marking the locations of detected obstacles. This paper presents an approach for detecting database geometric anomalies online. Since range sensors have a low update rate, they cannot be used for quick validation. Instead of range data, the proposed technique employs an imaging sensor, which can be of any type. It takes advantage of the fact that given a geometric model of the scene and known motion of the observer, the sensor image warping can be exactly predicted. If the geometry of the database is incorrect, the sensor image will not be correctly predicted and geometric differences will thus be detected. The algorithm is tested against simulated imagery and results show that it can correctly identify geometric anomalies.
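The core idea, predicting image motion from the database geometry and flagging a mismatch, can be reduced to a one-point pinhole sketch: for a known sideways camera translation, a point's image shift is fully determined by its depth, so a database depth that disagrees with the observed shift signals a geometric anomaly. The threshold and all names below are illustrative, not taken from the paper.

```python
def predicted_shift(depth, baseline, focal_px):
    """For a camera translating sideways by `baseline`, a point at
    `depth` shifts by f * b / Z pixels (plain pinhole geometry)."""
    return focal_px * baseline / depth

def geometry_anomaly(observed_shift_px, db_depth, baseline, focal_px,
                     tol_px=2.0):
    """Flag a database geometry anomaly when the observed image shift
    disagrees with the shift predicted from the database depth by more
    than a pixel tolerance."""
    predicted = predicted_shift(db_depth, baseline, focal_px)
    return abs(observed_shift_px - predicted) > tol_px
```

With a 500-pixel focal length and a 1 m baseline, a database depth of 100 m predicts a 5-pixel shift; observing a much larger shift implies the true surface is closer than the database claims.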
Many of today's and tomorrow's aviation applications demand accurate and reliable digital terrain elevation databases. In particular, future Vertical Cut Displays and 3D Synthetic Vision Systems (SVS) require accurate, high-resolution data to offer a reliable terrain depiction. On the other hand, optimized or reduced terrain models are necessary to ensure real-time rendering and computing performance. In this paper a new method for adaptive terrain meshing and depiction for SVS is presented. The initial data set is decomposed using a wavelet transform. By examining the wavelet coefficients, an adaptive surface approximation for various levels of detail is determined. Additionally, the dyadic scaling of the wavelet transform is used to build a hierarchical quad-tree representation of the terrain data. This representation supports fast interactive computation and real-time rendering methods. The proposed terrain representation is integrated into a standard navigation display. Due to the multi-resolution data organization, the terrain depiction, e.g. its resolution, adapts to the selected zoom level or flight phase. Moreover, the wavelet decomposition helps to define local regions of interest. The depicted terrain has a finer grain near the current airplane position and gets coarser with increasing distance from the aircraft. In addition, flight-critical regions can be depicted at a higher resolution.
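A one-level Haar decomposition illustrates the principle of thresholding wavelet coefficients to obtain an adaptive terrain approximation. This is a minimal 1D sketch under invented names; the paper's method works on 2D elevation grids with a hierarchical quad-tree.

```python
def haar_level(signal):
    """One level of the unnormalised Haar transform: pairwise averages
    (the coarse terrain) and pairwise differences (detail coefficients).
    Expects an even-length sequence of elevation samples."""
    avg = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avg, det

def adaptive_profile(signal, threshold):
    """Zero out detail coefficients below `threshold` and reconstruct:
    flat terrain collapses to its average, while sharp relief (large
    detail, e.g. a ridge) is kept at full resolution."""
    avg, det = haar_level(signal)
    det = [d if abs(d) >= threshold else 0.0 for d in det]
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out
```

Repeating `haar_level` on the averages yields the dyadic hierarchy that maps naturally onto a quad-tree in 2D.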
This paper discusses the flight test results of a real-time Digital Elevation Model (DEM) integrity monitor for civil aviation applications. Providing pilots with synthetic vision displays containing terrain information has the potential to improve flight safety by improving situational awareness and thereby reducing the likelihood of Controlled Flight Into Terrain. Utilization of DEMs, such as digital terrain elevation data, requires a DEM integrity check and timely integrity alerts to the pilots when used for flight-critical terrain displays; otherwise the DEM may provide hazardous, misleading terrain information. The integrity monitor discussed here checks the consistency between a terrain elevation profile synthesized from sensor information and the profile given in the DEM. The synthesized profile is derived from DGPS and radar altimeter measurements. DEMs of various spatial resolutions are used to illustrate the dependency of the integrity monitor's performance on the DEM's spatial resolution. The paper gives a description of the proposed integrity algorithms, the flight test setup, and the results of a flight test performed at the Ohio University airport and in the vicinity of Asheville, NC.
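The consistency check itself is simple to sketch: the terrain elevation under the aircraft, synthesized as DGPS altitude minus radar-altimeter height, is compared against the DEM along the flown profile. The alert limit and the function names below are illustrative assumptions, not the paper's actual test statistic.

```python
def dem_disagreement(dgps_alt, radalt, dem_elev):
    """Per-sample residual between the synthesized terrain elevation
    (DGPS altitude minus radar-altimeter height above ground) and the
    DEM elevation, all in metres along the flown profile."""
    return [g - r - d for g, r, d in zip(dgps_alt, radalt, dem_elev)]

def integrity_alert(dgps_alt, radalt, dem_elev, limit_m=50.0):
    """Raise an alert when the mean absolute disagreement along the
    profile exceeds a (hypothetical) limit."""
    res = dem_disagreement(dgps_alt, radalt, dem_elev)
    return sum(abs(x) for x in res) / len(res) > limit_m
```

A coarser DEM naturally produces larger residuals over rough terrain, which is why the monitor's performance depends on the DEM's spatial resolution.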
Speed control issues are considered for tunnel-in-the-sky displays with a predictor presenting guidance information in a 3-dimensional format for flight path control. Factors driving the predictor design are described. With reference to the resulting predictor control law, it is shown that the pilot-predictor-aircraft system is stable for operation on the front side of the power-required curve and unstable for operation on the back side. This instability can be removed by thrust control. It is shown that this control loop is supported by the predictor control law because of favorable coupling effects between the two loops involved. Furthermore, an appropriate speed indication in the tunnel-in-the-sky display is considered an aid to manual speed control. The theoretical findings are supported by experimental results from pilot-in-the-loop simulations.
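The front-side/back-side distinction can be illustrated with a toy power-required curve P(v) = a·v³ + b/v (parasite plus induced terms; the coefficients are invented, not from the paper): where required power increases with speed, a speed deviation at fixed thrust is self-correcting, and where it decreases, the speed loop is unstable without active thrust control.

```python
def power_required(v, a=1.0, b=1.0e4):
    """Toy power-required curve: a*v**3 models parasite drag power,
    b/v models induced drag power. Purely illustrative coefficients."""
    return a * v ** 3 + b / v

def frontside(v, dv=1e-3):
    """Front side of the power curve: required power increases with
    speed, so a speed excess demands more power than is supplied and
    the deviation decays; on the back side the sign flips."""
    return power_required(v + dv) > power_required(v - dv)
```

For these coefficients the minimum-power speed is near v ≈ 7.6, so flight at v = 10 is front-side (stable) and flight at v = 5 is back-side (unstable).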
Enhanced and synthetic vision applications for flight guidance are on the verge of commercialisation and wide-scale deployment. The paper first gives an overview of the characteristics and benefits of these types of displays, and afterwards focuses on their implementation in an integrated modular avionics environment. The Enhanced/Synthetic Flight Guidance Display is a promising approach to solving the problems of low-level flight in poor visibility. It yields a computer-generated three-dimensional cockpit view and can optionally be combined with an imaging sensor (e.g. FLIR, mmWR, LL-TV). The paper provides details on the technical concept and the evaluation of a functional prototype in flight trials. After functional validation, an enhanced/synthetic vision display was chosen as ESG's first avionics application to be transferred into an integrated modular avionics architecture. This software prototype implementation takes into account the results achieved so far in the Allied Standard Avionics Architecture Council (ASAAC) programme. The effort of this programme to define an open system architecture is addressed in the paper. The architecture's clearly defined layer structure and its strict hardware-software separation are explained. Finally, the paper addresses the allocation of the functions necessary for the synthetic vision display. It explains how database access, I/O to other systems, numerical calculation and graphics generation are mapped to IMA-based mass memory, data processing and graphics processing components. The paper finishes with the presentation of first successful implementation results.
A low-cost 3D display and navigation system is described which presents guidance information in a 3-dimensional format to the pilot. To achieve the low-cost goal, commercial off-the-shelf components are used. The visual information provided by the 3D display includes a presentation of the future flight path and other guidance elements, as well as an image of the outside world. A PC is used to generate the displayed information, and appropriate computer software is available to generate it in real time with an adequately high update rate. Precision navigation data, required for accurately adjusting the displayed guidance information, are provided by an integrated low-cost navigation system. This navigation system consists of a differential global positioning system and an inertial measurement unit. Data from the navigation system are fed into an on-board computer, which uses terrain elevation and feature analysis data to generate a synthetic image of the outside world. The system is intended to contribute to the safety of General Aviation aircraft, providing an affordable guidance and navigation aid for this type of aircraft. The low-cost 3D display and navigation system will be installed in a two-seat Grob 109B aircraft operated as a research vehicle by the Institute of Flight Mechanics and Flight Control of the Technische Universität München.
The X-38 program began in early 1995 and is developing a series of test vehicles to demonstrate the low-cost technologies and methods required to develop a fully functional Crew Return Vehicle (CRV) that can rapidly return astronauts from the International Space Station to Earth. The X-38 program uses a gradual build-up approach and, where appropriate, takes advantage of advanced technologies that may help improve safety, decrease cost, reduce development time, and outperform traditional technologies. Four atmospheric test vehicles and one space-rated vehicle will be developed and tested during the X-38 program. The atmospheric test vehicles are known as vehicle 131 (V131), vehicle 132 (V132), vehicle 131R (V131R), and vehicle 133 (V133). The space-rated vehicle, which will fly on the Shuttle in 2002 as a payload bay experiment, is known as vehicle 201 (V201).
Results are described of an ongoing project whose goal is to provide advanced computer vision for small, low-flying autonomous aircraft. The work consists of two parts: range-based vision for object recognition and pose estimation, and monocular vision for navigation and collision avoidance. A wide variety of range imaging methods was considered for the former, and a promising approach was found to be multi-ocular stereo with a pseudo-random texture projected by a xenon flash. This provides high range resolution despite motion, and can be small and light. The resulting range images, taken at a few meters' range, would support the use of Tripod Operators, an efficient and general method for recognizing and localizing surface shapes in 6 DOF. This would provide the ability to recognize many kinds of targets immediately upon encounter. The monocular navigation system is based on finding corresponding features in successive images and deducing from these the relative pose of the aircraft. Two methods are under development, based on horizon registration and point correspondences, respectively. The first can serve as a preprocessor for the second. This approach aims to continuously and accurately estimate the net motion of the vehicle.
Surveillance and spatial-awareness systems that require the capture of panoramic imagery pose new challenges with respect to latency, data fusion and cost. Array sensor technologies offer options for capturing panoramic imagery. Fusion of imagery from multi-sensor arrays may be achieved through data capture and computer manipulation, but this approach introduces many forms of latency into the imagery, which is of concern to the military, along with cost implications for the volume market and size/power considerations for remote and mobile systems. For an array of image sensors aligned so that their overlapping fields of view share consecutive parts, in azimuth and/or elevation, of a common scene, staggering their relative synchronization in line and frame respectively means that time-continuous luminance information may be considered to exist around the array. Data fusion of this time-continuous luminance information may be achieved live at source by tapping luminance from consecutive sensors, switching source at the point of image correspondence. When this luminance information is punctuated by the introduction of new sync information, the fused, live panoramic imagery may be shown on conventional displays. This paper addresses the design considerations of such systems: latency, at-source data fusion control loops, health monitoring and data-rate processing extras.
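The at-source fusion idea, switching the luminance tap from one sensor to the next at the column where their overlapping fields of view correspond, can be sketched for a single scan line as follows. The sensor geometry and all names are illustrative.

```python
def stitch_columns(sensors, fov_cols, overlap_cols):
    """Fuse one panoramic scan line at source by switching between
    adjacent sensors at the column of image correspondence.
    `sensors` is a list of per-sensor scan lines (luminance samples),
    each `fov_cols` wide and overlapping its right neighbour by
    `overlap_cols` columns."""
    step = fov_cols - overlap_cols
    out = []
    for i, line in enumerate(sensors):
        # Each sensor contributes its non-overlapping leading columns;
        # the last sensor in the array contributes its full line.
        out.extend(line if i == len(sensors) - 1 else line[:step])
    return out
```

In hardware this switch is a multiplexer driven by the column counter, which is what keeps the fused panorama free of frame-store latency.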
A series of IR measurements with a FLIR (Forward-Looking Infrared) system has been performed during landing approaches to various airports. A real-time image-processing procedure to detect and identify the runway and possible obstacles is discussed and demonstrated. It is based on IR image segmentation combined with information derived from synthetic-vision data. The information extracted from the IR images will be combined with the corresponding information from a MMW (millimeter-wave) radar sensor in a subsequent fusion processor. The fused information aims to increase the pilot's situational awareness.
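One way to read "IR image segmentation combined with synthetic-vision data" is that the synthetic-vision model predicts a search window for the runway, and a simple intensity threshold segments it inside that window. The paper does not give its algorithm, so the following is only a toy sketch under that assumption; the window, threshold rule and thermal polarity (paved runway cooler than grass in LWIR) are illustrative and condition-dependent.

```python
import numpy as np

def detect_runway(ir, prior_box, k=1.0):
    """Segment a candidate runway region within an IR frame.

    ir        : 2-D luminance array (one FLIR frame)
    prior_box : (r0, r1, c0, c1) search window; in the system described it
                would be predicted from synthetic-vision data (aircraft
                pose plus a runway database) -- hypothetical values here
    k         : threshold offset in standard deviations

    Flags pixels in the window darker than mean - k*std of the window.
    """
    r0, r1, c0, c1 = prior_box
    win = ir[r0:r1, c0:c1]
    mask = np.zeros(ir.shape, dtype=bool)
    mask[r0:r1, c0:c1] = win < win.mean() - k * win.std()
    return mask

# Toy frame: warm background with a cooler runway-shaped patch.
ir = np.full((60, 80), 120.0)
ir[20:40, 30:50] = 60.0
mask = detect_runway(ir, prior_box=(10, 50, 20, 60))
```

A real pipeline would follow this with shape verification against the projected runway outline, and any detection disagreeing with the synthetic-vision prediction would be flagged as a potential obstacle for the MMW-radar fusion stage.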
We have extended our previous capabilities for fusion of multiple passive imaging sensors to now include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and SWIR + LWIR + 3D LADAR is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.
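The opponent-color fusion named above is, in the authors' work, a shunting neural-network architecture; the snippet below is only a greatly simplified stand-in that conveys the opponent idea: the between-band sum drives luminance while the difference (the opponent signal) is pushed onto opposing color axes, so material bright in one band but not the other stands out in color. Function name and channel mapping are assumptions for illustration.

```python
import numpy as np

def opponent_fuse(vis, lwir):
    """Toy opponent-colour fusion of two co-registered image bands."""
    vis = vis.astype(float)
    lwir = lwir.astype(float)
    common = (vis + lwir) / 2.0          # shared luminance
    opp = (vis - lwir) / 2.0             # opponent (difference) signal
    rgb = np.stack([np.clip(common + opp, 0, 255),   # R: visible-favoured
                    np.clip(common, 0, 255),         # G: shared luminance
                    np.clip(common - opp, 0, 255)],  # B: LWIR-favoured
                   axis=-1)
    return rgb.astype(np.uint8)

# A pixel warm in LWIR but dark in the visible maps to a blue-leaning colour.
vis = np.full((4, 4), 40.0)
lwir = np.full((4, 4), 200.0)
fused = opponent_fuse(vis, lwir)
```

In the system described, the resulting fused color image is then textured onto object surfaces segmented from the flash-ladar range data, which is what makes the stereo Java3D visualization possible.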
The 1997 Final Report of the 'White House Commission on Aviation Safety and Security' challenged industrial and government concerns to reduce aviation accident rates by a factor of five within 10 years. In the report, the commission encourages NASA, FAA and others 'to expand their cooperative efforts in aviation safety research and development'. As a result of this publication, NASA has since undertaken a number of initiatives aimed at meeting the stated goal. Among these, the NASA Aviation Safety Program was initiated to encourage and assist in the development of technologies for the improvement of aviation safety. Among the technologies being considered are certain sensor technologies that may enable commercial and general aviation pilots to 'see to land' at night or in poor visibility conditions. Infrared sensors have potential applicability in this field, and this paper describes a system, based on such sensors, that is being deployed on the NASA Langley Research Center B757 ARIES research aircraft. The system includes two infrared sensors operating in different spectral bands, and a visible-band color CCD camera for documentation purposes. The sensors are mounted in an aerodynamic package in a forward position on the underside of the aircraft. Support equipment in the aircraft cabin collects and processes all relevant sensor data. Display of sensor images is achieved in real time on the aircraft's Head Up Display (HUD), or other display devices.
The Federal Aviation Administration (FAA), in cooperation with the Cargo Airline Association (CAA) and three of its member airlines (Airborne Express, Federal Express, and United Parcel Service), has embarked upon an aggressive yet phased approach to introduce new Free Flight-enabling technologies into the U.S. National Airspace System (NAS). General aviation is also actively involved, represented primarily by the Aircraft Owners and Pilots Association (AOPA). The new technologies being evaluated include advanced cockpit avionics and a complementary ground infrastructure. In support of this initiative, a series of operational evaluations (OpEvals) have been conducted or are planned, covering in-flight as well as airport surface-movement applications. Results from the second OpEval, conducted at Louisville, Kentucky in October 2000, indicated that runway incursions might be significantly reduced with the introduction of a cockpit-based moving-map system derived from emerging technologies. An additional OpEval is planned to evaluate the utility of an integrated cockpit and airport surface architecture that provides enhanced pilot and controller awareness of airport surface operations. It is believed that the combination of such an airborne and a ground-based system best addresses many of the safety issues surrounding airport surface operations. Such a combined system would provide both flight crews and controllers with a common awareness, or shared picture, of airport surface operations.
This paper describes the Enhanced Video System (EVS) camera, built by OPGAL as a subcontractor of Kollsman Inc. The EVS comprises a Head-Up Display built by Honeywell, a specially designed camera for landing applications, and the external window installed on the aircraft together with the electronic control box built by Kollsman. The specially designed landing camera is the subject of this paper. The entire system was installed on a Gulfstream V aircraft and passed the FAA proof of concept during August and September 2000.