This work focuses on a bimodal vision system previously demonstrated as a relevant sensing candidate for detecting and tracking fast objects, combining the unique features of event-based sensors, i.e. high temporal resolution, reduced bandwidth needs, low energy consumption, and passive detection capabilities, with the high spatial resolution of an RGB camera. In this study, we propose a model based on the principle of attentional vision for real-time detection and tracking of UAVs, taking into account computing and on-board resource constraints. A laboratory demonstrator has been built to evaluate the operational limits in terms of computation time and system performance (including target detection) versus target speed. Our first indoor and outdoor tests revealed the interest and potential of our system for quickly detecting objects flying at hundreds of kilometers per hour.
Compared to frame-based visual streams, event-driven visual streams offer very low bandwidth needs and high temporal resolution, making them an interesting choice for embedded object recognition. Such visual systems are expected to outperform standard cameras but have not yet been studied in the context of homing guidance for projectiles, where navigation constraints are drastic. This work starts from a first interaction model between a standard camera and an event camera, validated in the context of unattended ground sensors and situational-awareness applications from a static position. In this paper we extend this first interaction model with higher-level activity analysis and object recognition from a moving position. The proposed event-based terminal guidance system is studied first through a target laser designation scenario and the computation of optical flow to validate guidance parameters. Real-time embedded processing techniques are evaluated, preparing the design of a future demonstrator of a very fast navigation system. The first results have been obtained on embedded Linux architectures with multi-threaded feature extraction. This paper presents and discusses these first results.
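As a rough illustration of the optical-flow step mentioned above, the sketch below estimates dense flow between two successive event-accumulation frames with OpenCV's Farnebäck method. The frame-building assumption and the choice of Farnebäck flow are illustrative only and are not the processing chain described in the paper.

```python
import cv2
import numpy as np

def flow_between_event_frames(prev_frame, next_frame):
    """Dense optical flow between two 8-bit event-accumulation frames.

    prev_frame / next_frame are assumed to be grayscale images obtained by
    accumulating events over short, fixed time slices; the Farneback
    parameters below are generic defaults, not tuned guidance values.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Mean flow vector as a crude image-plane motion cue for guidance checks.
    return flow.mean(axis=(0, 1))
```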
A new and challenging vision system has recently gained prominence and proven its capabilities compared to traditional imagers: the paradigm of event-based vision. Instead of capturing the whole sensor area at a fixed frame rate as in a frame-based camera, spike sensors or event cameras report the location and the sign of brightness changes in the image. Although the currently available spatial resolutions are quite low (640×480 pixels) for these event cameras, their real interest lies in their very high temporal resolution (in the range of microseconds) and very high dynamic range (up to 140 dB). Thanks to the event-driven approach, their power consumption and processing power requirements are quite low compared to conventional cameras. This latter characteristic is of particular interest for embedded applications, especially for situational awareness. The main goal of this project is to detect and track activity zones from the spike event stream and to notify the standard imager where the activity takes place. By doing so, automated situational awareness is enabled by analyzing the sparse information of event-based vision and waking up the standard camera at the right moments and at the right positions, i.e. the detected regions of interest. We demonstrate the capacity of this bimodal vision approach to take advantage of both cameras: the spatial resolution of the standard camera and the temporal resolution of the event-based camera. An opto-mechanical demonstrator has been designed to integrate both cameras in a compact visual system with embedded software processing, opening the perspective of autonomous remote sensing. Several field experiments demonstrate the performance and the interest of such an autonomous vision system. The emphasis is placed on the ability to detect and track fast moving objects, such as fast drones. Results and performance are evaluated and discussed for these realistic scenarios.
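As a minimal sketch of the event-to-ROI step described above, the snippet below accumulates events into coarse cells and reports active regions that could be used to wake up the frame camera. The event layout, cell size, and activity threshold are assumptions made for illustration, not the demonstrator's actual parameters.

```python
import numpy as np

def detect_activity_rois(events, sensor_shape=(480, 640), cell=16, thresh=30):
    """Accumulate events into coarse cells and return active region centres.

    `events` is assumed to be an (N, 4) array of (x, y, t, polarity) tuples;
    `cell` and `thresh` are illustrative values.
    """
    h, w = sensor_shape
    grid = np.zeros((h // cell, w // cell), dtype=np.int32)
    xs = (events[:, 0].astype(int) // cell).clip(0, grid.shape[1] - 1)
    ys = (events[:, 1].astype(int) // cell).clip(0, grid.shape[0] - 1)
    np.add.at(grid, (ys, xs), 1)          # event count per cell
    active = np.argwhere(grid >= thresh)  # cells with enough activity
    # ROI centres in pixel coordinates, e.g. to point the frame camera at.
    return [(int((cx + 0.5) * cell), int((cy + 0.5) * cell)) for cy, cx in active]
```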
With advances in unmanned and autonomous vehicles, camera-based navigation is increasingly being used. A new low-cost navigation solution based on monochrome polarization filter array cameras is presented. For this purpose, we have developed our own acquisition pipeline and an image processing algorithm to estimate the relative heading of an Unmanned Ground Vehicle (UGV) from skylight polarization. The precision of the method has been quantified using a rotary stage. The system has then been mounted on the UGV, and the estimated heading is compared to a reference given by a GPS/Inertial Navigation System.
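For reference, the angle of linear polarization that such a pipeline typically starts from can be computed per pixel from the four filter orientations of the array using the standard Stokes-parameter relations. The sketch below shows this computation, assuming already demosaiced channel images; the heading-extraction step of the paper itself is not reproduced here.

```python
import numpy as np

def angle_of_polarization(i0, i45, i90, i135):
    """Per-pixel angle of linear polarization from the four filter orientations.

    i0..i135 are assumed to be demosaiced intensity images from the
    polarization filter array; only the generic Stokes computation is shown.
    """
    s1 = i0.astype(float) - i90    # Stokes S1
    s2 = i45.astype(float) - i135  # Stokes S2
    aop = 0.5 * np.arctan2(s2, s1)  # angle of polarization, radians
    return aop
```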
In this note we focus on the convergence behavior of the Extended Kalman Filter used as a state estimator for projectile attitude and position estimation. We first provide the complete dynamical model, in nonlinear state-space form, describing the projectile behavior. Due to strong nonlinearities and poor observability of the system, very few estimation techniques can be applied, among them the celebrated EKF. The latter is, however, very sensitive to bad initializations and small perturbations. The main contribution of this work lies in the use of a modified EKF to ensure strong tracking using a magnetometer sensor only. The modified EKF exploits the connection between some instrumental matrices, fixed by the user, and the convergence behavior. Simulation results show the good performance of the proposed approach.
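For context, the standard EKF recursion that such a modified filter builds on is recalled below. The covariance weighting shown in the last line is only a generic illustration of how user-fixed instrumental matrices can influence convergence; it is not the paper's exact modification.

```latex
\begin{aligned}
&\text{Prediction:} && \hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}), \qquad
  P_{k|k-1} = F_k P_{k-1|k-1} F_k^\top + Q_k,\\
&\text{Update:} && K_k = P_{k|k-1} H_k^\top \left(H_k P_{k|k-1} H_k^\top + R_k\right)^{-1},\\
&&& \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - h(\hat{x}_{k|k-1})\right), \qquad
  P_{k|k} = (I - K_k H_k)\, P_{k|k-1},\\
&\text{(illustrative modification)} && P_{k|k-1} \leftarrow \Lambda_k\, P_{k|k-1},
  \quad \Lambda_k \text{ a user-fixed weighting matrix.}
\end{aligned}
```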
A priori information given by the complete modelling of the ballistic behavior (trajectory, attitude) of the projectile is simplified to give a pertinent reduced evolution model. An algorithm based on extended Kalman filters is designed to determine: (i) the position (x, y, z coordinates in the Earth frame); (ii) the magnitude and direction of the velocity vector, the direction being given by two angles (η and θ); (iii) the attitude around the velocity vector, given by three angles: the roll angle in the range [0, 2π], the angle of attack α, and the side-slip angle β, the latter two in the range of a few milliradians. The estimation is based on measurements of the Earth's magnetic field given by a three-axis magnetometer embedded in the projectile. The algorithm also requires knowledge of the direction of the Earth's magnetic field in the Earth frame and of the aerodynamic coefficients of the projectile. The algorithm has been tested in simulation, using a real evolution of attitude data for a shot with a 155 mm spinning projectile over a distance of 16 km, with wind and measurement noise. The results show that, despite the nonlinear equations and approximations, angles in the milliradian range can be estimated with good precision.
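The underlying measurement model can be summarized as follows, with the body-frame magnetometer reading obtained by rotating the known Earth-frame field through the current attitude; the notation is generic and chosen here only for illustration.

```latex
b_k \;=\; R\!\left(\text{attitude}_k\right)\, B^{\mathrm{earth}} \;+\; \nu_k,
\qquad \nu_k \sim \mathcal{N}\!\left(0,\,\Sigma_\nu\right),
```

where R(·) denotes the Earth-to-body rotation determined by the estimated angles, B^earth the known local magnetic field vector, and ν_k the measurement noise; the filter inverts this relation along the trajectory.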