Standard imaging techniques capture less information from a scene than light-field imaging. Light-field (LF) cameras measure not only the light intensity reflected by an object but, most importantly, the direction of its light rays. This information can be used in different applications, such as depth estimation, focusing on a given plane, creating fully focused images, etc. However, the standard key-point detectors commonly employed in computer vision cannot be applied directly to plenoptic images because of the nature of raw LF images. This work presents a key-point detection approach dedicated to plenoptic images. Our method allows the use of conventional key-point detectors: it forces the detection of each key-point in a set of micro-images of the raw LF image. Obtaining this large number of key-points is essential for applications that require finding additional correspondences in the raw space, such as disparity estimation, indirect visual odometry techniques, and others. The approach is put to the test by modifying the Harris key-point detector.
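A minimal sketch of the general idea (not the authors' exact implementation): run a standard Harris detector independently inside each micro-image of the raw LF frame. The regular micro-lens grid, its pitch, and the threshold used below are illustrative assumptions; a real plenoptic camera needs a calibrated micro-lens centre model.

```python
# Illustrative sketch only: Harris corners per micro-image of a grayscale raw LF frame.
# The square micro-image grid (origin, pitch) and threshold are simplifying assumptions.
import cv2
import numpy as np

def harris_per_microimage(raw, pitch=14, origin=(0, 0), thresh=0.01):
    """Return (x, y) key-points detected independently in each micro-image."""
    gray = raw.astype(np.float32)
    keypoints = []
    for y0 in range(origin[1], gray.shape[0] - pitch, pitch):
        for x0 in range(origin[0], gray.shape[1] - pitch, pitch):
            mi = gray[y0:y0 + pitch, x0:x0 + pitch]            # one micro-image
            response = cv2.cornerHarris(mi, blockSize=2, ksize=3, k=0.04)
            ys, xs = np.where(response > thresh * response.max())
            keypoints.extend((x0 + x, y0 + y) for x, y in zip(xs, ys))
    return keypoints
```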
Light-field and plenoptic cameras are widely available today. Compared with monocular cameras, they capture not only the intensity but also the direction of the light rays. Thanks to this specificity, light-field cameras allow image refocusing and depth estimation from a single image. However, most existing depth estimation methods using light-field cameras require a complex prior calibration phase and raw-data preprocessing before the desired algorithm is applied. We propose a homography-based method, with calibration and optimization of the plenoptic camera parameters, dedicated to our homography-based micro-image matching algorithm. The proposed method works on debayered raw images with vignetting correction. The approach directly links the disparity estimated in the 2D image plane to the depth in the 3D object space, allowing direct extraction of the real depth without any intermediate virtual-depth estimation phase. Moreover, the calibration parameters used in the depth estimation algorithm are estimated directly, so no complex prior calibration is needed. Results are illustrated by performing depth estimation with a focused light-field camera over a large distance range, up to 4 m.
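For orientation only, the classical disparity-to-depth relation illustrates the principle of linking a 2D disparity to a 3D depth; the paper's own model goes through its calibrated plenoptic parameters, which are not reproduced here.

```python
# Classical stereo-style relation z = f * b / d, used purely as an illustration of the
# disparity-to-depth principle; baseline and focal length here are generic placeholders,
# not the paper's calibrated plenoptic parameters.
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth (m) from disparity (pixels), baseline (m) and focal length (pixels)."""
    return focal_px * baseline_m / disparity_px
```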
Light-Field (LF) cameras capture not only the intensity of light but also the direction of the light rays in the scene, and hence record much more information about the scene than a conventional camera. In this paper, we present a novel method to detect key-points in raw LF images by applying key-point detectors to Pseudo-Focused Images (PFIs). The main advantage of this method is that it does not require complex key-point detectors dedicated to light-field images. We illustrate the method in two use cases: the extraction of corners in a checkerboard and key-point matching between two raw light-field views. These key-points can be used for different applications, e.g., calibration, depth estimation, or visual odometry. Our experiments show that the method preserves detection accuracy when the pixels are re-projected into the original raw images.
During foreign operations, Improvised Explosive Devices (IEDs) are one of the major threats that soldiers may unfortunately encounter along their itineraries. Based on a vehicle-mounted camera, we propose an original image-comparison approach to detect significant changes on these roads. Classic 2D image registration techniques do not take parallax phenomena into account, so misregistration errors could be detected as changes. Following stereovision principles, our automatic method compares intensity profiles along corresponding epipolar lines by matching their extrema. An adaptive space warping compensates for scale differences in the 3D scene. Once the signals are matched, their difference highlights changes, which are marked in the current video.
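A minimal sketch of the core comparison step, assuming the two frames have already been rectified so that corresponding epipolar lines are matching image rows. The paper's extrema matching and adaptive space warping are reduced here to a simple nearest-position peak pairing; the prominence and tolerance values are placeholders.

```python
# Sketch only: pair intensity extrema of two corresponding (already rectified) epipolar
# lines and flag extrema of the current row with no counterpart in the reference row.
import numpy as np
from scipy.signal import find_peaks

def compare_epipolar_profiles(row_ref, row_cur, prominence=10, tol=5):
    """Return positions along the current epipolar line where an intensity extremum
    has no matching extremum in the reference line (candidate changes)."""
    peaks_ref, _ = find_peaks(np.asarray(row_ref, dtype=float), prominence=prominence)
    peaks_cur, _ = find_peaks(np.asarray(row_cur, dtype=float), prominence=prominence)
    changes = []
    for p in peaks_cur:
        if peaks_ref.size == 0 or np.abs(peaks_ref - p).min() > tol:
            changes.append(int(p))        # extremum without counterpart: candidate change
    return changes
```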
The purpose of this document is to present a comparative study of five heart sound localization algorithms, one of which is a method based on radial basis function networks applied in a novel approach. The advantages and disadvantages of each method are evaluated on a database of 50 subjects: 25 healthy subjects selected from the University Hospital of Strasbourg (HUS) and from the MARS500 project (Moscow), and 25 subjects with cardiac pathologies selected from the HUS. The study is conducted under the supervision of an experienced cardiologist. The performance of each method is evaluated by computing the area under the receiver operating characteristic curve (AUC), and its robustness is assessed against different levels of additive white Gaussian noise.
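A sketch of the evaluation protocol only (not of any of the five localization methods): compute the AUC of a detector's per-sample score against a reference annotation, repeated at several additive white Gaussian noise levels. The `localize_sounds` function, the SNR levels, and the signal layout are placeholders.

```python
# Sketch of the AUC-vs-noise evaluation loop; the localization method itself is a placeholder.
import numpy as np
from sklearn.metrics import roc_auc_score

def awgn(signal, snr_db):
    """Add white Gaussian noise at a given SNR (dB)."""
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + np.random.normal(0.0, np.sqrt(p_noise), signal.shape)

def auc_vs_noise(pcg, reference_mask, localize_sounds, snr_levels=(20, 10, 5, 0)):
    """Return {SNR: AUC} for a localization function producing a per-sample score."""
    return {snr: roc_auc_score(reference_mask, localize_sounds(awgn(pcg, snr)))
            for snr in snr_levels}
```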
Today, Optronic Countermeasure (OCM) scenarios typically involve an IR Focal-Plane Array (FPA) facing in-band laser irradiation. In order to evaluate the efficiency of new countermeasure concepts or the robustness of FPAs, it is necessary to quantify the full range of interaction effects. Even though some studies in the open literature show the vulnerability of imaging systems to laser dazzling, the diversity of the analysis criteria employed does not allow the results of these studies to be correlated.
Therefore, we focus our effort on the definition of common sensor figures of merit adapted to laser OCM studies. In this paper, two levels of investigation are presented: the first analyzes the local nonlinear photocell response, and the second quantifies the overall dazzling impact on the image. The first study gives interesting results on the behavior of InSb photocells irradiated by a picosecond MWIR laser: with increasing irradiance, four successive response regimes appear, from linear and logarithmic to decreasing and finally a permanent linear offset. In the second study, our quantification tools are described, and their successful implementation is assessed through the picosecond laser-dazzling characterization of an InSb FPA.
3-D fluorescence microscopy is a powerful method for imaging and studying living cells. However, the data acquired with a conventional 3-D fluorescence microscope are not quantitatively meaningful for evaluating the spatial distribution or volume of fluorescent areas, because of the distortions induced on the data by the acquisition process.
These distortions must be corrected for reliable measurements. Knowledge of the impulse response characterizing the instrument makes it possible to consider the inverse process and retrieve the original data: one performs a deconvolution, the inverse of the convolution process induced by the microscope, which maps the 'object' space onto the 'image' space. However, when the response of the system is not invariant over the observation field, the classical algorithms relying on the Fourier Transform for their computations are not usable.
The contribution of this work is to present several approaches that make it possible to use the Fourier Transform under non-invariance conditions and to simulate their application to 3-D fluorescence microscopy problems.
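For reference, the shift-invariant baseline that the abstract alludes to can be written as a simple Fourier-domain deconvolution. The Wiener-type regularization below is only an illustrative choice; the space-variant approaches of the paper are not reproduced here.

```python
# Classical Fourier-domain (shift-invariant) deconvolution shown as a Wiener-type filter.
# Assumes image and PSF are the same size, with the PSF centred in its array.
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Shift-invariant Wiener deconvolution with a constant noise-to-signal ratio."""
    H = np.fft.fftn(np.fft.ifftshift(psf))       # optical transfer function
    G = np.fft.fftn(image)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifftn(F_hat))
```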
3-D optical fluorescence microscopy has now become an efficient tool for the volume investigation of living biological samples. Developments in instrumentation have made it possible to overcome the conventional Abbe limit; in any case, the recorded image can be described by the convolution equation between the original object and the Point Spread Function (PSF) of the acquisition system. If the goal is 3-D quantitative analysis, one must either improve the instrument capabilities or (and) restore the data. The latter has so far been the main task in our laboratory. Based on knowledge of the Optical Transfer Function of the microscope, deconvolution algorithms were adapted to determine the regularization threshold automatically, in order to give less subjective and more reproducible results. The PSF represents the properties of the image acquisition system; we have proposed the use of statistical tools and Zernike moments to describe a 3-D system PSF and to quantify its variation. This first step toward standardization helps define an acquisition protocol that optimizes exploitation of the microscope for the biological sample under study.
We have pointed out that automating the choice of the regularization level not only facilitates use but also greatly improves the reliability of the measurements. Furthermore, to increase the quality and repeatability of quantitative measurements, pre-filtering the images improves the stability of the deconvolution process. In the same way, pre-filtering the PSF stabilizes the deconvolution process. We have shown that Zernike polynomials can be used to reconstruct an experimental PSF, preserving the system characteristics while removing the noise contained in the PSF.
Fluorescence microscopes suffer from limitations: photobleaching and phototoxicity effects, and the influence of the sample's optical properties on 3-D observation. The amplitude and phase of the object can be obtained with optical tomography based on a combination of microholography with tomographic illumination. A refractive-index map of the specimen can thus be obtained which, combined with the fluorescence information, will open new possibilities in 3-D optical microscopy.
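A minimal 2-D illustration of the Zernike reconstruction idea: project one lateral PSF slice onto a few low-order Zernike polynomials (Cartesian form, on the unit disk) by least squares and rebuild it from the fit, which suppresses high-frequency noise. The choice of orders, normalization, and the slice-by-slice treatment are assumptions, not the authors' exact procedure.

```python
# Illustration only: low-order Zernike least-squares fit of a single PSF slice.
import numpy as np

def zernike_denoise_slice(psf_slice):
    ny, nx = psf_slice.shape
    y, x = np.mgrid[-1:1:complex(0, ny), -1:1:complex(0, nx)]
    disk = (x ** 2 + y ** 2) <= 1.0
    # Unnormalized Cartesian Zernike terms: piston, tilts, defocus, astigmatisms, comas.
    basis = np.stack([
        np.ones_like(x), x, y,
        2 * (x ** 2 + y ** 2) - 1,
        x ** 2 - y ** 2, 2 * x * y,
        (3 * (x ** 2 + y ** 2) - 2) * x,
        (3 * (x ** 2 + y ** 2) - 2) * y,
    ], axis=-1)
    A = basis[disk]                                        # samples inside the unit disk
    coeffs, *_ = np.linalg.lstsq(A, psf_slice[disk], rcond=None)
    rebuilt = np.zeros(psf_slice.shape, dtype=float)
    rebuilt[disk] = A @ coeffs                             # noise-reduced reconstruction
    return rebuilt
```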
KEYWORDS: Point spread functions, Deconvolution, Luminescence, Microscopes, Image processing, 3D acquisition, 3D image processing, Microscopy, Image acquisition, Algorithm development
3-D optical fluorescence microscopy has now become an efficient tool for the volume investigation of living biological samples. Developments in instrumentation have made it possible to overcome the conventional Abbe limit. In any case, the recorded image can be described by the convolution equation between the original object and the Point Spread Function (PSF) of the acquisition system. Due to the finite resolution of the instrument, the original object is recorded with distortions and blurring, and is contaminated by noise. As a result, relevant biological information cannot be extracted directly from the raw data stacks.
If the goal is 3-D quantitative analysis, then characterizing the system is mandatory to assess the optimal performance of the instrument and to ensure the reproducibility of the data acquisition. The PSF represents the properties of the image acquisition system; we have proposed the use of statistical tools and Zernike moments to describe a 3-D system PSF and to quantify its variation. This first step toward standardization helps define an acquisition protocol that optimizes exploitation of the microscope for the biological sample under study.
Before geometrical information can be extracted and/or intensities quantified, data restoration is mandatory. Reduction of out-of-focus light is carried out computationally by a deconvolution process. But other phenomena occur during acquisition, such as fluorescence photodegradation, known as "bleaching", which alters the information needed for the restoration. Therefore, we have developed a protocol to pre-process the data before applying deconvolution algorithms.
A large number of deconvolution methods have been described and are now available in commercial packages. One major difficulty in using this software is that the user must supply the "best" regularization parameters. We have pointed out that automating the choice of the regularization level not only facilitates use but also greatly improves the reliability of the measurements. Furthermore, to increase the quality and repeatability of quantitative measurements, pre-filtering the images improves the stability of the deconvolution process. In the same way, pre-filtering the PSF stabilizes the deconvolution process. We have shown that Zernike polynomials can be used to reconstruct an experimental PSF, preserving the system characteristics while removing the noise contained in the PSF.
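A sketch of the pre-filtering idea only: lightly smooth both the data stack and the experimental PSF before deconvolution so that the restoration is less sensitive to noise and to the regularization setting. The Gaussian filter and its width are illustrative choices, not the authors' specific pre-processing protocol.

```python
# Illustrative pre-filtering of a 3-D data stack and its PSF before deconvolution.
import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter_stack_and_psf(stack, psf, sigma=0.8):
    """Return denoised copies of a 3-D stack and its PSF, with the PSF re-normalized
    to unit sum so that restored intensities keep their scale."""
    stack_f = gaussian_filter(stack.astype(float), sigma)
    psf_f = gaussian_filter(psf.astype(float), sigma)
    psf_f = np.clip(psf_f, 0.0, None)
    psf_f /= psf_f.sum()
    return stack_f, psf_f
```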
KEYWORDS: Point spread functions, Deconvolution, Luminescence, Microscopy, 3D image processing, Data acquisition, 3D acquisition, Signal to noise ratio, Optical microscopy, Spindles
3-D optical fluorescence microscopy is an efficient tool for the volume investigation of biological samples. Nevertheless, the image acquired this way is altered by the properties of the microscope, described by its Point Spread Function (PSF). The aim of deconvolution algorithms is the reassignment of defocused information. This method improves data quality and makes it possible to compare specimens acquired with different systems. But deconvolution requires a compromise between the precision of the result and the stability of the process, since this stability is directly related to the noise level of the data. This noise can be of different types, mainly electronic noise due to the sensors, but we also include under the term "noise" the variation of fluorescence during acquisition. Numerous deconvolution algorithms exist, giving variable results depending on specimen characteristics. For the cases where deconvolution alone is not enough to obtain usable data, we developed pre-processing treatments. These tools can be used separately or consecutively depending on the needs of the application and the requirements of the specimen.
A measurement of the degradation of the photoelectric parameters (contrast, number of affected pixels) of visible Focal-Plane Arrays (FPAs) irradiated by a laser has been performed. The applied irradiation fluence levels typically range from 300 μJ/cm² to 700 mJ/cm². A silicon FPA has been used for the visible domain. The effects of laser irradiation inside and outside the Field Of View (FOV) of the camera have been studied. It has been shown that the camera contrast decrease can reach 50% during laser irradiation performed outside the FOV. Moreover, the effects of the Automatic Gain Control (AGC) and of the integration time on the blooming processes have been investigated: no AGC influence on the number of affected pixels has been measured, and the integration time has been revealed to be the most sensitive parameter for blooming. Finally, only a little laser energy is necessary to dazzle the system (1 μJ for 152 ns). A simulation of the irradiated images has been developed using a finite-difference solution, and good agreement has been shown between the experimental and simulated images. This procedure can be extended to test the blooming effects of IR cameras.
3-D optical fluorescence microscopy has now become an efficient tool for the volume investigation of living biological samples. The 3-D data can be acquired by Optical Sectioning Microscopy, performed by axial stepping of the object relative to the objective. For any instrument, each recorded image can be described by a convolution equation between the original object and the Point Spread Function (PSF) of the acquisition system. To assess performance and ensure data reproducibility, as for any 3-D quantitative analysis, system identification is mandatory. The PSF represents the properties of the image acquisition system; it can be computed or acquired experimentally. Statistical tools and Zernike moments are shown to be appropriate and complementary for describing a 3-D system PSF and quantifying the variation of the PSF as a function of the optical parameters. Some critical experimental parameters can be identified with these tools, which helps biologists define an acquisition protocol optimizing the use of the system. Reduction of out-of-focus light is the task of 3-D microscopy; it is carried out computationally by a deconvolution process. Pre-filtering the images improves the stability of the deconvolution results, which then depend less on the regularization parameter; this helps biologists use the restoration process.
KEYWORDS: Point spread functions, Luminescence, Deconvolution, Microscopy, 3D acquisition, 3D image processing, Monochromatic aberrations, Image processing, Data acquisition, Objectives
3-D optical fluorescence microscopy has now become an efficient tool for the volume investigation of living biological samples. However, the acquired raw data suffer from various distortions. In order to carry out biological analysis, restoration of the raw data by deconvolution is mandatory. System identification is useful to characterize the actual system and to quantify the influence of the experimental parameters. High-order centered moments are used as PSF descriptors. The oil immersion index, the numerical aperture and the specimen thickness are critical parameters for data quality. Furthermore, PSF identification helps refine the experimental protocol. An application to the 3-D distribution of anthracycline in breast cancer cells is presented.
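A small sketch of what "centered moments as PSF descriptors" can look like in practice: intensity-weighted centered moments of a 3-D PSF along each axis. The particular orders and the axis-wise formulation are assumptions for illustration, not the descriptors reported in the paper.

```python
# Illustrative centered-moment descriptors of a 3-D PSF (orders and layout are assumptions).
import numpy as np

def psf_centered_moments(psf, orders=(2, 3, 4)):
    """Intensity-weighted centered moments of a 3-D PSF along each axis (z, y, x)."""
    w = psf.astype(float)
    w /= w.sum()                                            # normalized weights
    grids = np.meshgrid(*[np.arange(s) for s in psf.shape], indexing="ij")
    centroid = [np.sum(w * g) for g in grids]
    return {(axis, k): float(np.sum(w * (g - c) ** k))
            for axis, (g, c) in enumerate(zip(grids, centroid))
            for k in orders}
```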
Video epifluorescence microscopy and image analysis are used to study anthracycline resistance in breast cancer cells. In order to perform a semi-quantitative image analysis, several deconvolution algorithms are tested and validated on model beads. The best-performing algorithm is applied to fluorescent biological specimens. We show that deconvolution makes image segmentation easier. Semi-quantitative measurements on the resulting images correlate with results obtained by cytometry.