Optical phase conjugation is a technique that could find many applications in medical imaging and industry. However, state-of-the-art techniques are limited in speed, portability and efficiency. In digital optical phase conjugation in particular, the electronic delays for image readout on a camera and for addressing a spatial light modulator make the technique impractical for phase conjugation in biological media. Furthermore, the calibration of such a system is a complex and expensive task. We therefore propose integrating a camera and a liquid crystal spatial light modulator on the same device, achieving phase control through in-pixel processing of a photodiode signal.
We present a CMOS light detector-actuator array in which every pixel combines a spatial light modulator and a photodiode. It is intended for medical imaging based on acousto-optical coherence tomography with a digital holographic detection scheme. Our architecture measures the interference pattern between a reference beam and a scattered beam transmitted through a scattering medium. The array, with a 16 μm pixel pitch, has a frame rate of several kfps, which makes the sensor compatible with the correlation time of light in biological tissues. In-pixel analog processing of the interference pattern controls the polarization of a stacked light modulator and thus the phase of the reflected beam. This reflected beam can then be focused on a region of interest, e.g. for therapy. Stacking a photosensitive element and a spatial light modulator on the same chip brings significant robustness over the state of the art, such as perfect optical matching and reduced delay in controlling light.
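As an illustration of the kind of processing involved, the sketch below recovers the phase of the scattered field from four phase-shifted interferograms and derives the conjugate phase to apply to the modulator. It is a numerical sketch only: the four-step phase-shifting scheme, the array size and the beam amplitudes are assumptions, not the analog circuit actually implemented on the chip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters -- the abstract does not specify them.
n = 64                                      # pixels per side of the array
phi_s = rng.uniform(0, 2 * np.pi, (n, n))   # unknown phase of the scattered field
a_s, a_r = 1.0, 3.0                         # scattered / reference amplitudes

# Four-step phase-shifting interferograms I_k = |E_s + E_r * exp(i*k*pi/2)|^2.
I = [a_s**2 + a_r**2 + 2 * a_s * a_r * np.cos(phi_s - k * np.pi / 2)
     for k in range(4)]

# Recover the scattered phase from the four frames (a digital stand-in for the
# in-pixel processing) and drive the modulator with its conjugate.
phi_est = np.arctan2(I[1] - I[3], I[0] - I[2])
slm_command = -phi_est                      # phase-conjugate pattern

# Sanity check: the estimate matches the true phase modulo 2*pi.
err = np.angle(np.exp(1j * (phi_est - phi_s)))
print("max phase error (rad):", float(np.abs(err).max()))
```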
We compare the noise performance of two optimized readout chains based on 4T pixels and featuring the same bandwidth of 265 kHz (sufficient to read 1 Mpixel at 50 frames/s). Both chains contain a 4T pixel, a column amplifier and a single-slope analog-to-digital converter performing correlated double sampling (CDS). In one case the pixel operates in source-follower configuration, and in the other in common-source configuration. Based on analytical noise calculations for both readout chains, an optimization methodology is presented. The analytical results are confirmed by transient simulations in a 130 nm process. A total input-referred noise below 0.4 electrons RMS is reached for a simulated conversion gain of 160 μV/e−. Both optimized readout chains show the same input-referred 1/f noise. The common-source readout chain shows better thermal-noise performance and requires less silicon area. We discuss the possible drawbacks of the common-source configuration and provide the reader with a comparative table of the two readout chains, covering several variants (column amplifier gain, in-pixel transistor sizes and type).
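For reference, the conversion gain links the electron-level noise target to an equivalent voltage noise at the sense node; the 64 μV figure below is derived from the abstract's numbers, not stated in the paper.

```python
# Cross-check of the figures quoted above.
conversion_gain = 160e-6   # V per electron (simulated value from the abstract)
target_noise_e = 0.4       # input-referred noise target, e- RMS

noise_voltage = target_noise_e * conversion_gain
print(f"equivalent voltage noise at the sense node: {noise_voltage * 1e6:.0f} uV RMS")
# -> 64 uV RMS: the total readout-chain noise referred to the sense node must
#    stay below this level to meet the 0.4 e- RMS target.
```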
Diffuse correlation spectroscopy (DCS) is based on the temporal correlations of the speckle pattern formed by light that has diffused through a biological medium. Measurements must be made on a small coherence area, of the size of a speckle grain. Summing independent measurements increases the SNR as the square root of the number of detectors. We present a two-dimensional CMOS pixel detector array specially designed for this task, with parallel in-pixel demodulation and temporal correlation computation. Optical signals can be processed at a rate higher than 10,000 samples per second with demodulation frequencies in the MHz range.
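The square-root scaling can be illustrated with a short simulation: each pixel estimates the normalized intensity autocorrelation g2(τ) of its own speckle trace, and averaging the estimates over the array shrinks their statistical spread by roughly the square root of the pixel count. The synthetic traces, decorrelation time and array size below are assumptions chosen only to make the scaling visible.

```python
import numpy as np

rng = np.random.default_rng(1)

def g2_at(i, lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2."""
    return np.mean(i[:-lag] * i[lag:]) / i.mean() ** 2

# Synthetic speckle intensities: one exponentially correlated trace per pixel
# (decorrelation over ~50 samples), standing in for the diffused light.
n_pixels, n_samples, tau_c = 256, 20_000, 50
alpha = np.exp(-1 / tau_c)
noise = rng.standard_normal((n_pixels, n_samples))
field = np.zeros((n_pixels, n_samples))
field[:, 0] = noise[:, 0]
for t in range(1, n_samples):
    field[:, t] = alpha * field[:, t - 1] + np.sqrt(1 - alpha**2) * noise[:, t]
intensity = field ** 2

# g2 at one lag, estimated independently by every pixel. Averaging N pixels
# reduces the statistical error by ~sqrt(N): the motivation for a 2D array.
lag = 25
estimates = np.array([g2_at(trace, lag) for trace in intensity])
print("single-pixel std:       ", estimates.std())
print("std of the N-pixel mean:", estimates.std() / np.sqrt(n_pixels))
```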
In the context of embedded video surveillance, stand-alone left-behind image sensors are used to detect events with a high level of confidence but also with very low power consumption. With a steady camera, motion detection algorithms based on background estimation to find moving regions are simple to implement and computationally efficient. To reduce power consumption, the background is estimated from a down-sampled image made of macropixels. In order to extend the class of moving objects that can be detected, we propose an original mixed-mode architecture developed with an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized to implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive made it possible to implement the algorithms in a very compact way. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with an estimated power consumption of 1.8 mW.
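To show how delta modulation can serve as a calculation primitive for background estimation, the sketch below updates a macropixel background by unit steps and flags macropixels that deviate from it. It is a generic delta-modulation background update with assumed sizes and threshold, not the paper's 42-instruction implementation.

```python
import numpy as np

def update_background(frame, background, threshold=8):
    """One step of delta-modulation background estimation on macropixels.

    The background follows the scene with unit increments/decrements, which
    maps well onto small in-sensor SIMD processors; a macropixel is flagged
    as moving when it departs from the background by more than `threshold`.
    """
    background = background + np.sign(frame.astype(np.int16) - background)
    motion = np.abs(frame.astype(np.int16) - background) > threshold
    return background.astype(np.int16), motion

# Usage on a synthetic macropixel frame (e.g. 1920x1080 averaged down to 120x68).
rng = np.random.default_rng(0)
bg = rng.integers(90, 110, size=(68, 120), dtype=np.int16)
frame = bg.copy()
frame[30:40, 50:70] += 40          # a moving object brightens some macropixels
bg, motion = update_background(frame, bg)
print("macropixels flagged as moving:", int(motion.sum()))
```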
Extracting the salient regions of a still image, i.e. the areas likely to attract subjects' fixations, can be useful for adapting compression loss to human attention. In the literature, various algorithms have been proposed for saliency extraction, ranging from region-of-interest (ROI) or point-of-interest (POI) algorithms to saliency models, which also extract ROIs. Implementing such an algorithm within an image sensor requires evaluating both its complexity and its fixation-prediction performance. However, no pertinent criterion has been available to compare these algorithms in predicting human fixations, because of the different nature of ROIs and POIs. In this paper, we propose a novel criterion able to compare the prediction performance of ROI and POI algorithms. Aiming at the electronic implementation of such an algorithm, the proposed criterion is block-based, which is consistent with processing within image sensors. It also takes into account the salient surface, an important factor in electronic implementation, to reflect the prediction performance of the algorithms more accurately. The criterion is then used for comparison in a benchmark of several saliency models and ROI/POI algorithms. The results show that a saliency model, despite its higher computational complexity, gives better performance than the other ROI/POI algorithms.
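To make the idea of a block-based, surface-aware criterion concrete, the sketch below scores a binary saliency prediction against fixation points at block granularity and penalizes large salient surfaces. The block size, the penalty and the helper function are illustrative assumptions; they are not the criterion defined in the paper.

```python
import numpy as np

def block_hit_rate(pred_mask, fixations, block=16):
    """Block-based comparison of a saliency prediction with recorded fixations.

    `pred_mask` is a boolean saliency map (from an ROI mask or POI points);
    `fixations` is a list of (row, col) fixation coordinates. A block counts
    as a hit when it is predicted salient and receives at least one fixation;
    the score is penalized by the salient surface (fraction of blocks declared
    salient) so that flagging the whole image is not rewarded.
    """
    h, w = pred_mask.shape
    nb_r, nb_c = h // block, w // block
    pred_blocks = (pred_mask[:nb_r * block, :nb_c * block]
                   .reshape(nb_r, block, nb_c, block).any(axis=(1, 3)))
    fix_blocks = np.zeros_like(pred_blocks)
    for r, c in fixations:
        fix_blocks[min(r // block, nb_r - 1), min(c // block, nb_c - 1)] = True
    hits = np.logical_and(pred_blocks, fix_blocks).sum()
    hit_rate = hits / max(fix_blocks.sum(), 1)          # fixations covered
    surface = pred_blocks.mean()                        # salient surface
    return hit_rate, surface, hit_rate * (1 - surface)  # surface-penalized score

# Usage with a toy 256x256 prediction and a few fixation points.
pred = np.zeros((256, 256), dtype=bool)
pred[64:128, 64:160] = True
print(block_hit_rate(pred, [(80, 100), (90, 150), (200, 30)]))
```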
A fair knowledge of the human hand tremor responsible for camera shake, as well as a way to measure the impact of motion blur on human-perceived image quality, are mandatory to quantify the gain of image stabilization systems. In order to define specifications for the stabilization chain, we have derived a perceptual image quality metric for camera-shake-induced motion blur. This quality metric was validated with visual tests. Comparison with the ground truth shows a good fit in the simple case of straight-line motion blur and a fair fit in the more complex case of arbitrary motion blur. To the best of our knowledge, this is the first metric that can predict image quality degradation in the case of arbitrary blur. The quality model on which this metric is based gives valuable insights into the way motion blur impacts perceived quality and can help the design of optimal image stabilization systems.
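The two blur classes mentioned above can be reproduced numerically: the sketch below builds a point-spread function from a motion trajectory, either a straight line or a jittery path standing in for hand tremor. The kernel size and trajectories are assumptions; the perceptual metric itself is not reproduced here.

```python
import numpy as np

def blur_kernel(trajectory, size=15):
    """Build a motion-blur PSF from a camera-shake trajectory (in pixels)."""
    psf = np.zeros((size, size))
    c = size // 2
    for x, y in trajectory:
        row = int(round(np.clip(c + y, 0, size - 1)))
        col = int(round(np.clip(c + x, 0, size - 1)))
        psf[row, col] += 1
    return psf / psf.sum()

# Straight-line blur: uniform motion along one direction during exposure.
line = [(t, 0.4 * t) for t in np.linspace(-4, 4, 50)]
# Arbitrary blur: a random-walk path standing in for measured hand tremor.
rng = np.random.default_rng(0)
jitter = np.cumsum(rng.normal(0, 0.25, size=(50, 2)), axis=0)

print(blur_kernel(line).max(), blur_kernel(jitter).max())
```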
KEYWORDS: High dynamic range imaging, Image sensors, 3D image processing, Image compression, High dynamic range image sensors, 3D acquisition, Image quality, Data compression, Transistors, Integrated circuits
High Dynamic Range (HDR) image sensors aim at a dynamic range above 120 dB. Compared with classical architectures, this is obtained at the cost of a higher transistor count and thus a lower fill factor. Three-dimensional integrated circuits (3D-IC) change these constraints: photodiodes and electronics can be stacked on different layers, giving more processing power without compromising the fill factor.
In this paper, we propose an original architecture for a high-dynamic-range 3D image sensor with data reduction obtained by local compression. HDR acquisition is based on a floating-point coding shared by a group of pixels (macro-pixel), which also provides a first level of compression. A second level of compression is performed using a Discrete Cosine Transform (DCT). With this new concept, a good image quality (PSNR of about 40 dB) and a high dynamic range (120 dB) are obtained within a pixel area of 5 μm × 5 μm.
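A minimal sketch of the shared-exponent idea behind the macro-pixel floating-point coding is given below: one exponent is chosen per block and each pixel keeps only a short mantissa, which is what yields the first level of compression. The block size and bit widths are assumptions, not the parameters of the proposed sensor.

```python
import numpy as np

def encode_macropixel(values, mantissa_bits=8):
    """Floating-point HDR coding for one macro-pixel (illustrative sketch).

    All pixels of the macro-pixel share one exponent, set by the brightest
    value in the block; each pixel stores a truncated mantissa. Dark pixels
    inside a bright block lose precision, which is the lossy part of the
    scheme (hence the ~40 dB PSNR figure quoted above).
    """
    exponent = max(int(np.ceil(np.log2(values.max() + 1))) - mantissa_bits, 0)
    mantissas = (values >> exponent).astype(np.uint16)
    return exponent, mantissas

def decode_macropixel(exponent, mantissas):
    return mantissas.astype(np.uint32) << exponent

# A 4x4 macro-pixel with a ~120 dB spread between dark and bright pixels.
rng = np.random.default_rng(0)
block = rng.integers(0, 2**20, size=(4, 4), dtype=np.uint32)
exp, man = encode_macropixel(block)
rec = decode_macropixel(exp, man)
rel_err = np.abs(rec.astype(float) - block) / np.maximum(block, 1)
print("shared exponent:", exp,
      "| relative error on the brightest pixel:",
      rel_err[block == block.max()].max())
```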
This paper presents an on-chip 13-bit 10 MS/s analog-to-digital converter (ADC) specifically designed for an infrared bolometric image sensor. Bolometric infrared sensors are MEMS-based thermal sensors, which cover a large spectrum of infrared applications, ranging from night vision to predictive industrial maintenance and medical imaging. With the current move towards submicron technologies, the demand for more integrated, smarter sensors and microsystems has dramatically increased. This trend has strengthened the need for an on-chip ADC as the interface between the analog core and the digital processing electronics. However, designing an on-chip ADC dedicated to a focal plane array raises many questions about its architecture and its performance requirements. To take those specific needs into account, a high-level model was developed prior to the actual design.
In this paper, we present the trade-offs of ADC design linked to infrared key performance parameters and to the bolometric detection method. The original development scheme, based on system-level modeling, is also discussed. Finally, we present the actual design and the measured performances.
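As an example of what a high-level model provides before transistor-level design, the sketch below models an ideal 13-bit quantizer and checks its quantization-limited SNR and ENOB at the 10 MS/s rate. It covers only the ideal-quantizer part of such a model; the bolometer readout chain and the ADC non-idealities are not included, and the test-tone parameters are assumptions.

```python
import numpy as np

def ideal_adc(signal, n_bits, full_scale=1.0):
    """Behavioral model of an ideal n-bit ADC (quantization only)."""
    levels = 2 ** n_bits
    codes = np.clip(np.round((signal / full_scale + 0.5) * (levels - 1)),
                    0, levels - 1)
    return (codes / (levels - 1) - 0.5) * full_scale

# Quantization-limited SNR of a 13-bit converter sampling at 10 MS/s.
fs, f_in, n = 10e6, 123.4e3, 2 ** 14
t = np.arange(n) / fs
x = 0.49 * np.sin(2 * np.pi * f_in * t)          # near-full-scale test tone
q = ideal_adc(x, n_bits=13)
noise = q - x
snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
enob = (snr_db - 1.76) / 6.02
print(f"SNR = {snr_db:.1f} dB, ENOB = {enob:.2f} bits")  # ~ 6.02*13 + 1.76 dB
```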
On standard CMOS processes, basically two photosensors may be designed: photodiodes or vertical bipolar phototransistors. A trade-off must be found between the area of the sensor, its sensitivity and its bandwidth. In most designs, the high sensitivity of the sensor is a key point and leads to choosing a phototransistor-based solution. However, this choice is made at the expense of the bandwidth of the sensor. For small currents, analysis shows that the bandwidth is mainly proportional to the collector current and inversely proportional to the base-emitter capacitance Cbe. Hence, in the case of a floating-base bipolar and for a given current, the only way of reducing Cbe is to decrease the emitter area. On the other hand, the sensitivity must be preserved. We have proposed and tested an original sensor based on the splitting of phototransistors. The basic idea is to use minimum-size-emitter bipolar transistors and to increase their collector-base junction perimeter. Thanks to this design, for a given sensor area, the bandwidth has been improved by a factor of 3 while the sensitivity has been preserved. This solution has been successfully used in an operational retina performing stochastic computations at video rates. In particular, thanks to this design, we have been able to implement a 150 × 50 μm² optoelectronic random generator providing up to 100,000 random variables per second.
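The proportionality stated above can be turned into a rough estimate, f ≈ Ic / (2π·Vt·Cbe): with the photocurrent fixed by the collector-base junction, shrinking the emitter area (and hence Cbe) raises the bandwidth. All numerical values in the sketch below are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

# Simplified small-signal estimate of the floating-base phototransistor
# bandwidth. The capacitance density, emitter areas and photocurrent are
# assumed values; they only show how a smaller emitter raises the bandwidth.
VT = 0.026                 # thermal voltage at room temperature, V
CBE_PER_UM2 = 1e-15        # assumed base-emitter capacitance density, F/um^2
IC = 1e-9                  # assumed small collector (photo)current, A

def bandwidth(emitter_area_um2, ic=IC):
    cbe = CBE_PER_UM2 * emitter_area_um2
    return ic / (2 * np.pi * VT * cbe)

# One large emitter versus split, minimum-size emitters collecting the same
# photocurrent through the preserved collector-base junction.
print("large emitter (100 um^2): %.1f kHz" % (bandwidth(100) / 1e3))
print("split emitters (30 um^2): %.1f kHz" % (bandwidth(30) / 1e3))
```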