Multicamera systems are commonly applied in large field-of-view (FOV) measurements. However, two cameras may have non-overlapping FOVs in some scenes, and traditional binocular camera calibration methods cannot directly calibrate such cameras. To solve this problem, our study proposes a calibration method for binocular cameras with non-overlapping FOVs based on planar mirrors. Exploiting the reflection characteristics of an optical planar mirror, the proposed method can use the same target to calibrate non-overlapping cameras, thus overcoming the limitation that non-overlapping cameras cannot observe a common target. Experimental results show that the maximum RMS error does not exceed 0.53 mm. Hence, the proposed method is effective, and its measurement technique is simpler and more universal than those of other methods. In addition, it is applicable to a wide range of measurements.
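The geometric core of such mirror-based methods is that a planar mirror maps a real camera or target to a virtual one by a point reflection across the mirror plane. A minimal sketch of that reflection (with a hypothetical mirror plane, not the paper's calibration procedure) is:

```python
def reflect_point(p, n, d):
    """Reflect a 3D point p across the plane n . x + d = 0, where n is a
    unit normal. This Householder-style map x' = x - 2(n . x + d) n is
    how a planar mirror relates a real point to its virtual image."""
    nx, ny, nz = n
    dist = nx * p[0] + ny * p[1] + nz * p[2] + d  # signed distance to the plane
    return (p[0] - 2 * dist * nx,
            p[1] - 2 * dist * ny,
            p[2] - 2 * dist * nz)

# Hypothetical mirror plane z = 1 (normal (0, 0, 1), d = -1):
# a target point at depth z = 3 has its virtual image at z = -1.
print(reflect_point((0.5, 0.2, 3.0), (0.0, 0.0, 1.0), -1.0))
```

Reflecting twice across the same plane returns the original point, which is why observations of mirrored targets can be folded back into the real camera frames during calibration.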
Fringe projection technology is often used for three-dimensional (3D) measurement, but it struggles with highly reflective (highlight) surfaces. Polarization systems are usually used to suppress such highlights. Polarizing filters can eliminate highlights in the image, but they may also make the image too dark and degrade measurement accuracy; conversely, preserving measurement accuracy increases the operational complexity of the polarization system. A 3D measurement method based on a polarization-dependent camera intensity response function is proposed. The intensity response function of the camera under the polarization system is established. It avoids the complicated polarized bidirectional reflectance distribution function model and directly and quantitatively calculates the required angle between the transmission axes of the two polarizing filters. This is then combined with an image fusion algorithm to generate the optimal fringe pattern. Experimental results demonstrate that this method significantly eliminates the effects of highlights in the image. The fuzzy transition area between the black and white fringes is effectively reduced, and the edge information of the fringes is correctly restored. Moreover, the high signal-to-noise ratio and contrast of the image are retained when the polarizing filters are added.
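The physical relation underlying any such transmission-axis calculation is Malus's law, I = I0·cos²θ. As a sketch only (the paper's intensity response function is more involved than this single relation), the angle that attenuates transmitted intensity to a desired fraction can be computed as:

```python
import math

def crossed_polarizer_angle(target_ratio):
    """Angle in degrees between the transmission axes of two polarizing
    filters that attenuates transmitted intensity to target_ratio of the
    input, by Malus's law I = I0 * cos^2(theta)."""
    return math.degrees(math.acos(math.sqrt(target_ratio)))

# Halving the transmitted intensity requires a 45-degree axis offset.
print(round(crossed_polarizer_angle(0.5), 1))  # 45.0
```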
It is critical to accurately obtain the phase in a digital fringe projection system. When a coded fringe image is projected in a complex environment, stray light can disturb the projected light intensity. As a result, the fringe intensity captured by the camera may be defective, degrading measurement precision. Hence, we propose a polarization-based technology for strongly suppressing stray light. By processing fringe images in the frequency domain, this technology can effectively suppress the interference of stray light with the three-dimensional (3D) imaging system. The method is verified experimentally. The results show that it can accurately capture the 3D profile of real-world targets under stray-light interference.
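The frequency-domain processing idea can be illustrated in one dimension: a fringe signal occupies low frequency bins, so zeroing higher bins removes additive high-frequency disturbance. This toy sketch (a 1D stand-in for the paper's image-domain processing, with hypothetical frequencies) uses a naive DFT:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def lowpass(signal, keep):
    """Zero every DFT bin except the `keep` lowest frequencies (and their
    conjugate mirrors), then invert -- a minimal frequency-domain filter."""
    X = dft(signal)
    N = len(X)
    for k in range(N):
        if min(k, N - k) >= keep:
            X[k] = 0
    return idft(X)
```

With a unit-frequency fringe plus a higher-frequency disturbance, `lowpass(noisy, 2)` recovers the fringe almost exactly, because the disturbance lives entirely in the zeroed bins.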
With the advantages of wide range, non-contact operation, and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance, and other engineering practices. However, owing to complicated industrial environments, outside interference, a lack of object features, camera restrictions, and other limitations, visual pose estimation still faces many challenges. Focusing on these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape features of objects against a prior 3D model database of targets, the method recognizes the target. The pose of an object can then be determined from a monocular vision measurement model. Experimental results show that this method can estimate the pose of rigid objects from poor image information and provides a guiding basis for the operation of industrial robots.
Virtual binocular sensors, composed of a camera and catoptric mirrors, have become popular among machine vision researchers owing to their high flexibility and compactness. Usually, the tested target is projected onto the camera after different numbers of reflections, and feature matching is performed within a single image. To establish the geometric principles of the feature-matching process of a mirror binocular stereo vision system, we propose a single-camera model with an epipolar constraint for matching mirrored features. The constraint between the image coordinates of the real target and its mirror reflection is derived, and it can be used to eliminate non-matching points in the feature-matching process of a mirror binocular system. To validate the epipolar constraint model and evaluate its performance in practical applications, we performed realistic matching experiments and analysis using a mirror binocular stereo vision system. Our results demonstrate the feasibility of the proposed model and suggest a way to considerably improve the efficiency of the mirrored-feature matching process.
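An epipolar constraint of this kind is evaluated per candidate correspondence: a pair of points is a plausible match only if the bilinear form m2ᵀ F m1 is near zero. As a generic sketch (the F below is a hypothetical fundamental matrix for a rectified pair, not the mirror-specific constraint the paper derives):

```python
def epipolar_residual(F, p1, p2):
    """Residual of the epipolar constraint m2^T F m1 = 0 for a pixel
    correspondence p1 <-> p2, with homogeneous coordinate 1 appended.
    Matches with large residuals can be rejected as non-matching points."""
    m1 = (p1[0], p1[1], 1.0)
    m2 = (p2[0], p2[1], 1.0)
    Fm1 = [sum(F[i][j] * m1[j] for j in range(3)) for i in range(3)]
    return sum(m2[i] * Fm1[i] for i in range(3))

# Hypothetical F for a horizontally rectified pair: matches must share a row.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]
print(epipolar_residual(F, (10.0, 5.0), (40.0, 5.0)))  # same row -> 0.0
```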
Locality sensitive discriminant analysis (LSDA) is a method that considers both the discriminant and geometric structure of the data. A within-class graph and a between-class graph are first constructed to capture both the geometric and discriminant structure of the data manifold. A proportionality constant is then used to weight the relative importance of the two graphs. Finally, a criterion is used to choose a map under which connected points of the within-class graph stay as close as possible while connected points of the between-class graph stay as distant as possible. The key technique in LSDA is nearest-neighbor graph construction. In this paper, we compare two different nearest-neighbor graph construction methods. The experimental results demonstrate that splitting a single nearest-neighbor set into equally sized within-class and between-class graphs requires less computation, while constructing the within-class and between-class graphs from differently sized nearest-neighbor sets can improve accuracy.
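The graph-construction step being compared can be sketched as follows. This simplified version finds each point's k nearest neighbors and routes each edge into the within-class or between-class graph by label agreement (the abstract's two variants differ in how the neighbor budget is split, which this sketch does not reproduce):

```python
def knn_graphs(points, labels, k):
    """Build within-class and between-class k-NN edge sets, the graph
    construction step LSDA starts from. Edges are undirected pairs."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    within, between = set(), set()
    for i, p in enumerate(points):
        neighbors = sorted((j for j in range(len(points)) if j != i),
                           key=lambda j: dist2(p, points[j]))[:k]
        for j in neighbors:
            edge = (min(i, j), max(i, j))
            (within if labels[i] == labels[j] else between).add(edge)
    return within, between

# Two well-separated classes: with k=1 every edge is within-class.
w, b = knn_graphs([(0, 0), (0, 1), (5, 0), (5, 1)], [0, 0, 1, 1], 1)
print(w, b)
```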
Autonomous space object tracking in a complex space environment is a popular topic in space engineering research. However, it is a challenging task for measurement equipment to implement navigation in such an environment and to track an object with an unknown trajectory. An algorithm for space object tracking and azimuth determination using star tracker technology is proposed for the first time in this paper. It includes two major steps: star tracking and object tracking. In the star tracking stage, a motion-vector algorithm is explored for the first time to track stars in sequence images, which can track stars under a complex space environment. With the tracked stars, the star tracker's attitude can be updated in real time. In the object tracking stage, with the obtained attitude of the star tracker, a Kalman filter (KF) model is built to predict the object state. It takes the measured azimuth as the observation rather than the object coordinates in the CCD plane, which avoids the computational complexity of matrix derivatives in the traditional extended Kalman filter, and the convergence rate of the filter is consequently improved. The azimuth and velocity of the object can be updated by the KF prediction process. In addition, different levels of background noise were added to simulate the complex space environment, and an artificial object with a nonlinear trajectory in the CCD plane was added to the frames. The feasibility of the proposed method is validated using synthesized sequence images containing object motion. The simulation results show that the proposed algorithm can track stars and objects successfully.
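A KF on a scalar azimuth observation, as described, stays linear because the measurement matrix is constant. The following is a minimal constant-velocity sketch with made-up noise parameters, not the paper's filter design:

```python
def kf_step(state, P, z, q=1e-4, r=1e-2, dt=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter on a
    scalar azimuth measurement z. state = [azimuth, rate]; P is the 2x2
    covariance; q, r are simplified process/measurement noise levels."""
    az, rate = state
    # Predict with F = [[1, dt], [0, 1]]; Q approximated as q on the diagonal.
    az_p = az + rate * dt
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # Update with H = [1, 0]: only azimuth is observed, so no Jacobian needed.
    S = P[0][0] + r
    K = (P[0][0] / S, P[1][0] / S)
    innov = z - az_p
    state = [az_p + K[0] * innov, rate + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return state, P
```

Feeding it a steadily increasing azimuth drives both the azimuth estimate and the rate estimate to the true values, illustrating how the velocity is recovered from azimuth observations alone.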
In this paper, a novel ellipse extraction method based on edge segments formed by edge detection in an image is proposed. Edge segments are split into small pieces by polygonal approximation, and adjacent pieces are merged into elliptical arc segments according to defined constraints. Arc segments belonging to the same ellipse are then grouped into one arc segment by an improved RANSAC algorithm. Finally, accurate ellipse parameters are obtained by least-squares fitting of the edge points lying on the same grouped arc segment. Experiments on synthetic and real images verify the good performance of the proposed method on images with complex backgrounds.
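The final fitting step can be sketched as an algebraic least-squares conic fit. This simplified version fits A·x² + B·xy + C·y² + D·x + E·y = 1 by solving the 5×5 normal equations (ellipse-specific constrained fits differ in detail, and the paper's exact formulation is not given in the abstract):

```python
def fit_conic(points):
    """Least-squares fit of the conic A x^2 + B xy + C y^2 + D x + E y = 1
    to edge points. Returns [A, B, C, D, E]. Solves the normal equations
    by Gaussian elimination with partial pivoting."""
    M = [[0.0] * 5 for _ in range(5)]
    b = [0.0] * 5
    for x, y in points:
        row = [x * x, x * y, y * y, x, y]  # monomial basis for one point
        for i in range(5):
            b[i] += row[i]
            for j in range(5):
                M[i][j] += row[i] * row[j]
    for col in range(5):                   # forward elimination
        piv = max(range(col, 5), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 5):
            f = M[r][col] / M[col][col]
            for c in range(col, 5):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    a = [0.0] * 5                          # back substitution
    for r in range(4, -1, -1):
        a[r] = (b[r] - sum(M[r][c] * a[c] for c in range(r + 1, 5))) / M[r][r]
    return a
```

For points sampled on the circle x² + y² = 4, the fit recovers A = C = 0.25 with the cross and linear terms vanishing, as expected.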
In this paper, an adaptive image enhancement method based on neural networks is proposed. The low-frequency components of the image are obtained with an average filter, and the high-frequency components are obtained by subtracting the low-frequency components from the original image. The enhanced image is obtained by adding to the original image the high-frequency components multiplied by a scale factor. The mask size of the average filter and the scale factor are given by the constructed neural network in terms of the mean and standard deviation of the image. Real experiments have been conducted to test the proposed method, and very good results have been obtained.
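The filter-subtract-add pipeline described above is classic unsharp masking. A 1D sketch (with the mask size and scale factor as fixed inputs, where the paper obtains them from the neural network) looks like this:

```python
def enhance_row(row, mask_size, scale):
    """1D unsharp masking: low = moving average, high = row - low,
    enhanced = row + scale * high. mask_size is the averaging window
    width; edges use a truncated window."""
    half = mask_size // 2
    out = []
    for i, v in enumerate(row):
        window = row[max(0, i - half): i + half + 1]
        low = sum(window) / len(window)       # low-frequency component
        out.append(v + scale * (v - low))     # add back scaled high frequencies
    return out

# A step edge gains over/undershoot, i.e. the edge is sharpened.
print(enhance_row([0, 0, 0, 10, 10, 10], 3, 1.0))
```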
Camera calibration is the basis of machine vision, and the accuracy of feature point detection and the automation of the calibration process are very important in high-precision measurement. A planar pattern with circle features has advantages over a chessboard calibration pattern: it is easy to manufacture and insensitive to thresholding. Although perspective projection distorts the detected center of a projected circle, the feature point corresponding to the image of the circle's center can be computed accurately from the geometric and algebraic constraints of projected concentric circles. After the feature points are detected, the set of feature points on the calibration pattern must be matched to the points in the camera images; manual or user-assisted methods are often used to solve this matching problem. This paper also describes a simple technique based on Delaunay triangulation to automatically match the feature points recovered from concentric circles. Experiments show that the method performs excellently in actual camera calibration using a concentric circles array.
A novel concept of automatic recognition of trouble-free sleeper springs is proposed, and an AdaBoost algorithm based on Haar features is applied to sleeper spring recognition in the Trouble of moving Freight car Detection System (TFDS). In the recognition system, the feature set of sleeper springs is determined by Haar features and selected by the AdaBoost algorithm. To recognize and select trouble-free sleeper springs from all the captured dynamic images, a cascade of classifiers is established by searching the dynamic images. The number of detected images is drastically reduced and the recognition efficiency is improved by the trouble-free recognition concept. Experiments show that the proposed method is characterized by simple features, high efficiency, and robustness. It is highly robust against noise as well as translation, rotation, and scale transformations of objects, and it is highly stable on images of poor quality, such as those with low resolution, partial occlusion, poor illumination, or overexposure. The recognition time for a 640×480 image is about 16 ms, the correct detection rate is as high as about 97%, and the miss and error detection rates are very low. The proposed method can recognize sleeper springs in all-weather conditions, which advances the engineering application of TFDS.
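Haar features of the kind selected by AdaBoost are evaluated in constant time via an integral image. A minimal sketch of that evaluation (the cascade training itself is omitted) is:

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] is the sum of img
    over rows < y and cols < x, so any rectangle sum costs 4 lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Horizontal two-rectangle Haar feature: left-half sum minus
    right-half sum (w must be even). Weak classifiers threshold this."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

# A vertical edge (bright left half) yields a strong positive response.
img = [[1, 1, 0, 0] for _ in range(4)]
print(haar_two_rect(integral_image(img), 0, 0, 4, 4))  # 8
```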
This paper proposes a camera calibration method that can be used in stereo systems as well as in stereo head navigation. A pinhole camera model and a two-dimensional planar target are considered. An iterated extended Kalman filter (IEKF) is used to estimate the camera parameters. The method takes the observed feature points of images as the filter input and the estimated intrinsic and extrinsic camera parameters as the filter output. Both computer simulations and real-data experiments have been used to test the proposed method, and good results have been obtained. The RMS error of the absolute distance between reprojected feature points is about 0.09 pixels in the real experiments. The experimental results show that the IEKF is also a feasible optimization algorithm for online camera calibration.
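The measurement model such a filter linearizes at each iteration is the pinhole projection itself. A sketch with hypothetical intrinsic values (distortion omitted, as is the filter machinery):

```python
def project(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates:
    u = fx * X / Z + cx, v = fy * Y / Z + cy. Reprojection residuals of
    such points are what calibration minimizes."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Hypothetical intrinsics: 800 px focal lengths, principal point (320, 240).
print(project((0.1, -0.05, 2.0), 800.0, 800.0, 320.0, 240.0))
```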
A technique is developed to calibrate a camera that is mounted on a mobile robot and observes a distant scene. A flexible calibration target, on which calibration feature points are generated according to cross-ratio invariance, is employed in this paper. Because it can be extended as needed, the target has a flexible size and can be used at an arbitrary distance from the camera. To guarantee the accuracy of the generated calibration points, the coordinates of the principal point and the first two orders of radial lens distortion coefficients are determined in advance. Experimental results reveal the effectiveness of the presented method and higher accuracy than the traditional method.
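Cross-ratio invariance, the property the target construction relies on, states that for four collinear points the ratio below is unchanged by perspective projection. A minimal sketch, with the projective map in the example chosen arbitrarily for illustration:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by their 1D
    coordinates along the line. Invariant under perspective projection,
    which lets known ratios generate calibration points at any scale."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Equally spaced points 0, 1, 2, 3 have cross-ratio 4/3, and an arbitrary
# projective map f(x) = (2x + 1) / (x + 3) preserves it.
f = lambda x: (2 * x + 1) / (x + 3)
print(cross_ratio(0, 1, 2, 3), cross_ratio(f(0), f(1), f(2), f(3)))
```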