The deflection, which reflects the vertical stiffness of a bridge, plays an important role in the structural evaluation and health monitoring of bridges. In the past 20 years, bridge deflection measurement methods based on computer vision and photogrammetry have gradually been applied to field measurement owing to the advantages of noncontact measurement, simple experimental setup, and easy installation. The technical research progress of vision-based bridge deflection measurement is reported from four aspects: basic principles, measurement methods, influencing factors, and applications. Basic principles mainly include camera calibration, three-dimensional (3D) stereo vision, photogrammetry, and feature detection and matching. For measurement methods, single-camera two-dimensional measurement, dual-camera 3D measurement, quasistatic measurement based on photogrammetry, multipoint dynamic measurement based on displacement-relay videometrics, and deflection measurement based on a UAV platform are introduced. The section on influencing factors summarizes the work of many researchers on the effects of camera imaging factors, calibration factors, algorithm factors, and environmental factors on measurement results. Field measurement results at different measurement distances, together with the corresponding measurement accuracy, are presented in the section on applications. Finally, future development trends of vision-based bridge deflection measurement are discussed.
1. Introduction

In the acceptance of new bridges and the health monitoring of bridges in service, the deflection is usually considered a basic parameter that must be measured because it is closely related to the bearing capacity of the bridge. Existing bridge deflection measurement methods mainly rely on displacement transducers, dial gauges, connecting pipes, precision levels, digital levels, inclinometers, total stations, microwave radars, the global positioning system (GPS), and other measuring equipment, as shown in Fig. 1. Table 1 introduces the typical application scenarios, advantages, and limitations of these deflection measurement methods. It can be seen from Table 1 that the traditional contact measurement methods have certain limitations, which make it difficult to achieve dynamic deflection measurement and to meet the engineering requirements of real-time measurement and long-term monitoring. In addition, although an accelerometer can also measure the dynamic deflection of a bridge, obtaining the displacement by integrating the acceleration twice introduces a large error.1,2

Table 1. Commonly used bridge deflection measurement methods.
For some noncontact measuring equipment, such as the laser Doppler vibrometer,3,4 GPS,5,6 and radar interferometry,7 although real-time measurement can be achieved, the total cost of the measurement system is often very high. Besides, the low measurement accuracy of GPS prevents it from being applied to bridges with large stiffness and small deflection. To break through the limitations of the existing bridge deflection measurement methods, measurement methods based on computer vision and photogrammetry have gradually been applied to bridge deflection measurement.8 By detecting and tracking the corresponding points on two images taken before and after deformation, the bridge deflection can be determined. In past years, several researchers have reviewed the methods and applications of vision-based structural health monitoring (SHM) and condition assessment for civil infrastructure. Xu and Brownjohn9 primarily reviewed vision-based displacement measurement methods from the perspective of video-processing procedures, which are composed of camera calibration, target tracking, and structural displacement calculation. Ye et al.10 provided a review of the applications of machine vision-based technology in the field of SHM of civil infrastructure. Feng et al.11 mainly summarized the general principles of vision-based sensor systems and discussed the measurement error sources and mitigation methods in laboratory and field experimentation. In addition, Jiang et al.12 reviewed only the development and applications of close-range photogrammetry in bridge deformation and geometry measurement. However, these reviews9–12 are mainly organized around application fields, with little introduction to the different measurement methods, whose advantages and disadvantages have not been compared and discussed. The purpose of this paper is to fill this gap and give a technical review of vision-based bridge deflection measurement from the perspectives of basic principles, measurement methods, influencing factors, and applications.

2. Basic Principles

The basic components of the vision-based bridge deflection measurement system are shown in Fig. 2; they include the measuring equipment, software algorithm, data processing, and data transmission parts. Data processing is generally conducted by computers, and data transmission can be divided into wired and wireless transmission. Depending on the measurement method, the measuring equipment has different components, and the software algorithm is mainly composed of camera calibration, three-dimensional (3D) stereo vision, photogrammetry, feature detection, and feature matching. In the software algorithm part, camera calibration and feature detection and matching are the two most important elements, because the relationship between the image coordinate system and the world coordinate system can be established only after accurate camera calibration,13–16 and the calculation of bridge deflection must be based on correct detection and matching of bridge surface features.17

2.1. Camera Calibration

Camera calibration is the process of determining the correspondence between the image coordinate system and the world coordinate system and of correcting distorted images. For industrial camera lenses, the original image will be distorted due to manufacturing error, assembly deviation, and other factors, so measurement accuracy cannot be guaranteed.
In camera lens distortion, radial distortion and tangential distortion are usually considered.18 As shown in Figs. 3 and 4, radial distortion moves pixel points toward or away from the center of the image plane, and tangential distortion causes tangential deviation of pixel points. Usually, lens distortion can be calibrated at the same time as the camera's intrinsic parameters.19–21 For complex lens distortion that cannot be described by the distortion model, the lens distortion can be calibrated separately in advance.22,23 To correct lens distortion, distortion coefficients need to be defined to describe the mapping between corresponding points with and without distortion. After the calibration of lens distortion, the undistorted pixel position can be derived from the distorted pixel position according to this mapping, and the distortion-free displacement and deformation can then be calculated. For dual-camera 3D measurement, the distortion-corrected pixel positions must also be used for 3D reconstruction.

2.1.1. Single-camera calibration with a simplified model

Due to the large field of view in bridge deflection measurement, a calibration board cannot be used to calibrate the extrinsic parameters; therefore, a single-camera calibration method based on a simplified model is often used. Figure 5 is the schematic diagram of the simplified pinhole camera model. When the optical axis of the camera is perpendicular to the target plane,

$$D = \frac{L}{l}\,d, \qquad (1)$$

where $l$ is the image distance, $L$ is the object distance, $D$ is the displacement of a point in the target plane, and $d$ is the displacement of the corresponding point on the image plane. For long-distance measurement of bridge deflection, the image distance $l$ is approximately equal to the focal length $f$ of the lens, and the object distance $L$ can be measured by a laser rangefinder.
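To make Eq. (1) concrete, the following minimal Python sketch converts a measured pixel displacement into a physical deflection; the focal length, object distance, and pixel pitch used here are hypothetical illustration values, not parameters from any experiment in this review.

```python
# Minimal sketch of Eq. (1): a pixel displacement d on the image plane maps
# to a physical displacement D = (L / l) * d on the target plane, with the
# image distance l taken as the focal length f for long-distance measurement.
# All numerical values below are hypothetical.

def pixel_to_deflection_mm(dv_pixels, L_m, f_mm, pixel_pitch_um):
    """Convert a vertical pixel displacement to a deflection in mm."""
    d_mm = dv_pixels * pixel_pitch_um * 1e-3  # displacement on the sensor, mm
    l_mm = f_mm                               # image distance ~ focal length
    return (L_m * 1e3 / l_mm) * d_mm          # D = (L / l) * d

# Example: a 0.02-pixel motion observed from 50 m with a 200-mm lens
# and 5.5-um pixels corresponds to about 0.03 mm of deflection.
print(pixel_to_deflection_mm(0.02, L_m=50.0, f_mm=200.0, pixel_pitch_um=5.5))
```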
2.1.2. Single-camera calibration

The main purpose of single-camera calibration is to calibrate the camera's intrinsic parameters and lens distortion. Figure 6 is the schematic diagram of the pinhole camera model; the coordinate transformations shown in Fig. 6 represent the process from the world coordinate system to the camera coordinate system and finally to the image coordinate system. If a point in the world coordinate system is expressed in homogeneous form as $\tilde{M} = (X, Y, Z, 1)^{T}$ and the corresponding point in the image coordinate system as $\tilde{m} = (u, v, 1)^{T}$, the relationship between the two points can be expressed as

$$s\,\tilde{m} = P\,\tilde{M} = K\,[R \mid t]\,\tilde{M}, \qquad (2)$$

where $s$ is the scale factor; $P$ is the camera projection matrix; $R$ and $t$ are the extrinsic parameters of the camera, which respectively represent the rotation and translation of the world coordinate system relative to the camera coordinate system; and $K$ is the camera intrinsic parameter matrix, which can be expressed as

$$K = \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (3)$$

where $f_x$ and $f_y$ respectively represent the equivalent focal lengths of the lens along the $u$ and $v$ axes of the image, $\gamma$ represents the degree of skewness of the two image axes, and $(u_0, v_0)$ are the coordinates of the principal point.

For the calibration of camera intrinsic parameters, the checkerboard calibration method proposed by Zhang is often used.25 Since it is a plane calibration ($Z = 0$), Eq. (2) can be rewritten as

$$s\,\tilde{m} = K\,[\,r_1 \ \ r_2 \ \ t\,]\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = H\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}, \qquad (4)$$

where $r_1$ and $r_2$ are the first two columns of the rotation matrix $R$ and $H$ is called the homography matrix. Although the matrix $H$ has nine unknown parameters, only eight independent parameters need to be solved if $H$ is normalized so that $h_{33} = 1$. To solve the homography matrix, four pairs of corresponding points in the image coordinate system and the world coordinate system must be known for each calibration board position. If there are three calibration board attitudes, the intrinsic parameters and the relative extrinsic parameters between the camera coordinate system and the plane calibration board coordinate system can be solved. Lens distortion is usually calculated by nonlinear optimization, simultaneously with the extrinsic parameters of the calibration board at different attitudes and the intrinsic parameters of the camera. In addition to known-pattern calibration, self-calibration methods26–28 can also be used for camera intrinsic parameter calibration. It should be noted that the camera's intrinsic parameters are generally considered to remain almost unchanged after calibration, so a camera with calibrated intrinsic parameters can be brought to the experimental site for measurement.

2.1.3. Extrinsic parameter calibration of two cameras

If the intrinsic parameters of the two cameras have been calibrated and the image distortions have been corrected, it can be seen from Eq. (2) that $s_1\,\tilde{m}_1 = K_1\,[R_1 \mid t_1]\,\tilde{M}$ and $s_2\,\tilde{m}_2 = K_2\,[R_2 \mid t_2]\,\tilde{M}$ for camera 1 and camera 2, respectively. According to the epipolar constraint, $\tilde{m}_1$ and $\tilde{m}_2$ satisfy

$$\tilde{m}_2^{T}\,F\,\tilde{m}_1 = 0, \qquad (5)$$

where $F$ is the fundamental matrix. If the world coordinate system is aligned with the camera coordinate system of camera 1, then $R_1 = I$ and $t_1 = 0$. Writing $R = R_2$ and $t = t_2$, and denoting by $[t]_{\times}$ the antisymmetric matrix of $t$, the fundamental matrix satisfies

$$F = K_2^{-T}\,[t]_{\times}\,R\,K_1^{-1}. \qquad (6)$$

After the calibration of the camera intrinsic parameters, the essential matrix $E = [t]_{\times}R = K_2^{T} F K_1$ can be calculated from the fundamental matrix $F$, and the relative extrinsic parameters of camera 2 with respect to camera 1 can then be extracted by singular value decomposition. Nistér pointed out that the essential matrix can be solved if there are at least five pairs of corresponding points in the image coordinate systems of the two cameras.29 In Ref. 30, it was proposed that higher accuracy can be achieved through nonlinear iterative optimization of the relative extrinsic parameters between the two cameras, solved from speckle matching and the coplanarity equation. After the relative extrinsic parameters are calculated, the scale of the translation vector can be determined by a scale factor.
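As a hedged sketch of this extrinsic calibration step, the following Python/OpenCV fragment estimates the essential matrix from matched image points and decomposes it into a relative pose; the intrinsic matrix, the synthetic point data, and the scale value are placeholders, not values from the text.

```python
import cv2
import numpy as np

K = np.array([[4000.0, 0.0, 1024.0],
              [0.0, 4000.0, 768.0],
              [0.0, 0.0, 1.0]])   # assumed pre-calibrated intrinsics

def project(R, t, X):
    """Project Nx3 world points with K [R | t]; returns Nx2 pixel coords."""
    x = (K @ (R @ X.T + t)).T
    return x[:, :2] / x[:, 2:3]

# Synthetic stand-ins for matched, distortion-corrected points (>= 5 needed).
rng = np.random.default_rng(0)
pts3d = rng.uniform([-5.0, -5.0, 20.0], [5.0, 5.0, 60.0], (50, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.02, 0.0]))   # true relative pose
t_true = np.array([[-0.5], [0.0], [0.0]])
pts1 = project(np.eye(3), np.zeros((3, 1)), pts3d)
pts2 = project(R_true, t_true, pts3d)

# Essential matrix with RANSAC to reject abnormal matches, then decompose it.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# t is recovered only up to scale; one known distance in the field of view or
# a laser-rangefinder reading fixes the metric scale (value hypothetical).
t_metric = t * 1.25
```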
2.2. Three-Dimensional Stereo Vision

After the image distortion is corrected (if the distortion model is defined on distorted coordinates, the inverse mapping can be used for this correction), according to Eq. (2) the relationship between a point in the world coordinate system and its corresponding image point can be expressed by two linear equations. If a point is seen by two cameras at the same time and the world coordinate system is established on the camera coordinate system of the left camera, the relationship between the point and the corresponding points in the image coordinate systems of the two cameras can be expressed by four equations, as shown in Eq. (7):

$$s_1\,\tilde{m}_1 = K_1\,[\,I \mid 0\,]\,\tilde{M}, \qquad s_2\,\tilde{m}_2 = K_2\,[\,R \mid t\,]\,\tilde{M}, \qquad (7)$$

where each projection contributes two independent linear equations; the subscripts 1 and 2 denote the left and right cameras; $(u_{0k}, v_{0k})$ are the coordinates of the principal points in the image coordinates; $f_{xk}$ and $f_{yk}$ are the scale factors along the image axes; $\gamma_k$ describes the skewness of the two image axes; $t$ is the translation vector and $R$ the rotation matrix from the left to the right camera coordinate system; $\tilde{m}_1 = (u_1, v_1, 1)^{T}$ and $\tilde{m}_2 = (u_2, v_2, 1)^{T}$ are the image coordinates of the matched points; and $\tilde{M}$ contains the reconstructed 3D coordinates $(X, Y, Z)$ in the left camera coordinate system. In Eq. (7), the intrinsic and extrinsic parameters of the cameras can be determined by calibration, and the coordinates of the image points can be obtained by matching the images of the two cameras. Therefore, the 3D coordinates of the point can be directly solved from the four equations.
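A minimal sketch of this linear triangulation is given below, assuming the intrinsic matrices and the relative pose have already been calibrated as above; the matched image points and the baseline are placeholder values.

```python
import cv2
import numpy as np

K1 = K2 = np.array([[4000.0, 0.0, 1024.0],
                    [0.0, 4000.0, 768.0],
                    [0.0, 0.0, 1.0]])
R = np.eye(3)                            # relative rotation (placeholder)
t = np.array([[0.5], [0.0], [0.0]])      # baseline component, m (placeholder)

P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # left camera: K1 [I | 0]
P2 = K2 @ np.hstack([R, t])                         # right camera: K2 [R | t]

pts1 = np.array([[1100.0], [700.0]])     # matched point (u1, v1), left image
pts2 = np.array([[1140.0], [700.0]])     # matched point (u2, v2), right image

# Solve the four linear equations of Eq. (7) for (X, Y, Z).
Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)      # homogeneous 4-vector
X, Y, Z = (Xh[:3] / Xh[3]).ravel()       # 3D point in the left camera frame
print(X, Y, Z)
```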
2.3. Monocular Photogrammetry

Monocular photogrammetry can reconstruct the 3D coordinates of mark points;31 it is a method to accurately determine the position of a target in 3D space by using digital image processing and photogrammetry to process images of the mark points taken from different positions and angles. Its main steps include image preprocessing, relative orientation, and bundle adjustment. A monocular photogrammetry system is generally composed of a digital camera, marking points, optical reference scales, wireless transmission devices, and computing software. As shown in Fig. 7, based on the principles of multiview vision, the control points in different views can be matched with the help of the epipolar constraint and then reconstructed. The original parameters of the digital camera and lens can be taken as the initial values of the intrinsic parameters. Based on the extrinsic parameter calibration method in Sec. 2.1.3, the relative extrinsic parameters between the camera coordinate systems at different angles can be determined, provided some coded marking points are fixed on the target surface and some of the same marking points are visible in a series of images taken at arbitrary camera positions and angles. With the extrinsic parameters and the original intrinsic parameters of the camera, the 3D coordinates of the marking points can be reconstructed. In addition, using the self-calibrating bundle adjustment method, with each bundle as the basic adjustment unit and the coordinates of the image points as the observation values, the optimization objective function can be written as follows according to the collinearity condition:

$$\min \sum_{j=1}^{m}\sum_{i=1}^{n} \left\| m_{ij} - \hat{m}_{ij} \right\|^{2}, \qquad (8)$$

where $n$ represents the number of coded marking points; $j = 1, \ldots, m$ indexes the different camera angles; $m_{ij}$ represents the measured image coordinates of the $i$'th coded marking point under the $j$'th viewing angle; and $\hat{m}_{ij}$ represents the image coordinates of the coded marking points transformed by the projection relationship. High-precision 3D coordinates of the spatial points are obtained after the adjustment is carried out over the whole region, i.e., after the intrinsic and extrinsic parameters of the camera and the coordinates of the spatial points are jointly optimized.

2.4. Feature Detection and Matching

Feature detection and matching of images is the key to realizing deflection measurement. Commonly used image features include gray features, feature points, gradient features, and geometric features. For geometric features, some scholars have proposed deflection measurement methods based on the sampling Moiré pattern.32,33 Gray features place certain requirements on the gray-level information of the measured target surface; the most representative technology using gray features is digital image correlation (DIC), which has the advantages of subpixel displacement positioning and high accuracy. Compared with gray features, feature points place lower requirements on the gray-level information of the object surface and offer scale invariance and rotation invariance, so they perform well in the measurement of complex deformation. By detecting the image features and matching the corresponding features of each frame, the deflection can be calculated. For the field measurement of bridge deflection, the pixel displacement will not be too large, but feature detection and matching in actual measurement must also cope with factors such as occlusion by rain, snow, and fog, and airflow disturbance.

2.4.1. Grayscale features and matching based on digital image correlation

At present, the commonly used feature detection and matching algorithm is template matching, whose main steps are to select the matching region in the reference image first and then use a correlation function to match the template with the target region in the deformed image. Tong compared several correlation functions and pointed out that the zero-normalized sum of squared differences (ZNSSD) correlation function has the best robustness and reliability:34

$$C_{\mathrm{ZNSSD}} = \sum_{i=-M}^{M}\sum_{j=-M}^{M}\left[\frac{f(x_i, y_j) - f_m}{\Delta f} - \frac{g(x_i', y_j') - g_m}{\Delta g}\right]^{2}, \qquad (9)$$

where $2M + 1$ refers to the width of the template; $f(x_i, y_j)$ and $g(x_i', y_j')$ represent the gray values of pixel points in the reference and deformed image templates, respectively; $f_m$ and $g_m$ represent the average gray values of the reference and deformed image templates, respectively; $\Delta f = \sqrt{\sum_{i}\sum_{j}[f(x_i, y_j) - f_m]^2}$ and $\Delta g = \sqrt{\sum_{i}\sum_{j}[g(x_i', y_j') - g_m]^2}$; and the mapping $(x_i, y_j) \rightarrow (x_i', y_j')$ is the shape function of the deformed template relative to the reference template. For bridge deflection measurement, a zero- or first-order shape function is enough to meet the measurement requirements.35

Matching can be divided into integer-pixel matching and subpixel matching. For bridge deflection measurement, the displacement between adjacent images is usually not particularly large, so a conventional integer-pixel matching algorithm can provide an accurate initial guess for subpixel registration. Pan et al.17 compared several commonly used subpixel matching search methods and pointed out that the Newton–Raphson iterative method has high precision. Owing to its computational efficiency and robustness, the inverse compositional Gauss–Newton (IC-GN) algorithm is generally used for matching.36–38

2.4.2. Feature detection and matching based on feature points

Harris,39 SIFT,40 and SURF41 are widely used in feature detection. After the feature points are detected, the Euclidean distance is used to measure similarity:40 the smaller the Euclidean distance, the higher the similarity. The matching process can select the optimal feature point by traversing all feature points, but this is slow. To improve the matching speed, it is common to extract the nearest-neighbor points for matching and then select the optimal matching point.40 However, abnormal matching results often remain and need to be removed; the random sample consensus (RANSAC) method42 is commonly used to eliminate abnormal matches.
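The following sketch strings these steps together with OpenCV: SIFT detection, nearest-neighbor matching with a ratio test, and RANSAC rejection of abnormal matches. The image file names are hypothetical, and the resulting integer-pixel motion would normally seed a subpixel DIC refinement.

```python
import cv2
import numpy as np

img_ref = cv2.imread("frame_ref.png", cv2.IMREAD_GRAYSCALE)  # reference frame
img_def = cv2.imread("frame_def.png", cv2.IMREAD_GRAYSCALE)  # deformed frame

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_ref, None)
kp2, des2 = sift.detectAndCompute(img_def, None)

# Two nearest neighbours per descriptor (Euclidean distance); keep matches
# passing Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC eliminates abnormal matches; inlier displacements give the motion.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
dv = (dst - src)[inlier_mask.ravel() == 1, 0, 1]   # vertical components, px
print(float(np.mean(dv)))
```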
2.4.3. Orientation code matching based on gradient features

Orientation code matching (OCM) computes an orientation code for each pixel and matches gradient information, and has been proved to have rotation invariance and brightness invariance.33 Assuming that the gray value of each point in an image is represented by $I(x, y)$ and its partial derivatives are $I_x = \partial I/\partial x$ and $I_y = \partial I/\partial y$, the gradient angle corresponding to the point with pixel coordinates $(x, y)$ is

$$\theta(x, y) = \tan^{-1}\frac{I_y}{I_x}, \qquad (10)$$

and its corresponding orientation code is defined as

$$c(x, y) = \begin{cases} \left\lfloor \dfrac{\theta(x, y)}{\Delta\theta} \right\rfloor, & \sqrt{I_x^2 + I_y^2} \ge \Gamma, \\[6pt] N = \dfrac{2\pi}{\Delta\theta}, & \text{otherwise}, \end{cases} \qquad (11)$$

where $\Delta\theta$ is the preset sector width, generally taken as $\pi/8$ (i.e., 16 orientation codes), and $\Gamma$ is a threshold used to ignore low-contrast regions, whose purpose is to suppress the interference of such regions on matching. However, if $\Gamma$ is too large, gradient feature information will be lost, so its range must be controlled reasonably.

After calculating the orientation codes, histograms of the orientation codes are employed to approximate the rotation angle of the object; the object template is then rotated by the estimated angle to realize template matching. To reduce the error, a bilinear interpolation of the obtained gradient angles has also been proposed to achieve subpixel accuracy.43
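A minimal NumPy sketch of the orientation-code computation follows, assuming 16 codes (sector width $\pi/8$) and an arbitrary low-contrast threshold; both values are typical choices rather than prescribed ones.

```python
import numpy as np

def orientation_codes(img, n_codes=16, gamma=10.0):
    """Quantize gradient angles into orientation codes, per Eq. (11)."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)                   # partial derivatives I_y, I_x
    theta = np.arctan2(gy, gx) % (2.0 * np.pi)  # gradient angle in [0, 2*pi)
    codes = np.floor(theta / (2.0 * np.pi / n_codes)).astype(int)
    # Low-contrast pixels receive the extra code N, ignored during matching.
    codes[np.hypot(gx, gy) < gamma] = n_codes
    return codes

codes = orientation_codes(np.random.rand(64, 64) * 255.0)  # placeholder image
```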
2.4.4. Feature detection and matching based on geometric features

The sampling Moiré method has been used extensively for displacement measurement of railway bridges, and the displacement information can be further used to evaluate the deflection.32 Using 2D grids, the displacement of the research object can be measured in two directions simultaneously, and with the help of high-speed cameras, dynamic displacement-time curves can be obtained. Besides, using the DIC-aided sampling Moiré method,33 displacements exceeding half of the grating pitch can also be measured correctly. In addition to the sampling Moiré method, circular marker localization, corner detection, and cross detection can also be used.

3. Measurement Methods

3.1. Two-Dimensional Measurement with a Single Camera

After calibration of the lens distortion, vision-based measurement of bridge deflection can be achieved by matching image features based on the calibration method in Sec. 2.1.1. This is a widely used visual bridge deflection measurement method owing to its simplicity and practicality.44–48 For night measurement, the street lights on the bridge can be used as feature matching points. To reduce the impact of ambient light, high-brightness red LED lamps can also be installed on the bridge, which achieves deflection measurement with active illumination when a coupled bandpass optical filter is installed in front of the lens.33 The measurement resolution of this method is mainly limited by the measurement field of view and the imaging resolution. For long-span bridges, multiple single-camera systems can be used for segmental measurement to ensure the measurement resolution; Fig. 8 shows multiple single-camera 2D measurement systems measuring the overall deflection of a long-span bridge in Jiangxi Province.

In the field measurement of bridge deflection, it is difficult to ensure that the optical axis of the camera is perpendicular to the side surface of the bridge to be measured. Due to the particularity of bridge deflection measurement (only the vertical displacement needs to be measured), the yaw angle has no influence on the measured results, and the influence of the roll angle can be eliminated by calculating the total displacement of the pixels. Therefore, only the influence of the pitch angle needs to be considered, and the correction can be expressed as49

$$D = \frac{a\,L\,\Delta v}{f\left[\cos\alpha + \dfrac{a\,(v - v_0)}{f}\sin\alpha\right]^{2}}, \qquad (12)$$

where $\alpha$ is the pitch angle of the camera; $v$ is the vertical image coordinate of a pixel point and $v_0$ that of the principal point; $L$ is the distance between the camera and the side surface of the bridge to be measured; $f$ is the focal length of the lens; $\Delta v$ is the displacement of the pixel; $a$ is the actual physical size of a single pixel; and $D$ is the deflection of the point to be measured (a numerical sketch of this correction is given at the end of this subsection). Since the horizontal displacement of the measured point on the bridge is a higher-order small quantity relative to the vertical displacement, the displacement of the pixel is assumed to be caused only by the bridge deflection.

Although the single-camera two-dimensional (2D) deflection measurement method is simple, practical, and quick, its application in a large field of view is limited when there are multiple measurement points in the field of view or when full-field measurement is required. There are two reasons for this. First, most off-axis single-camera measurement methods can only calculate a limited number of points and hence cannot measure the deflection of the entire bridge. Moreover, the object distance of a single point measured by a laser rangefinder cannot be applied to the whole field of view, which means point-by-point measurements are required to obtain accurate deflection information; the preparation work is therefore tedious if there are many measuring points. Although Tian et al.50 proposed a full-field deflection measurement method under the essential assumption that all points in the region of interest (ROI) of the bridge span lie on a spatial straight line, the measurement area can only be limited to a narrow band. To solve the above problems, a multipoint single-camera bridge deflection measurement method with self-calibration of full-field scale information30,51 was proposed, as shown in Fig. 9. Assuming that the intrinsic parameters of the camera have been calibrated in advance,30,31 the camera shoots the same area to be measured from multiple calibration positions to collect calibration images, and from the measuring position to collect the reference image and deformation images. Similar to the principle of extrinsic parameter calibration of two cameras in Sec. 2.1.3, self-calibration can be accomplished by matching the corresponding fixed points on the reference image and the calibration images. Then, the relative extrinsic parameters between the camera coordinate system of the measurement position and the world coordinate system of the bridge surface can be solved. The scale factor representing the scale of the translation vector can be obtained from two points with known distance in the field of view or by measuring the distance from the camera to a point on the bridge with a laser rangefinder. The intrinsic and extrinsic parameters can then be used to calculate the mapping between the image coordinate system and the world coordinate system of the bridge. Finally, the full-field deflections of the bridge can be calculated according to the change of the image coordinates of the measuring points and the solved mapping function.
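Returning to the pitch-angle correction of Eq. (12), the sketch below evaluates it for a single measuring point; the pitch angle, object distance, focal length, and pixel size are hypothetical values, and the formula follows the pitch-only geometry stated above.

```python
import numpy as np

def deflection_with_pitch_mm(dv, v, v0, alpha_rad, L_mm, f_mm, a_mm):
    """Eq. (12): deflection from a pixel displacement under a pitch angle."""
    denom = np.cos(alpha_rad) + (a_mm * (v - v0) / f_mm) * np.sin(alpha_rad)
    return a_mm * L_mm * dv / (f_mm * denom ** 2)

# 0.1-pixel motion at 40 m with a 100-mm lens, 5.5-um pixels, 5-deg pitch.
print(deflection_with_pitch_mm(0.1, v=900.0, v0=768.0,
                               alpha_rad=np.deg2rad(5.0),
                               L_mm=40000.0, f_mm=100.0, a_mm=5.5e-3))
```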
3.2. Three-Dimensional Measurement with Dual Cameras

Based on the principle of stereo vision measurement, two cameras shooting from different angles can be used to measure the 3D displacement of the object surface. The biggest difference between bridge deflection measurement and traditional binocular 3D measurement is the size of the field of view to be measured. The calibration of the dual-camera system cannot be carried out with the traditional plane calibration method, because the large bridge structure results in a large field of view. To achieve 3D system calibration in a large field of view, the calibration method of camera extrinsic parameters based on the epipolar constraint in Sec. 2.1.3 can be used once the intrinsic parameters have been calibrated. By using monocular photogrammetry to reconstruct mark points installed on a wall, a large wall can be used directly as a calibration board for camera intrinsic parameter calibration.30,31 As shown in Fig. 10, for camera extrinsic parameter calibration during field measurement, considering the geometric characteristics of the bridge, mark points carried by an unmanned aerial vehicle (UAV) can even be used as control points in the calibration process if the control point information in the field of view is insufficient.52 For 3D measurement with two cameras, it is necessary to align the vertical axis of the reconstructed 3D coordinate system with the direction of the bridge deflection, so that the measured vertical displacement is the deflection value of the bridge. Similar to 2D measurement with a single camera, the measurement resolution of 3D measurement with two cameras is mainly limited by the field of view to be measured and the imaging resolution; the resolution is usually low for deflection measurement in a large field of view. Compared with 2D measurement using a single camera, 3D measurement with two cameras can measure displacements in three directions, which makes the measurement results richer.

3.3. Quasistatic Deflection Measurement Based on Monocular Photogrammetry

Based on the principle of monocular photogrammetry, the deflection of multiple points at different moments can also be measured by installing marker points on the bridge and carrying out 3D reconstruction of the marker points before and after the bridge deforms,53 as shown in Fig. 11. The method requires an undeformed reference coordinate system in which the 3D coordinates at different moments are expressed, so as to achieve high-accuracy measurement of deflection. For convenience, the reference coordinate system can be built on a pier, with the vertical axis of the coordinate system consistent with the direction of the bridge deflection. At the same time, some calibration scale bars need to be placed in the field of view to determine the measurement scale. This method has the advantages of simple equipment, multipoint measurement, and high resolution, but it cannot realize dynamic deflection measurement of the bridge.
3.4. Multipoint Dynamic Measurement Based on Displacement-Relay Videometrics

To realize multipoint, high-resolution, and dynamic measurement of bridge deflection, Yu et al.54 proposed a measurement method called displacement-relay videometrics with a series camera network. Figure 12 shows the system configuration of this measurement method: $C_i$ is the double-head camera, $A_i$ and $B_i$ are the cooperative marker points, and the subscript $i$ is the unit number from left to right. Assuming that $\Delta y_{ij}$ is the vertical displacement of a cooperative marker point in the image, $\eta_{ij}$ is the magnification, $Y_j$ ($Y_{C_i}$) is the vertical displacement of the cooperative marker point (double-headed camera), $L_{ij}$ is the distance between the camera and the cooperative marker point, and $\Delta\varphi_i$ is the variation of the pitch angle of the double-headed camera numbered $i$, the following relation can be written, considering only vertical displacements and pitch angles:

$$\Delta y_{ij} = \eta_{ij}\left(Y_j - Y_{C_i} + L_{ij}\,\Delta\varphi_i\right). \qquad (13)$$

The image displacements $\Delta y_{ij}$ can be obtained from the change of the image coordinates of the marker points; one equation of the form of Eq. (13) can then be listed for every camera-marker pair, and the resulting system is redundant with respect to the unknowns. If two control points of this series network are strictly stable or have known subsidence, the vertical displacements of the marker points and double-headed cameras and the changes in pitch angle of the double-headed cameras can be calculated by principal component analysis55,56 (an illustrative formulation is sketched at the end of this subsection).

Figure 13 shows the dynamic bridge deflection measurement system developed by our research group based on the theory of displacement-relay videometrics with a series camera network, which is applied to the real-time measurement of the multipoint dynamic deflection of the Nanjing Yangtze River Bridge. Figures 13(b) and 13(c) show the measured deflection of a span on the north side of the bridge. The total length of the span is 128 m, with nine measuring points arranged at a spacing of 16 m. The experimental results show that the system performs well in measuring the real-time deflection of the bridge when a train passes, and the noise of the field measurement system remains low. The system has great application prospects in the multipoint dynamic measurement of bridge deflection owing to its advantages of high resolution and dynamic measurement.
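The least-squares sketch below illustrates the relay idea on a toy unit (a generic formulation, not the authors' implementation): a double-head camera observing two stable control markers at different distances allows its own vertical motion and pitch change to be separated, after which motions further down the chain can be relayed. All numbers are hypothetical.

```python
import numpy as np

eta = 0.05                       # magnification, pixels per mm (hypothetical)
L1, L2 = 16000.0, 32000.0        # camera-to-marker distances, mm
dy = np.array([0.8, -1.2])       # measured image displacements, pixels

# Eq. (13) with stable control markers (Y_j = 0):
#   dy_k = eta * (-Y_c + L_k * dphi),  k = 1, 2
A = eta * np.array([[-1.0, L1],
                    [-1.0, L2]])
Y_c, dphi = np.linalg.solve(A, dy)   # camera motion (mm), pitch change (rad)
print(Y_c, dphi)

# In the full series network, one such row per camera-marker pair is stacked
# into a redundant linear system and solved, e.g., with np.linalg.lstsq.
```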
3.5. Deflection Measurement Based on UAV Platform

Although the vision-based bridge deflection measurement method can achieve long-distance measurement, it still requires a fixed platform on which to place the camera. In addition, if the fixed camera is too far from the target, the measurement accuracy will be greatly affected by atmospheric disturbance and camera vibration. It is therefore very difficult to find a suitable place to fix the camera for bridges that cross rivers and valleys. To overcome this shortcoming, deflection measurement methods based on a camera-equipped UAV platform have been proposed to improve the flexibility and applicability of vision-based bridge deflection measurement. Deflection measurement based on a UAV platform uses the same measurement principle as deflection measurement based on a fixed platform. The key difference is that the former needs to eliminate the influence of errors in the position and attitude of the UAV, because the UAV is not stationary during the measurement. The commonly used method is to estimate the motion of the camera on the UAV with a fixed reference target. Yoon et al.57 estimated the camera motion by tracking fixed artificial feature points in the background to recover the absolute structural displacement. Perry and Guo58 integrated optical and IR cameras on a UAV platform to measure three-component dynamic structural displacement, again using a fixed reference target and reference plane in the background. Chen et al.59 achieved geometric correction of images by establishing the plane homography transformation between the reference image and the image to be corrected using fixed points, so as to obtain the real displacements of the bridge model. Besides, Wu et al.60 took four corner points on a fixed object plane as reference points, estimated the projection matrix between the bridge plane and each frame image plane through UAV camera calibration, and then recovered the 3D world coordinates of the target points on the bridge model. However, the above methods require the measured target and the reference target to be visible in the same field of view; the field of view must therefore be enlarged when measuring long-span bridges, which reduces the resolution of the target in the image and increases the measurement error. Zhuge et al.61 developed a noncontact deflection measurement for bridges based on a multi-UAV system, in which multiple camera-equipped UAVs observe the position to be measured and a fixed position of the bridge, respectively. According to the collinearity of the spots projected on the planes by the coplanar laser designator, as shown in Fig. 14, the motion of the UAVs can be eliminated and the vertical displacement of the measured position relative to the bridge pier can be calculated; the change of deflection between two moments is then expressed in terms of the laser-spot coordinates measured in each local coordinate system and the offsets between the coordinate systems at the two moments.
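A hedged sketch of the reference-based correction shared by these UAV methods follows: four or more fixed background points visible in every frame define a homography that maps the current frame back to the reference image, removing UAV ego-motion before the bridge displacement is read off. All point coordinates below are placeholders.

```python
import cv2
import numpy as np

# Fixed background points in the reference frame and in the current frame.
ref_fixed = np.float32([[100, 80], [900, 90], [880, 620], [120, 600]])
cur_fixed = np.float32([[112, 70], [915, 85], [893, 615], [130, 592]])

H, _ = cv2.findHomography(cur_fixed.reshape(-1, 1, 2),
                          ref_fixed.reshape(-1, 1, 2))

# Map the tracked bridge target from the current frame into the reference
# frame; its residual motion there is attributed to the structure.
target_cur = np.float32([[[512.0, 400.0]]])
target_in_ref = cv2.perspectiveTransform(target_cur, H)
print(target_in_ref.ravel())
```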
3.6. Advantages and Limitations of Different Measurement Methods

Table 2 shows the advantages and limitations of the different measurement methods; each method has its pros and cons, and the method can be selected according to the application scenario. For quasistatic deflection measurement, the measurement method based on monocular photogrammetry is highly recommended. For long-term, high-resolution dynamic monitoring, the measurement method based on displacement-relay videometrics should be chosen. For short-term inspection, the 2D measurement method using a single camera is more flexible. If multidimensional motion caused by wind load is considered, 3D measurement with dual cameras may be more helpful.

Table 2. Advantages and limitations of different measurement methods.

4. Influencing Factors

4.1. Camera Factors

The errors caused by the cameras mainly arise in two aspects. (1) Image noise. The camera produces noise in the process of transforming optical signals into electrical signals and forming images. To reduce the error, the following methods can be adopted: first, select a camera with a high signal-to-noise ratio for image acquisition; second, average multiple images to reduce the displacement measurement error caused by image noise; third, select reasonable calculation parameters to resist the influence of noise.62 (2) Camera self-heating. The temperature of the electronic devices inside the camera rises during operation, which causes a slight change in image distance and leads to virtual deformation. Ma et al.63,64 studied the systematic errors of DIC caused by camera self-heating. Several techniques can be used to eliminate this error: first, preheat the camera for 1 to 2 h before measurement to reach the thermal balance stage; second, if preheating is impossible, correct the measured results with the corresponding temperature-strain influence curve; third, simultaneously observe a fixed point without thermal deformation near the measuring area for temperature compensation.

4.2. Calibration Factors

The errors caused by camera calibration factors mainly come from two aspects. (1) Lens distortion. For 2D and 3D measurement, uncalibrated lens distortion will cause measurement error. Distortion coefficients can be introduced during lens distortion calibration to correct the points in the image coordinate system and reduce this error.18,19 (2) The failure of camera calibration parameters caused by camera motion. After the camera extrinsic parameters are calibrated, they are usually used directly for subsequent measurement, which assumes that they remain unchanged during the measurement process. However, in field measurement, the camera itself is affected by wind or ground vibration, which invalidates the extrinsic parameters. The general solution is to observe fixed points (such as piers) while ensuring that the target area remains within the camera's field of view; the influence of camera motion can then be eliminated by subtracting the displacement of the measured fixed point from the displacement of the point in the area to be measured65 (a minimal sketch is given at the end of this subsection). Besides, it is also common to calculate displacements using images averaged over redundant measurements,66,67 but this cannot be used for dynamic measurement. For the relative extrinsic parameters of the two cameras in a binocular system, the method in Sec. 2.1.3 can be used to calculate the real-time extrinsic parameters of the cameras.
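A minimal sketch of this fixed-point compensation: the apparent displacement of a stationary reference (e.g., a pier) tracked in the same field of view is subtracted from that of the target, removing rigid camera motion; both time series below are placeholder values.

```python
import numpy as np

target_dv = np.array([0.5, 1.8, 3.1, 2.2])  # measured target motion, pixels
pier_dv = np.array([0.2, 0.3, 0.25, 0.2])   # apparent motion of a fixed point

corrected = target_dv - pier_dv             # camera-motion-free displacement
print(corrected)                            # convert to mm via the scale factor
```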
4.3. Algorithm Factors

The influence of algorithm factors mainly refers to errors in the process of feature matching, which comprise two aspects. (1) The feature matching algorithm. The accuracy of different feature matching algorithms differs. Compared with matching algorithms based on feature points, the subpixel matching algorithm based on gray features can usually achieve higher accuracy, but the former performs better in estimating the initial value of large deformation and large rotation. Therefore, the result of feature point matching can be taken as the initial value of template matching in DIC.68 (2) The interpolation error of the grayscale-based subpixel matching algorithm. The following methods can be used to reduce this error: higher-order interpolation,69 image prefiltering,70 or other interpolation error elimination algorithms.71–73

4.4. Environmental Factors

The most difficult challenge to overcome in the field measurement of bridge deflection is the influence of environmental factors, which mainly come from five aspects. (1) Environmental temperature. The analysis of 2-h deflection measurement results in Ref. 58 pointed out that the environmental temperature has little influence on the deflection measurement values over a short time and can basically be ignored. However, through more than half a year of intermittent measurement, Zhou et al.74 found that the displacement measurement error due to environmental temperature fluctuated daily and showed a cumulative trend over time. (2) Heat flow disturbance. The air between the camera and the target flows because of uneven temperature, which distorts the image acquired by the camera. To maintain the sharpness of the original pixel grid after averaging multiple images, Joshi and Cohen75 proposed a novel local weighted averaging method based on ideas from "lucky imaging" that minimizes blur, resampling, and alignment errors, as well as the effects of sensor dust. Anantrasirichai et al.76 extracted accurate detail about objects behind the distorting layer by selecting informative ROIs only from good-quality frames and solved the space-varying distortion problem using region-level fusion based on the dual-tree complex wavelet transform (DT-CWT). Luo and Feng77 filtered the heat haze distortion by establishing a distortion basis to match the most similar sample image in terms of the shortest Euclidean distance. (3) Influence of rain, snow, and fog. If there is rain, snow, or fog between the camera and the measured object during the measurement process, the image will appear blurred or even erroneous, which makes feature detection and matching difficult.78 (4) Luminance fluctuation. Luminance fluctuation in the measuring environment affects the quality of the collected images and the result of feature detection and matching. It can be reduced by using a feature detection algorithm with brightness invariance49 or by adding a light source.47 (5) Influence of strong wind or ground vibration. In outdoor measurement, where the object distance between the camera and the bridge is often large, shaking and swaying of the camera caused by wind and ground vibration often result in large errors because of the optical lever effect.

5. Applications and Accuracy

With the improvement of accuracy and computational efficiency and the development of measuring equipment, vision-based bridge deflection measurement methods perform well in the static and dynamic deflection measurement of various bridges, and the measurement distance is constantly increasing. In addition, based on the measured deflection data, the strain of the bridge surface can also be obtained and the dynamic parameters of the bridge can be further identified, which can be used for the SHM evaluation of the bridge. The following describes the application and analysis of vision-based methods at different measuring distances. To facilitate classification, measuring distances within 10 m, between 10 and 100 m, and above 100 m are called short distance, medium distance, and long distance, respectively.
5.1. Short-Distance Measurement

Dhanasekar et al.79 measured the deflections and strains of two aged masonry arch bridges with internal span lengths of 7.85 and 13.11 m, respectively. The cameras were placed at short distances from three key regions (crown, support, and quarter-point) of each bridge. With a noise amplitude of 0.05 pixels, and with the measurement uncertainty reduced by applying a Savitzky–Golay filter, the measured maximum deflection and strain were 0.5 mm and 110 microstrain, respectively, which were validated through a 3D finite element model. Ngeljaratan and Moustafa80 used two cameras, separated by 1.56 m, to monitor the 3D displacement of targets (circular black-and-white stickers) attached to a 27-m footbridge under pedestrian dynamic loads at a 100-Hz sampling rate. Figure 15 shows the view of the monitored footbridge as well as the locations of the monitoring equipment, sensors, and pedestrian loads. Experimental results showed that displacements of the bridge of less than 0.1 in. (2.54 mm) under pedestrian impact load could be captured. Based on these data, the vibration frequencies of the full bridge were determined and compared well with the values of the SAP2000 analytical model. Jáuregui et al.81 measured vertical deflections of bridges using digital close-range terrestrial photogrammetry in a laboratory and two field experiments. Results from laboratory testing of a steel beam showed an accuracy ranging from 0.51 to 1.3 mm, and field evaluation of a prestressed concrete bridge showed close agreement with elevation measurements made with a total station. Building on the conventional control point method, Jiang and Jáuregui53 proposed the refined distance constraint (RDC) approach to make the measurement more convenient for engineers. Compared with the laboratory and field measurement results of a dial gauge and a differential level, the proposed method differed within 1 mm from the gauge measurements and within 2 mm from the level readings, respectively.

5.2. Medium-Distance Measurement

Pan et al.46 measured the deflections of the midpoint of the span and a point near the pier of a 60-m three-span railway bridge, at object distances of 22.825 and 22.438 m, while a freight train passed. The measurement results show that the average deflection and amplitude of the former point were 4.2 and 3 mm, respectively, while those of the latter point were 0.9 and 0.75 mm, respectively. Moreover, Fourier analysis of the vertical displacements of the two points indicates that the first-order natural frequency of the test bridge is 1.0250 Hz, equal to that measured by LVDTs. Alipour et al.82 measured the midspan deflection of the Hampton Roads Bridge–Tunnel with a low-cost consumer-grade imaging system mounted at the pier cap directly beneath each girder. The span under study was 22.86 m long and consisted of seven girders; a consistent average accuracy was achieved by using targets speckled with a random dot pattern for the correlation analysis. Lee et al.83 conducted a feasibility test on a 120-m-long pedestrian suspension bridge with stiffened steel girders, as shown in Fig. 16, to check the applicability of the vision-based method to a suspension bridge. The distance between the camera, placed on the ground near an abutment, and the target, placed at the center point of the midspan, was about 70 m.
Compared with the first-order natural frequency of 1.83 Hz measured by an accelerometer, the first-order natural frequency obtained by the image-processing technology was 1.82 Hz.

5.3. Long-Distance Measurement

Tian et al.47 measured the deflection-time curves of six measurement points of the Wuhan Yangtze River Bridge under static loading by using actively illuminated LED targets. With a minimum distance between the camera sensor and the LED targets of 107.3 m and a maximum distance of 288.9 m, the mean errors fluctuated randomly around zero, but the standard deviation errors increased with the measuring distance, reaching 0.57 mm at the maximum distance. Fukuda et al.43 monitored the deflection of the Vincent Thomas Bridge, a 1500-ft-long suspension bridge, by tracking existing features without using a target panel. The cameras were placed at a stationary location away from the midspan of the bridge's main span, as shown in Fig. 17. Experiments showed that the average standard deviation between the displacements measured with and without the target panel was 6 mm, and the dominant frequency calculated from the measured displacements is consistent with the fundamental frequency of the bridge measured by accelerometers installed on it.

6. Conclusion

Starting from the necessity of bridge deflection measurement, this paper introduces the principles of vision-based deflection measurement, including camera calibration, 3D stereo vision, monocular photogrammetry, and feature detection and matching. The paper then analyzes the advantages and disadvantages of single-camera 2D measurement, dual-camera 3D measurement, quasistatic measurement based on photogrammetry, multipoint dynamic measurement based on displacement-relay videometrics with a series camera network, and deflection measurement based on a UAV platform. Moreover, the paper expounds the influencing factors and applications of bridge deflection measurement and summarizes the research results of relevant scholars. We hope that this study will offer some reference value to the optics and civil engineering communities in selecting a proper bridge deflection measurement method for a given application. Vision-based bridge deflection measurement is still not fully mature; how to reduce the errors caused by the various influencing factors is the main direction of future research. How to improve the calculation rate while ensuring accuracy, so as to realize real-time monitoring in engineering, is also a focus of subsequent research. By reducing measurement errors caused by various influencing factors and further improving measurement accuracy and efficiency, long-term vision-based bridge deflection monitoring will play an increasingly significant role in bridge health monitoring.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11827801 and 11332012).

References
“Dynamic testing procedures for highway bridges using traffic loads,”
J. Struct. Eng., 121
(2), 362
–376
(1995). https://doi.org/10.1061/(ASCE)0733-9445(1995)121:2(362) Google Scholar
K. Park et al.,
“The determination of bridge displacement using measured acceleration,”
Eng. Struct., 27
(3), 371
–378
(2004). https://doi.org/10.1016/j.engstruct.2004.10.013 ENSTDF 0141-0296 Google Scholar
H. Nassif, M. Gindy and J. Davis,
“Comparison of laser Doppler vibrometer with contact sensors for monitoring bridge deflection and vibration,”
NDT & E Int., 38
(3), 213
–218
(2004). https://doi.org/10.1016/j.ndteint.2004.06.012 Google Scholar
H. Xia et al.,
“Experimental analysis of a high-speed railway bridge under Thalys trains,”
J. Sound Vibr., 268
(1), 103
–113
(2003). https://doi.org/10.1016/S0022-460X(03)00202-5 Google Scholar
C. J. Brown et al.,
“Monitoring of structures using the global positioning system,”
Proc. Inst. Civil Eng.-Struct. Build., 134
(1), 97
–105
(1999). https://doi.org/10.1680/istbu.1999.31257 Google Scholar
T. Yi, H. Li and M. Gu,
“Recent research and applications of GPS based technology for bridge health monitoring,”
Sci. China Technol. Sci., 53
(10), 2597
–2610
(2010). https://doi.org/10.1007/s11431-010-4076-3 Google Scholar
M. Pieraccini et al.,
“Static and dynamic testing of bridges through microwave interferometry,”
NDT & E Int., 40
(3), 208
–214
(2007). https://doi.org/10.1016/j.ndteint.2006.10.007 Google Scholar
Q. Yu and Y. Shang, Image-Based Precise Measurement and Motion Measurement, Science Press, Beijing
(2009). Google Scholar
Y. Xu and J. M. W. Brownjohn,
“Review of machine-vision based methodologies for displacement measurement in civil structures,”
J. Civil Struct. Health Monit., 8
(1), 91
–110
(2018). https://doi.org/10.1007/s13349-017-0261-4 Google Scholar
X. W. Ye, C. Z. Dong and T. Liu,
“A review of machine vision-based structural health monitoring: methodologies and applications,”
J. Sens., 2016 7103039
(2016). https://doi.org/10.1155/2016/7103039 Google Scholar
D. Feng and M. Q. Feng,
“Computer vision for SHM of civil infrastructure: from dynamic response measurement to damage detection–a review,”
Eng. Struct., 156 105
–117
(2018). https://doi.org/10.1016/j.engstruct.2017.11.018 ENSTDF 0141-0296 Google Scholar
R. Jiang, D. V. Jáuregui and K. R. White,
“Close-range photogrammetry applications in bridge measurement: literature review,”
Measurement, 41
(8), 823
–834
(2008). https://doi.org/10.1016/j.measurement.2007.12.005 Google Scholar
M. Fazzini et al.,
“Study of image characteristics on digital image correlation error assessment,”
Opt. Lasers Eng., 48
(3), 335
–339
(2010). https://doi.org/10.1016/j.optlaseng.2009.10.012 Google Scholar
T. Siebert et al.,
“High-speed digital image correlation: error estimations and applications,”
Opt. Eng., 46
(5), 051004
(2007). https://doi.org/10.1117/1.2741217 Google Scholar
P. L. Reu,
“A study of the influence of calibration uncertainty on the global uncertainty for digital image correlation using a Monte Carlo approach,”
Exp. Mech., 53
(9), 1661
–1680
(2013). https://doi.org/10.1007/s11340-013-9746-1 EXMCAZ 0014-4851 Google Scholar
R. Balcaen et al.,
“Influence of camera rotation on stereo-dic and compensation methods,”
Exp. Mech., 58
(7), 1101
–1114
(2018). https://doi.org/10.1007/s11340-017-0368-x EXMCAZ 0014-4851 Google Scholar
B. Pan et al.,
“Performance of sub-pixel registration algorithms in digital image correlation,”
Meas. Sci. Technol., 17
(6), 1615
–1621
(2006). https://doi.org/10.1088/0957-0233/17/6/045 MSTCEP 0957-0233 Google Scholar
J. Weng and P. Cohen,
“Camera calibration with distortion models and accuracy evaluation,”
IEEE Trans. Pattern Anal. Mach. Intell., 14
(10), 965
–980
(1992). https://doi.org/10.1109/34.159901 ITPIDJ 0162-8828 Google Scholar
J. Wang et al.,
“A new calibration model of camera lens distortion,”
Pattern Recognit., 41
(2), 607
–615
(2008). https://doi.org/10.1016/j.patcog.2007.06.012 Google Scholar
Z. Zhuang et al.,
“A single-image linear calibration method for camera,”
Measurement, 130 298
–305
(2018). https://doi.org/10.1016/j.measurement.2018.07.085 0263-2241 Google Scholar
R. Galego et al.,
“Uncertainty analysis of the DLT-Lines calibration algorithm for cameras with radial distortion,”
Comput. Vision Image Understanding, 140 115
–126
(2015). https://doi.org/10.1016/j.cviu.2015.05.015 Google Scholar
F. Devernay and O. Faugeras,
“Straight lines have to be straight,”
Mach. Vision Appl., 13
(1), 14
–24
(2001). https://doi.org/10.1007/PL00013269 Google Scholar
M. Ahmed and A. Farag,
“Nonmetric calibration of camera lens distortion: differential methods and robust estimation,”
IEEE Trans. Image Process., 14
(8), 1215
–1230
(2005). https://doi.org/10.1109/TIP.2005.846025 IIPRE4 1057-7149 Google Scholar
M. A. Sutton, J. J. Orteu and H. Schreier, Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications, 27
–28 Springer Science & Business Media, New York
(2009). Google Scholar
Z. Zhang,
“A flexible new technique for camera calibration,”
IEEE Trans. Pattern Anal. Mach. Intell., 22
(11), 1330
–1334
(2000). https://doi.org/10.1109/34.888718 ITPIDJ 0162-8828 Google Scholar
E. E. Hemayed,
“A survey of camera self-calibration,”
in Proc. IEEE Conf. Adv. Video and Signal Based Surveill.,
351
–357
(2003). https://doi.org/10.1109/AVSS.2003.1217942 Google Scholar
Q. Sun et al.,
“Camera self-calibration with lens distortion,”
Optik, 127
(10), 4506
–4513
(2016). https://doi.org/10.1016/j.ijleo.2016.01.123 OTIKAJ 0030-4026 Google Scholar
W. Liu et al.,
“Calibration method based on the image of the absolute quadratic curve,”
IEEE Access, 7 29856
–29868
(2019). https://doi.org/10.1109/ACCESS.2019.2893660 Google Scholar
D. Nistér,
“An efficient solution to the five-point relative pose problem,”
IEEE Trans. Pattern Anal. Mach. Intell., 26
(6), 756
–770
(2004). https://doi.org/10.1109/TPAMI.2004.17 ITPIDJ 0162-8828 Google Scholar
X. Shao et al.,
“Calibration of stereo-digital image correlation for deformation measurement of large engineering components,”
Meas. Sci. Technol., 27
(12), 125010
(2016). https://doi.org/10.1088/0957-0233/27/12/125010 MSTCEP 0957-0233 Google Scholar
S. Dong et al.,
“Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry,”
Appl. Opt., 55
(23), 6363
–6370
(2016). https://doi.org/10.1364/AO.55.006363 APOPAI 0003-6935 Google Scholar
S. Ri et al.,
“Dynamic deformation measurement by the sampling Moiré method from video recording and its application to bridge engineering,”
Exp. Tech., 44 313
(2020). https://doi.org/10.1007/s40799-019-00358-4 Google Scholar
C. Chen, F. Mao and J. Yu,
“A digital image correlation-aided sampling Moiré method for high-accurate in-plane displacement measurements,”
Meas. Sci. Technol., 182 109699
(2021). https://doi.org/10.1016/j.measurement.2021.109699 MSTCEP 0957-0233 Google Scholar
W. Tong, “An evaluation of digital image correlation criteria for strain mapping applications,” Strain, 41(4), 167–175 (2005). https://doi.org/10.1111/j.1475-1305.2005.00227.x
L. Tian et al., “Application of digital image correlation for long-distance bridge deflection measurement,” Proc. SPIE, 8769, 87692V (2013). https://doi.org/10.1117/12.2020139
B. Pan, K. Li and W. Tong, “Fast, robust and accurate digital image correlation calculation without redundant computations,” Exp. Mech., 53(7), 1277–1289 (2013). https://doi.org/10.1007/s11340-013-9717-6
Y. Gao et al., “High-efficiency and high-accuracy digital image correlation for three-dimensional measurement,” Opt. Lasers Eng., 65, 73–80 (2015). https://doi.org/10.1016/j.optlaseng.2014.05.013
X. Shao, X. Dai and X. He, “Noise robustness and parallel computation of the inverse compositional Gauss–Newton algorithm in digital image correlation,” Opt. Lasers Eng., 71, 9–19 (2015). https://doi.org/10.1016/j.optlaseng.2015.03.005
C. Harris and M. Stephens, “A combined corner and edge detector,” in Alvey Vision Conf., 147–151 (1988). https://doi.org/10.5244/c.2.23
D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision, 60(2), 91–110 (2004). https://doi.org/10.1023/B:VISI.0000029664.99615.94
H. Bay et al., “Speeded-up robust features (SURF),” Comput. Vision Image Understanding, 110(3), 346–359 (2008). https://doi.org/10.1016/j.cviu.2007.09.014
R. Schnabel, R. Wahl and R. Klein, “Efficient RANSAC for point-cloud shape detection,” Comput. Graphics Forum, 26(2), 214–226 (2007). https://doi.org/10.1111/j.1467-8659.2007.01016.x
Y. Fukuda et al., “Vision-based displacement sensor for monitoring dynamic response using robust object search algorithm,” IEEE Sens. J., 13(12), 4725–4732 (2013). https://doi.org/10.1109/JSEN.2013.2273309
S. Yoneyama et al., “Bridge deflection measurement using digital image correlation,” Exp. Tech., 31(1), 34–40 (2007). https://doi.org/10.1111/j.1747-1567.2006.00132.x
M. Feng et al., “Nontarget vision sensor for remote measurement of bridge dynamic response,” J. Bridge Eng., 20(12), 04015023 (2015). https://doi.org/10.1061/(ASCE)BE.1943-5592.0000747
B. Pan, L. Tian and X. Song, “Real-time, non-contact and targetless measurement of vertical deflection of bridges using off-axis digital image correlation,” NDT & E Int., 79, 73–80 (2016). https://doi.org/10.1016/j.ndteint.2015.12.006
L. Tian and B. Pan, “Remote bridge deflection measurement using an advanced video deflectometer and actively illuminated LED targets,” Sensors, 16(9), 1344 (2016). https://doi.org/10.3390/s16091344
S. Yu, J. Zhang and X. He, “An advanced vision-based deformation measurement method and application on a long-span cable-stayed bridge,” Meas. Sci. Technol., 31, 065201 (2020). https://doi.org/10.1088/1361-6501/ab72c8
F. Ullah and S. Kaneko, “Using orientation codes for rotation-invariant template matching,” Pattern Recognit., 37(2), 201–209 (2004). https://doi.org/10.1016/S0031-3203(03)00184-5
L. Tian et al., “Full-field bridge deflection monitoring with off-axis digital image correlation,” Sensors, 21(15), 5058 (2021). https://doi.org/10.3390/s21155058
J. Huang et al., “Multi-point single-camera bridge deflection measurement method with self-calibration of full-field scale information.”
W. Feng et al., “Unmanned aerial vehicle-aided stereo camera calibration for outdoor applications,” Opt. Eng., 59(1), 014110 (2020). https://doi.org/10.1117/1.OE.59.1.014110
R. Jiang and D. Jauregui, “Development of a digital close-range photogrammetric bridge deflection measurement system,” Measurement, 43, 1431–1438 (2010). https://doi.org/10.1016/j.measurement.2010.08.015
Q. Yu et al., “A displacement-relay videometric method for surface subsidence surveillance in unstable areas,” Sci. China Technol. Sci., 58(6), 1105–1111 (2015). https://doi.org/10.1007/s11431-015-5811-6
H. Gao et al., “Robust principal component analysis-based four-dimensional computed tomography,” Phys. Med. Biol., 56(11), 3181 (2011). https://doi.org/10.1088/0031-9155/56/11/002
E. J. Candès et al., “Robust principal component analysis?,” J. ACM, 58(3), 1–37 (2011). https://doi.org/10.1145/1970392.1970395
H. Yoon, J. Shin and B. F. Spencer Jr., “Structural displacement measurement using an unmanned aerial system,” Comput.-Aided Civil Infrastruct. Eng., 33(3), 183–192 (2018). https://doi.org/10.1111/mice.12338
B. J. Perry and Y. Guo, “A portable three-component displacement measurement technique using an unmanned aerial vehicle (UAV) and computer vision: a proof of concept,” Measurement, 176, 109222 (2021). https://doi.org/10.1016/j.measurement.2021.109222
G. Chen et al., “Homography-based measurement of bridge vibration using UAV and DIC method,” Measurement, 170, 108683 (2021). https://doi.org/10.1016/j.measurement.2020.108683
Z. Wu et al., “Three-dimensional reconstruction-based vibration measurement of bridge model using UAVs,” Appl. Sci., 11(11), 5111 (2021). https://doi.org/10.3390/app11115111
S. Zhuge et al., “Noncontact deflection measurement for bridge through a multi-UAVs system,” Comput.-Aided Civil Infrastruct. Eng., 37(6), 746–761 (2022). https://doi.org/10.1111/mice.12771
Z. Wang et al., “Statistical analysis of the effect of intensity pattern noise on the displacement measurement precision of digital image correlation using self-correlated images,” Exp. Mech., 47(5), 701–707 (2007). https://doi.org/10.1007/s11340-006-9005-9
S. Ma, J. Pang and Q. Ma, “The systematic error in digital image correlation induced by self-heating of a digital camera,” Meas. Sci. Technol., 23(2), 025403 (2012). https://doi.org/10.1088/0957-0233/23/2/025403
Q. Ma and S. Ma, “Experimental investigation of the systematic error on photomechanic methods induced by camera self-heating,” Opt. Express, 21(6), 7686–7698 (2013). https://doi.org/10.1364/OE.21.007686
S. Yoneyama and H. Ueda, “Bridge deflection measurement using digital image correlation with camera movement correction,” Mater. Trans., 53(2), 285–290 (2012). https://doi.org/10.2320/matertrans.I-M2011843
M. A. Sutton et al., “Effects of subpixel image restoration on digital correlation error estimates,” Opt. Eng., 27(10), 271070 (1988). https://doi.org/10.1117/12.7976778
G. Vendroux and W. G. Knauss, “Submicron deformation field measurements: Part 2. Improved digital image correlation,” Exp. Mech., 38(2), 86–92 (1998). https://doi.org/10.1007/BF02321649
Z. Wang et al., “Automated fast initial guess in digital image correlation,” Strain, 50(1), 28–36 (2014). https://doi.org/10.1111/str.12063
H. Schreier, J. Braasch and M. Sutton, “Systematic errors in digital image correlation caused by intensity interpolation,” Opt. Eng., 39(11), 2915–2921 (2000). https://doi.org/10.1117/1.1314593
B. Pan, “Bias error reduction of digital image correlation using Gaussian pre-filtering,” Opt. Lasers Eng., 51(10), 1161–1167 (2013). https://doi.org/10.1016/j.optlaseng.2013.04.009
Y. Su et al., “Elimination of systematic error in digital image correlation caused by intensity interpolation by introducing position randomness to subset points,” Opt. Lasers Eng., 114, 60–75 (2019). https://doi.org/10.1016/j.optlaseng.2018.10.012
D. Wang et al., “Bias reduction in sub-pixel image registration based on the anti-symmetric feature,” Meas. Sci. Technol., 27(3), 035206 (2016). https://doi.org/10.1088/0957-0233/27/3/035206
W. Heng et al., “Digital image correlation with reduced bias error based on digital signal upsampling theory,” Appl. Opt., 58(15), 3962–3973 (2019). https://doi.org/10.1364/AO.58.003962
H. Zhou et al., “Performance of videogrammetric displacement monitoring technique under varying ambient temperature,” Adv. Struct. Eng., 22(16), 3371–3384 (2019). https://doi.org/10.1177/1369433218822089
N. Joshi and M. F. Cohen, “Seeing Mt. Rainier: lucky imaging for multi-image denoising, sharpening, and haze removal,” in IEEE Int. Conf. Comput. Photogr. (ICCP), 1–8 (2010). https://doi.org/10.1109/ICCPHOT.2010.5585096
N. Anantrasirichai et al., “Atmospheric turbulence mitigation using complex wavelet-based fusion,” IEEE Trans. Image Process., 22(6), 2398–2408 (2013). https://doi.org/10.1109/TIP.2013.2249078
L. Luo and M. Q. Feng, “Vision based displacement sensor with heat haze filtering capability,” in Int. Workshop of Struct. Health Monit., 3255–3262 (2017).
X. Ye et al., “Vision-based structural displacement measurement: system performance evaluation and influence factor analysis,” Measurement, 88, 372–384 (2016). https://doi.org/10.1016/j.measurement.2016.01.024
M. Dhanasekar et al., “Serviceability assessment of masonry arch bridges using digital image correlation,” J. Bridge Eng., 24(2), 04018120 (2019). https://doi.org/10.1061/(ASCE)BE.1943-5592.0001341
L. Ngeljaratan and M. A. Moustafa, “Structural health monitoring and seismic response assessment of bridge structures using target-tracking digital image correlation,” Eng. Struct., 213, 110551 (2020). https://doi.org/10.1016/j.engstruct.2020.110551
D. V. Jáuregui et al., “Noncontact photogrammetric measurement of vertical bridge deflection,” J. Bridge Eng., 8(4), 212–222 (2003). https://doi.org/10.1061/(ASCE)1084-0702(2003)8:4(212)
M. Alipour, S. J. Washlesky and D. K. Harris, “Field deployment and laboratory evaluation of 2D digital image correlation for deflection sensing in complex environments,” J. Bridge Eng., 24(4), 04019010 (2019). https://doi.org/10.1061/(ASCE)BE.1943-5592.0001363
J. J. Lee and M. Shinozuka, “Real-time displacement measurement of a flexible bridge using digital image processing techniques,” Exp. Mech., 46(1), 105–114 (2006). https://doi.org/10.1007/s11340-006-6124-2
Biography
Xinxing Shao is an assistant professor in the School of Civil Engineering at Southeast University. His current work focuses on real-time, high-resolution, and fully automatic deformation measurement, the development of scientific instruments, and experimental fracture mechanics. He is a member of SPIE and Optica (formerly OSA) and serves as a reviewer for more than 20 international journals. He has published more than 60 journal and conference papers in the field of optical deformation measurement.