As buildings age, energy begins to leak through locations such as window seals, walls, subsurface cracks, and damaged areas, even in seemingly healthy structures. Most of the time, these areas of energy loss remain undetected because they are not visible to the naked eye. The energy lost through such areas and defects accumulates and degrades a building's overall energy efficiency. Infrared (IR) images, however, can be used to detect energy leaks and to identify subsurface damage. Infrared thermography (IRT) is a popular method for assessing the condition of buildings and infrastructure. While IRT can provide information about the location and severity of energy leaks, manually analyzing the collected data is a cumbersome process, so there is a need to automate the detection of the areas from which energy is lost. Image segmentation methods based on deep learning can effectively automate the inspection process. In this study, an approach based on a pre-trained mask region-based convolutional neural network (Mask R-CNN) is proposed, for the first time in conjunction with IR images, to localize and quantify areas of heat loss. Mask R-CNN identified the location and quantified the size of the heat-loss areas in inspected buildings with above 99% confidence.
KEYWORDS: Cameras, Calibration, Sensors, Gyroscopes, Sensor calibration, Imaging systems, Digital image correlation, 3D image processing, Structural health monitoring, Monte Carlo methods
Three-dimensional digital image correlation (3D-DIC) has become a strong alternative to traditional contact-based techniques for structural health monitoring. 3D-DIC can extract the full-field displacement of a structure from a set of synchronized stereo images. Before performing 3D-DIC, a complex calibration process must be completed to obtain the stereovision system's extrinsic parameters (i.e., the cameras' distance and orientation). The time required for the calibration depends on the dimensions of the targeted structure; for large-scale structures, the calibration may take several hours. Furthermore, every time the cameras' position changes, a new calibration is required to recalculate the extrinsic parameters. The approach proposed in this research makes it possible to determine the 3D-DIC extrinsic parameters using data measured with commercially available sensors. The system combines three inertial measurement units (IMUs) with a laser distance meter to compute the relative orientation of and distance between the cameras. In this paper, the sensitivity of the newly developed sensor suite is evaluated by assessing the errors in the measurement of the extrinsic parameters. Analytical simulations performed on a 7.5 × 5.7 m field of view using the data retrieved from the sensors show that the proposed approach provides an accuracy of ~10⁻⁶ m and a promising way to reduce the complexity of 3D-DIC calibration.
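The core of the sensor-based calibration is the computation of the relative camera pose from the IMU orientations and the laser-measured baseline. A minimal sketch, assuming each IMU reports its camera's orientation as a rotation matrix in a shared world frame and the laser distance meter supplies the baseline magnitude (the angles, baseline, and axis convention below are illustrative, not the paper's values):

```python
import numpy as np

def rot_z(deg):
    """Rotation about the vertical axis (yaw), in degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical IMU readings for a toed-in stereo pair (illustrative values).
R_cam1 = rot_z(10.0)    # camera 1 yawed +10 deg in the world frame
R_cam2 = rot_z(-10.0)   # camera 2 yawed -10 deg in the world frame
baseline = 1.5          # laser distance meter reading, meters

# Extrinsic rotation: orientation of camera 2 expressed in camera 1's frame.
R_rel = R_cam1.T @ R_cam2

# Extrinsic translation: the laser gives the magnitude; the direction is
# taken here along camera 1's x-axis (an assumed mounting convention).
t_rel = baseline * np.array([1.0, 0.0, 0.0])

# Sanity check: R_rel must remain a valid rotation (orthonormal, det = +1).
assert np.allclose(R_rel @ R_rel.T, np.eye(3))
yaw_deg = np.degrees(np.arctan2(R_rel[1, 0], R_rel[0, 0]))
print("relative yaw (deg):", yaw_deg)  # -20 deg for the values above
```

Because `R_rel` and `t_rel` come directly from sensor readings, they can be recomputed instantly whenever the cameras move, which is what removes the need to repeat the image-based calibration.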