1. Introduction

The spatial noise caused by nonuniformity of individual detector elements in the infrared focal plane array (IRFPA) can limit the overall performance of imaging systems. In general, the responsivity of individual detector elements is assumed to be linear. Therefore, it is possible to correct the nonuniformity using the well-known two-point correction (TPC) method, in which two blackbodies at different temperatures are employed.1 Since the nonuniformity normally drifts in time2 and the correction capability of the TPC degrades with repeated operations,3 the IRFPA must be recalibrated periodically. In practice, there are two difficulties in using the TPC method to compensate for the residual nonuniformity (RNU) resulting from temporal drift or repeated operations: (1) two distinct heat sources must be maintained in the imaging system; and (2) real-time video operation is interrupted during the correction process.4 Various scene-based nonuniformity correction (SBNUC) algorithms have been proposed to solve these problems. In general, SBNUC schemes can be broadly divided into two categories: constant statistics (CS) methods4–6 and least mean square (LMS) methods.7–10 The original CS method assumes that the temporal mean and standard deviation of each pixel are constant over time and space.5 The performance of the original CS method is reliable as long as this assumption is valid. As pointed out in Refs. 4 and 6, however, thousands of image frames are required for the assumption to hold. Zhang and Zhao4 proposed a local constant statistics (LCS) method, which assumes that the temporal statistics are constant in a local region around each pixel. The LCS method improved the correction performance for the same number of input frames.4 Later, Zuo et al.6 generalized the LCS method by introducing a new constraint called multiscale CS. In the LMS methods, the correction parameters are learned by minimizing the LMS error between corrected images and desired output images.
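For concreteness, the TPC method mentioned above can be sketched as follows; the function and variable names are ours, and the calibration radiances are illustrative values, not from this article.

```python
import numpy as np

def two_point_correction(y, y_cold, y_hot, L_cold, L_hot):
    """Per-pixel two-point correction from two blackbody references.

    y       : raw frame to correct
    y_cold  : per-pixel response to the cold blackbody (radiance L_cold)
    y_hot   : per-pixel response to the hot blackbody (radiance L_hot)
    Each pixel is mapped linearly so that its two reference responses
    land exactly on L_cold and L_hot.
    """
    gain = (L_hot - L_cold) / (y_hot - y_cold)   # per-pixel gain
    offset = L_cold - gain * y_cold              # per-pixel offset
    return gain * y + offset
```

Under the linear-responsivity assumption, this removes both gain and offset nonuniformity exactly at the two calibration points.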
The minimization is performed frame-by-frame using the stochastic gradient descent (SGD) technique.11 Since the desired output images are not available, spatially low-pass filtered input images are used in their place.7 However, the performance of the LMS method is degraded at strong edge points, as reported in Refs. 8 to 10. Thus, several methods improve the estimation accuracy of the correction parameters by suppressing the influence of strong edge points in the minimization process. Vera and Torres8 adaptively adjust the learning rate, a fixed parameter in the original SGD technique, according to the local spatial standard deviations of the input images. In this way, the influence of strong edge regions, where the local standard deviation is large, is reduced in the minimization process. This method is further improved in Ref. 9 to remove burn-in artifacts caused by temporally slowly varying image regions. An approach proposed in Ref. 10 updates the correction parameters only when sufficient change occurs between consecutive images. More recently, different error functions have been proposed as alternatives to the LMS error of the original method.12–14 The interframe registration-based LMS method12,13 first registers the previous corrected image with the current input image, assuming that only a slight translational motion exists, and then minimizes the LMS error between the shifted previous image and the currently corrected image. Vera et al.14 minimize the total variation of the corrected images to obtain the correction parameters.
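The LMS scheme just described can be sketched in a few lines (a gain-and-offset variant; the low-pass kernel size and learning rate are illustrative choices, not values from the cited works):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lms_nuc_step(y, g, o, eta=1e-3, k=5):
    """One SGD step of the classic LMS nonuniformity correction.

    y    : raw input frame
    g, o : current per-pixel gain and offset estimates
    The desired image is the spatially low-pass filtered input; the
    per-pixel error between it and the corrected frame drives the update.
    """
    x_hat = g * y + o                      # corrected frame
    desired = uniform_filter(y, size=k)    # low-pass filtered input as target
    e = x_hat - desired                    # per-pixel LMS error
    g = g - eta * e * y                    # gradient of e^2 w.r.t. gain
    o = o - eta * e                        # gradient of e^2 w.r.t. offset
    return g, o, x_hat
```

Iterating this step over incoming frames drives the corrected image toward the low-pass target, which is exactly why strong edges (poorly represented by the target) corrupt the estimates.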
Although these previous SBNUC algorithms may reduce the RNU, they share a common problem: a relatively large number of image frames is required to acquire the correction parameters.15 Two recently proposed SBNUC algorithms perform NUC using several image frames16 or even only two image frames.15 These methods can achieve good performance when the relative motion between successive image frames is a small translation along the vertical or horizontal direction. However, large displacements between consecutive image frames can occur in some applications.17,18 To deal with the motion constraint of the previous approaches,15,16 we propose a new SBNUC method that estimates the correction parameters using several image frames. In the proposed method, we utilize prior information on the parameters regarding the responsivity and the true scene irradiance. There is no restriction on the motion between successive image frames, provided the sequence is not static. The rest of this article is organized as follows. In Sec. 2, the proposed SBNUC method is detailed. In Sec. 3, the performance of the proposed method is evaluated. Conclusions are drawn in Sec. 4.

2. Proposed Method

In this section, we first formulate an optimization problem for correcting the RNU and then describe its numerical solution.

2.1. Formulation

Let us assume that the characteristic of each detector element in the IRFPA is linear.5,7 Then, the acquired signal for the i'th detector element at time t is given by

y_i(t) = x_i(t) + b_i(t),  (1)

where b_i(t) represents the offset of each detector element and x_i(t) indicates the scene irradiance. Here, we assume that there is no gain nonuniformity, since the offset component is the dominant source of the RNU.15,19,20 Given the image observation model [Eq. (1)], our objective is to estimate x and b. This problem can be solved by minimizing the proposed energy function, which consists of three terms:

E(x, b) = E_d(x, b) + λ_b E_b(b) + λ_x E_x(x),  (2)

where the three terms are defined below over the set of given image frames T and the image domain Ω, and λ_b and λ_x are regularization parameters for b and x, respectively.
The data-fidelity term, which measures the mismatch between the observed image and the estimates, is given by

E_d(x, b) = Σ_{t∈T} Σ_{i∈Ω} [y_i(t) − x_i(t) − b_i(t)]².  (3)

Solving Eq. (3) alone is an underconstrained problem in which the number of unknowns is greater than the number of equations. Thus, a regularization approach is taken to estimate x and b in this work. The regularization term for the offset is defined as follows:

E_b(b) = Σ_{t∈T} Σ_{i∈Ω} [b_i(t) − b_i(t−1)]².  (4)

Equation (4) is derived from the observation that the offset changes very slowly in time.7,9 In other words, the offset remains almost constant over several consecutive image frames.20 This regularization term favors offsets with small changes along the time axis. If we correctly estimate b for the given image frames, its temporal variation is negligible, which means that E_b is very close to zero. The last term in Eq. (2) is introduced to regularize the scene irradiance x. In general, x is smooth in the spatial domain. This fact is implicitly used in the original LMS method, where the desired image is the spatially low-pass filtered input image.7 Thus, it is natural for us to require x to be smooth in the spatial domain. Since the degree of spatial smoothness can be measured via the image gradient, E_x is given by

E_x(x) = Σ_{t∈T} Σ_{i∈Ω} { a_{h,i}(t) [∂x/∂h]_i(t)² + a_{v,i}(t) [∂x/∂v]_i(t)² },  (5)

where h and v denote the horizontal and vertical directions. The smoothness term is proportional to the magnitude of the spatial intensity change of the scene irradiance: the smoother the scene irradiance is, the smaller the value of E_x. However, the smoothness constraint is not appropriate at edge points, as pointed out in Sec. 1. Since the spatial variation of x is normally greater than that of the RNU,2 large spatial variation in the input image is mainly due to the edge points of x. We therefore adaptively adjust the effect of the smoothness constraint according to the gradient of the input image y. The weighting factors a_{h,i}(t) and a_{v,i}(t) in Eq. (5) are defined as follows:

a_{h,i}(t) = ( |[∂y/∂h]_i(t)|^α + ε )⁻¹,  (6)
a_{v,i}(t) = ( |[∂y/∂v]_i(t)|^α + ε )⁻¹.  (7)

The exponent α controls the sensitivity to the spatial gradients of y, and ε is a small constant that prevents division by zero.
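Collecting the three terms, the full energy can be sketched in numpy as below. The symbol names (lam_b, lam_x, alpha, eps) and the default values are our own, under the assumption that the edge-aware weights are computed from the input frames.

```python
import numpy as np

def energy(y, x, b, lam_b=1.0, lam_x=1.0, alpha=0.2, eps=1e-4):
    """Evaluate the three-term NUC energy for a frame stack.

    y, x, b : (T, H, W) observed frames, estimated scene irradiance,
              and estimated per-pixel offsets
    Weighting factors are inversely proportional to the spatial
    gradients of the input, so strong edges are barely penalized.
    """
    # Data fidelity: observed = irradiance + offset
    e_data = np.sum((y - x - b) ** 2)

    # Temporal smoothness of the offset (backward differences in t)
    e_off = np.sum(np.diff(b, axis=0) ** 2)

    # Edge-aware spatial smoothness of the irradiance
    dyh = np.diff(y, axis=2)                      # input gradients, horizontal
    dyv = np.diff(y, axis=1)                      # input gradients, vertical
    a_h = 1.0 / (np.abs(dyh) ** alpha + eps)      # weights in the style of Eq. (6)
    a_v = 1.0 / (np.abs(dyv) ** alpha + eps)
    dxh = np.diff(x, axis=2)
    dxv = np.diff(x, axis=1)
    e_smooth = np.sum(a_h * dxh ** 2) + np.sum(a_v * dxv ** 2)

    return e_data + lam_b * e_off + lam_x * e_smooth
```

An exact decomposition — spatially constant irradiance plus a temporally fixed offset pattern — drives all three terms to zero, which is the behavior the regularizers are designed to reward.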
Since the weighting factors are inversely proportional to the spatial gradients of y, the smoothness constraint has little effect on edge regions. These weighting factors are the same as the smoothness weights of the image smoothing operator in Ref. 21.

2.2. Numerical Solution

The proposed energy function in Eq. (2), a function of two variables, is nonconvex. We minimize it by solving two convex subproblems in an alternating way, starting from initial estimates x⁰ and b⁰:

x^{k+1} = arg min_x E(x, b^k),  (8)
b^{k+1} = arg min_b E(x^{k+1}, b).  (9)

The above process is repeated until there is no significant change in the estimates x^k and b^k. We compute the partial derivative of Eq. (2) with respect to x in order to solve Eq. (8). First, we represent Eq. (2) in matrix notation as follows:

E = Σ_{t∈T} { ‖y_t − x_t − b_t‖² + λ_x ( x_tᵀ D_hᵀ A_{h,t} D_h x_t + x_tᵀ D_vᵀ A_{v,t} D_v x_t ) },  (10)

where y_t, x_t, and b_t are the lexicographically ordered vectors corresponding to the acquired signal, the scene irradiance, and the offset at time t, respectively. A_{h,t} and A_{v,t} denote diagonal matrices containing the values of the weighting factors a_h and a_v at time t, respectively, and D_h and D_v are the backward difference operators along the horizontal and vertical directions, which approximate the spatial partial gradients. Note that the regularization term for the offset is omitted in Eq. (10) since it is constant with respect to x. Then, the partial derivative in matrix notation is given by

∂E/∂x_t = 2 M x_t − 2 c_t,  (11)

where

M = I + λ_x ( D_hᵀ A_{h,t} D_h + D_vᵀ A_{v,t} D_v ),  (12)
c_t = y_t − b_t.  (13)

Therefore, we solve a large system of linear equations [i.e., M x_t = c_t] for each t ∈ T. The conjugate-gradient (CG) method is used to solve the linear equations in this work, since the matrix M is sparse, symmetric, and positive definite.22 To solve Eq. (9), we rewrite Eq. (2) in a matrix notation different from that of Eq. (10):

E = Σ_{i∈Ω} { ‖y_i − x_i − b_i‖² + λ_b ‖D_t b_i‖² },  (14)

where y_i, x_i, and b_i are vectors formed by lexicographically stacking the acquired signal, the scene irradiance, and the offset along the time axis, respectively, for each detector element located at i ∈ Ω, and D_t is the temporal backward difference operator. Similar to Eq. (10), we exclude the smoothness regularization term in Eq. (14) since it is constant with respect to b. Differentiation with respect to b_i produces

∂E/∂b_i = 2 N b_i − 2 ( y_i − x_i ),  (15)

where

N = I + λ_b D_tᵀ D_t,  (16)

and I denotes the identity matrix in Eq. (16).
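Both subproblems reduce to sparse, symmetric positive-definite linear systems that a CG solver handles directly: one spatial system per frame for the irradiance, and one small temporal system per pixel for the offset. The sketch below uses SciPy and our own notation; it illustrates the structure of the solver, not the authors' C implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def diff_op(n):
    """Backward difference operator of shape (n-1, n): (Df)[i] = f[i+1] - f[i]."""
    return sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

def x_step(y_t, b_t, a_h, a_v, lam_x):
    """Irradiance update for one frame: solve a sparse SPD system via CG.

    a_h : (H, W-1) horizontal smoothness weights; a_v : (H-1, W) vertical ones.
    """
    H, W = y_t.shape
    Dh = sp.kron(sp.identity(H), diff_op(W))   # horizontal differences
    Dv = sp.kron(diff_op(H), sp.identity(W))   # vertical differences
    Ah = sp.diags(a_h.ravel())
    Av = sp.diags(a_v.ravel())
    M = sp.identity(H * W) + lam_x * (Dh.T @ Ah @ Dh + Dv.T @ Av @ Dv)
    x, _info = cg(M, (y_t - b_t).ravel())      # M is sparse, symmetric, SPD
    return x.reshape(H, W)

def b_step(y_i, x_i, lam_b):
    """Offset update for one pixel's time series: small SPD system via CG."""
    T = y_i.shape[0]
    Dt = diff_op(T)                            # temporal backward differences
    N = sp.identity(T) + lam_b * (Dt.T @ Dt)
    b, _info = cg(N, y_i - x_i)
    return b
```

Alternating x_step over all frames and b_step over all pixels, until the estimates stop changing, mirrors the alternating minimization described above.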
The CG method is used here again to obtain the values of the offset for each detector element.

3. Simulation Results

To the best of our knowledge, no study has been reported on correcting the RNU with several image frames that have large displacements. Therefore, no comparison is made with existing SBNUC methods in this work. The regularization parameters are set empirically for all experiments in this article, and the value of the exponent in the smoothness weights is determined to be 0.2. First, we test the convergence of the proposed method with eight synthetic images, as shown in Fig. 1. The eight images are generated by adding an artificial offset9,15 to calibrated infrared images captured by an InSb focal plane array camera operating in the 3 to 5 μm range. The RNU is generally composed of two patterns, a low-frequency one and a white noise-like one, as reported in Ref. 23. However, only the white noise-like pattern is usually prominent to observers viewing natural scenes. This is due to the masking effect of the human visual system, which attenuates contrast sensitivity at low spatial frequencies.24 Therefore, the artificial offset is generated as a realization of independent, identically distributed Gaussian random variables.9 We plot the proposed energy function in Eq. (2) against the number of iterations. As shown in Fig. 2, the value of the energy function drops quickly; we obtain good results with 11 iterations in our experiments. In Fig. 3, we present images corrected by the proposed NUC method. Close-up views of some parts of the images are depicted in Fig. 3(b) to help the reader observe the improvement in visual quality. The proposed method produces acceptable results regardless of the complexity of the spatial distribution, as shown in Fig. 3. We also investigate the effect of the number of input images on the estimated scene irradiance using the eight synthetic images. Table 1 shows the peak signal-to-noise ratio results for different numbers of input images.
As the number of input images increases, we obtain more accurate results. This can be explained by the fact that the information gained from additional consecutive image frames leads to higher-quality NUC results and enhanced temporal consistency in the offset. Note, however, that raising the number of input images increases the processing time. Thus, selecting a proper number of input images demands a trade-off between computational complexity and image quality.

Table 1. Peak signal-to-noise ratio results of the proposed method with various numbers of input images.
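The evaluation protocol above — a fixed i.i.d. Gaussian offset pattern added to clean frames, scored by PSNR — can be reproduced in outline as follows (the noise level σ and the peak value are illustrative, not values from this article):

```python
import numpy as np

def add_offset_nonuniformity(frames, sigma=0.05, seed=0):
    """Add a fixed i.i.d. Gaussian offset pattern to every frame.

    frames : (T, H, W) clean image stack
    The same offset realization corrupts all frames, mimicking a
    white noise-like residual nonuniformity pattern.
    """
    rng = np.random.default_rng(seed)
    offset = sigma * rng.standard_normal(frames.shape[1:])
    return frames + offset, offset

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Any correction that removes part of the offset pattern raises the PSNR against the clean reference, which is how Table 1-style scores are produced.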
We also perform an experiment on two sets of real infrared images, as shown in Fig. 4. We collected the two sets of images using an InSb focal plane array camera operating in the 3 to 5 μm range. One set of images, in Fig. 4(a), shows drastic intensity changes among frames due to atmospheric effects. The other set, in Fig. 4(b), has relatively large motion. Objective results for the proposed method are provided in Table 2. We employ a roughness metric8,9,15 defined by

ρ(f) = ( ‖h₁ ∗ f‖₁ + ‖h₂ ∗ f‖₁ ) / ‖f‖₁,  (17)

where h₁ and h₂ are horizontal and vertical difference filters, respectively, f represents the image under test, ‖·‖₁ is the L₁ norm, and ∗ denotes discrete convolution. The roughness metric measures the amount of high-frequency energy due to the RNU. As pointed out in Ref. 9, the metric cannot distinguish between true high-frequency energy and that from the RNU. However, the metric can be a useful indicator of the RNU to some degree when taken together with subjective evaluation.9 The correction results of the proposed method are depicted in Figs. 5 and 6. Similar to the simulated nonuniformity case, our method consistently suppresses the RNU, as shown in Figs. 5(b) and 6(b). We also present the difference images between the input and the corrected images in Fig. 7 to visualize the RNU removed by the proposed method.

Table 2. Roughness results for real images.
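The roughness metric is straightforward to compute; the sketch below uses np.diff, which is equivalent to convolving with a [1, −1] difference filter up to boundary handling.

```python
import numpy as np

def roughness(f):
    """Roughness metric: L1 mass of the horizontal and vertical first
    differences of the image, normalized by the L1 norm of the image.
    A perfectly flat image scores zero; fixed-pattern noise raises it.
    """
    num = np.abs(np.diff(f, axis=1)).sum() + np.abs(np.diff(f, axis=0)).sum()
    return num / np.abs(f).sum()
```

As the surrounding text cautions, real scene edges also contribute to this score, so it is best read as a relative indicator before versus after correction.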
We have implemented the proposed method in the C language. The simulation is performed on a PC with an Intel i7 3.40-GHz CPU and 4-GB memory. Our optimization procedure takes 5.6 and 3.8 s for the sets of images in Figs. 4(a) and 4(b), respectively.

4. Conclusion

We presented a regularization approach to SBNUC with several image frames. Our work formulated the SBNUC process as an energy minimization problem that incorporates the slowly varying nature of the detector responsivity and a smoothness constraint on the scene irradiance. In the proposed method, no assumption was made about the motion among the input images, except that the sequence must not be static. Therefore, the proposed method can be used in applications where only several image frames are available and large displacements exist among the given images. Simulation results on both synthetic and real infrared images demonstrated that the proposed method can reduce the RNU. In future work, we plan to apply more advanced numerical techniques to reduce the computational complexity of the proposed method.

References

1. D. L. Perry and E. L. Dereniak, "Linear theory of nonuniformity correction in infrared staring sensors," Opt. Eng. 32(8), 1854–1859 (1993). http://dx.doi.org/10.1117/12.145601
2. W. Gross, T. Hierl, and M. Schulz, "Correctability and long-term stability of infrared focal plane arrays," Opt. Eng. 38(5), 862–869 (1999). http://dx.doi.org/10.1117/1.602055
3. L. Shkedy et al., "Megapixel digital InSb detector for midwave infrared imaging," Opt. Eng. 50(6), 061008 (2011). http://dx.doi.org/10.1117/1.3572163
4. C. Zhang and W. Zhao, "Scene-based nonuniformity correction using local constant statistics," J. Opt. Soc. Am. A 25(6), 1444–1453 (2008). http://dx.doi.org/10.1364/JOSAA.25.001444
5. J. G. Harris and Y.-M. Chiang, "Nonuniformity correction of infrared image sequences using the constant-statistics constraint," IEEE Trans. Image Process. 8(8), 1148–1151 (1999). http://dx.doi.org/10.1109/83.777098
6. C. Zuo et al., "Scene-based nonuniformity correction method using multiscale constant statistics," Opt. Eng. 50(8), 087006 (2011). http://dx.doi.org/10.1117/1.3610978
7. D. A. Scribner et al., "Adaptive nonuniformity correction for IR focal-plane arrays using neural networks," Proc. SPIE 1541, 100–109 (1991). http://dx.doi.org/10.1117/12.49324
8. E. Vera and S. Torres, "Fast adaptive nonuniformity correction for infrared focal-plane array detectors," EURASIP J. Appl. Signal Process. 2005, 1994–2004 (2005). http://dx.doi.org/10.1155/ASP.2005.1994
9. R. C. Hardie et al., "Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm," Opt. Express 17(17), 14918–14933 (2009). http://dx.doi.org/10.1364/OE.17.014918
10. A. Rossi, M. Diani, and G. Corsini, "Temporal statistics de-ghosting for adaptive non-uniformity correction in infrared focal plane arrays," Electron. Lett. 46(5), 348–349 (2010). http://dx.doi.org/10.1049/el.2010.3559
11. L. Bottou, "Large-scale machine learning with stochastic gradient descent," in Proc. COMPSTAT'2010, 177–186 (2010).
12. C. Zuo et al., "Scene-based nonuniformity correction algorithm based on interframe registration," J. Opt. Soc. Am. A 28(6), 1164–1176 (2011). http://dx.doi.org/10.1364/JOSAA.28.001164
13. C. Zuo et al., "Improved interframe registration based nonuniformity correction for focal plane arrays," Infrared Phys. Technol. 55(4), 263–269 (2012). http://dx.doi.org/10.1016/j.infrared.2012.04.002
14. E. Vera, P. Meza, and S. Torres, "Total variation approach for adaptive nonuniformity correction in focal-plane arrays," Opt. Lett. 36(2), 172–174 (2011). http://dx.doi.org/10.1364/OL.36.000172
15. C. Zuo et al., "A two-frame approach for scene-based nonuniformity correction in array sensors," Infrared Phys. Technol. 60, 190–196 (2013). http://dx.doi.org/10.1016/j.infrared.2013.05.001
16. C. Zuo et al., "Scene based nonuniformity correction based on block ergodicity for infrared focal plane arrays," Optik 123(9), 833–840 (2012). http://dx.doi.org/10.1016/j.ijleo.2011.06.050
17. M. Mao et al., "Based on airborne multi-array butting for IRFPA staring imagery," Proc. SPIE 7658, 765858 (2010). http://dx.doi.org/10.1117/12.865976
18. C. R. del Blanco, F. Jaureguizar, and N. García, "Robust tracking in aerial imagery based on an ego-motion Bayesian model," EURASIP J. Adv. Signal Process. 2010, 1–18 (2010). http://dx.doi.org/10.1155/2010/837405
19. E. Gurevich and A. Fein, "Maintaining uniformity of IR focal plane arrays by updating offset correction coefficients," Proc. SPIE 4820, 809–820 (2003). http://dx.doi.org/10.1117/12.453552
20. O. Nesher et al., "Digital cooled InSb detector for IR detection," Proc. SPIE 5074, 120–129 (2003). http://dx.doi.org/10.1117/12.498154
21. Z. Farbman et al., "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Trans. Graph. 27(3), 67 (2008). http://dx.doi.org/10.1145/1360612
22. B. P. Flannery et al., Numerical Recipes in C, Press Syndicate of the University of Cambridge, New York (1992).
23. G. Gershon et al., "3 Mega-pixel InSb detector with 10 μm pitch," Proc. SPIE 8704, 870438 (2013). http://dx.doi.org/10.1117/12.2015583
24. P. J. Bex, S. G. Solomon, and S. C. Dakin, "Contrast sensitivity in natural scenes depends on edge as well as spatial frequency structure," J. Vision 9(10), 11–19 (2009). http://dx.doi.org/10.1167/9.10.1
Biography

Jun-Hyung Kim received his BS and PhD degrees in electronic engineering from Korea University in 2006 and 2012, respectively. He has worked for the Agency for Defense Development since 2012. His current research interests are in the areas of image processing and infrared imaging systems. He is a member of SPIE.

Jieun Kim received her BS degree in electrical engineering from Busan National University in 2002, and her MS degree in electrical engineering from KAIST in 2004. She has worked for the Agency for Defense Development since 2004. Her current interests include digital image processing, target detection, and tracking.

Sohyun Kim is currently a research member at the Agency for Defense Development in Korea and has over 10 years of experience in developing electro-optic systems. Her experience includes target detection algorithm design and real-time implementation of video trackers for infrared images. She holds a BS in physics from Sogang University and an MS in information and communications from Gwangju Institute of Science and Technology.

Joohyung Lee received his BS and MS degrees in electronics engineering from Dankuk University, Republic of Korea, in 1990 and 1992, respectively. Since 1992, he has been a principal researcher in the Electro-Optics Laboratory at the Agency for Defense Development. His research interests include analog and digital signal processing for IRST, low-noise electronics, and IRST system test and evaluation for small target detection.

Boohwan Lee received his BS, MS, and PhD degrees in electrical engineering and computer science from Kyungpook National University, Daegu, Republic of Korea, in 1991, 1993, and 2006, respectively. He has worked as a principal researcher for the Agency for Defense Development since 1993. His current interests include digital image processing, target detection, and tracking.