1. Introduction

Automatic focusing (AF) in digital microscopy depends strongly on the variability of the sample topography and on its color distribution. As stated by Qu et al.,1 different focus criterion functions perform quite differently even for the same sample. The majority of these methods have addressed AF in the context of monochromatic frames.2–5 Furthermore, many works have presented comparative evaluations of the performance of these kinds of AF techniques.6–8 Some research has determined that the best AF metric is based on the Brenner function;2 other research gives priority to the variance,9 Vollath-4,10–12 or the sum-modified-Laplacian,13 among other methods. In the case of the RGB space, few works for AF have been reported.14,15 In addition, the effectiveness of the AF algorithms depends on the color space in which the numerical computation is done.16 To avoid this dependence, a wavelet-based technique that converts multichannel (e.g., color) data to a single channel by principal components analysis has been reported;17 unfortunately, it is computationally intense.

In this paper, we propose an extension of the procedures currently used to digitally compute a focus measure on the monochromatic version of an image; these techniques are now applied to color images by adapting the AF algorithms through the modulus of the gradient of the color planes (MGC) operator.18–20 Hence, it is possible to improve the performance of a large number of AF algorithms, since all of them are capable of indicating a focused slice from the MGC image. Moreover, because first-derivative methods can be implemented efficiently on GPUs, the MGC algorithm can run in parallel.

In widefield microscopy, only the transverse sections that lie within the depth of field (DOF) of the objective lens can be focused. To record the three-dimensional (3-D) volume, it is necessary to scan the sample axially. An extra difficulty then arises: the DOF of the optical objectives decreases as the numerical aperture (NA) increases, which abruptly blurs the portions of the object lying outside the DOF. A common approach to digitally extend the depth of field (EDoF) is the use of a digital image fusion scheme. Typically, image fusion schemes select the in-focus pixels along the z-axis to reconstruct an all-in-focus composite image. Due to the high computational effort, these methods have been implemented on parallel computer systems such as clusters and GPUs.21–23 In this work, a parallel GPU implementation of a pixel-by-pixel fusion of multifocus color images based on the MGC is presented. According to the image quality metrics, the proposed method is competitive for merging these kinds of images, and the 3-D visualization of the in-focus images verifies the fusion results.

This work is organized as follows: in Sec. 2, the MGC transformation from multichannel to grayscale frames is briefly reviewed, and the AF functions and the image fusion technique used in this paper are analyzed. In Sec. 3, the procedure for acquiring the different z-stacks of digital images is described. In this research, human and animal tissue samples have been employed as test objects to test the proposed algorithms; the human tissue samples were prepared by Mikroskope.net,24 and the animal tissue came from the Human Connective Tissues Microscope Slide Set.25
In Sec. 4, the AF and fusion results of the experiments, which we conducted to evaluate the algorithms, are presented. Finally, the conclusions of the work are presented in Sec. 5.

2. Mathematical Methods

2.1. Multichannel Conversion to a Grayscale Image

In the RGB space, the red, green, and blue components of a color vector are commonly related to the pixels of an RGB image of size $M \times N$. They can be represented as in the following equation:

$f(x,y) = f_R(x,y)\,\hat{r} + f_G(x,y)\,\hat{g} + f_B(x,y)\,\hat{b}$,  (1)

where $f_R$, $f_G$, and $f_B$ are the RGB space channels and $\hat{r}$, $\hat{g}$, $\hat{b}$ are the unitary vectors, respectively. Typically, a compound gradient image is determined by18,19

$\nabla f(x,y) = |\nabla C_1(x,y)| + |\nabla C_2(x,y)| + |\nabla C_3(x,y)|$,  (2)

where $\nabla C_1$, $\nabla C_2$, and $\nabla C_3$ are the gradient images for each channel. In general, the modulus of the gradient of the color planes is computed using the Euclidean distance,20 as follows:

$g_c(x,y) = \sqrt{\sum_{i=1}^{n} \left[\left(\frac{\partial f_i}{\partial x}\right)^2 + \left(\frac{\partial f_i}{\partial y}\right)^2\right]}$,  (3)

where $n$ is the dimensionality of the color space. In what follows, the MGC operator is denoted by the compact expression $g_c(x,y)$.

Conventionally, the partial derivative along the $x$-axis of a two-dimensional function can be numerically approximated as $\partial f/\partial x \approx f(x+1,y) - f(x,y)$. Likewise, the partial derivative along the $y$-axis is given by $\partial f/\partial y \approx f(x,y+1) - f(x,y)$. In color image processing, the gradient is commonly used as a procedure for color edge detection. Therefore, the modulus of the gradient of the color planes is a sharp image, which can be computed using the equation

$g_c(x,y) = \sqrt{\sum_{i=1}^{n} \left\{\left[f_i(x+1,y) - f_i(x,y)\right]^2 + \left[f_i(x,y+1) - f_i(x,y)\right]^2\right\}}$.  (4)

The color difference formula of Eq. (4) is valid in the RGB color space.

In addition to the RGB space, color images are also processed in the hue, saturation, and intensity (HSI) color space, because it is a suitable model for color description and analysis. The HSI space is modeled as a double cone where hue represents the dominant color, saturation the purity of the color, and intensity the brightness. As stated by Gonzalez and Woods,26 this model decouples the intensity component from the color-carrying information (hue and saturation) in a color image. The intensity channel is an essential descriptor of monochromatic images, and it is classically used for multichannel conversion to a grayscale image. The difference measurement when working in the HSI color space is modified as established by Koschan and Abidi.18

In this work, the multichannel conversion to a grayscale image has been done by means of the MGC operator, as shown in Fig. 1. Because the MGC operator is a color edge detection technique for digital images, the MGC(RGB) and MGC(HSI) matrices show the high spatial frequency content of the input color images, which makes them suitable for finding focused regions.
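To make the conversion concrete, the following is a minimal sketch of Eq. (3), assuming the input image is stored as an $M \times N \times n$ floating-point array. The function name is our own, and NumPy's central differences are used where the text describes the simpler forward differences $f(x+1,y) - f(x,y)$.

```python
# Minimal sketch of the MGC conversion of Eq. (3), assuming an (M, N, n)
# float array with n color planes (n = 3 for RGB or HSI). Names are
# illustrative, not taken from the authors' code.
import numpy as np

def mgc(img):
    """Modulus of the gradient of the color planes: the Euclidean norm,
    over all n planes, of the partial derivatives along x and y."""
    img = np.asarray(img, dtype=np.float64)
    acc = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):             # loop over the color planes
        gy, gx = np.gradient(img[..., c])     # finite-difference partials
        acc += gx**2 + gy**2                  # accumulate squared moduli
    return np.sqrt(acc)                       # single-channel "sharp" image
```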
2.2. Autofocus Methods

In the literature, there exist several comparisons of the performance of AF algorithms.4,6,8,9,12 Each algorithm produces a figure of merit (FM) that is analyzed by taking into account the global or local variance of the image intensity values. Customarily, the AF algorithms are classified into five groups according to their mathematical nature: derivative-based algorithms,27,28 statistical algorithms,10 histogram-based algorithms,6,12 intuitive algorithms,9 and image transformation-based algorithms.3 Throughout this paper, 15 AF algorithms, which have been widely reported in the literature, are tested and compared using the MGC images. This task was carried out to improve the performance of the AF algorithms. Table 1 summarizes the definitions of the most typical AF metrics rewritten in the new approach, namely the MGC transformation. The output of an ideal AF algorithm is commonly defined as having a maximum value at the best focused image position. Moreover, this value clearly decreases as defocus increases. As noted by Tian,3 the fundamental requirements for an FM are unimodality and monotonicity, which ensure that the FM has only one extreme value and is monotonic on each side of its peak or valley. Furthermore, Redondo et al.4 defined the number of local maxima, the width of the focus curve measured at 80% and 40% of its height, and noise/illumination invariance as important features of the autofocus curve. Other complementary characteristics of the AF algorithms are their accuracy and fast response. To evaluate the AF performance (AP) of each AF algorithm, the following score is proposed:

$\mathrm{AP} = 1 - \frac{n}{10}$,  (5)

where $n = |z - z_f|/\delta$ represents the number of focal planes along the $z$-axis separating the indicated plane $z$ from the origin $z_f$, and $\delta$ is the distance between axial planes. For instance, if $n = 3$, then the AP metric is equal to 0.7. This happens because the measurement in question does not locate the focused plane until precisely three steps away from the plane $z_f$. At best, AP is equal to 1. A low AP results from two conditions: (a) the AF algorithm does not reveal the focal plane at $z_f$, or (b) the AF algorithm indicates a wrong focal plane, which is too far from $z_f$.

Table 1. AF algorithms rewritten in terms of the modulus of the color gradient operator $g_c(x,y)$.
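As an illustration of the metrics in Table 1, the sketch below evaluates one representative focus measure (the variance) on the MGC image of every slice of a z-stack and takes the peak of the resulting focus curve. The (Z, M, N, 3) stack layout and the choice of the variance as the representative metric are our assumptions; `mgc` is the sketch given in Sec. 2.1.

```python
# Hedged sketch: one of the 15 metrics of Table 1 (the variance), computed
# on g_c(x, y) for each slice of a z-stack shaped (Z, M, N, 3). The
# best-focused slice is the peak of the focus curve.
import numpy as np

def variance_fm(gc):
    """Variance focus measure computed on the MGC image g_c(x, y)."""
    return float(np.mean((gc - gc.mean()) ** 2))

def autofocus(stack):
    """Return the index of the best-focused slice and the full FM curve."""
    curve = np.array([variance_fm(mgc(s)) for s in stack])
    return int(np.argmax(curve)), curve
```

An ideal metric in the sense of Tian3 yields a `curve` that is unimodal and monotonic on each side of its single peak.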
2.3. Multifocus Image Fusion

As mentioned previously, any microscopic imaging system can only focus the part of the field of view (FOV) of the sample that lies inside the DOF of the objective lens. This means that only certain axial planes of the sample are in focus. A current solution to this drawback is multifocus image fusion, which reconstructs an all-in-focus image of the complete FOV for a particular specimen. This can be done by capturing images of the sample at different focal axial planes. In this section, a color image fusion scheme based on the MGC method is proposed, as shown in Fig. 2.

Let $f_i^{(z)}(x,y)$ be a set of input images, where $i = 1, \ldots, n$. The index $i$ is related to the channel/band used, and $z = 1, \ldots, Z$ denotes the axial plane. For each axial plane, Eq. (2) is computed to create a compound gradient image, and then for each pixel the maximum value is selected using $D(x,y) = \arg\max_z \nabla f^{(z)}(x,y)$. In other words, the matrix $D(x,y)$ denotes the slice axial position of the in-focus pixels along the $z$-axis. A postprocessing stage involves a spatial consistency algorithm.17 This postprocessing is carried out by low-pass filtering the matrix $D(x,y)$ with a median filter. It ensures that the majority of the intensity pixels in a neighborhood of $(x,y)$ come from the same $z$-slice or from the closest one. For example, the spatial consistency of the matrix $D(x,y)$ is shown in Fig. 3. Figure 3(a) contains three neighborhoods where the value of the slice axial position is higher than that of its neighbors. Figure 3(b) shows these values adjusted to match the values of their neighborhood, which conserves the continuity of the surface of the sample. To avoid introducing artificial information, the fused image is composed from the (multichannel) pixels present in the original input data only if the slice position fulfills the spatial-consistency condition at each pixel $(x,y)$. Therefore, a multifocus image fusion algorithm can be defined as follows:

$F_i(x,y) = f_i^{(D(x,y))}(x,y)$.  (6)

Schematically, the proposed image fusion procedure is shown in Fig. 2. As can be seen, the fused image is composed of the sharp regions provided by the in-focus pixels of the input color images. To accelerate the numerical computation, the fusion process is migrated to the GPU.

The resulting fused images are evaluated with a nonreference image quality metric based on measuring the anisotropy of the images. The anisotropic quality index (AQI) of an image is given by29

$\mathrm{AQI} = \sqrt{\frac{1}{S} \sum_{s=1}^{S} \left[R(\theta_s) - \bar{\mu}\right]^2}$,  (7)

where $\bar{\mu}$ is the mean of the values of the Rényi entropy $R(\theta_s)$, measured in the directions $\theta_s$. The Rényi entropy measures the frequency content of an image through its directional pseudo-Wigner distribution.29

2.3.1. Simulated data

For testing purposes, a simulated stack of 20 frames is constructed from a color image. Figure 2(a) shows some digitally defocused slices generated with the extended-depth-of-field (EDoF) software plug-in.17 Each blurred image was obtained by convolving the image with a Gaussian point spread function (PSF) of increasing width. The 3-D visualization of the resulting in-focus image using the fusion scheme of Eq. (6) is sketched in Fig. 2(d). The spatial consistency of the corresponding matrix $D(x,y)$ is shown in Fig. 3. In addition, the results of the AQI of the fused image and the normalized mean square error (NMSE)30 between the original image and the merged image are shown in Table 2.

Table 2. Image quality assessment of in-focus images.
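To make the fusion scheme of Fig. 2 and Eq. (6) concrete, a compact sketch is given below. It assumes a z-stack shaped (Z, M, N, 3); the 7x7 median kernel for the spatial-consistency step is an illustrative choice, not the paper's value, and `mgc` is the conversion sketched in Sec. 2.1.

```python
# Hedged sketch of the MGC-based multifocus fusion: per-pixel argmax of the
# gradient maps, median-filter spatial consistency, then composition of the
# fused image from the original (multichannel) pixels, as in Eq. (6).
import numpy as np
from scipy.ndimage import median_filter

def fuse(stack, kernel=7):
    sharp = np.stack([mgc(s) for s in stack])  # (Z, M, N) gradient maps
    d = np.argmax(sharp, axis=0)               # slice of max sharpness per pixel
    d = median_filter(d, size=kernel)          # spatial consistency of D(x, y)
    rows, cols = np.indices(d.shape)
    fused = stack[d, rows, cols]               # Eq. (6): pixels from slice D(x, y)
    return fused, d
```

One simple test, in the spirit of Sec. 2.3.1, is to blur different regions of a sharp image in different slices (e.g., with scipy.ndimage.gaussian_filter at increasing sigma) and verify that the fused output recovers the sharp original.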
3. Image Acquisition of Histological Samples

A motorized Axio Imager.M1 optical microscope system manufactured by Carl Zeiss is used to image the histological samples and to capture their color digital images. Some examples of these kinds of tissue samples are shown in Fig. 4. These microscopic objects are imaged using bright-field illumination in the optical microscope system. The microscope incorporates a 5-megapixel AxioCam Mid Range Color camera with a spectral range of 400 to 710 nm. Furthermore, as part of the optical microscope device, an x-y mechanical platform and a motorized stage are integrated to control the focus movements along the z-axis. From Table 3, it is evident that the interplanar distance $\delta$ between different optical sections is determined by the NA of the objective lens.

Table 3. Specifications of the EC Plan-Neofluar objective lenses (Carl Zeiss Microscopy, retrieved from Ref. 32) employed during image acquisition.33,34
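For reference, the dependence of the DOF on the NA noted above is commonly expressed by the standard widefield relation below (cf. the microscopy primer cited as Ref. 34); this is a textbook expression, not a value reproduced from Table 3.

```latex
% Standard widefield depth-of-field relation (cf. Ref. 34):
% \lambda = illumination wavelength, n = refractive index of the medium,
% NA = numerical aperture, M = lateral magnification, e = detector pixel size.
d_{\mathrm{tot}} \;=\; \frac{\lambda\, n}{\mathrm{NA}^{2}} \;+\; \frac{n}{M\,\mathrm{NA}}\, e
```

The first term is the wave-optical DOF, which falls off as $1/\mathrm{NA}^{2}$ and therefore dominates the abrupt loss of focus at high NA discussed in Sec. 1.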
4. Results and Discussion

4.1. Focusing Results

To obtain a performance evaluation of the 15 AF techniques on the MGC images, six z-stacks of 21 multichannel images each are recorded using two histological samples. Each stack has a particular, highly dominant color, as shown in Fig. 4. This allows us to evaluate the MGC method for different color distributions and magnifications of the digital image.
Some research4 has reported that at high magnifications the performance of the various AF metrics is drastically impaired. According to the results shown in the graphs of Fig. 8, when using the MGC approach all the FM curves present monotonic behavior, even when the magnification is increased to the highest value used (oil immersion). This experimental result supports the advantage of a color-to-MGC space transformation. Nevertheless, a problem arises when images of a thick sample are acquired at high magnification: portions of the image are then only partly in focus. In the graphs of Fig. 8(d), two in-focus regions at different axial positions can be seen. According to the results given in Tables 4 and 5, all the AF measures realized in the MGC space are accurate in spite of the different magnifications, unlike some typically used channels for focus measure.

Table 4. Autofocusing performance (AP) of all metrics in different grayscale channels and MGC images. The mean and standard deviation of AP are given in bold.
Table 5. Autofocusing performance (AP) of all metrics in different grayscale channels and MGC images. The mean and standard deviation of AP are given in bold.

Another advantage of the MGC method is its computational simplicity and inherent parallelism. Figure 9 shows the computational cost of the MGC(RGB) method on a z-stack of digital images when run on an Intel Xeon CPU at 2.10 GHz with 16 GB of RAM and an NVIDIA Quadro K4000 GPU. The parallelized MGC method on the GPU is one order of magnitude faster than the same application implemented on the CPU.
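Because each pixel's finite differences are independent of its neighbors, the MGC kernel maps directly onto a GPU. Below is a minimal sketch using CuPy as a NumPy drop-in; this library choice is our assumption (the paper's own implementation targets CUDA on the Quadro K4000 and is not reproduced here).

```python
# Hedged sketch of a GPU version of the MGC operator using CuPy as a
# drop-in replacement for NumPy; it illustrates the inherent per-pixel
# parallelism, not the authors' CUDA code.
import cupy as cp

def mgc_gpu(img):
    img = cp.asarray(img, dtype=cp.float64)    # host -> device transfer
    acc = cp.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        gy, gx = cp.gradient(img[..., c])      # differences execute on the GPU
        acc += gx**2 + gy**2
    return cp.asnumpy(cp.sqrt(acc))            # device -> host transfer
```

The host-device transfers bracket the kernel, so the order-of-magnitude speedup reported in Fig. 9 is realized when many slices are processed per transfer.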
4.2. Multifocus Image Fusion Results

It is well known that the digital images of thick microscopic objects provided by an optical widefield microscope are strongly blurred in the portion of the object that lies outside the DOF of the objective lens. We can, however, seek those regions of the FOV that are located in focus. This subsection describes the results of a method to merge multifocus frames based on the MGC approach.

Our experiment starts with the acquisition of a digital image z-stack from a histological sample. This set of z-images is obtained by moving the microscope stage along the optical axis. For this, the axial extension of the sample is defined, and the axial stage carrying the sample is then moved to cover this extension. The interplanar distance $\delta$ between different optical sections is less than the axial resolution of the microscope, defined as the DOF in Table 3. From this table, it is evident that $\delta$ is determined by the NA of the objective lens.

Digital images of a beetle shell are acquired at a fixed magnification and interplanar distance $\delta$; the given z-stack is composed of 42 images. Figure 10(a) shows the in-focus image obtained with the software package EDoF plug-in,31 based on a complex-wavelets algorithm for EDoF.17 That fusion process takes an average execution time of 52.2 s. The fused image of Fig. 10(b) is based on the proposed MGC fusion method, in which the resulting slice axial position matrix $D(x,y)$ is low-pass filtered using a median filter; the total execution time is 32.1 s. The 3-D visualization of the resulting in-focus image is sketched in Fig. 10(c). Finally, the nonreference image quality metric of Eq. (7) is computed for the in-focus image quality assessment. The results are shown in Fig. 10(d).

Another example is the case of an umbilical cord imaged at a fixed magnification and interplanar distance $\delta$; the given z-stack is composed of 39 images. Again, Fig. 11(a) shows the in-focus image obtained with the EDoF plug-in,17,31 which takes an average execution time of 477.43 s. The fused image of Fig. 11(b) is based on the proposed MGC fusion method, where the resulting slice axial position matrix $D(x,y)$ is again low-pass filtered using a median filter. The total execution time is 238.9 s, and the fusion evaluation is shown in Fig. 11(d). As we can see, the proposed fusion method yields a high-quality image independent of faulty illumination during the image acquisition.

5. Conclusions

In this research, the MGC operator has been applied to digital color images. This procedure transforms the multichannel information into a grayscale image, which is used (a) for focus measurements during the AF process and (b) for extending the DOF in the framework of digital microscopy applications. The AF experimental results of this work demonstrate the effectiveness of the MGC method when it is applied to several z-stacks of images.
From this point of view, we can conclude that the use of the proposed MGC image increases the performance of currently used passive AF algorithms and produces monotonic FM curves with only one local maximum and a similar width of the focus curve, as shown in Figs. 5, 6, and 8. The test frames have been acquired from two histological samples, amplified at four magnifications, the highest with oil immersion. The AF graphs in Fig. 8 that are obtained by the MGC method present similar behaviors even up to the highest magnification. Therefore, all the AF algorithms reveal the focused image slice at $z_f$. Contrastingly, as shown by the AP results in Tables 4 and 5, the same AF algorithms in other color spaces work properly only in some cases. As can be seen in the same tables, the mean and the standard deviation of the AF performance for the MGC image are 1 and 0, respectively, for both amplifications. We can conclude that the effectiveness of the AF algorithms depends on several factors: (1) the color space selected for the numerical computation, (2) the color distribution of the sample under inspection, and (3) the sample magnification. Only in the MGC space does the AF performance tend to be invariant to these factors. Another remarkable characteristic of the MGC method is that it is computationally simple and inherently parallel. The computational cost of the MGC(RGB) algorithm implemented on a GPU is reduced by an order of magnitude, as shown in Fig. 9.

On the other hand, the fusion scheme was implemented on an image z-stack for EDoF. The fused image is composed of the sharp regions provided by the in-focus pixels of the input data. Our fusion method has been quantitatively and qualitatively compared with the EDoF plug-in, which is widely used in digital microscopy for DOF extension. For a simulated image stack, the resulting image fusion was compared with the corresponding original images using the NMSE, as shown in Fig. 2. Also, the nonreference image quality metric AQI was implemented for image quality assessment. These quantitative evaluations, given in Table 2, show that the quality of the resulting fused image is better than that of the fused image given by the EDoF plug-in. The 3-D visualization of the in-focus images verifies the fusion results. Based on the experimental results of Figs. 10 and 11, the MGC method is sufficiently competitive to merge multifocus images. In general, the main advantages of the proposed fusion method based on the MGC transformation are that it is computationally simpler, faster, and more efficient than other methods typically used to fuse multifocus information. Additionally, the comparisons in Figs. 11(a) and 11(b) show that our method yields a high-quality image independent of faulty illumination during the image acquisition.

Acknowledgments

R. Hurtado thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT), Award No. 578446. We also acknowledge the support of the PADES program, Award No. 2017-13-011-053. We extend our gratitude to the reviewers and to Jennifer Speier for their useful suggestions.

References
1. Y. Qu, S. Zhu, and P. Zhang, “A self-adaptive and nonmechanical motion autofocusing system for optical microscopes,” Microsc. Res. Tech. 79(11), 1112–1122 (2016). https://doi.org/10.1002/jemt.v79.11
2. S. Yazdanfar et al., “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16(12), 8670–8677 (2008). https://doi.org/10.1364/OE.16.008670
3. Y. Tian, “Autofocus using image phase congruency,” Opt. Express 19(1), 261–270 (2011). https://doi.org/10.1364/OE.19.000261
4. R. Redondo et al., “Autofocus evaluation for brightfield microscopy pathology,” J. Biomed. Opt. 17(3), 036008 (2012). https://doi.org/10.1117/1.JBO.17.3.036008
5. A. Lipton and E. J. Breen, “On the use of local statistical properties in focusing microscopy images,” Microsc. Res. Tech. 31(4), 326–333 (1995). https://doi.org/10.1002/(ISSN)1097-0029
6. L. Firestone et al., “Comparison of autofocus methods for use in automated algorithms,” Cytometry 12, 195–206 (1991). https://doi.org/10.1002/(ISSN)1097-0320
7. M. Subbarao and J. K. Tyan, “Selecting the optimal focus measure for autofocusing and depth from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 864–870 (1998). https://doi.org/10.1109/34.709612
8. Y. Sun, S. Duthaler, and B. J. Nelson, “Autofocusing in computer microscopy: selecting the optimal focus algorithm,” Microsc. Res. Tech. 65, 139–149 (2004). https://doi.org/10.1002/(ISSN)1097-0029
9. X. Y. Liu, W. H. Wang, and Y. Sun, “Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear,” J. Microsc. 227(1), 15–23 (2007). https://doi.org/10.1111/jmi.2007.227.issue-1
10. D. Vollath, “The influence of the scene parameters and of noise on the behavior of automatic focusing algorithms,” J. Microsc. 151(2), 133–146 (1988). https://doi.org/10.1111/jmi.1988.151.issue-2
11. O. A. Osibote et al., “Automated focusing in bright-field microscopy for tuberculosis detection,” J. Microsc. 240(2), 155–163 (2010). https://doi.org/10.1111/jmi.2010.240.issue-2
12. A. Santos et al., “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188(3), 264–272 (1997). https://doi.org/10.1046/j.1365-2818.1997.2630819.x
13. J. Cao et al., “Method based on bioinspired sample improves autofocusing performances,” Opt. Eng. 55(10), 103103 (2016). https://doi.org/10.1117/1.OE.55.10.103103
14. K. Omasa and M. Kouda, “3-D color video microscopy of intact plants,” in Image Analysis: Methods and Applications, pp. 257–263, CRC Press, New York (2001).
15. H. Shi, Y. Shi, and X. Li, “Study on auto-focus methods of optical microscope,” in 2nd Int. Conf. on Circuits, System and Simulation (ICCSS 2012), IPCSIT (2012).
16. M. Selek, “A new autofocusing method based on brightness and contrast for color cameras,” Adv. Electr. Comput. Eng. 16(4), 39–44 (2016). https://doi.org/10.4316/AECE.2016.04006
17. B. Forster et al., “Complex wavelets for extended depth of field: a new method for the fusion of multichannel microscopy images,” Microsc. Res. Tech. 65, 33–42 (2004). https://doi.org/10.1002/(ISSN)1097-0029
18. A. Koschan and M. Abidi, Digital Color Image Processing, Wiley Interscience, New Jersey (2008).
19. R. M. Rangayyan, B. Acha, and C. Serrano, Color Image Processing with Biomedical Applications, SPIE Press, Bellingham, Washington (2011).
20. T. Gevers and H. Stokman, “Classifying color edges in video into shadow-geometry, highlight, or material transitions,” IEEE Trans. Multimedia 5(2), 237–243 (2003). https://doi.org/10.1109/TMM.2003.811620
21. W. A. Carrington and D. Lisin, “Cluster computing for digital microscopy,” Microsc. Res. Tech. 64(2), 204–213 (2004). https://doi.org/10.1002/(ISSN)1097-0029
22. J. M. Castillo-Secilla et al., “Autofocus method for automated microscopy using embedded GPUs,” Biomed. Opt. Express 8(3), 1731–1740 (2017). https://doi.org/10.1364/BOE.8.001731
23. J. C. Valdiviezo-N et al., “Autofocusing in microscopy systems using graphics processing units,” Proc. SPIE 8856, 88562K (2013). https://doi.org/10.1117/12.2024967
24. “Konus prepared slides: the human body I and III sets,” http://www.microscopes.eu/en/Brand/Konus/ (accessed February 2017).
25. Carolina, “Human connective tissues microscope slide set,” http://www.carolina.com/ (accessed February 2017).
26. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall (2002).
27. J. M. Tenenbaum, “Accommodation in computer vision,” Department of Computer Science, Stanford University (1970).
28. J. F. Brenner et al., “An automated microscope for cytologic research: a preliminary evaluation,” J. Histochem. Cytochem. 24(1), 100–111 (1976). https://doi.org/10.1177/24.1.1254907
29. S. Gabarda et al., “Image denoising and quality assessment through the Rényi entropy,” Proc. SPIE 7444, 744419 (2009). https://doi.org/10.1117/12.826153
30. S. Sumathi, L. A. Kumar, and P. Surekha, Computational Intelligence Paradigms for Optimization Problems Using MATLAB®/SIMULINK®, CRC Press, New York (2016).
31. A. Prudencio, J. Berent, and D. Sage, “Extended depth of field plug-in,” http://bigwww.epfl.ch/demo/edf (accessed June 2017).
32. Carl Zeiss Microscopy GmbH, “Objectives from Carl Zeiss,” https://www.zeiss.com/microscopy/int/home.html (accessed June 2017).
33. R. Hurtado et al., “Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion,” Proc. SPIE 9578, 957811 (2015). https://doi.org/10.1117/12.2188927
34. K. R. Spring and M. W. Davidson, “The source for microscopy education, depth of field and depth of focus,” https://www.microscopyu.com/microscopy-basics/depth-of-field-and-depth-of-focus (accessed December 2017).
Biography

Román Hurtado-Pérez received his bachelor's degree in computational systems and his master's degree from the Polytechnic University of Tulancingo (UPT) in 2004 and 2013, respectively. He is a PhD student in optomechatronics at UPT. His current research areas include multifocus image fusion, autofocusing, GPUs, and computer vision.

Carina Toxqui-Quitl is an assistant professor at the Polytechnic University of Tulancingo. She received her BS degree from the Puebla Autonomous University, Mexico, in 2004, and her MS and PhD degrees in optics from the National Institute of Astrophysics, Optics, and Electronics in 2006 and 2010, respectively. Her current research areas include image moments, multifocus image fusion, wavelet analysis, and computer vision.

Alfonso Padilla-Vivanco received his bachelor's degree in physics from the Puebla Autonomous University, Mexico, and his MS and PhD degrees, both in optics, from the National Institute of Astrophysics, Optics, and Electronics in 1995 and 1999, respectively. In 2000, he held a postdoctoral position in the physics department at the University of Santiago de Compostela, Spain. He is a professor at the Polytechnic University of Tulancingo. His research interests include optical information processing, image analysis, and computer vision.

J. Félix Aguilar-Valdez received his BS degree in physics from the National Autonomous University of Mexico in 1980, and his MS and PhD degrees in optics from the Center for Scientific Research and Higher Education of Ensenada, Baja California, México, in 1994. His current research areas include high-resolution microscopy, confocal microscopy, near-field diffraction, and microscopic imaging.

Gabriel Ortega-Mendoza received his PhD from the FCFM-BUAP, Puebla, México, in 2013, and has been a full professor at the Universidad Politécnica de Tulancingo, Hidalgo, México, since 2013. His research areas include multifocus image fusion, image microscopy, optical fiber lasers, plasmon resonance, and the manipulation of photothermal-induced microbubbles.