Focus measure method based on the modulus of the gradient of the color planes for digital microscopy
Abstract
The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during the autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is applied to more than 15 focus metrics that are typically used in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An additional advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method yields a high-quality image independent of faulty illumination during image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.

1.

Introduction

Autofocusing (AF) in digital microscopy is highly dependent on the topographic variability of the sample and also on its color distribution. As stated by Qu et al.,1 different focus criterion functions perform quite differently even for the same sample. The majority of these methods have been developed to study AF in the context of monochromatic frames.2–5 Furthermore, many works have been published that present a comparative evaluation of the performance of these kinds of AF techniques.6–8 Some research has determined that the best AF metric is based on the Brenner function;2 other research gives priority to the variance,9 Vollath-4,10–12 or the sum-modified-Laplacian,13 among other methods.

In the case of the RGB space, few works on AF have been reported.14,15 In addition, the effectiveness of the AF algorithms depends on the color space in which the numerical computation is done.16 To avoid this dependence, a wavelet-based technique that converts multichannel (e.g., color) data to a single channel by principal components analysis has been reported for this task;17 unfortunately, it is computationally intensive.

In this paper, we propose an extension of the procedures currently used to digitally compute a focus measure on the monochromatic version of an image; these techniques are now applied to color images by adapting the AF algorithms through the modulus of the gradient of the color planes (MGC) operator.18–20 Hence, it is possible to improve the performance of a large number of AF algorithms, since all of them are capable of indicating a focused slice from the MGC image. Moreover, because first-derivative methods can be efficiently implemented on GPUs, the MGC algorithm can run in parallel.

In widefield microscopy, only the transverse sections of the sample that lie within the depth of field (DOF) of the objective lens can be brought into focus. To record the three-dimensional (3-D) volume, it is necessary to scan the sample axially. An additional difficulty arises: the DOF of an optical objective decreases as its numerical aperture (NA) increases, which abruptly blurs the portion of the object that lies outside of the DOF.

A common approach to digitally extend the depth of field (EDoF) is the use of a digital image fusion scheme. Typically, image fusion schemes select the in-focus pixels along the z-axis to reconstruct an all-in-focus composite image. Due to the high computational effort, these methods have been implemented on parallel computer systems such as clusters and GPUs.21–23 In this work, a parallel GPU implementation of a pixel-by-pixel fusion of multifocus color images based on MGC is presented. According to the image quality metrics, the proposed method is competitive for merging these kinds of images. The 3-D visualization of the in-focus images verifies the fusion results.

This work is organized as follows: in Sec. 2, the MGC transformation from multichannel to grayscale frames is briefly reviewed, and the AF functions and image fusion technique used in this paper are analyzed. In Sec. 3, the procedure for acquiring the different z-stacks of digital images is described. In this research, human and animal tissue samples have been employed as test objects to prove the proposed algorithms. The human tissue samples were prepared by Mikroskope.net,24 and the animal tissue came from the Human Connective Tissues Microscope Slide Set.25 In Sec. 4, the results of the AF and fusion experiments conducted to evaluate the algorithms are presented. Finally, the conclusions of the work are presented in Sec. 5.

2.

Mathematical Methods

2.1.

Multichannel Conversion to a Grayscale Image

In the RGB space, the red, green, and blue components of a vector are commonly associated with the pixels of an RGB image of size $M \times N$; each pixel can be represented by $C(x,y)$, as in the following equation:

Eq. (1)

$$C(x,y) = R(x,y)\,\hat{i} + G(x,y)\,\hat{j} + B(x,y)\,\hat{k},$$
where $R(x,y)$, $G(x,y)$, and $B(x,y)$ are the RGB space channels and $\hat{i}$, $\hat{j}$, $\hat{k}$ are the respective unit vectors.

Typically, a compound gradient image gc(x,y) is determined by18,19

Eq. (2)

$$g_c(x,y) = \sqrt{[g_R(x,y)]^2 + [g_G(x,y)]^2 + [g_B(x,y)]^2},$$
where $g_R(x,y)$, $g_G(x,y)$, and $g_B(x,y)$ are the gradient images of the individual channels.

In general, the modulus of the gradient of the color planes $g_c$ is computed using the Euclidean distance,20 as follows:

Eq. (3)

$$g_c(x,y) = \sqrt{\sum_{i=1}^{\text{band}} \left\{ \left[ \frac{\partial C(x,y,i)}{\partial x} \right]^2 + \left[ \frac{\partial C(x,y,i)}{\partial y} \right]^2 \right\}},$$
where $i = 1, \ldots, \text{band}$ runs over the dimensionality of the color space. An alternative representation of the MGC operator is the expression $g_c(x,y) = |\mathrm{MGC}[C(x,y,i)]|$.

Conventionally, the partial derivative along the x-axis of a two-dimensional function $C(x,y,i)$ can be numerically approximated by the forward difference $\frac{\partial C(x,y,i)}{\partial x} \approx C(x+1,y,i) - C(x,y,i)$. Likewise, the partial derivative along the y-axis is given by $\frac{\partial C(x,y,i)}{\partial y} \approx C(x,y+1,i) - C(x,y,i)$.

In color image processing, the gradient is commonly used as a procedure for color edge detection. Therefore, the modulus of the gradient of the color planes is an edge image that highlights sharp regions; it can be computed using the equation

Eq. (4)

$$g_c(x,y) = \left[ \sum_{i=1}^{\text{band}} [C(x+1,y,i) - C(x,y,i)]^2 + \sum_{i=1}^{\text{band}} [C(x,y+1,i) - C(x,y,i)]^2 \right]^{1/2}.$$
The color difference formula of Eq. (4) is valid in the RGB color space.
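As a concrete illustration, Eq. (4) can be prototyped in a few lines of NumPy. This is a minimal sketch, not the authors' GPU implementation; the function name `mgc` and the zero-padding of the last row and column (so that the output matches the input size) are our own choices:

```python
import numpy as np

def mgc(image: np.ndarray) -> np.ndarray:
    """Modulus of the gradient of the color planes, Eq. (4).

    `image` is an (M, N, bands) float array, e.g., an RGB image.
    Forward differences are taken along both spatial axes; the last
    row/column is zero-padded so the output matches the input size.
    """
    img = image.astype(np.float64)
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:-1, :, :] = img[1:, :, :] - img[:-1, :, :]   # C(x+1,y,i) - C(x,y,i)
    dy[:, :-1, :] = img[:, 1:, :] - img[:, :-1, :]   # C(x,y+1,i) - C(x,y,i)
    # Sum the squared differences over the color bands, then take the root.
    return np.sqrt((dx ** 2).sum(axis=2) + (dy ** 2).sum(axis=2))
```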

In addition to the RGB space, color images are also processed in the hue, saturation, and intensity (HSI) color space, because it is a suitable model for color description and analysis. The HSI space is modeled as a double cone, where hue represents the dominant color, saturation the purity of the color, and intensity the brightness. As stated by Gonzalez and Woods,26 this model decouples the intensity component from the color-carrying information (hue and saturation) in a color image. The intensity channel is an essential descriptor of monochromatic images, and it is classically used for multichannel conversion to a grayscale image. The difference measurement used when working in the HSI color space is modified as established by Koschan and Abidi.18
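For reference, a minimal sketch of the classical RGB-to-HSI conversion described by Gonzalez and Woods26 follows; the `rgb_to_hsi` name, the `eps` guard against division by zero, and the normalization of hue to [0, 1] are our own choices:

```python
import numpy as np

def rgb_to_hsi(rgb: np.ndarray) -> np.ndarray:
    """Convert an (M, N, 3) RGB image with values in [0, 1] to HSI."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10
    # Intensity: the mean of the three channels.
    i = (r + g + b) / 3.0
    # Saturation: distance from the achromatic (gray) axis.
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    # Hue: angle around the color circle.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)
    return np.stack([h, s, i], axis=-1)
```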

In this work, the multichannel conversion to a grayscale image has been done by means of the MGC operator, as shown in Fig. 1. Because the MGC operator is a color edge detection technique for digital images, the MGC(RGB) and MGC(HSI) matrices show the high spatial frequency content of the input color images, which makes them suitable for finding focused regions.

Fig. 1

Multichannel conversion to grayscale images by means of the MGC operator from (a) defocused and (b) focused color images. The MGC(RGB) and MGC(HSI) images show sharp regions for the case of focused images, unlike the dark uniform intensity distributions for the case of defocused images. As can be seen, the MGC operator is sensitive to variations in intensity but not very sensitive to variations in hue and saturation.


2.2.

Autofocus Methods

In the literature, there exist several comparisons of the performance of AF algorithms.4,6,8,9,12 Each algorithm produces a figure of merit (FM) that is computed from the global or local variation of the image intensity values f(x,y). Customarily, the AF algorithms can be classified into five groups according to their mathematical nature: derivative-based algorithms,27,28 statistical algorithms,10 histogram-based algorithms,6,12 intuitive algorithms,9 and image-transformation-based algorithms.3 Throughout this paper, 15 AF algorithms that have been widely reported in the literature are tested and compared using the MGC images. This task was carried out to improve the performance of the AF algorithms. Table 1 summarizes the definitions of the most typical AF metrics rewritten in the new approach, namely the MGC transformation. The output of an ideal AF algorithm is commonly defined as having a maximum value at the best focused image position; moreover, this value clearly decreases as defocus increases. As noted by Tian,3 the fundamental requirements for an FM are unimodality and monotonicity, which ensure that the FM has only one extreme value and is monotonic on each side of its peak or valley. Furthermore, Redondo et al.4 defined the number η of local maxima, the width α/β of the focus curve measured at 80% and 40% of its height, and noise/illumination invariance as important features of the autofocus curve. Other complementary characteristics of the AF algorithms are their accuracy and fast response. To evaluate the AF performance (AP) of each AF algorithm, the following score is proposed:

Eq. (5)

$$\mathrm{AP}(\text{Focus Measure}, \text{Space}) = 1 - \left| \frac{Z}{10} \right|,$$
where $Z = z/\Delta Z$ represents the number of focal planes along the z-axis separating the detected focus from the origin $z = 0$, and $\Delta Z$ is the distance between axial planes. For instance, if $Z = 150\ \mu\text{m} / 50\ \mu\text{m} = 3$, then the AP metric is equal to 0.7. This happens because the evaluated measure does not locate the focused plane until precisely three $\Delta Z$ steps away from the plane $z = 0$. At best, AP is equal to 1. A low AP results from two conditions: (a) the AF algorithm does not reveal the focal plane at $z = 0$ or (b) the AF algorithm indicates a wrong focal plane, which is too far from $z = 0$.
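As a worked illustration of Eq. (5), a small sketch follows; the clamping of negative scores to zero for gross misses is our own assumption, not stated in the paper:

```python
def autofocus_performance(z_um: float, delta_z_um: float) -> float:
    """AP score of Eq. (5): 1 - |Z/10|, with Z = z / delta_z.

    z_um is the axial distance (micrometers) between the plane selected
    by the focus metric and the true focal plane z = 0; delta_z_um is
    the spacing between captured planes. AP = 1 means the metric found
    the focal plane exactly.
    """
    Z = z_um / delta_z_um
    return max(0.0, 1.0 - abs(Z) / 10.0)

# Example from the text: a metric that peaks 150 um away with 50 um
# steps gives Z = 3 and AP = 0.7.
print(autofocus_performance(150.0, 50.0))  # 0.7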

Table 1

AF algorithms rewritten in terms of the modulus of the color gradient operator $g_c(x,y)$.

A. Derivative-based algorithms

- Brenner gradient (BG): $\sum_{x=0}^{M-3} \sum_{y=0}^{N-1} [g_c(x+2,y) - g_c(x,y)]^2$, counting only terms with $[g_c(x+2,y) - g_c(x,y)]^2 \geq \varepsilon$.
- Thresholded absolute gradient (TAG): $\sum_{x=0}^{M-2} \sum_{y=0}^{N-1} |g_c(x+1,y) - g_c(x,y)|$, counting only terms with $|g_c(x+1,y) - g_c(x,y)| \geq \varepsilon$.
- Squared gradient (SG): $\sum_{x=0}^{M-2} \sum_{y=0}^{N-1} [g_c(x+1,y) - g_c(x,y)]^2$, counting only terms with $[g_c(x+1,y) - g_c(x,y)]^2 \geq \varepsilon$.
- Energy Laplace (EL): $\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [g_c(x,y) * \mathrm{Lap}(x,y)]^2$, with $\mathrm{Lap} = [-1, -4, -1;\ -4, 20, -4;\ -1, -4, -1]$.
- Tenenbaum gradient (TG): $\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \mathrm{TG}(x,y)$, where $\mathrm{TG}(x,y) = [g_c(x,y) * S(x,y)]^2 + [g_c(x,y) * S(x,y)^T]^2$ and $S = [-1, 0, 1;\ -2, 0, 2;\ -1, 0, 1]$ is the Sobel mask.
- Spatial frequency (SF): $\sqrt{(\mathrm{RF})^2 + (\mathrm{CF})^2}$, with row and column frequencies $\mathrm{RF} = \sqrt{\frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-2} [g_c(x,y) - g_c(x,y+1)]^2}$ and $\mathrm{CF} = \sqrt{\frac{1}{MN} \sum_{x=0}^{M-2} \sum_{y=0}^{N-1} [g_c(x,y) - g_c(x+1,y)]^2}$.

B. Statistical algorithms

- Variance (V): $\frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [g_c(x,y) - \mu]^2$, with $\mu = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} g_c(x,y)$.
- Autocorrelation (V4): $\sum_{x=0}^{M-2} \sum_{y=0}^{N-1} g_c(x,y)\, g_c(x+1,y) - \sum_{x=0}^{M-3} \sum_{y=0}^{N-1} g_c(x,y)\, g_c(x+2,y)$.
- Normalized variance (NV): $\frac{1}{MN\mu} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [g_c(x,y) - \mu]^2$.
- Standard deviation-based correlation (V5): $\sum_{x=0}^{M-2} \sum_{y=0}^{N-1} g_c(x,y)\, g_c(x+1,y) - MN\mu^2$.

C. Histogram-based algorithms

- Entropy algorithm (EA): $-\sum_{l} P_l \log_2(P_l)$, with $P_l = h(l)/MN$ for $l = 0, \ldots, L-1$ and $L = 256$, where $h(l)$ is the image histogram.
- Squared minimum (SM): $\sum_{l=1}^{L} \left[ P_l^2 - \frac{1}{L} \right]$.

D. Intuitive algorithms

- Thresholded content (TC): $\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} g_c(x,y)$, counting only pixels with $g_c(x,y) \geq \varepsilon$.
- Power square (PS): $\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [g_c(x,y)]^2$, counting only pixels with $[g_c(x,y)]^2 \geq \varepsilon$.

E. Image-transformation-based algorithms

- Midfrequency-DCT (MDCT): $\mathrm{MDCT} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [g_c(x,y) * O(x,y)]^2$, with $O = [1, 1, -1, -1;\ 1, 1, -1, -1;\ -1, -1, 1, 1;\ -1, -1, 1, 1]$.
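To make the table concrete, here are minimal NumPy sketches of three of the metrics, evaluated on an MGC image $g_c$; the threshold $\varepsilon$ of the Brenner gradient is omitted for brevity, and the function names are our own:

```python
import numpy as np

def brenner_gradient(gc: np.ndarray) -> float:
    """Brenner gradient (BG) from Table 1 (threshold omitted)."""
    d = gc[2:, :] - gc[:-2, :]            # g_c(x+2, y) - g_c(x, y)
    return float((d ** 2).sum())

def vollath_v4(gc: np.ndarray) -> float:
    """Autocorrelation (Vollath-4, V4) from Table 1."""
    return float((gc[:-1, :] * gc[1:, :]).sum()
                 - (gc[:-2, :] * gc[2:, :]).sum())

def normalized_variance(gc: np.ndarray) -> float:
    """Normalized variance (NV) from Table 1."""
    mu = gc.mean()
    return float(((gc - mu) ** 2).sum() / (gc.size * mu))
```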

2.3.

Multifocus Image Fusion

As mentioned previously, any microscopic imaging system can only focus the part of the field of view (FOV) of the sample that lies inside the DOF of the objective lens. This means that only certain axial planes of the sample are in focus. A common solution to this drawback is multifocus image fusion, which reconstructs an all-in-focus image of the complete FOV of a particular specimen. This can be done by capturing images of the sample at different focal axial planes. In this section, a color image fusion scheme based on the MGC method is proposed, as shown in Fig. 2.

Fig. 2

DOF extension on GPU. (a) Source color images $C_z(x,y,i)$, (b) modulus of the gradient of the color planes $g_c^z(x,y)$, (c) fusion rule, and (d) 3-D visualization of an in-focus image.


Let $C_z(x,y,i)$ be a set of input images, where $z = 1, 2, \ldots, Z$. The index $i = 1, 2, 3$ refers to the channel/band used. For each axial plane, Eq. (2) is computed to create a compound gradient image, and then for each pixel $(x,y)$ the slice with the maximum value is selected using $\mathrm{sap}(x,y) = \operatorname{arg\,max}_z \{g_c^1(x,y), \ldots, g_c^Z(x,y)\}$. In other words, the $\mathrm{sap}(x,y)$ matrix denotes the slice axial position of the in-focus pixels along the z-axis. A postprocessing stage involves a spatial consistency algorithm.17 This postprocessing is carried out by low-pass filtering $\mathrm{sap}(x,y)$ with a $p \times q$ median filter to obtain $\widetilde{\mathrm{sap}}(x,y)$. This algorithm ensures that the majority of the intensity pixels in a $p \times q$ neighborhood of $\widetilde{\mathrm{sap}}(x,y)$ come from the same z-slice or from the closest one. For example, the spatial consistency of the $\widetilde{\mathrm{sap}}(x,y)$ matrix is shown in Fig. 3. Figure 3(a) contains three $p \times q$ neighborhoods where the value of the slice axial position is higher than that of its neighbors. Figure 3(b) shows these values adjusted to match the values of the $p \times q$ neighborhood of $\widetilde{\mathrm{sap}}(x,y)$ to preserve the continuity of the surface of the sample.

Fig. 3

(a) Slice axial position $\mathrm{sap}(x,y)$ of in-focus pixels $(x,y)$ along the z-axis and (b) postprocessed $\widetilde{\mathrm{sap}}(x,y)$ matrix obtained by means of a low-pass filter to reach spatial consistency.


To avoid the introduction of artificial information, the fused image $\phi(x,y,i)$ is composed from the (multichannel) pixels that are present in the original input data $C_z(x,y,i)$, only if the slice position fulfills the condition $z = \widetilde{\mathrm{sap}}(x,y)$ for each pixel $(x,y)$. Therefore, the multifocus image fusion algorithm can be defined as follows:

Eq. (6)

$$\phi(x,y,i) = C_{\widetilde{\mathrm{sap}}(x,y)}(x,y,i).$$
Schematically, the proposed image fusion procedure is shown in Fig. 2. As can be seen, the fused image $\phi(x,y,i)$ is composed of the sharp regions provided by the in-focus pixels of the input color images. To accelerate the numerical computation, the fusion process is migrated to the GPU.
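The pipeline of Eqs. (2)–(6) can be sketched compactly as follows. This is a CPU reference under stated assumptions: `mgc()` is the Eq. (4) sketch given earlier, SciPy's `median_filter` stands in for the $p \times q$ median filter (here $p = q = 3$, one of the settings used in Sec. 4.2), and `fuse_stack` is our own name, not the authors' GPU implementation:

```python
import numpy as np
from scipy.ndimage import median_filter

def fuse_stack(stack: np.ndarray) -> np.ndarray:
    """Multifocus fusion sketch following Eqs. (2)-(6).

    stack: float array of shape (Z, M, N, 3) holding the z-stack of
    color slices C_z(x, y, i). Returns the fused image phi(x, y, i).
    """
    # Eq. (2)/(4): one MGC focus map per slice.
    focus = np.stack([mgc(slice_) for slice_ in stack])   # (Z, M, N)
    # Slice axial position of the in-focus pixel along z.
    sap = np.argmax(focus, axis=0)                        # (M, N)
    # Spatial consistency: median filter so neighboring pixels come
    # from the same (or a nearby) slice.
    sap_tilde = median_filter(sap, size=3)
    # Eq. (6): take each fused pixel from its selected input slice.
    rows, cols = np.indices(sap_tilde.shape)
    return stack[sap_tilde, rows, cols, :]
```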

The resulting fused images are evaluated with a nonreference image quality metric based on measuring the anisotropy of the images. The anisotropic quality index (AQI) of an image ϕ(x,y,i) is given by29

Eq. (7)

$$\mathrm{AQI}(\phi) = \frac{1}{S} \sum_{s=1}^{S} \left[ \mu_\phi - \bar{R}(\phi, \theta_s) \right]^2,$$
where $\mu_\phi$ is the mean of the values of the Rényi entropy $\bar{R}(\phi, \theta_s)$ measured in the directions $\theta_s \in [\theta_1, \theta_2, \ldots, \theta_S]$. The Rényi entropy measures the frequency content of an image through its directional pseudo-Wigner distribution.29

2.3.1.

Simulated data

For testing purposes, a simulated stack of 20 frames is constructed from a color image of 2584×1936 pixels. Figure 2(a) shows some digitally defocused slices generated with the extended EDoF plug-in software package.17 Each blurred image was obtained by convolving the image with a Gaussian point spread function (PSF) of increasing width. The 3-D visualization of the resulting in-focus image using the fusion scheme of Eq. (6) is sketched in Fig. 2(d). The spatial consistency of the $\widetilde{\mathrm{sap}}(x,y)$ matrix is shown in Fig. 3. In addition, the AQI of the fused image and the normalized mean square error (NMSE)30 between the original image and the merged image are shown in Table 2.
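A hedged sketch of how such a simulated stack can be generated by Gaussian blurring follows; the in-focus slice position, the blur step, and the function name are illustrative choices, not parameters from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulated_stack(image: np.ndarray, z_slices: int = 20,
                    sigma_step: float = 0.5) -> np.ndarray:
    """Build a synthetic multifocus stack from one sharp color image.

    Each slice is blurred with a Gaussian PSF whose width grows with
    the distance from an arbitrarily chosen in-focus slice.
    """
    img = image.astype(np.float64)
    focus_z = z_slices // 2
    stack = []
    for z in range(z_slices):
        sigma = sigma_step * abs(z - focus_z)
        if sigma == 0:
            stack.append(img)                 # the in-focus slice
        else:
            # Blur the spatial axes only; leave the color axis untouched.
            stack.append(gaussian_filter(img, sigma=(sigma, sigma, 0)))
    return np.stack(stack)
```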

Table 2

Image quality assessment of in-focus images.

| Metric | In-focus image $\phi(x,y,i)$ | In-focus image using EDoF plug-in31 |
| --- | --- | --- |
| AQI | 0.0675 | 0.0240 |
| NMSE30 | 0.1955 | 0.5462 |

3.

Image Acquisition of Histological Samples

A motorized Axio-Imager-M1 optical microscope system manufactured by Carl Zeiss is used to image the histological samples and to capture their color digital images. Some examples of these kinds of tissue samples are shown in Fig. 4. These microscopic objects are imaged using bright-field illumination in the optical microscope system. The microscope incorporates a 5-megapixel AxioCam Mid Range Color camera with an image resolution of 2584×1936 pixels, a chip size of 8.7 mm × 6.6 mm, a pixel size of 3.4 μm × 3.4 μm, and a spectral range of 400 to 710 nm. Furthermore, as part of the optical microscope device, an x–y mechanical platform and a motorized stage are integrated to control the focus movements along the z-axis. From Table 3, it is evident that the interplanar distance ΔZ between different optical sections is determined by the NA of the objective lens.

Fig. 4

Digital images of histological tissue sections used to evaluate the performance of AF algorithms. (a) Human carotid and (b) elastic cartilage. The samples are amplified at 2.5×. The all-in-focus images fit within the DOF of the microscope system under use.


Table 3

Specifications of the EC plan-Neofluar objective lenses (Carl Zeiss microscopy, retrieved from Ref. 32) employed during image acquisition.33,34

| M | NA | DOF $n\lambda/(\mathrm{NA})^2$ (μm) | Lateral resolution $d = 0.61\lambda/\mathrm{NA}$ (μm) |
| --- | --- | --- | --- |
| 2.5× | 0.075 | 97.64 | 4.47 |
| 10× | 0.30 | 5.97 | 1.11 |
| 40× | 0.75 | 0.812 | 0.44 |
| 100× | 1.3 | 0.212 | 0.25 |
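The two Table 3 formulas are easy to check numerically; a small sketch follows, where the wavelength of 0.55 μm and the refractive index n = 1 (air, for the dry objectives) are our assumptions rather than values stated in the paper:

```python
def objective_metrics(na: float, wavelength_um: float = 0.55,
                      n: float = 1.0) -> tuple:
    """DOF and lateral resolution formulas from the Table 3 headers.

    DOF = n * lambda / NA**2 and d = 0.61 * lambda / NA, both in the
    units of `wavelength_um` (micrometers here).
    """
    dof = n * wavelength_um / na ** 2
    lateral = 0.61 * wavelength_um / na
    return dof, lateral

# Example: the 10x/0.30 objective gives roughly DOF ~ 6.1 um and
# d ~ 1.1 um, close to the tabulated 5.97 um and 1.11 um.
print(objective_metrics(0.30))
```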

4.

Results and Discussions

4.1.

Focusing Results

To evaluate the performance of the 15 AF techniques on the MGC images, six z-stacks of 21 multichannel images are recorded from two histological samples. Each stack has a particular, highly dominant color, as shown in Fig. 4. This allows us to evaluate the MGC method for different color distributions and amplifications of the digital image.

  • Case I: Figures 5 and 6 show the focus measure graphs for a z-stack of images obtained from the human carotid tissue amplified at 2.5× and 40×, respectively. According to the focus measure curves, the MGC image turns out to be a suitable space for AF measurements because all the FMs decrease monotonically as the defocus increases. Also, the focus curves of all the AF algorithms show monotonic behavior, a single local maximum η, and a narrow width α/β, and they achieve the highest performance.

  • Case II: Figure 7 shows an elastic cartilage sample amplified at 2.5×, 10×, 40×, and 100×. From this histological sample, four z-stacks of frames are acquired. Every RGB image is transformed into the MGC space to measure focus. When the objective lens is changed, it is necessary to adjust the illumination intensity over the sample, which causes the color distribution to change. The data are processed, and the results are graphed in Fig. 8.

Fig. 5

Focus measure for a z-stack of images amplified at 2.5×, computed on the MGC(RGB) of the color images.


Fig. 6

Focus measure for a z-stack of images amplified at 40×, computed on the MGC(HSI) of the color images.


Fig. 7

Elastic cartilage sample, magnified using four microscope objectives of (a) 2.5×, (b) 10×, (c) 40×, and (d) 100× (oil immersion).


Fig. 8

Focus measure from a z-stack of images acquired using different objectives. All the FMs present monotonic behavior even up to an amplification of 100×, despite the new color distribution inside the images acquired from the amplified sample under inspection.


Some research4 has reported that beyond a magnification of 63×, the performance of the various AF metrics is drastically impaired. According to the results shown in the graphs of Fig. 8, when using the MGC approach, all the FM curves present monotonic behavior even when the magnification is increased to 100× (oil immersion). This experimental result supports the advantage of a color-to-MGC space transformation. Nevertheless, a problem arises when images of a sample of thickness t > DOF are acquired at a magnification of 100×: portions of the image are only partly in focus. In the graphs of Fig. 8(d), two in-focus regions located at z = 0 and z = 6 μm can be seen. According to the results given in Tables 4 and 5, all the AF measures realized in the MGC space are accurate in spite of the different magnifications, unlike some typically used channels for focus measure.

Table 4

Autofocusing performance (AP) of all metrics in different grayscale channels and MGC images. The mean and standard deviation of AP are given in bold.

Elastic cartilage magnified at 40×

| Metric | L (CIELab) | Brightness (YIQ) | Value (HSV) | Intensity (HSI) | MGC (RGB) | MGC (HSI) |
| --- | --- | --- | --- | --- | --- | --- |
| EL | 0.7 | 1 | 1 | 1 | 1 | 1 |
| TAG | 1 | 1 | 1 | 1 | 1 | 1 |
| SG | 1 | 1 | 1 | 1 | 1 | 1 |
| BG | 1 | 1 | 1 | 1 | 1 | 1 |
| EA | 1 | 1 | 0.9 | 0.9 | 1 | 1 |
| SM | 1 | 1 | 0.9 | 0.9 | 1 | 1 |
| PS | 0.7 | 0.9 | 0.9 | 0.9 | 1 | 1 |
| V-4 | 1 | 1 | 1 | 1 | 1 | 1 |
| V-5 | 0.7 | 1 | 0.9 | 0.9 | 1 | 1 |
| V | 1 | 1 | 0.9 | 1 | 1 | 1 |
| VN | 1 | 1 | 1 | 1 | 1 | 1 |
| TG | 1 | 1 | 1 | 1 | 1 | 1 |
| MDCT | 0.7 | 1 | 1 | 1 | 1 | 1 |
| SF | 1 | 1 | 1 | 1 | 1 | 1 |
| TC | 0.7 | 1 | 1 | 1 | 1 | 1 |
| **μ(AP)** | **0.90** | **0.99** | **0.97** | **0.97** | **1** | **1** |
| **σ(AP)** | **0.14** | **0.02** | **0.05** | **0.04** | **0** | **0** |

Table 5

Autofocusing performance (AP) of all metrics in different grayscale channels and MGC images. The mean and standard deviation of AP are given in bold.

Elastic cartilage magnified at 100×

| Metric | L (CIELab) | Brightness (YIQ) | Value (HSV) | Intensity (HSI) | MGC (RGB) | MGC (HSI) |
| --- | --- | --- | --- | --- | --- | --- |
| EL | 1 | 1 | 0.9 | 0.6 | 1 | 1 |
| TAG | 1 | 1 | 0.9 | 0.8 | 1 | 1 |
| SG | 1 | 1 | 1 | 1 | 1 | 1 |
| BG | 1 | 1 | 1 | 1 | 1 | 1 |
| EA | 1 | 0.9 | 0.9 | 1 | 1 | 1 |
| SM | 1 | 0.9 | 0.9 | 0.9 | 1 | 1 |
| PS | 0.8 | 0.8 | 0.8 | 0.8 | 1 | 1 |
| V-4 | 1 | 1 | 1 | 1 | 1 | 1 |
| V-5 | 0.8 | 1 | 1 | 0.9 | 1 | 1 |
| V | 1 | 1 | 1 | 1 | 1 | 1 |
| VN | 1 | 1 | 1 | 0.8 | 1 | 1 |
| TG | 1 | 1 | 1 | 1 | 1 | 1 |
| MDCT | 1 | 1 | 1 | 1 | 1 | 1 |
| SF | 1 | 1 | 1 | 1 | 1 | 1 |
| TC | 0.8 | 0.6 | 0.6 | 0.6 | 1 | 1 |
| **μ(AP)** | **0.96** | **0.95** | **0.93** | **0.89** | **1** | **1** |
| **σ(AP)** | **0.08** | **0.11** | **0.11** | **0.14** | **0** | **0** |

Another advantage of the MGC method is its computational simplicity and inherent parallelism. Figure 9 shows the computational cost of the MGC(RGB) method for a z-stack of digital images of 2584×1936 pixels running on an Intel Xeon 2.10-GHz processor with 16 GB of RAM and an NVIDIA Quadro K4000 GPU. The parallelized MGC method on the GPU is one order of magnitude faster than the same application implemented on the CPU.

Fig. 9

Execution time of the MGC(RGB) method in a z-stack of digital images.


4.2.

Multifocus Image Fusion Results

It is well known that in the digital images of thick microscopic objects provided by an optical widefield microscope, the portion of the object that lies outside of the DOF of the objective lens is strongly blurred. We can, however, seek out those regions of the FOV that are conveniently located in focus. The present subsection describes the results of a method to merge multifocus frames based on the MGC approach.

Our experiment starts with the acquisition of a digital image z-stack from a histological sample. This set of z-images is obtained by moving the microscope stage along the optical axis. For this, the axial extension t of the sample is defined, and then the axial stage with the sample is moved to cover this extension. The interplanar distance ΔZ between different optical sections is less than the axial resolution of the microscope, defined as the DOF in Table 3. From this table, it is evident that ΔZ is determined by the NA of the objective lens.

Digital images of a beetle shell are acquired with an amplification of 10× and an interplanar distance ΔZ = 3 μm. The given z-stack is composed of 42 images of 1024×768 pixels. Figure 10(a) shows the in-focus image obtained with the EDoF plug-in software package,31 based on a complex-wavelets algorithm for EDoF.17 That fusion process takes an average execution time of 52.2 s. The fused image of Fig. 10(b) is based on the proposed MGC fusion method, in which the resulting slice axial position matrix sap(x,y) is low-pass filtered using a p×q median filter with p = q = 3, 15, 35. The total execution time is 32.1 s. The 3-D visualization of the resulting in-focus image is sketched in Fig. 10(c). Finally, the nonreference image quality metric of Eq. (7) is computed for the in-focus image quality assessment. The results are shown in Fig. 10(d).

Fig. 10

Image fusion results using (a) the EDoF plug-in software package and (b) the MGC fusion method. (c) 3-D visualization of (b). (d) Fusion evaluation of the in-focus images. The readjustment of the in-focus pixels of $\widetilde{\mathrm{sap}}(x,y)$ along the z-axis avoids false edge detection.


Another example is the case of an umbilical cord imaged at an amplification of 10× and an interplanar distance ΔZ = 3 μm. The given z-stack is composed of 39 images of 2584×1936 pixels. Again, Fig. 11(a) shows the in-focus image obtained with the EDoF plug-in,17,31 which takes an average execution time of 477.43 s. The fused image of Fig. 11(b) is based on the proposed MGC fusion method, where the resulting slice axial position matrix sap(x,y) is again low-pass filtered using a p×q median filter with p = q = 3, 15, 35. The total execution time is 238.9 s, and the fusion evaluation is shown in Fig. 11(d). As can be seen, the proposed fusion method reveals a high-quality image independent of faulty illumination during the image acquisition.

Fig. 11

Image fusion result using (a) the EDoF plug-in software package and (b) the MGC fusion method. (c) 3-D visualization of (b). (d) Image fusion evaluation. As can be seen, the proposed fusion method reveals a high-quality image independent of faulty illumination during image acquisition.


5.

Conclusions

In this research, the MGC operator has been applied to digital color images. This procedure transforms the multichannel information into a grayscale image, which is used (a) for focus measurements during the AF process and (b) for extending the DOF in the framework of digital microscopy applications.

The AF experimental results of this work demonstrate the effectiveness of the MGC method when it is applied to several z-stacks of images. From this point of view, we can conclude that the use of the proposed MGC image increases the performance of currently used passive AF algorithms and produces monotonic FM curves with only one local maximum η and a similar width α/β of the focus curve, as shown in Figs. 5, 6, and 8. The test frames have been acquired from two histological samples at magnifications of 2.5×, 10×, 40×, and 100× (oil immersion). The AF graphs in Fig. 8 obtained by the MGC method present similar behaviors even up to a magnification of 100×; therefore, all the AF algorithms reveal the in-focus image slice at z = 0. In contrast, as shown by the AP results in Tables 4 and 5, the same AF algorithms in other color spaces work properly only in some cases. As can be seen in the same tables, the mean and the standard deviation of the AF performance for the MGC image are 1 and 0, respectively, for both amplifications. We can conclude that the effectiveness of the AF algorithms depends on several factors: (1) the color space selected for the numerical computation, (2) the color distribution of the sample under inspection, and (3) the sample magnification. Only in the MGC space does the AF performance tend to be invariant to these factors. Another remarkable characteristic of the MGC method is that it is computationally simple and inherently parallel. The computational cost of the MGC(RGB) algorithm implemented on a GPU is reduced by an order of magnitude for images of 2584×1936 pixels, as shown in Fig. 9.

On the other hand, the fusion scheme $\phi(x,y,i)$ was implemented on an image z-stack for EDoF. The fused image is composed of the sharp regions provided by the in-focus pixels $\widetilde{\mathrm{sap}}(x,y)$ of the input data. Our fusion method has been quantitatively and qualitatively compared with the EDoF plug-in, which is widely used in digital microscopy for DOF extension. For a simulated image stack, the resulting fused image was compared with the corresponding original image using the NMSE, as shown in Table 2. Also, the nonreference image quality metric AQI was used for image quality assessment. These quantitative evaluations, given in Table 2, show that the quality of the resulting fused image $\phi(x,y,i)$ is better than that of the fused image given by the EDoF plug-in.

The 3-D visualization of the in-focus images verifies the fusion results. Based on the experimental results of Figs. 10 and 11, the MGC method is sufficiently competitive for merging multifocus images. In general, the main advantages of the proposed fusion method based on the MGC transformation are that it is computationally simpler, faster, and more efficient than other methods typically used to fuse multifocus information. Additionally, the comparisons in Figs. 11(a) and 11(b) show that our method reveals a high-quality image independent of faulty illumination during the image acquisition.

Acknowledgments

R. Hurtado thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT), Award No. 578446. We also thank the PADES program for its support, Award No. 2017-13-011-053. We extend our gratitude to the reviewers and to Jennifer Speier for their useful suggestions.

References

1. Y. Qu, S. Zhu, and P. Zhang, "A self-adaptive and nonmechanical motion autofocusing system for optical microscopes," Microsc. Res. Tech. 79(11), 1112–1122 (2016). https://doi.org/10.1002/jemt.v79.11
2. S. Yazdanfar et al., "Simple and robust image-based autofocusing for digital microscopy," Opt. Express 16(12), 8670–8677 (2008). https://doi.org/10.1364/OE.16.008670
3. Y. Tian, "Autofocus using image phase congruency," Opt. Express 19(1), 261–270 (2011). https://doi.org/10.1364/OE.19.000261
4. R. Redondo et al., "Autofocus evaluation for brightfield microscopy pathology," J. Biomed. Opt. 17(3), 036008 (2012). https://doi.org/10.1117/1.JBO.17.3.036008
5. A. Lipton and E. J. Breen, "On the use of local statistical properties in focusing microscopy images," Microsc. Res. Tech. 31(4), 326–333 (1995).
6. L. Firestone et al., "Comparison of autofocus methods for use in automated algorithms," Cytometry 12, 195–206 (1991).
7. M. Subbarao and J. K. Tyan, "Selecting the optimal focus measure for autofocusing and depth from focus," IEEE Trans. Pattern Anal. Mach. Intell. 20, 864–870 (1998). https://doi.org/10.1109/34.709612
8. Y. Sun, S. Duthaler, and B. J. Nelson, "Autofocusing in computer microscopy: selecting the optimal focus algorithm," Microsc. Res. Tech. 65, 139–149 (2004).
9. X. Y. Liu, W. H. Wang, and Y. Sun, "Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear," J. Microsc. 227(1), 15–23 (2007).
10. D. Vollath, "The influence of the scene parameters and of noise on the behavior of automatic focusing algorithms," J. Microsc. 151(2), 133–146 (1988).
11. O. A. Osibote et al., "Automated focusing in bright-field microscopy for tuberculosis detection," J. Microsc. 240(2), 155–163 (2010).
12. A. Santos et al., "Evaluation of autofocus functions in molecular cytogenetic analysis," J. Microsc. 188(3), 264–272 (1997). https://doi.org/10.1046/j.1365-2818.1997.2630819.x
13. J. Cao et al., "Method based on bioinspired sample improves autofocusing performances," Opt. Eng. 55(10), 103103 (2016). https://doi.org/10.1117/1.OE.55.10.103103
14. K. Omasa and M. Kouda, "3-D color video microscopy of intact plants," in Image Analysis: Methods and Applications, pp. 257–263, CRC Press, New York (2001).
15. H. Shi, Y. Shi, and X. Li, "Study on auto-focus methods of optical microscope," in 2nd Int. Conf. on Circuits, System and Simulation (ICCSS 2012), IPCSIT (2012).
16. M. Selek, "A new autofocusing method based on brightness and contrast for color cameras," Adv. Electr. Comput. Eng. 16(4), 39–44 (2016). https://doi.org/10.4316/AECE.2016.04006
17. B. Forster et al., "Complex wavelets for extended depth of field: a new method for the fusion of multichannel microscopy images," Microsc. Res. Tech. 65, 33–42 (2004).
18. A. Koschan and M. Abidi, Digital Color Image Processing, Wiley Interscience, New Jersey (2008).
19. R. M. Rangayyan, B. Acha, and C. Serrano, Color Image Processing with Biomedical Applications, SPIE Press, Bellingham, Washington (2011).
20. T. Gevers and H. Stokman, "Classifying color edges in video into shadow-geometry, highlight, or material transitions," IEEE Trans. Multimedia 5(2), 237–243 (2003). https://doi.org/10.1109/TMM.2003.811620
21. W. A. Carrington and D. Lisin, "Cluster computing for digital microscopy," Microsc. Res. Tech. 64(2), 204–213 (2004).
22. J. M. Castillo-Secilla et al., "Autofocus method for automated microscopy using embedded GPUs," Biomed. Opt. Express 8(3), 1731–1740 (2017). https://doi.org/10.1364/BOE.8.001731
23. J. C. Valdiviezo-N et al., "Autofocusing in microscopy systems using graphics processing units," Proc. SPIE 8856, 88562K (2013). https://doi.org/10.1117/12.2024967
24. "Konus prepared slides: the human body I and III sets," http://www.microscopes.eu/en/Brand/Konus/ (February 2017).
25. Carolina, "Human connective tissues microscope slide set," http://www.carolina.com/ (February 2017).
26. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall (2002).
27. J. M. Tenenbaum, "Accommodation in computer vision," Department of Computer Science, Stanford University (1970).
28. J. F. Brenner et al., "An automated microscope for cytologic research: a preliminary evaluation," J. Histochem. Cytochem. 24(1), 100–111 (1976). https://doi.org/10.1177/24.1.1254907
29. S. Gabarda et al., "Image denoising and quality assessment through the Rényi entropy," Proc. SPIE 7444, 744419 (2009). https://doi.org/10.1117/12.826153
30. S. Sumathi, L. A. Kumar, and P. Surekha, Computational Intelligence Paradigms for Optimization Problems Using MATLAB®/SIMULINK®, CRC Press, New York (2016).
31. A. Prudencio, J. Berent, and D. Sage, "Extended depth of field plug-in," http://bigwww.epfl.ch/demo/edf (June 2017).
32. Carl Zeiss Microscopy GmbH, "Objectives from Carl Zeiss," https://www.zeiss.com/microscopy/int/home.html (June 2017).
33. R. Hurtado et al., "Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion," Proc. SPIE 9578, 957811 (2015). https://doi.org/10.1117/12.2188927
34. K. R. Spring and M. W. Davidson, "The source for microscopy education, depth of field and depth of focus," https://www.microscopyu.com/microscopy-basics/depth-of-field-and-depth-of-focus (December 2017).

Biography

Román Hurtado-Pérez received his bachelor's degree in computational systems and his master's degree from the Polytechnic University of Tulancingo (UPT) in 2004 and 2013, respectively. He is a PhD student in optomechatronics at UPT. His current research areas include multifocus image fusion, autofocusing, GPUs, and computer vision.

Carina Toxqui-Quitl is an assistant professor at the Polytechnic University of Tulancingo. She received her BS degree from the Puebla Autonomous University, Mexico, in 2004. She received her MS and PhD degrees in optics from the National Institute of Astrophysics, Optics, and Electronics in 2006 and 2010, respectively. Her current research areas include image moments, multifocus image fusion, wavelet analysis, and computer vision.

Alfonso Padilla-Vivanco received his bachelor's degree in physics from Puebla Autonomous University, Mexico, and his MS and PhD degrees, both in optics, from the National Institute of Astrophysics, Optics, and Electronics in 1995 and 1999, respectively. In 2000, he held a postdoctoral position in the physics department at the University of Santiago de Compostela, Spain. He is a professor at the Polytechnic University of Tulancingo. His research interests include optical information processing, image analysis, and computer vision.

J. Félix Aguilar-Valdez received his BS degree in physics from the National Autonomous University of Mexico in 1980 and his MS and PhD degrees in optics from the Center for Scientific Research and Higher Education of Ensenada, Baja California, México, in 1994. His current research areas include high-resolution microscopy, confocal microscopy, near-field diffraction, and microscopic imaging.

Gabriel Ortega-Mendoza received his PhD from the FCFM-BUAP, Puebla, México, in 2013 and has been a full professor at the Universidad Politécnica de Tulancingo, Hidalgo, México, since 2013. His research areas include multifocus image fusion, image microscopy, optical fiber lasers, plasmon resonance, and the manipulation of photothermal-induced microbubbles.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Román Hurtado-Pérez, Carina Toxqui-Quitl, Alfonso Padilla-Vivanco, J. Félix Aguilar-Valdez, and Gabriel Ortega-Mendoza "Focus measure method based on the modulus of the gradient of the color planes for digital microscopy," Optical Engineering 57(2), 023106 (16 February 2018). https://doi.org/10.1117/1.OE.57.2.023106
Received: 4 July 2017; Accepted: 19 January 2018; Published: 16 February 2018