Open Access
28 January 2019

Noniterative spatially partially coherent diffractive imaging using pinhole array mask
Xingyuan Lu, Yifeng Shao, Chengliang Zhao, Sander Konijnenberg, Xinlei Zhu, Ying Tang, Yangjian Cai, H. Paul Urbach
Abstract
We propose and experimentally demonstrate a noniterative diffractive imaging method for reconstructing the complex-valued transmission function of an object illuminated by spatially partially coherent light from the far-field diffraction pattern. Our method is based on a pinhole array mask, which is specially designed such that the correlation function in the mask plane can be obtained directly by inverse Fourier transforming the diffraction pattern. Compared to traditional iterative diffractive imaging methods using spatially partially coherent illumination, our method is noniterative and robust to degradation of the spatial coherence of the illumination. In addition to diffractive imaging, the proposed method can also be applied to the characterization of spatial coherence properties, with applications in, e.g., free-space optical communication and optical coherence singularity measurement.

1.

Introduction

Coherent diffractive imaging (CDI) is an important tool for reconstructing the complex-valued transmission function of an object from the far-field diffraction pattern and has been widely applied in the material and biological sciences.1,2 Miao et al.3 first experimentally realized imaging of a submicrometer-sized noncrystalline specimen using CDI. Many CDI approaches have been developed in the past decades; they can be divided into two types: iterative methods4–9 and noniterative methods.10–14 However, most traditional CDI methods cannot be directly applied to spatially partially coherent illumination without proper modification, which limits their applications at short wavelengths, e.g., in the x-ray and electron regimes, or in unstable experimental environments. For example, degradation of the spatial coherence may also be caused by mechanical movement of the sample and the experimental setup, or by fluctuation of the ambient medium, e.g., atmospheric turbulence.15–17

Iterative algorithms retrieve the complex-valued transmission function of an object by propagating the field back and forth between the object plane and the far-field diffraction plane, and imposing constraints on the field in both planes. Gerchberg and Saxton pioneered the iterative algorithms in 1972 by proposing a straightforward method using two intensities measured in the object plane and in the far field, respectively.4 Iterative algorithms using only one intensity measurement of the far-field diffraction pattern, as proposed by Fienup,5,6 require prior knowledge of the support of the object for imposing the support constraint in the object plane. Recently, ptychographic algorithms have become an essential technique for imaging nanoscale objects using short-wavelength sources.8 The key feature is that the illumination areas at neighboring shift positions must overlap, and this overlap improves the convergence of the ptychographic algorithms.7,8 Recent developments in ptychography allow for simultaneous reconstruction of the probe, which significantly reduces the complexity of the experimental setup compared to other CDI methods.9

For spatially partially coherent illumination, the propagation of light is described using the mutual coherence function (MCF) instead of the field. Much effort has been devoted to adapting CDI methods to spatially partially coherent illumination.15–24 The modification of iterative algorithms was first reported by Whitehead et al.15 using mode decomposition of the MCF.16 Later, ptychographic algorithms were also modified to work with spatially partially coherent illumination by decomposing the MCF into orthogonal modes.17–21 Thibault and Menzel17 proposed a mixed-state model from the quantum perspective, which effectively deals with a series of multistate mixing problems, including partially coherent illumination, and enables more applications of ptychography, such as continuous-scan ptychography18 and dynamic imaging of a vibrating sample.19 Furthermore, ptychographic imaging with the simultaneous presence of both multiple probe and multiple object states has also been demonstrated.21 However, the accuracy of the mode decomposition relies on the number of modes used to represent the MCF, and this number increases as the spatial coherence of the illumination decreases.

Compared to iterative methods, noniterative methods10,11,14 do not suffer from issues such as stagnation or nonuniqueness of the solution to the diffractive imaging problem, especially when the illumination becomes spatially partially coherent.21,24 In holographic methods, the field transmitted by the object is perturbed such that the object’s transmission function can be directly extracted from the inverse Fourier transform of the diffraction pattern.14 This perturbation can be achieved by introducing a pinhole, e.g., Fourier transform holography (FTH),25–27 or by changing the transmission function at a point of the object, e.g., Zernike quantitative phase imaging.28 The performance of holographic methods under spatially partially coherent illumination has been discussed in Ref. 14. Alternative methods extract the object information from three-dimensional autocorrelation functions obtained by inverse Fourier transforming a three-dimensional data set (e.g., a data set measured by varying the focus12 or another optical parameter13).

It has been demonstrated that noniterative methods can avoid the errors due to truncating the number of modes used to represent the MCF.12–14 However, the degree of spatial coherence of the illumination limits the field of view (FOV) of the reconstructed object. To be precise, what is reconstructed is the product of the object’s transmission function and a correlation function of the illumination. This correlation function has a maximum at the perturbation point, and its value decreases, at a rate that depends on the degree of spatial coherence, as the distance between the perturbation point and the observation point increases. Therefore, the lower the illumination’s degree of spatial coherence, the smaller the region of the object that can be reconstructed reliably.

In this paper, we propose a noniterative method based on a pinhole array mask (PAM). We place the PAM between the object and the detector and measure the far-field diffraction pattern of the spatially partially coherent field transmitted by the PAM. The PAM consists of a periodic array of measurement pinholes and an extra reference pinhole, which is analogous to the perturbation point in FTH.14 The reference pinhole creates interference between the fields transmitted by it and by the measurement pinholes, and from this interference pattern we can directly retrieve the correlation function of the incident light, with respect to the reference pinhole, at the locations of the measurement pinholes.

In FTH, since the reference pinhole is placed far from the measurement window, the FOV is rather small due to the low correlation. Our method is advantageous compared to FTH, because splitting the measurement window into a periodic array of measurement pinholes keeps the reference pinhole close to all measurement pinholes and thus maintains a high correlation that results in a large FOV.

Our method places the object at a certain distance before the PAM, instead of superposing the object with the PAM. In practice, this not only offers flexibility for arranging the experimental setup but also allows us to adjust the sampling of the reconstructed object. When the propagation distance satisfies the condition for Fresnel or Fraunhofer diffraction, an object with finite support can be reconstructed from the retrieved correlation function, and its sampling is related to the sampling of the PAM by the Shannon–Nyquist sampling theorem.

Because our method reconstructs the product of the object’s transmission function and the illumination’s correlation function, it can be used not only for object reconstruction but also for characterizing the spatial coherence structure. This makes it useful for a broad range of applications in coherent optics, such as the measurement of optical coherence singularities29,30 and free-space optical communication through turbulent media.31,32

2.

Methods

The experimental setup of our method is shown schematically in Fig. 1: a transmissive object is illuminated by spatially partially coherent light. In our method, unlike traditional CDI algorithms, we place a PAM between the object and the detector. The location of the PAM is chosen such that light propagation from the object to the PAM and from the PAM to the detector obeys either Fresnel or Fraunhofer propagation and hence can be described using Fourier transforms. By doing so, we can divide our method into two steps: (1) retrieving the correlation function of the incident light in the PAM plane using a noniterative approach and (2) reconstructing the product of the object’s transmission function and the illumination’s correlation function using a differential method, which requires two diffraction patterns corresponding to the object with and without a transmission perturbation, respectively. It is worth noting that this differential method is not necessary for completely spatially coherent illumination.

Fig. 1

Schematic plot of the experimental setup and the concept of the noniterative diffractive imaging method. (a) A PAM is placed between the object and the camera. The PAM is specially designed such that we can retrieve the correlation function of the incident light by inverse Fourier transforming the measured diffraction pattern. (b) The PAM consists of a reference pinhole at the origin (gray square) and a periodic array of measurement pinholes with a certain offset (white squares). (c) In the inverse Fourier transform of the diffraction pattern, the autocorrelation terms (gray squares) and the two interference terms (red and blue squares) are separated by the offset. Each interference term directly contains the correlation between the fields at the reference pinhole and at the measurement pinholes.


2.1.

Retrieving the Correlation Function in the PAM Plane

Let the coordinate of the PAM plane be denoted by r = (x, y). The PAM consists of a reference pinhole at the origin, shown by the gray square in Fig. 1(b), and a periodic array of measurement pinholes around the reference pinhole, shown by the white squares in Fig. 1(b). The center of the periodic array is shifted relative to the reference pinhole by a certain offset, depicted by the white spot at the corner of the reference pinhole. We assume that the reference pinhole and the measurement pinholes are identical and so small that each pinhole can be approximated by a delta function. This assumption allows us to write the transmission function of the PAM as

Eq. (1)

$$
M(\mathbf{r}) = \delta(\mathbf{r}-\mathbf{0}) + \sum_{m,n} \delta(\mathbf{r}-\mathbf{r}_{m,n}),
$$
where 0 = (0, 0) denotes the location of the origin, r_{m,n} = (Δ_x + m p_x, Δ_y + n p_y) denotes the location of the measurement pinhole with index (m, n) in the periodic array, (p_x, p_y) is the pitch of the periodic array, and (Δ_x, Δ_y) is the offset of the periodic array relative to the reference pinhole.
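To make the mask geometry of Fig. 1(b) concrete, the following sketch builds a discrete PAM following Eq. (1), with each pinhole idealized as a single grid point. The grid size, pitch, and offset here are illustrative choices, not the values used in the experiment.

```python
import numpy as np

# A discrete PAM following Eq. (1): a reference pinhole at the origin plus a
# periodic array of measurement pinholes at r_mn = (dx + m*px, dy + n*py).
# All numbers below are illustrative, not the experimental parameters.
N = 256                       # grid size in pixels
px = py = 16                  # pitch of the measurement-pinhole array
dx = dy = 4                   # offset of the array relative to the reference

mask = np.zeros((N, N))
mask[N // 2, N // 2] = 1.0    # reference pinhole (grid center plays the origin)
for m in range(-7, 8):
    for n in range(-7, 8):
        # measurement pinhole at index (m, n) of the offset periodic array
        mask[N // 2 + dy + n * py, N // 2 + dx + m * px] = 1.0
```

Because the offset is nonzero modulo the pitch, the interference terms in Fig. 1(c) fall between the autocorrelation terms instead of on top of them.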

The incident light transmitted by the PAM generates a diffraction pattern in the detector plane. We denote the MCF of the incident light in the PAM plane by W(r1,r2), which describes the correlation between the fields at r1 and r2. Because the light propagation from the PAM to the detector satisfies the condition for either Fresnel or Fraunhofer propagation, we can express the diffraction pattern measured by the detector using the Fourier transform as follows:

Eq. (2)

$$
\begin{aligned}
I(\mathbf{k}) &= \iint M(\mathbf{r}_1)\, M^*(\mathbf{r}_2)\, W(\mathbf{r}_1,\mathbf{r}_2)\, \exp[-i2\pi \mathbf{k}\cdot(\mathbf{r}_1-\mathbf{r}_2)]\, \mathrm{d}^2\mathbf{r}_1\, \mathrm{d}^2\mathbf{r}_2 \\
&= W(\mathbf{0},\mathbf{0}) + \sum_{m_1,n_1}\sum_{m_2,n_2} W(\mathbf{r}_{m_1,n_1},\mathbf{r}_{m_2,n_2})\, \exp[-i2\pi \mathbf{k}\cdot(\mathbf{r}_{m_1,n_1}-\mathbf{r}_{m_2,n_2})] \\
&\quad + \sum_{m_1,n_1} W(\mathbf{r}_{m_1,n_1},\mathbf{0})\, \exp(-i2\pi \mathbf{k}\cdot\mathbf{r}_{m_1,n_1}) + \sum_{m_2,n_2} W(\mathbf{0},\mathbf{r}_{m_2,n_2})\, \exp(+i2\pi \mathbf{k}\cdot\mathbf{r}_{m_2,n_2}),
\end{aligned}
$$
where k denotes the detector-plane coordinate. Denoting the original sampling grid of the detector plane by k_0, we have k = k_0/(λz), where λ is the wavelength of the illumination and z is the propagation distance; k is conjugate to the PAM-plane coordinate r. By taking the inverse Fourier transform of the diffraction pattern [Eq. (2)], we obtain

Eq. (3)

$$
\begin{aligned}
\mathcal{F}^{-1}[I(\mathbf{k})](\mathbf{r}) &= W(\mathbf{0},\mathbf{0})\,\delta(\mathbf{r}-\mathbf{0}) + \sum_{m_1,n_1}\sum_{m_2,n_2} W(\mathbf{r}_{m_1,n_1},\mathbf{r}_{m_2,n_2})\, \delta[\mathbf{r}-(\mathbf{r}_{m_1,n_1}-\mathbf{r}_{m_2,n_2})] \\
&\quad + \sum_{m_1,n_1} W(\mathbf{r}_{m_1,n_1},\mathbf{0})\, \delta(\mathbf{r}-\mathbf{r}_{m_1,n_1}) + \sum_{m_2,n_2} W(\mathbf{0},\mathbf{r}_{m_2,n_2})\, \delta(\mathbf{r}+\mathbf{r}_{m_2,n_2}),
\end{aligned}
$$
where F^{-1} denotes the inverse Fourier transform. We can observe that Eq. (3) consists of four terms:

  • 1. The first term is W(0, 0), a constant multiplying a delta function located at the origin r = (0, 0).

  • 2. The second term is W(r_{m1,n1}, r_{m2,n2}), located on the periodic array defined by r = r_{m1,n1} − r_{m2,n2} = [(m1 − m2)p_x, (n1 − n2)p_y], which is depicted by the gray squares in Fig. 1(c). This array has the same pitch (p_x, p_y) as the array of measurement pinholes but zero offset relative to the origin.

  • 3. The third term is W(r_{m,n}, 0), located on the periodic array defined by r = r_{m,n}, which is depicted by the blue squares in Fig. 1(c).

  • 4. The fourth term is W(0, r_{m,n}) = W(r_{m,n}, 0)*, located on the periodic array defined by r = −r_{m,n}, which is depicted by the red squares in Fig. 1(c).

The role of the reference pinhole of the PAM is analogous to the perturbation point in FTH,14 namely to create interference between the incident light transmitted by the reference pinhole and by the measurement pinholes. The interference induces the third term and the fourth term in Eq. (3), which are referred to as the “interference terms.” The first term and the second term in Eq. (3) are the autocorrelation of the reference pinhole and of the measurement pinholes, respectively, and hence are referred to as the “autocorrelation terms.”

The layout of the four terms of Eq. (3) is illustrated by the schematic plot in Fig. 1(c), which shows that the periodic arrays of the autocorrelation term (the gray squares) and the two interference terms (the blue and red squares) have the same pitch but different offsets. This allows us to separate the two interference terms from the autocorrelation terms by applying a spatial filter to the inverse Fourier transform of the diffraction pattern. The expression for this spatial filter is given by

Eq. (4)

$$
\tilde{M}(\mathbf{r}) = \sum_{m,n} \delta(\mathbf{r}-\mathbf{r}_{m,n}),
$$
where we write M̃ for the filter to distinguish it from the PAM transmission function M. We can multiply F^{-1}[I(k)](r) either by M̃(r) to obtain W(r_{m,n}, 0) or by M̃(−r) to obtain W(r_{m,n}, 0)*. As a consequence, we can retrieve the correlation function W(r_{m,n}, 0) of the incident light in the PAM plane, with respect to the location of the reference pinhole r = 0, at the locations of the measurement pinholes r = r_{m,n}.

Note that W(r_{m,n}, 0) is a function of the locations of the measurement pinholes on the periodic array defined by r = r_{m,n}. The sampling interval of r_{m,n} is given by the pitch of the PAM (p_x, p_y). For rectangular pinholes of size (w_x, w_y), the pitch must satisfy p_x ≥ 3w_x and p_y ≥ 3w_y such that the autocorrelation term and the two interference terms do not overlap, as shown in Fig. 1(c). However, this sampling interval (p_x, p_y) is usually larger than the diffraction-limited sampling interval given by the Shannon–Nyquist criterion, p_{x,0} = λz/l_x and p_{y,0} = λz/l_y, where (l_x, l_y) is the size of the detector.
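The retrieval step can be checked numerically. The following 1-D sketch (all grid sizes, pitches, and beam widths are illustrative choices, not the experimental values) evaluates the diffraction pattern of Eq. (2) for a known Schell-model-like MCF sampled at the pinholes, inverse Fourier transforms it as in Eq. (3), and verifies that the interference term reproduces W(r_{m,n}, 0) at the measurement-pinhole locations.

```python
import numpy as np

# 1-D sketch of Sec. 2.1 with toy parameters: a reference pinhole at the
# origin and an offset periodic array of measurement pinholes.
N = 512
i_ref = 0                                        # reference pinhole at the origin
pitch, off = 16, 5                               # array pitch p and offset Delta (in samples)
meas = [off + m * pitch for m in range(-5, 6)]   # measurement pinholes r_m

# Gaussian Schell-model-like MCF sampled at the pinhole locations
w0, sigma = 60.0, 25.0
def W(i, j):
    return np.exp(-(i**2 + j**2) / w0**2) * np.exp(-(i - j)**2 / (2 * sigma**2))

pins = [i_ref] + meas
k = np.arange(N) / N                 # detector-plane frequency grid
I = np.zeros(N, dtype=complex)
for a in pins:                       # Eq. (2): sum over all pinhole pairs
    for b in pins:
        I += W(a, b) * np.exp(-2j * np.pi * k * (a - b))
assert np.allclose(I.imag, 0, atol=1e-9)   # a physical intensity is real

G = np.fft.ifft(I)                   # Eq. (3): peaks at the pairwise separations
# The interference term sits on the offset array r = r_m, clear of the
# autocorrelation peaks at multiples of the pitch, so W(r_m, 0) can be read off:
for i_m in meas:
    assert abs(G[i_m % N].real - W(i_m, i_ref)) < 1e-8
```

Because the offset (5 samples) is not a multiple of the pitch (16 samples), the interference peaks never collide with the autocorrelation peaks, which is exactly the separation condition illustrated in Fig. 1(c).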

2.2.

Reconstructing the Transmission Function of the Object and the Correlation Function of the Illumination

The sampling of the reconstructed object is finer in the plane at distance z before the PAM plane than exactly in the PAM plane. We denote the coordinate of the object plane by ρ, which is the original sampling grid ρ_0 normalized by λz, and the complex-valued transmission function of the object by O(ρ). In our method, the MCF in the object plane and the MCF in the PAM plane are related by a Fourier transform in the case of either Fresnel or Fraunhofer propagation. Here we give the example of Fraunhofer propagation:

Eq. (5)

$$
W(\mathbf{r}_1,\mathbf{r}_2) = \iint O(\boldsymbol{\rho}_1)\, O^*(\boldsymbol{\rho}_2)\, W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2)\, \exp[-i2\pi(\boldsymbol{\rho}_1\cdot\mathbf{r}_1 - \boldsymbol{\rho}_2\cdot\mathbf{r}_2)]\, \mathrm{d}^2\boldsymbol{\rho}_1\, \mathrm{d}^2\boldsymbol{\rho}_2,
$$
where W_0(ρ_1, ρ_2) is the MCF of the incident beam, which describes the correlation between the fields at ρ_1 and ρ_2. According to the Shannon–Nyquist criterion, the pitch of the PAM (p_x, p_y) determines the maximum size of the object: l_x ≤ λz/p_x and l_y ≤ λz/p_y. Therefore, we need to choose the propagation distance z such that the size of the reconstructed object matches the pitch of the PAM.
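This sampling relation can be rearranged into a lower bound on the propagation distance, z ≥ l·p/λ. As a quick back-of-envelope check, the sketch below evaluates the bound using the wavelength, PAM pitch, and object size quoted in Sec. 3 (the experiment may involve additional constraints when fixing z).

```python
# Lower bound on the object-PAM distance from l <= lambda*z/p,
# using the parameter values quoted in Sec. 3.
wavelength = 532e-9      # m, illumination wavelength
pitch = 270e-6           # m, PAM pitch p_x = p_y
obj_size = 1.92e-3       # m, object size l

z_min = obj_size * pitch / wavelength
print(f"z >= {z_min * 1e3:.0f} mm")   # -> z >= 974 mm
```

The distance z = 1170 mm used in the experiment exceeds this lower bound.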

In Eq. (5), by setting r1=rm,n and r2=0, we can obtain an expression for computing the retrieved correlation function in the PAM plane as follows:

Eq. (6)

$$
W(\mathbf{r}_{m,n},\mathbf{0}) = \iint O(\boldsymbol{\rho}_1)\, O^*(\boldsymbol{\rho}_2)\, W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2)\, \exp(-i2\pi\boldsymbol{\rho}_1\cdot\mathbf{r}_{m,n})\, \mathrm{d}^2\boldsymbol{\rho}_1\, \mathrm{d}^2\boldsymbol{\rho}_2.
$$
By integrating Eq. (6) first over variable ρ2 and then over variable ρ1, we obtain

Eq. (7)

$$
W(\mathbf{r}_{m,n},\mathbf{0}) = \int T(\boldsymbol{\rho}_1)\, O(\boldsymbol{\rho}_1)\, \exp(-i2\pi\boldsymbol{\rho}_1\cdot\mathbf{r}_{m,n})\, \mathrm{d}^2\boldsymbol{\rho}_1,
$$
where

Eq. (8)

$$
T(\boldsymbol{\rho}_1) = \int O^*(\boldsymbol{\rho}_2)\, W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2)\, \mathrm{d}^2\boldsymbol{\rho}_2.
$$
Equation (7) indicates that by inverse Fourier transforming the retrieved correlation function in the PAM plane, which is equivalent to propagating the retrieved correlation function from the PAM plane to the object plane as shown in Fig. 1(a), we can reconstruct the modulated object’s transmission function T(ρ)O(ρ).

We note that the modulation T(ρ) depends not only on the MCF of the illumination W_0(ρ_1, ρ_2) but also on the transmission function of the object O(ρ). To eliminate this modulation, we use a differential approach. This approach requires two measurements: one with a point perturbation of the object’s transmission function at ρ = ρ_0 and one without perturbing the object. The perturbation is achieved either by changing the amplitude or the phase of the transmission at a point ρ_0 inside the object, or by introducing an extra spot that lets light pass through at a point ρ_0 outside the object. It is worth mentioning that for completely spatially coherent illumination the MCF of the illumination, W_0(ρ_1, ρ_2), becomes a constant, and there is no need for the differential approach to eliminate the modulation T(ρ).

Substituting the transmission function of the perturbed object, O(ρ) + Cδ(ρ − ρ_0), where C is a complex-valued constant representing the perturbation, into Eq. (6), we obtain

Eq. (9)

$$
W_\delta(\mathbf{r}_{m,n},\mathbf{0}) = \iint [O(\boldsymbol{\rho}_1) + C\delta(\boldsymbol{\rho}_1-\boldsymbol{\rho}_0)]\, [O(\boldsymbol{\rho}_2) + C\delta(\boldsymbol{\rho}_2-\boldsymbol{\rho}_0)]^* \times W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2)\, \exp(-i2\pi\boldsymbol{\rho}_1\cdot\mathbf{r}_{m,n})\, \mathrm{d}^2\boldsymbol{\rho}_1\, \mathrm{d}^2\boldsymbol{\rho}_2.
$$
Expanding the brackets in this expression leads to

Eq. (10)

$$
\begin{aligned}
W_\delta(\mathbf{r}_{m,n},\mathbf{0}) &= W(\mathbf{r}_{m,n},\mathbf{0}) + |C|^2\, W_0(\boldsymbol{\rho}_0,\boldsymbol{\rho}_0)\, \exp(-i2\pi\boldsymbol{\rho}_0\cdot\mathbf{r}_{m,n}) \\
&\quad + \iint C\,\delta(\boldsymbol{\rho}_1-\boldsymbol{\rho}_0)\, O^*(\boldsymbol{\rho}_2)\, W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2)\, \exp(-i2\pi\boldsymbol{\rho}_1\cdot\mathbf{r}_{m,n})\, \mathrm{d}^2\boldsymbol{\rho}_1\, \mathrm{d}^2\boldsymbol{\rho}_2 \\
&\quad + \iint O(\boldsymbol{\rho}_1)\, C^*\delta(\boldsymbol{\rho}_2-\boldsymbol{\rho}_0)\, W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2)\, \exp(-i2\pi\boldsymbol{\rho}_1\cdot\mathbf{r}_{m,n})\, \mathrm{d}^2\boldsymbol{\rho}_1\, \mathrm{d}^2\boldsymbol{\rho}_2.
\end{aligned}
$$
Using the property of the delta function, we can derive the expression of the retrieved correlation function in the PAM plane for the perturbed object as follows:

Eq. (11)

$$
\begin{aligned}
W_\delta(\mathbf{r}_{m,n},\mathbf{0}) &= W(\mathbf{r}_{m,n},\mathbf{0}) + |C|^2\, W_0(\boldsymbol{\rho}_0,\boldsymbol{\rho}_0)\, \exp(-i2\pi\boldsymbol{\rho}_0\cdot\mathbf{r}_{m,n}) \\
&\quad + \left[C\int O^*(\boldsymbol{\rho}_2)\, W_0(\boldsymbol{\rho}_0,\boldsymbol{\rho}_2)\, \mathrm{d}^2\boldsymbol{\rho}_2\right] \exp(-i2\pi\boldsymbol{\rho}_0\cdot\mathbf{r}_{m,n}) \\
&\quad + C^* \int O(\boldsymbol{\rho}_1)\, W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_0)\, \exp(-i2\pi\boldsymbol{\rho}_1\cdot\mathbf{r}_{m,n})\, \mathrm{d}^2\boldsymbol{\rho}_1.
\end{aligned}
$$
Finally, we take the inverse Fourier transform of the difference between the retrieved correlations in the PAM plane for the perturbed and the unperturbed object, which yields

Eq. (12)

$$
\mathcal{F}^{-1}[W_\delta(\mathbf{r}_{m,n},\mathbf{0}) - W(\mathbf{r}_{m,n},\mathbf{0})](\boldsymbol{\rho}) = C^*\, O(\boldsymbol{\rho})\, W_0(\boldsymbol{\rho},\boldsymbol{\rho}_0) + |C|^2\, W_0(\boldsymbol{\rho}_0,\boldsymbol{\rho}_0)\, \delta(\boldsymbol{\rho}-\boldsymbol{\rho}_0) + \left[C\int O^*(\boldsymbol{\rho}_2)\, W_0(\boldsymbol{\rho}_0,\boldsymbol{\rho}_2)\, \mathrm{d}^2\boldsymbol{\rho}_2\right] \delta(\boldsymbol{\rho}-\boldsymbol{\rho}_0).
$$
Neglecting the delta-function terms in Eq. (12), we obtain the product of the object’s transmission function O(ρ) and the correlation function W_0(ρ, ρ_0), which describes the correlation between the field at the perturbation point ρ_0 and the field at every other point ρ. This correlation function is determined by the MCF of the illumination but also depends on the location of the perturbation ρ_0. Usually, W_0(ρ, ρ_0) is simply Gaussian, e.g., when a Gaussian–Schell-model (GSM) beam is used for illumination. However, when the MCF of the illumination is complicated and unknown, we need to calibrate W_0(ρ, ρ_0) by performing a reconstruction with an empty window as the object and then divide the reconstructed product O(ρ)W_0(ρ, ρ_0) by the calibrated W_0(ρ, ρ_0).
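The differential step can be illustrated numerically. The following 1-D sketch uses a toy binary phase object and a GSM-like MCF (all parameters, including the object phase levels, are illustrative): two retrieved correlations, with and without a point perturbation C at ρ_0, are differenced and inverse Fourier transformed, yielding C* O(ρ) W_0(ρ, ρ_0) everywhere except at the perturbation point itself, as in Eq. (12).

```python
import numpy as np

# 1-D sketch of the differential method of Sec. 2.2 with toy parameters.
N = 128
rho = np.arange(N) - N // 2
O = np.exp(1j * 0.9 * np.pi * (np.abs(rho) < 20).astype(float))   # binary phase object
W0 = np.exp(-(rho[:, None]**2 + rho[None, :]**2) / 50.0**2) \
   * np.exp(-(rho[:, None] - rho[None, :])**2 / (2 * 18.0**2))    # GSM-like MCF

def corr_at_pam(obj):
    """W(r, 0) of Eq. (6): a DFT over rho_1 of O(rho_1) * T(rho_1)."""
    T = (W0 * np.conj(obj)[None, :]).sum(axis=1)   # Eq. (8), integral over rho_2
    return np.fft.fft(obj * T)                     # Eq. (7)

i0 = N // 2 + 8                      # perturbation location rho_0
C = 0.5 * np.exp(1j * 0.3 * np.pi)   # complex perturbation strength
O_pert = O.copy()
O_pert[i0] += C

diff = np.fft.ifft(corr_at_pam(O_pert) - corr_at_pam(O))   # Eq. (12)
# Away from the delta peak at rho_0, diff(rho) = C* O(rho) W0(rho, rho_0):
expected = np.conj(C) * O * W0[:, i0]
check = np.delete(np.arange(N), i0)  # exclude the perturbation point itself
assert np.allclose(diff[check], expected[check], atol=1e-9)
```

Dividing `diff` by the known (or calibrated) W_0(ρ, ρ_0) would then recover O(ρ) alone, up to the constant C*.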

3.

Results and Discussions

In the experiment, we use the GSM beam and the Laguerre–Gaussian–Schell-model (LGSM) beam as illumination to validate our method. In the object plane, the MCF of the GSM beam can be expressed as

Eq. (13)

$$
W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2) = \exp\!\left(-\frac{\rho_1^2+\rho_2^2}{w_0^2}\right) \exp\!\left[-\frac{(\boldsymbol{\rho}_1-\boldsymbol{\rho}_2)^2}{2\sigma^2}\right],
$$
where w_0 and σ are the widths of the Gaussian intensity distribution and of the Gaussian correlation function, respectively. The experimental setup for generating the GSM beam is shown in Fig. 2. We expand a coherent laser beam at wavelength λ = 532 nm using a beam expander (BE) and then focus it on a rotating ground-glass disk (RGGD) using lens L1. Because the focal spot follows a Gaussian distribution, the spatially partially coherent light generated by scattering from the RGGD satisfies Gaussian statistics, namely, the correlation between the fields at any pair of points follows a Gaussian distribution. We then collimate the spatially partially coherent light with lens L2. By passing the collimated beam through a Gaussian amplitude filter (GAF), we obtain the GSM beam, whose intensity distribution also follows a Gaussian distribution. The MCF of the LGSM beam in the object plane is described by

Eq. (14)

$$
W_0(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2) = \exp\!\left(-\frac{\rho_1^2+\rho_2^2}{w_0^2}\right) \exp\!\left[-\frac{(\boldsymbol{\rho}_1-\boldsymbol{\rho}_2)^2}{2\sigma^2}\right] L_n^0\!\left[\frac{(\boldsymbol{\rho}_1-\boldsymbol{\rho}_2)^2}{2\sigma^2}\right],
$$
where L_n^m[·] is the generalized Laguerre polynomial of order n with m = 0. The experimental generation of the LGSM beam has been reported in Refs. 33 and 34. In the experimental setup shown in Fig. 2, we insert a spiral phase plate between the BE and the focusing lens L1, which produces a dark hollow focal spot on the RGGD. The order n of the LGSM beam is determined by the topological charge of the spiral phase plate. When n = 0, the spiral phase plate has a constant phase and the LGSM beam reduces to the GSM beam. However, when n ≠ 0, the MCFs of the LGSM beam and the GSM beam have the same amplitude but different phases.
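The two model MCFs can be evaluated along a 1-D cut through the beam as follows (a sketch; w_0 and σ are taken from the values quoted in the text, everything else is an illustrative choice). NumPy's `lagval` evaluates a Laguerre series, so a coefficient vector with a single 1 in slot n gives L_n; n = 0 gives L_0 = 1 and recovers the GSM beam of Eq. (13).

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

w0, sigma = 0.85e-3, 0.21e-3       # beam width and coherence width (m)

def mcf(rho1, rho2, n=0):
    # Eq. (14); the n = 0 case reduces to the GSM MCF of Eq. (13).
    x = (rho1 - rho2) ** 2 / (2 * sigma**2)
    gsm = np.exp(-(rho1**2 + rho2**2) / w0**2) * np.exp(-x)
    coeffs = np.zeros(n + 1); coeffs[n] = 1.0                 # selects L_n
    return gsm * lagval(x, coeffs)

rho = np.linspace(-1e-3, 1e-3, 201)
# n = 0 reproduces the GSM MCF exactly; for n = 1 the correlation changes
# sign where L_1 becomes negative, i.e., the MCF acquires a phase of pi.
assert np.allclose(mcf(rho, 0.0, n=0),
                   np.exp(-rho**2 / w0**2 - rho**2 / (2 * sigma**2)))
assert mcf(0.4e-3, 0.0, n=1) < 0 < mcf(0.4e-3, 0.0, n=0)
```

The sign change for n ≠ 0 is consistent with the statement above that the LGSM and GSM MCFs share the same amplitude but differ in phase.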

Fig. 2

The experimental setup for generating the GSM beam and for diffractive imaging. BE, beam expander; RGGD, rotating ground-glass disk; GAF, Gaussian amplitude filter; BS, beam splitter; SLM, spatial light modulator; PAM, pinhole array mask; L1, L2, and L3, thin lenses; and CCD, charge-coupled device.


In the experiment, the beam width w_0 of the Gaussian intensity distribution is determined by the GAF and is set to 0.85 mm, whereas the coherence width σ of the Gaussian correlation function (also referred to as the degree of coherence) is determined by the size of the focal spot on the RGGD. We can control σ by translating the focusing lens L1 back and forth, which changes the size of the focal spot. The coherence width σ is calibrated using the method proposed in Refs. 35–37.

For the diffractive imaging experiment, we use a phase object whose amplitude is flat and whose phase has a binary distribution (0.1π and 0.9π) in the shape of a panda, displayed on a phase spatial light modulator (SLM) (Pluto, Holoeye Inc.; resolution 1920×1080, pixel size 8 μm, frame rate 60 Hz). The pixels inside the support of the object reflect the incident beam back to the beam splitter, whereas the pixels outside the support direct the incident beam in other directions. The beam reflected by the SLM propagates to the PAM. Finally, we measure the far-field diffraction pattern of the light transmitted by the PAM using a charge-coupled device (CCD) camera (Eco445MVGe, SVS-Vistek Inc.; resolution 1296×964, pixel size 3.75 μm, frame rate 30 fps), which is placed at the focal plane of the Fourier transform lens L3. In the experiment, we set the pitch of the PAM and the size of the pinholes to p_x = p_y = 270 μm and w_x = w_y = 54 μm, respectively. The object size is l_u = l_v = 1.92 mm, which requires the propagation distance between the object and the PAM to be z = 1170 mm.

3.1.

Experimental Results Using GSM Beam Illumination

Equation (12) indicates that, for spatially partially coherent illumination, reconstructing the product of the object’s transmission function O(ρ) and the illumination’s correlation function W_0(ρ, ρ_0) requires two measurements of the diffraction pattern: one without and one with the perturbation of the object’s transmission at ρ_0. W_0(ρ, ρ_0) describes the correlation between the fields at the perturbation point ρ_0 and at other points ρ, which decreases as the distance between ρ_0 and ρ increases. As a consequence, the reconstructed transmission function O(ρ) has a limited FOV, since O(ρ) cannot be reconstructed at locations where W_0(ρ, ρ_0) is corrupted by noise.

In the experiment, we place the perturbation point at the head of the panda, with a phase perturbation of 0.3π. We show the object’s transmission function without and with the perturbation point in Figs. 3(a) and 3(b), and the amplitude and phase of the reconstructed product for various degrees of spatial coherence in Figs. 3(c1)–3(c3) and 3(d1)–3(d3). Because the MCF of the GSM beam has a uniform phase, the phase of the reconstructed product is given only by the phase of the object. The amplitude of the reconstructed product follows the Gaussian distribution of the MCF of the GSM beam. The panda shape in the amplitude is due to the discontinuity of the phase and to low-pass filtering. As mentioned in Ref. 38, it is the phase jump between the inside and outside areas that enables destructive interference along the outline, leading to the observation of a dark panda shape on a very bright background. In addition, the finite apertures of the PAM, the Fourier transform lens (L3), and the CCD constitute low-pass filters, which cause the disappearance of the parts of the panda contour that correspond to high spatial frequencies.21 We can observe in Fig. 3 that for a lower degree of spatial coherence σ, the amplitude of the correlation function W_0(ρ, ρ_0) decreases more rapidly as the distance |ρ − ρ_0| increases, and hence the FOV of the object’s transmission function O(ρ) is smaller.

Fig. 3

The unperturbed and the perturbed object transmission function and the experimental results using GSM beam illumination with various degrees of spatial coherence. The phase of (a) the unperturbed and (b) the perturbed objects. The perturbation is at the head of the panda. (c1)–(c3) The amplitude and (d1)–(d3) the phase of the reconstructed product of the object’s transmission and the illumination’s correlation using GSM beam for illumination with various degrees of spatial coherence.


Figure 3 shows that, to increase the FOV, we can either increase the degree of spatial coherence σ or decrease the noise level. In Fig. 4, we demonstrate that by using more than one perturbation point, placed at different locations on the object, the object’s transmission function can still be reconstructed over the whole FOV even for the lowest degree of spatial coherence (σ = 0.21 mm). This requires us to repeat the measurement and the reconstruction procedure for each perturbation point to reconstruct different parts of the object. By combining the different parts reconstructed using low-σ illumination, we can obtain the object’s transmission function as if high-σ illumination had been used.

Fig. 4

The experimental results under GSM beam illumination with a low degree of spatial coherence (σ = 0.21 mm) using perturbation points at different locations on the object. (a)–(f) The experimental reconstruction results with the perturbation point at different locations. Each result shows a reduced FOV in the vicinity of the perturbation point. (g) The combination of the results in (a)–(f), which shows a clear panda over the whole FOV.


3.2.

Experimental Results Using LGSM Beam Illumination

In Figs. 3 and 4, the phase of the reconstructed product is given only by the object since the MCF of the GSM beam has a uniform phase. However, for illumination with an LGSM beam, the phase of the MCF is not uniform. Therefore, we need to calibrate W_0(ρ, ρ_0) so that we can divide the reconstructed product by W_0(ρ, ρ_0) to obtain the object’s transmission function O(ρ) alone. We show the amplitude and phase of the reconstructed product using LGSM beam illumination in Fig. 5(a). Compared to the reconstructions using GSM beam illumination in Figs. 3 and 4, the phase of the object’s transmission function O(ρ) is now modulated by the phase of the correlation function W_0(ρ, ρ_0) of the LGSM beam, and the panda cannot be seen in the phase of the reconstructed product. We show the amplitude and phase of the correlation function W_0(ρ, ρ_0), calibrated using an empty window as the object, in Fig. 5(b). In Fig. 5(c), we show the object’s transmission function O(ρ) obtained by dividing the reconstructed product O(ρ)W_0(ρ, ρ_0) by the calibrated W_0(ρ, ρ_0). The panda in the phase of the reconstructed object can clearly be seen. This example verifies that our method can be applied to object reconstruction when the MCF of the illumination beam is not known a priori.

Fig. 5

The experimental results using LGSM beam illumination: (a) the reconstructed product of the object’s transmission and the illumination’s correlation; (b) the calibration results of the illumination’s correlation using an empty window as object; and (c) the object’s transmission obtained by dividing the results in (a) by the results in (b).


4.

Conclusion

In summary, we have developed and validated a noniterative method to reconstruct the complex-valued transmission function of an object illuminated by a spatially partially coherent beam, using a PAM placed between the object and the detector. Our method overcomes several challenges of conventional iterative CDI algorithms and holographic methods. In particular, it does not depend on a mode decomposition of the MCF of the spatially partially coherent light, and it offers the freedom to choose the location of the point where the transmission function of the object is perturbed, which is particularly beneficial for achieving a large FOV when the illumination has a low degree of spatial coherence. Moreover, we have demonstrated that our method can be used to calibrate the MCF of an arbitrary spatially partially coherent beam. This calibration allows us to reconstruct the object’s transmission function almost as accurately as with completely coherent illumination. Therefore, in addition to diffractive imaging, our method also provides an approach to characterizing spatial coherence properties, which is needed for applications such as the measurement of optical coherence singularities.29,30 Finally, our method is wavelength independent and can be applied over a wide range of wavelengths, from x-rays to infrared light.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 11774250 and 91750201), the National Natural Science Fund for Distinguished Young Scholars (No. 11525418), and the sponsorship of Jiangsu Overseas Research and Training Program for Prominent Young and Middle-aged University Teachers and Presidents. This work is also part of the research program “Novel design shapes for complex optical systems,” with Project No. 12797, which is (partly) financed by the Netherlands Organization for Scientific Research (NWO).

References

1. J. Miao et al., “Quantitative image reconstruction of GaN quantum dots from oversampled diffraction intensities alone,” Phys. Rev. Lett. 95(8), 085503 (2005). https://doi.org/10.1103/PhysRevLett.95.085503

2. C. Song et al., “Quantitative imaging of single, unstained viruses with coherent x rays,” Phys. Rev. Lett. 101(15), 158101 (2008). https://doi.org/10.1103/PhysRevLett.101.158101

3. J. Miao et al., “Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400(6742), 342–344 (1999). https://doi.org/10.1038/22498

4. R. W. Gerchberg, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

5. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). https://doi.org/10.1364/AO.21.002758

6. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4(1), 118–123 (1987). https://doi.org/10.1364/JOSAA.4.000118

7. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93(2), 023903 (2004). https://doi.org/10.1103/PhysRevLett.93.023903

8. J. M. Rodenburg et al., “Hard-x-ray lensless imaging of extended objects,” Phys. Rev. Lett. 98(3), 034801 (2007). https://doi.org/10.1103/PhysRevLett.98.034801

9. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). https://doi.org/10.1016/j.ultramic.2009.05.012

10. N. Nakajima, “Noniterative phase retrieval from a single diffraction intensity pattern by use of an aperture array,” Phys. Rev. Lett. 98(22), 223901 (2007). https://doi.org/10.1103/PhysRevLett.98.223901

11. C. S. Guo et al., “Real-time coherent diffractive imaging with convolution-solvable sampling array,” Opt. Lett. 35(6), 850–852 (2010). https://doi.org/10.1364/OL.35.000850

12. A. P. Konijnenberg et al., “A non-iterative method for phase retrieval and coherence characterization by focus variation using a fixed star-shaped mask,” Opt. Express 26(7), 9332–9343 (2018). https://doi.org/10.1364/OE.26.009332

13. A. P. Konijnenberg, W. M. J. Coene and H. P. Urbach, “Non-iterative phase retrieval by phase modulation through a single parameter,” Ultramicroscopy 174, 70–78 (2017). https://doi.org/10.1016/j.ultramic.2016.12.017

14. 

Y. Shao et al., “Spatial coherence measurement and partially coherent diffractive imaging using self-referencing holography,” Opt. Express, 26 (4), 4479 –4490 (2018). https://doi.org/10.1364/OE.26.004479 OPEXFF 1094-4087 Google Scholar

15. 

L. W. Whitehead et al., “Diffractive imaging using partially coherent x rays,” Phys. Rev. Lett., 103 (24), 243902 (2009). https://doi.org/10.1103/PhysRevLett.103.243902 PRLTAO 0031-9007 Google Scholar

16. 

S. Flewett et al., “Extracting coherent modes from partially coherent wavefields,” Opt. Lett., 34 (14), 2198 –2200 (2009). https://doi.org/10.1364/OL.34.002198 OPLEDP 0146-9592 Google Scholar

17. 

P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature, 494 (7435), 68 –71 (2013). https://doi.org/10.1038/nature11806 Google Scholar

18. 

J. N. Clark et al., “Continuous scanning mode for ptychography,” Opt. Lett., 39 (20), 6066 –6069 (2014). https://doi.org/10.1364/OL.39.006066 OPLEDP 0146-9592 Google Scholar

19. 

J. N. Clark et al., “Dynamic imaging using ptychography,” Phys. Rev. Lett., 112 (11), 113901 (2014). https://doi.org/10.1103/PhysRevLett.112.113901 PRLTAO 0031-9007 Google Scholar

20. 

B. Chen et al., “Diffraction imaging: the limits of partial coherence,” Phys. Rev. B, 86 (23), 235401 (2012). https://doi.org/10.1103/PhysRevB.86.235401 Google Scholar

21. 

P. Li et al., “Breaking ambiguities in mixed state ptychography,” Opt. Express, 24 (8), 9038 –9052 (2016). https://doi.org/10.1364/OE.24.009038 OPEXFF 1094-4087 Google Scholar

22. 

N. Burdet et al., “Evaluation of partial coherence correction in x-ray ptychography,” Opt. Express, 23 (5), 5452 –5467 (2015). https://doi.org/10.1364/OE.23.005452 OPEXFF 1094-4087 Google Scholar

23. 

M. Lurie, “Fourier-transform holograms with partially coherent light: holographic measurement of spatial coherence,” J. Opt. Soc. Am., 58 (5), 614 –619 (1968). https://doi.org/10.1364/JOSA.58.000614 JOSAAH 0030-3941 Google Scholar

24. 

D. H. Parks, X. Shi and S. D. Kevan, “Partially coherent x-ray diffractive imaging of complex objects,” Phys. Rev. A, 89 (6), 063824 (2014). https://doi.org/10.1103/PhysRevA.89.063824 Google Scholar

25. 

I. McNulty et al., “High-resolution imaging by Fourier transform x-ray holography,” Science, 256 (5059), 1009 –1012 (1992). https://doi.org/10.1126/science.256.5059.1009 SCIEAS 0036-8075 Google Scholar

26. 

S. Eisebitt et al., “Lensless imaging of magnetic nanostructures by x-ray spectro-holography,” Nature, 432 (7019), 885 –888 (2004). https://doi.org/10.1038/nature03139 Google Scholar

27. 

L. M. Stadler et al., “Hard x ray holographic diffraction imaging,” Phys. Rev. Lett., 100 (24), 245503 (2008). https://doi.org/10.1103/PhysRevLett.100.245503 PRLTAO 0031-9007 Google Scholar

28. 

P. Gao et al., “Phase-shifting Zernike phase contrast microscopy for quantitative phase measurement,” Opt. Lett., 36 (21), 4305 –4307 (2011). https://doi.org/10.1364/OL.36.004305 OPLEDP 0146-9592 Google Scholar

29. 

D. M. Palacios et al., “Spatial correlation singularity of a vortex field,” Phys. Rev. Lett., 92 (14), 143905 (2004). https://doi.org/10.1103/PhysRevLett.92.143905 PRLTAO 0031-9007 Google Scholar

30. 

W. Wang et al., “Experimental study of coherence vortices: local properties of phase singularities in a spatial coherence function,” Phys. Rev. Lett., 96 (7), 073902 (2006). https://doi.org/10.1103/PhysRevLett.96.073902 PRLTAO 0031-9007 Google Scholar

31. 

L. C. Andrews and R. L. Phillips, Laser Beam Propagation through Random Media, 2nd ed., SPIE Press, Bellingham, Washington (2005). Google Scholar

32. 

Y. Cai et al., “Generation of partially coherent beams,” Prog. Opt., 62 157 –223 (2017). https://doi.org/10.1016/bs.po.2016.11.001 POPTAN 0079-6638 Google Scholar

33. 

Y. Chen and Y. Cai, “Generation of a controllable optical cage by focusing a Laguerre–Gaussian correlated Schell-model beam,” Opt. Lett., 39 (9), 2549 –2552 (2014). https://doi.org/10.1364/OL.39.002549 OPLEDP 0146-9592 Google Scholar

34. 

Y. Chen et al., “Experimental demonstration of a Laguerre–Gaussian correlated Schell-model vortex beam,” Opt. Express, 22 (5), 5826 –5838 (2014). https://doi.org/10.1364/OE.22.005826 OPEXFF 1094-4087 Google Scholar

35. 

F. Wang and Y. Cai, “Experimental observation of fractional Fourier transform for a partially coherent optical beam with Gaussian statistics,” J. Opt. Soc. Am. A, 24 (7), 1937 –1944 (2007). https://doi.org/10.1364/JOSAA.24.001937 JOAOD6 0740-3232 Google Scholar

36. 

R. H. Brown and R. Q. Twiss, “Correlation between photons in two coherent beams of light,” Nature, 177 (4497), 27 –29 (1956). https://doi.org/10.1038/177027a0 Google Scholar

37. 

F. Wang et al., “Experimental generation of partially coherent beams with different complex degrees of coherence,” Opt. Lett., 38 (11), 1814 –1816 (2013). https://doi.org/10.1364/OL.38.001814 OPLEDP 0146-9592 Google Scholar

38. 

J. Wang et al., “Gradual edge enhancement in spiral phase contrast imaging with fractional vortex filters,” Sci. Rep., 5 15826 (2015). https://doi.org/10.1038/srep15826 SRCEC3 2045-2322 Google Scholar

Biography

Xingyuan Lu received both her bachelor’s and master’s degrees in physics from Soochow University, China, studying light-field manipulation and measurement. Currently, she is a PhD candidate at the Laboratory of Light Manipulation, Soochow University. Her research focuses on the combination of light-field manipulation and quantitative phase imaging.

Yifeng Shao received his bachelor’s degree from Sun Yat-sen University (Guangzhou, China). He then enrolled in an Erasmus Mundus master’s programme, studying at the Institut d’Optique Graduate School (Palaiseau, France) in 2012 and at Delft University of Technology (Delft, Netherlands) in 2013. In September 2013, he became a PhD candidate in the Optics Research Group at TUD. His research topics include optical design, aberration measurement techniques, and spatial coherence effects in imaging applications.

Chengliang Zhao is the cofounder of the Laboratory of Light Manipulation at Soochow University. He received his PhD in physics from Zhejiang University. His research interests include coherent optics, diffractive imaging, phase retrieval, and optical tweezers.

Sander Konijnenberg holds his BSc degree in applied mathematics and applied physics and his MSc degree in applied physics from Delft University of Technology (TUD), Netherlands. Currently, he is doing a PhD in optics at TUD on the topic of computational imaging.

Xinlei Zhu is a graduate student at School of Physical Science and Technology, Soochow University, Suzhou, China. Currently, she is working toward a PhD in light manipulation. Her research interests include the generation of light sources and computer graphics.

Ying Tang is a PhD researcher at Delft University of Technology, Netherlands. He received both his bachelor’s and master’s degrees in optical engineering from Huazhong University of Science and Technology, China. During his master’s, he worked on biomedical imaging. In 2013, he joined the Optics Research Group of TUD, where he works on various topics such as metamaterials, optical communication, optical tweezers, and optical metrology.

Yangjian Cai is a professor at the School of Physical Science and Technology, Soochow University, China. He received his BSc and PhD degrees in physics from Zhejiang University, as well as a PhD in electromagnetic theory. His research interests include optical coherence and polarization, propagation, optical imaging, particle trapping, and the turbulent atmosphere. He has published over 290 papers in refereed international journals, and he is a topical editor for JOSA A.

H. Paul Urbach is a professor and head of the Optics Research Group of TUD. His current research interests are electromagnetic optics, improving resolution using phase- and polarization-sensitive illumination and detection, and the use of new materials in optics. He was president of the European Optical Society from 2014 to 2016 and from 2017 to 2018. He is a member of the board of ICO, scientific director of the Dutch Optics Centre, and a member of the board of PhotonicsNL.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Xingyuan Lu, Yifeng Shao, Chengliang Zhao, Sander Konijnenberg, Xinlei Zhu, Ying Tang, Yangjian Cai, and H. Paul Urbach "Noniterative spatially partially coherent diffractive imaging using pinhole array mask," Advanced Photonics 1(1), 016005 (28 January 2019). https://doi.org/10.1117/1.AP.1.1.016005
Received: 22 June 2018; Accepted: 8 November 2018; Published: 28 January 2019