Unidirectional imagers form images of input objects only in one direction, e.g., from field-of-view (FOV) A to FOV B, while blocking the image formation in the reverse direction, from FOV B to FOV A. Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality imaging only in the forward direction (A → B) with high power efficiency, while distorting the image formation in the backward direction (B → A) along with low power efficiency. Our reciprocal design features a set of spatially engineered linear diffractive layers that are statistically optimized for partially coherent illumination with a given phase correlation length. Our analyses reveal that, when illuminated by a partially coherent beam whose phase correlation length exceeds a critical value, the presented designs achieve robust unidirectional imaging, and that this performance degrades as the correlation length approaches the diffraction limit of light.
1. Introduction

Controlling and engineering the properties and behavior of light as a function of the wave propagation direction has been crucial for various advancements in optical sensing and imaging,1,2 including, e.g., unidirectional wave transmission systems. Common strategies for unidirectional transmission include, e.g., temporal modulation, the magneto-optical effect, nonlinear materials, or multilayer spatial modulation of light.3–12 For example, nonlinear optical materials with intensity-dependent permittivity can be used to create nonreciprocal devices.3–7 As another example, the structural asymmetry introduced by multilayer, lossy linear diffractive systems, despite being reciprocal, can also be used to create asymmetric wave transmission under spatially coherent illumination.8–12 These strategies ensure high-fidelity delivery of forward signals while imposing losses and distortions on backward signals. Nevertheless, these existing methods typically require high-power beams or rely on spatially coherent illumination. Partially coherent light, in general, helps mitigate image degradation due to speckle noise, minimizes cross talk among imaged objects, and is less susceptible to misalignments or defects in the optical system. Some of these benefits make partially coherent illumination particularly attractive for mobile microscopy,13–15 quantitative phase imaging,16 virtual and augmented reality displays,17 and light-beam shaping,18 among other applications.19

Here, we present unidirectional diffractive imagers that operate under spatially partially coherent illumination, featuring high image quality and power efficiency in the forward direction (A → B) while distorting the image formation in the backward direction (B → A) along with reduced power efficiency. Each partially coherent unidirectional imager represents a reciprocal and lossy linear optical device, designed through a set of spatially engineered diffractive layers that are jointly optimized using deep learning.8,20–22 Our findings indicate that the degree of spatial coherence of the illumination, statistically quantified by the phase correlation length (ℓ), significantly impacts the performance of the unidirectional imager. Specifically, unidirectional diffractive imagers designed with a training correlation length (ℓ_train) greater than a critical value, ℓ_c, exhibit very good unidirectional imaging behavior, accurately reproducing the input images with high structural fidelity and diffraction efficiency in the forward direction (A → B), while suppressing image formation in the reverse path (B → A). Diffractive unidirectional imagers designed for smaller correlation lengths (ℓ_train < ℓ_c) still maintain asymmetric image formation, however, with a reduced figure of merit (FOM). We validated the unidirectional imaging performance of these diffractive visual processors using different levels of spatial coherence, even though they were trained with a specific phase correlation length (ℓ_train), demonstrating the resilience of partially coherent unidirectional imagers to unknown changes in the spatial coherence properties of the illumination beam. We also demonstrated the internal and external generalization of the unidirectional imager designs across various image datasets, further highlighting their resilience to unknown data distribution shifts.
With the unique advantages of being compact, polarization-insensitive, and compatible with different types of partially coherent sources, including light-emitting diodes, the presented unidirectional imager design offers new capabilities for asymmetric visual information processing.

2. Materials and Methods

2.1. Forward Model of a Diffractive Visual Processor under Spatially Coherent Illumination

The propagation of a coherent field from the m-th to the (m+1)-th diffractive plane is calculated using the angular spectrum method:23

u_{m+1}(x, y) = \mathcal{P}_d\{u_m(x, y)\} = \mathcal{F}^{-1}\{\mathcal{F}\{u_m(x, y)\}\, H(f_x, f_y)\},   (1)

where \mathcal{P}_d denotes the free-space propagation operation and d represents the axial distance between two successive planes. \mathcal{F} and \mathcal{F}^{-1} represent the two-dimensional (2D) Fourier transform and the inverse Fourier transform operations, respectively. The transfer function H(f_x, f_y) is defined as

H(f_x, f_y) = \exp\left(j 2\pi d \sqrt{1/\lambda^2 - f_x^2 - f_y^2}\right)   (2)

for f_x^2 + f_y^2 < 1/\lambda^2, and H(f_x, f_y) = 0 otherwise, where λ is the illumination wavelength. f_x and f_y denote the spatial frequencies along the x and y directions, respectively. The m-th diffractive layer modulates the phase of the transmitted optical field with a transmission function, t_m(x, y):

t_m(x, y) = \exp(j \varphi_m(x, y)),   (3)

where \varphi_m(x, y) denotes the learnable phase profile of the diffractive features located at the m-th diffractive layer. The output intensity of a K-layer diffractive visual processor can be written as

o(x, y) = \left| \mathcal{P}_d\{ t_K \cdots \mathcal{P}_d\{ t_1\, \mathcal{P}_d\{ u_0(x, y) \} \} \} \right|^2.   (4)

Here, u_0(x, y) is the complex field at the input field of view (FOV):

u_0(x, y) = \sqrt{I_0(x, y)}\, \exp(j \theta(x, y)),   (5)

where θ(x, y) represents the phase profile of the incident optical field and I_0(x, y) refers to its intensity profile.

2.2. Forward Model of a Diffractive Visual Processor under Spatially Partially Coherent Illumination

To model the propagation of a partially coherent field, we define the phase profile of the incident field as24–26

\theta(x, y) = \frac{2\pi}{\lambda}\,[W(x, y) * K(x, y)],   (6)

where λ is the wavelength of the illumination light and '∗' refers to the 2D convolution operation. W(x, y) follows a normal distribution with a mean value of μ and a standard deviation of σ_g, i.e., W ~ N(μ, σ_g²). K(x, y) represents a zero-mean Gaussian smoothing kernel with a standard deviation of σ_w, defined by K(x, y) ∝ \exp[-(x^2 + y^2)/(2\sigma_w^2)]. We numerically tailored these phase profiles to the desired correlation length by adjusting the standard deviation, σ_w, of the Gaussian smoothing kernel. The phase correlation length, ℓ, of a partially coherent field was approximated using the autocorrelation function, R_θ, of the phase profile, θ:24,27,28

R_\theta(\Delta x, \Delta y) = \langle \theta(x, y)\, \theta(x + \Delta x, y + \Delta y) \rangle,   (7)

where ⟨·⟩ denotes averaging over space and over random realizations, and ℓ is extracted from the width of R_θ. Using Eq. (7), for a given combination of μ, σ_g, and σ_w values, we numerically approximated ℓ based on 2048 randomly selected phase profiles, θ. In this study, we tuned σ_w over a uniformly spaced range of values, yielding a set of representative phase correlation lengths; Fig. S1 in the Supplementary Material provides more information on the generation of these phase profiles. The time-averaged output intensity of the diffractive visual processor under spatially partially coherent illumination is calculated as

\bar{o}(x, y) = \frac{1}{N} \sum_{n=1}^{N} o_n(x, y),   (8)

where o_n(x, y) is the coherent output intensity of Eq. (4) for the n-th random phase profile θ_n(x, y), and N denotes the number of random phase realizations (N_train during training and N_test during testing). Since the ideal time average in Eq. (8) must be approximated with a finite ensemble given our limited computing resources, we used N_test = 2048 in the blind testing stage of each trained diffractive model.
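To make the forward model above concrete, the following is a minimal JAX sketch of Eqs. (1)–(8): angular spectrum propagation through phase-only layers, random phase screens generated by Gaussian-smoothing white noise, and the finite-ensemble time average of Eq. (8). All grid sizes, distances, and parameter values (N = 256, the axial spacing Z, σ_w = 4 pixels, a zero-mean W, etc.) are illustrative assumptions, not the authors' exact implementation; lengths are expressed in units of λ.

```python
import jax
import jax.numpy as jnp
import jax.scipy.signal as jss

LAMBDA = 1.0   # illumination wavelength; all lengths are in units of lambda (assumption)
DX = 0.5       # lateral sampling / feature size (assumption)
N = 256        # grid size (assumption)
Z = 40.0       # axial spacing d between successive planes (assumption)

def angular_spectrum(field, z, wavelength=LAMBDA, dx=DX):
    """Free-space propagation over a distance z, Eqs. (1) and (2)."""
    fx = jnp.fft.fftfreq(field.shape[0], d=dx)
    fy = jnp.fft.fftfreq(field.shape[1], d=dx)
    FX, FY = jnp.meshgrid(fx, fy, indexing="ij")
    alpha2 = 1.0 / wavelength**2 - FX**2 - FY**2
    # Keep propagating waves only; evanescent components (alpha2 < 0) are zeroed.
    H = jnp.where(alpha2 > 0,
                  jnp.exp(1j * 2.0 * jnp.pi * z * jnp.sqrt(jnp.abs(alpha2))),
                  0.0 + 0.0j)
    return jnp.fft.ifft2(jnp.fft.fft2(field) * H)

def diffractive_forward(phases, input_field):
    """Cascade of K phase-only layers, Eqs. (3)-(4); phases has shape (K, N, N)."""
    field = angular_spectrum(input_field, Z)
    for phi in phases:
        field = angular_spectrum(field * jnp.exp(1j * phi), Z)
    return jnp.abs(field) ** 2

def random_phase_screen(key, sigma_w, sigma_g=jnp.pi):
    """Random input phase, Eq. (6): Gaussian noise W (zero mean here, an
    assumption) smoothed by a Gaussian kernel of standard deviation sigma_w."""
    w = sigma_g * jax.random.normal(key, (N, N))
    r = jnp.arange(-3.0 * sigma_w, 3.0 * sigma_w + 1.0)
    k1d = jnp.exp(-r**2 / (2.0 * sigma_w**2))
    kernel = jnp.outer(k1d, k1d)
    return jss.convolve2d(w, kernel / kernel.sum(), mode="same")

def partially_coherent_intensity(phases, amplitude, key, sigma_w=4.0, n_phase=16):
    """Finite-ensemble time average of Eq. (8); n_phase = 16 mirrors N_train = 16,
    and sigma_w controls the resulting phase correlation length."""
    keys = jax.random.split(key, n_phase)
    screens = jax.vmap(lambda k: random_phase_screen(k, sigma_w))(keys)
    fields = amplitude[None] * jnp.exp(1j * screens)
    return jax.vmap(diffractive_forward, in_axes=(None, 0))(phases, fields).mean(0)
```

For instance, an amplitude object I₀ could be tested via partially_coherent_intensity(phases, jnp.sqrt(I0), jax.random.PRNGKey(0)); increasing sigma_w produces smoother phase screens and hence a larger correlation length ℓ.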
2.3. Training Loss Function

The partially coherent unidirectional diffractive processors presented in this work aim to align the forward output intensity, ō_F, closely with the target intensity profile, t, while suppressing the backward output intensity, ō_B. To optimize the design of a partially coherent unidirectional diffractive imager, the loss function is composed of a forward loss function L_F and a backward loss function L_B, i.e.,

L = L_F + L_B,   (9)

where the forward loss function is composed of three parts:

L_F = \alpha_1\, \mathrm{NMSE}(\bar{o}_F, t) - \alpha_2\, \mathrm{PCC}(I_0, \bar{o}_F) - \alpha_3\, \eta_F.   (10)

Here, the normalized mean square error (NMSE) is used to penalize the structural differences between ō_F and the target intensity t, defined as

\mathrm{NMSE} = \frac{1}{N_o} \sum_{(x, y)} \left[\sigma\, \bar{o}_F(x, y) - t(x, y)\right]^2,   (11)

where N_i and N_o refer to the number of pixels at the input and the output FOVs, respectively. σ is a constant used to normalize the forward output intensity to ensure that the NMSE is not affected by the output diffraction efficiency, and it is calculated by

\sigma = \frac{\sum_{(x, y)} \bar{o}_F(x, y)\, t(x, y)}{\sum_{(x, y)} \bar{o}_F^2(x, y)}.   (12)

The Pearson correlation coefficient (PCC) measures the linear correlation between the input intensity, I_0, and the forward output intensity, ō_F, resulting in a value between −1 and 1; it is calculated by

\mathrm{PCC} = \frac{\sum_{(x, y)} (I_0 - \langle I_0 \rangle)(\bar{o}_F - \langle \bar{o}_F \rangle)}{\sqrt{\sum_{(x, y)} (I_0 - \langle I_0 \rangle)^2 \sum_{(x, y)} (\bar{o}_F - \langle \bar{o}_F \rangle)^2}},   (13)

where ⟨I_0⟩ and ⟨ō_F⟩ are the mean values of I_0 and ō_F, respectively. The third term in Eq. (10) refers to a diffraction efficiency-related loss function, which is used to increase the forward diffraction efficiency, η_F, calculated by

\eta_F = \frac{\sum_{(x, y) \in \mathrm{FOV\ B}} \bar{o}_F(x, y)}{\sum_{(x, y) \in \mathrm{FOV\ A}} I_0(x, y)}.   (14)

The backward loss function in Eq. (9) contains two parts:

L_B = \alpha_4\, \mathrm{PCC}(I_0, \mathrm{clip}(\bar{o}_B)) + \alpha_5\, \eta_B,   (15)

where the clip function clips the backward output intensity at 10% of its maximum value, helping to avoid the unidirectional imager concealing the input information within the low-intensity background of the backward output. The first term in Eq. (15) is devised to block the backward output image formation, while the second term is used to reduce the backward diffraction efficiency. The weights α₁, α₂, α₃, α₄, and α₅ were empirically set to 1.0, 0.1, 0.05, 0.05, and 0.01, respectively.
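A hedged sketch of these loss terms is given below; the weight values follow the text, while the exact sign conventions and the clip rule are our reading of the description above rather than the authors' verbatim code.

```python
import jax.numpy as jnp

ALPHAS = (1.0, 0.1, 0.05, 0.05, 0.01)  # alpha_1 ... alpha_5, from the text

def nmse(out, target):
    """Eq. (11), with the efficiency-normalizing constant sigma of Eq. (12)."""
    sigma = jnp.sum(out * target) / jnp.sum(out**2)
    return jnp.mean((sigma * out - target) ** 2)

def pcc(a, b):
    """Pearson correlation coefficient, Eq. (13)."""
    a0, b0 = a - a.mean(), b - b.mean()
    return jnp.sum(a0 * b0) / jnp.sqrt(jnp.sum(a0**2) * jnp.sum(b0**2))

def efficiency(out, input_intensity):
    """Diffraction efficiency, Eq. (14)."""
    return jnp.sum(out) / jnp.sum(input_intensity)

def unidirectional_loss(fwd_out, bwd_out, target, input_intensity):
    a1, a2, a3, a4, a5 = ALPHAS
    # Forward terms: penalize structural error, reward correlation and efficiency.
    loss_f = (a1 * nmse(fwd_out, target)
              - a2 * pcc(input_intensity, fwd_out)
              - a3 * efficiency(fwd_out, input_intensity))
    # Clip the backward output at 10% of its maximum so the design cannot hide a
    # faint copy of the image in the low-intensity background (our reading of Eq. 15).
    bwd_clip = jnp.minimum(bwd_out, 0.1 * bwd_out.max())
    loss_b = (a4 * pcc(input_intensity, bwd_clip)
              + a5 * efficiency(bwd_out, input_intensity))
    return loss_f + loss_b
```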
2.4. Performance Evaluation Metrics

To evaluate the performance of each unidirectional diffractive imager, four different metrics are used: the PCC between the input and output intensities [Eq. (13)], the output diffraction efficiency η [Eq. (14)], the peak signal-to-noise ratio (PSNR) of the output image with respect to the ground truth, and an overall unidirectional FOM, defined as FOM = η_F/η_B + PSNR_F/PSNR_B (see Sec. 3.2). When calculating the backward metrics, i.e., PCC_B, η_B, and PSNR_B, we simply replaced ō_F with ō_B while keeping the other parameters unchanged.

3. Results

3.1. Unidirectional Imager Design under Spatially Partially Coherent Illumination

Figure 1(a) illustrates the concept of a unidirectional imager under spatially partially coherent, monochromatic illumination at a wavelength of λ. The processor is designed to implement a unidirectional imaging task: high structural similarity and high diffraction efficiency for the forward operation A → B [blue line in Fig. 1(a)], together with a distorted image and reduced diffraction efficiency for the backward operation B → A [brown line in Fig. 1(a)]. We used four diffractive layers, axially spaced by a distance d, in this design. Each diffractive layer contains a 2D array of diffractive features used to modulate the phase of the transmitted optical field, with each feature having a fixed lateral size and a trainable phase value covering the full 0 to 2π range. The transmission functions of these diffractive features are optimized using a training dataset composed of MNIST handwritten digits (see Sec. 2). An engineered loss function guides the optimization of the unidirectional visual processor toward two primary goals: (1) for A → B, it aims to minimize the structural differences between the output images and the ground-truth images using the NMSE and PCC, while concurrently maximizing the forward diffraction efficiency; and (2) for B → A, it maximizes the differences between the backward output images and the ground-truth images while simultaneously minimizing the backward diffraction efficiency (refer to Sec. 2 for details).

For spatially partially coherent monochromatic illumination, we used the phase correlation length, ℓ, to quantify the degree of spatial coherence of the source.24,29 Figure 1(b) illustrates some examples of the random phase profiles of partially coherent illumination with different correlation lengths; also see Sec. 2 and Fig. S1 in the Supplementary Material for details. During the training of a unidirectional diffractive imager, for each input object we use N_train different random phase profiles that follow a given correlation length at the input plane; the time-averaged intensity of the resulting complex output fields is then used to optimize the unidirectional imager performance based on our training loss function. Details about the optical model of a diffractive unidirectional imager, the training strategy, and the loss functions can be found in Sec. 2.

Figure 2(a) illustrates the layout of the partially coherent unidirectional imager design with a compact axial length, which was optimized using a fixed training correlation length, ℓ_train, and N_train random phase profiles per object. This deep-learning-optimized diffractive design is composed of four spatially engineered diffractive layers, which are displayed in Fig. 2(b). We blindly tested this partially coherent unidirectional imager using 10,000 handwritten digits never seen during the training process; for each unknown input test object, N_test = 2048 random phase profiles, each with a phase correlation length of ℓ_test = ℓ_train, were used to obtain the time-averaged intensity at the output plane (N_test = 2048 remained the same throughout our paper). We refer to this testing scheme as "internal generalization" from the perspective of the spatial coherence properties of the illumination light, since we maintained the same statistical phase correlation length in the testing stage as used in the training, i.e., ℓ_test = ℓ_train. Some of these blind testing results are illustrated in Fig. 2(c).
The first row in Fig. 2(c) shows the input amplitude objects used for both the forward and backward directions. The following two rows show the forward output images and the backward output images, respectively. To better illustrate the details of the backward output images, the last row in Fig. 2(c) further displays higher-contrast images of the backward direction B → A, covering a lower intensity range. These visual comparisons clearly demonstrate the success of the partially coherent diffractive unidirectional imager design, reproducing the input images in the forward direction while blocking them in the backward direction, as desired. We also quantified the performance of this unidirectional visual processor using various metrics, including the PCC, diffraction efficiency, and PSNR of the forward and backward directions, as shown in Figs. 2(d)–2(f). These metrics are calculated across 10,000 MNIST test objects, never used in training. The forward PCC is markedly higher than its backward counterpart [Fig. 2(d)]. Furthermore, the forward diffraction efficiency, η_F, is about fourfold higher than the backward diffraction efficiency, η_B, as shown in Fig. 2(e). A similar, desired contrast is also observed between the forward and backward PSNR values, as shown in Fig. 2(f).

3.2. Impact of N_train on the Performance of Partially Coherent Unidirectional Diffractive Imagers

To explore the influence of N_train on the performance of partially coherent unidirectional imagers, we trained seven diffractive processors with different N_train values ranging from 1 to 64 (see Fig. 3). All these diffractive models were trained and tested using the same partially coherent illumination (ℓ_test = ℓ_train), and in our blind testing, we used N_test = 2048. We observed that the diffractive processors trained with a larger N_train exhibited improved asymmetric imaging performance between the forward and backward directions, as quantified in Figs. 3(a)–3(c). To better quantify the unidirectional imaging capability of a partially coherent diffractive processor, we defined an FOM by calculating the sum of the diffraction efficiency ratio and the image PSNR ratio between the forward and backward imaging directions (see Sec. 2). The resulting FOM is reported as a function of N_train in Fig. 3(d), which reveals that increasing N_train improves the FOM of the unidirectional imager up to N_train = 16; beyond 16, the performance differences among different designs diminish. Figure 3(e) further presents the blind testing results for different N_train values. The first three rows of Fig. 3(e) display the input amplitude objects (used for both the forward and backward directions), the forward outputs, and the backward outputs, respectively. The last row in Fig. 3(e) also displays the backward outputs with increased contrast. As desired, all the backward output images of these diffractive models appear as noise with poor diffraction efficiency. Based on these performance analyses, we conclude that N_train = 16 is sufficient to design/train a partially coherent diffractive unidirectional imager, and we therefore adopted N_train = 16 in subsequent diffractive models to speed up the training process. Furthermore, Fig. S2 in the Supplementary Material illustrates the numerical performance of a diffractive model trained with N_train = 16 and tested under different N_test values, revealing similar performance results for N_test ranging from 16 to 2048. In the rest of our blind testing analysis, we used N_test = 2048, since the testing time is negligible compared to the training process.
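The FOM defined above reduces to a one-line computation; in the hedged sketch below, the psnr helper assumes intensities normalized to a peak of 1, which is our assumption.

```python
import jax.numpy as jnp

def psnr(out, target, peak=1.0):
    """PSNR of the diffractive output; peak = 1 assumes normalized intensities."""
    return 10.0 * jnp.log10(peak**2 / jnp.mean((out - target) ** 2))

def unidirectional_fom(eta_f, eta_b, psnr_f, psnr_b):
    """FOM = eta_F / eta_B + PSNR_F / PSNR_B (Sec. 2.4)."""
    return eta_f / eta_b + psnr_f / psnr_b
```

As an orientation point, the four-layer design of Fig. 2, whose forward diffraction efficiency is about fourfold higher than its backward counterpart, draws roughly four of its FOM units from the efficiency ratio alone.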
3.3. Impact of ℓ_train on the Performance of Partially Coherent Unidirectional Diffractive Imagers

In the previous subsections, we analyzed the performance of partially coherent unidirectional diffractive imager designs with a fixed training correlation length. To understand the impact of ℓ_train on unidirectional imaging performance, we trained six additional diffractive visual processors under partially coherent illumination with different correlation lengths, spanning from values near the diffraction limit of light to larger, more spatially coherent settings. Each diffractive visual processor was tested with the same phase correlation length as used in the training, i.e., ℓ_test = ℓ_train (see Fig. 4). To evaluate the performance of these diffractive visual processors in both the forward and backward directions, we calculated the PCC, diffraction efficiency, PSNR, and FOM using 10,000 MNIST test images [see Figs. 4(a)–4(d)]. Figure 4(b) shows a sharp decline in the diffraction efficiency when illuminated by partially coherent light with shorter phase correlation lengths, suggesting that, in these cases, the optimization was empirically dominated by the structural loss term rather than the diffraction efficiency-related loss term. A visualization of the blind testing results is also displayed in Fig. 4(e), where the first three rows depict the input amplitude objects, the forward outputs, and the backward outputs, respectively. For better comparison, the last row in Fig. 4(e) also shows the backward output images with increased contrast. Note that we used a reduced intensity range for the diffractive models with smaller phase correlation lengths due to their poor diffraction efficiency, as indicated by the red box in Fig. 4(e). According to the FOM values presented in Fig. 4(d) and the visualizations in Fig. 4(e), partially coherent diffractive processors exhibit very good unidirectional imaging performance when trained with correlation lengths above a critical value, ℓ_c, enabling unidirectional imaging with a large diffraction efficiency in the forward direction while suppressing image formation with a significantly reduced diffraction efficiency in the opposite direction. For diffractive processors trained with ℓ_train < ℓ_c, however, the unidirectional imaging performance diminishes: both the forward and backward output diffraction efficiencies of these models fall below 1%, accompanied by a reduced FOM of ≲2 [Fig. 4(d)]. When illuminated with partially coherent light with these shorter correlation lengths, images of the input patterns can be observed in both the forward and backward outputs.

To further explore the design characteristics of diffractive unidirectional imagers trained under different correlation lengths, Fig. 5 illustrates the phase profiles of the optimized diffractive layers of each design. We observe that the central parts of the diffractive layers trained using the two shortest correlation lengths are relatively flat, which indicates poor imaging performance, since these central parts are crucial for image formation. In contrast, the diffractive layers trained with larger correlation lengths (ℓ_train ≥ ℓ_c) exhibit a distinct topology in each diffractive layer, indicating better learning/convergence and more effective utilization of the independent degrees of freedom at each layer, which is at the heart of the better unidirectional imaging performance achieved by these models, as also shown in Fig. 4. So far, we used the same level of partial spatial coherence during both the training and blind testing stages, i.e., ℓ_test = ℓ_train.
Next, we explored the generalization performance of a unidirectional diffractive imager under different levels of partial coherence during the blind testing stage (see Fig. 6). Each line in Fig. 6 depicts the performance of a unidirectional imager design (trained using a specific ℓ_train) as a function of the testing correlation length, ℓ_test. These findings support our previous observations, revealing that the unidirectional diffractive imagers trained with ℓ_train ≥ ℓ_c perform well; these diffractive designs do not necessarily overfit to a specific ℓ value, exhibiting improved FOM as long as ℓ_test ≥ ℓ_c; also see Figs. S3–S5 in the Supplementary Material for some blind testing examples further supporting these conclusions. In Fig. 6 and Figs. S4 and S5 in the Supplementary Material, we also observe that none of these diffractive designs achieves a decent unidirectional imaging FOM when tested with the two shortest correlation lengths, i.e., when the phase correlation length of the illumination beam approaches the diffraction limit of light in air. For the designs trained with ℓ_train ≥ ℓ_c, this poor performance reported in Fig. 6 is due to the fact that their diffractive layers, shown in Fig. 5, overfitted to the relatively larger spatial coherence diameter of the illumination, failing external generalization and unidirectional imaging when ℓ_test approaches the diffraction limit. The failure of the diffractive designs trained with the two shortest correlation lengths, however, can be attributed to the lower spatial resolution and relative sparsity of our training dataset, which fails to cover phase correlation lengths closer to the diffraction limit of light. To better shed light on this, we trained two new diffractive unidirectional imagers, using the two shortest correlation lengths, with higher-resolution training image datasets featuring random intensity patterns (see Figs. S6 and S7 in the Supplementary Material as well as Sec. 2 for details). Figure S6(e) in the Supplementary Material reveals that the diffractive layers of this new unidirectional imager design better utilize the independent degrees of freedom at each layer and avoid relatively smooth, large regions at the center of each layer, which indicates a better optimization of the unidirectional imager. This improved performance is also evident in the comparisons provided in Figs. S6(a)–S6(d) and S7(a)–S7(d) in the Supplementary Material and Fig. 4(e), where the FOM of unidirectional imaging increased to 3.32 and 3.75, respectively, revealing marked improvements compared to our earlier designs trained with the same two correlation lengths. Figures S6 and S7 in the Supplementary Material also report the unidirectional imaging metrics of these designs, i.e., PCC, diffraction efficiency, and PSNR, calculated using 10,000 random test images, further supporting the improved performance of these new designs. These analyses underscore the crucial role of the training image dataset in the performance of a diffractive unidirectional imager, especially for phase correlation lengths approaching the diffraction limit of light.
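The coherence sweep summarized in Fig. 6 can be emulated with the forward-model sketch from Sec. 2.2, as outlined below. Here, the backward direction of the reciprocal design is approximated simply by traversing the same phase layers in reverse order (coordinate parity flips are omitted for brevity), and the σ_w values standing in for the different ℓ_test settings are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def coherence_sweep(phases, amplitude, sigma_w_values, key, n_phase=2048):
    """Evaluate one trained design under several test coherence levels (Fig. 6);
    reuses random_phase_screen() and diffractive_forward() from Sec. 2.2."""
    results = {}
    for sigma_w in sigma_w_values:
        key, sub = jax.random.split(key)
        keys = jax.random.split(sub, n_phase)
        screens = jax.vmap(lambda k: random_phase_screen(k, sigma_w))(keys)
        fields = amplitude[None] * jnp.exp(1j * screens)
        fwd = jax.vmap(diffractive_forward, in_axes=(None, 0))(phases, fields).mean(0)
        # Reversed layer order as a simplified stand-in for the backward pass.
        bwd = jax.vmap(diffractive_forward, in_axes=(None, 0))(phases[::-1], fields).mean(0)
        results[float(sigma_w)] = (fwd, bwd)  # feed these into the FOM of Sec. 2.4
    return results
```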
3.4. External Generalization of Partially Coherent Unidirectional Diffractive Imager Designs to New Image Datasets

Next, we showcase the external generalization capabilities of partially coherent diffractive unidirectional imager designs to other datasets that had never been used before. For this analysis, we used the EMNIST dataset,30 containing images of handwritten English letters, and a customized image dataset featuring various types of gratings; both of these were never used during the training, which only used handwritten digits. The external generalization test results shown in Fig. 7 are obtained using a unidirectional imager design trained with ℓ_train ≥ ℓ_c. The first three rows in Figs. 7(a) and 7(b) depict the input amplitude objects, the forward outputs, and the backward outputs, respectively, all using the same intensity range. The last row shows the backward output images with increased image contrast, clearly confirming the unidirectional imaging capability and the successful external generalization of the diffractive design to unseen input images from different datasets.

We also quantified the spatial resolution of this partially coherent unidirectional imager using various resolution test targets with different linewidths that were previously unseen (see Fig. 8). The minimum resolvable linewidth (and the corresponding period) of this unidirectional imager design is indicated by the results in Fig. 8. These results further validate the successful external generalization of the unidirectional imager design for general-purpose imaging operation exclusively in the forward direction, despite being trained solely on the MNIST image dataset. Additionally, we calculated the average gradient of the image cross sections to estimate the forward and backward point-spread functions (PSFs) of the unidirectional imager, shown in Fig. S8 in the Supplementary Material; a sketch of this calculation follows this subsection. Each line displayed in Fig. S8 in the Supplementary Material was averaged over five evenly spaced cross sections (along the x and y directions) within the resolution test targets derived from the diffractive output images in Fig. 8. The sharp gradient peaks in the forward direction and the flat gradients in the backward direction highlight the asymmetric visual information processing of our unidirectional imager, as desired, demonstrating a decent resolution in the forward imaging direction while effectively blocking image information in the backward direction. Note that such gratings or resolution test targets were never seen by the unidirectional imager during the training process, and these presented results reflect the external generalization performance. The resulting imaging quality could be further improved by incorporating higher-resolution targets into the optimization process.
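A minimal sketch of this cross-section gradient analysis (Fig. S8) is given below; the row indices and the number of averaged sections are left to the caller, and the implementation details are our assumptions.

```python
import jax.numpy as jnp

def average_cross_section_gradient(image, rows):
    """Mean |gradient| of horizontal cross sections taken at the given rows;
    sharp peaks indicate a tight forward PSF, while a flat curve indicates
    blocked image formation in the backward direction."""
    sections = image[jnp.asarray(rows), :]               # (n_rows, width)
    return jnp.abs(jnp.gradient(sections, axis=1)).mean(axis=0)
```

For example, averaging over five evenly spaced rows of a grating output, as done for Fig. S8, amounts to average_cross_section_gradient(output, rows=[r0, r1, r2, r3, r4]) with rows chosen inside the target region.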
4. Discussion and Conclusion

In this work, we introduced a unidirectional diffractive imager that works under spatially partially coherent light, designed to image in one direction with high power efficiency while blocking/suppressing the image formation in the opposite direction with reduced diffraction efficiency. The presented unidirectional imager comprises phase-only diffractive layers optimized by deep learning and spans only a short axial distance, making it very compact. Our analyses revealed that when the phase correlation length of the illumination source exceeds the critical value ℓ_c, the partially coherent unidirectional processor designs exhibit very good unidirectional imaging performance, enabling high-diffraction-efficiency imaging in the forward direction while inhibiting the image formation in the opposite direction with reduced diffraction efficiency. However, diffractive processors trained with ℓ_train < ℓ_c show diminished asymmetric transmission, with both the forward and backward output diffraction efficiencies falling below 1%, along with an FOM of ≲2. As a mitigation strategy, we demonstrated that this performance limitation can be addressed using a higher-resolution training image dataset, which improved the unidirectional imaging FOM to >3 even for correlation lengths approaching the diffraction limit. Furthermore, we successfully demonstrated both the internal and external generalizability of our unidirectional imager designs across different image datasets.

Being a reciprocal optical design, the asymmetric information transmission achieved by diffractive linear optical processors is based on the task-specific engineering of the forward and backward PSFs, which are spatially varying;31–34 this conclusion holds for spatially coherent, spatially incoherent, and partially coherent diffractive optical processors. In general, a diffractive optical processor can be trained, through image data, to approximate an arbitrary set of spatially varying PSFs between an input and an output FOV. For example, under spatially coherent illumination, with sufficient degrees of freedom within the diffractive layers, a diffractive processor can approximate any arbitrary PSF set h(x, y; x′, y′), where h is the spatially coherent, complex-valued PSF, and (x, y) and (x′, y′) define the coordinates of the output and input FOVs, respectively. Similarly, under spatially incoherent illumination, any arbitrary spatially varying intensity impulse response, |h(x, y; x′, y′)|², can be approximated through data-driven learning.33 The loss function and the training image data are important for the accuracy, spatial resolution, and generalization behavior of these linear transformations. For unidirectional imaging using lossy diffractive linear processors, however, the goal is to engineer the forward and backward spatially varying PSFs and make them "asymmetric," suppressing the image formation in the backward direction while maintaining decent images in the forward direction. There is no unique solution for this task, since infinitely many combinations of h_F(x, y; x′, y′) and h_B(x, y; x′, y′) can be devised to achieve a desired unidirectional FOM value, where h_F and h_B refer to the spatially coherent PSF sets for the forward and backward directions, respectively. It should be emphasized that spatially incoherent and partially coherent diffractive unidirectional imagers can all be modeled through the behavior of h_F and h_B under statistically varying illumination phase patterns, defined by the correlation length ℓ (see Sec. 2).
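One way to visualize these spatially varying PSFs for a trained design is to probe it with point sources, as sketched below (reusing angular_spectrum() and Z from the Sec. 2.1 sketch); treating the backward PSF as propagation through the reversed layer stack is a simplification on our part, since it omits the coordinate parity flips of a rigorous reciprocity analysis.

```python
import jax.numpy as jnp

def coherent_psf(phases, src_index, grid=(256, 256), backward=False):
    """Complex output field h(x, y; x', y') for a unit point source placed at
    input coordinate src_index = (x', y'); backward=True reverses the layer
    order to approximate the backward PSF of this reciprocal design."""
    delta = jnp.zeros(grid, dtype=jnp.complex64).at[src_index].set(1.0)
    layer_stack = phases[::-1] if backward else phases
    field = angular_spectrum(delta, Z)
    for phi in layer_stack:
        field = angular_spectrum(field * jnp.exp(1j * phi), Z)
    return field
```

Sampling src_index across the input FOV maps out the full spatially varying PSF set, making the engineered asymmetry between h_F and h_B directly visible.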
In addition to the degree of coherence of the incident light and the spatial features of the training image dataset, the performance of a unidirectional imager design is also influenced by several system properties, including the number of diffractive layers (Fig. S9 in the Supplementary Material), the number of trainable features in each diffractive layer, the wavelength and bandwidth of the source, the axial distance between successive layers (Fig. S10 in the Supplementary Material), and the pixel pitch. Increasing the number of trainable parameters, such as the number of layers and/or the number of diffractive features, enhances the overall performance of the system, albeit at the cost of longer training and fabrication/assembly times. Furthermore, by physically adjusting the structure of the diffractive layers, the presented design can be specifically tailored to perform unidirectional imaging with a desired magnification or demagnification factor.12 To analyze the impact of the number of diffractive layers on the performance of the unidirectional imager, we compared five diffractive designs featuring varying numbers of diffractive layers (K) ranging from 2 to 5, all illuminated with the same partially coherent light (see Fig. S9 in the Supplementary Material). The FOM for a two-layer (K = 2) diffractive unidirectional imager was 4.2, which increased to 6.3 for a five-layer (K = 5) unidirectional imager. Additionally, the output visualizations demonstrate that the forward output of the K = 5 unidirectional imager exhibits less noise and higher image contrast than that of the K = 2 unidirectional imager, as shown in Fig. S9(e) in the Supplementary Material. These results align well with our previous findings,31,32 indicating that increasing the number of diffractive layers significantly enhances the performance of a diffractive optical system.

The layer-to-layer distance (d) was empirically set to 11 mm for all the reported designs. To assess the impact of d on the performance of the unidirectional imager, we varied d over a range of values (see Fig. S10 in the Supplementary Material). As illustrated in Fig. S10(d) in the Supplementary Material, increasing d across this range resulted in a slight improvement in the unidirectional imaging performance, with the FOM rising from 5.9 to 6.1. The output examples in Fig. S10(e) in the Supplementary Material also indicate that the largest tested d yields a higher-contrast forward output image compared to the results with shorter distances.

To experimentally validate the presented concept, a partially coherent illumination can be obtained by filtering an incoherent source with a 2D aperture, which then illuminates the input object plane. According to the van Cittert–Zernike theorem, the spatial coherence diameter of this input light can be controlled by the aperture size and the axial distance between the aperture and the input object plane.13 Once the input field is modulated by a diffractive unidirectional imager, the output intensity profile under partially coherent illumination can be measured using an image sensor array at the output FOV.

Finally, the unidirectional imagers introduced in this work are highly compact along the axial direction, and they exhibit significant versatility that can be adapted to various parts of the electromagnetic spectrum; by scaling the diffractive features of each transmissive layer proportionally to the illumination wavelength, the same design can operate at different parts of the spectrum, including visible and infrared wavelengths, without the need to redesign the diffractive layers of the unidirectional imager. This adaptability is poised to facilitate various novel applications in diverse fields, such as asymmetric visual information processing and communication, potentially enhancing privacy protection and mitigating multipath interference within optical communication systems, among others.

5. Appendix: Training Details

The unidirectional diffractive imagers reported in this work are designed for spatially partially coherent illumination at a wavelength of λ. FOV A and FOV B (Fig. 1) share the same physical size and are discretized into the same number of pixels. Each diffractive layer contains a 2D array of trainable diffractive features, modulating only the phase of the transmitted field. The axial distance between any two successive diffractive planes of a unidirectional imager is set to 11 mm, corresponding to a numerical aperture of 0.96 within the diffractive system. All the unidirectional diffractive imagers were optimized using a training dataset composed of 55,000 MNIST handwritten digits, except for the diffractive processor illustrated in Fig. S6 in the Supplementary Material, which was trained using a higher-resolution image dataset consisting of random intensity patterns within an intensity range of [0, 1]; the total number of images in this random image dataset is equivalent to that of the MNIST dataset. To enhance the generalization capabilities of our designs, we randomly applied dilation or erosion operations to the original MNIST images using OpenCV's built-in functions "cv2.dilate" and "cv2.erode," respectively (see the sketch following this section). After data augmentation, the dilated, eroded, and original MNIST images were combined into a mixed dataset, which was then divided into training, validation, and testing sets containing 55,000, 5000, and 10,000 images, respectively, with no overlap. All the diffractive visual processors were trained using the optax optimization library in JAX, with a learning rate of 0.001 and a batch size of 32, over 50 epochs. All the models were trained and tested using JAX (version 0.4.1) on a GeForce RTX 4090 graphics processing unit (NVIDIA Inc.). Training a partially coherent diffractive unidirectional imager with four diffractive layers typically takes a few hours on this hardware.
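The sketch below illustrates the augmentation and a single optimization step under the stated settings (learning rate 0.001, batch size 32). Interpreting the optimizer as optax.adam, the 2×2 structuring element, and the reduced MSE-only loss are our assumptions; partially_coherent_intensity is reused from the Sec. 2.2 sketch.

```python
import cv2
import numpy as np
import jax
import jax.numpy as jnp
import optax

def augment(img_uint8, rng):
    """Randomly dilate or erode an MNIST image (cv2.dilate / cv2.erode), as
    described above; the 2x2 structuring element is an assumption."""
    kernel = np.ones((2, 2), np.uint8)
    if rng.random() < 0.5:
        return cv2.dilate(img_uint8, kernel, iterations=1)
    return cv2.erode(img_uint8, kernel, iterations=1)

optimizer = optax.adam(1e-3)  # learning rate 0.001, per the text

@jax.jit
def train_step(phases, opt_state, batch_amp, batch_target, key):
    def loss_fn(p):
        # A single PRNG key is shared across the batch here for brevity.
        out = jax.vmap(lambda a: partially_coherent_intensity(p, a, key))(batch_amp)
        # MSE-only stand-in; the full loss of Eqs. (9)-(15) adds the PCC,
        # efficiency, and backward-direction terms.
        return jnp.mean((out - batch_target) ** 2)
    loss, grads = jax.value_and_grad(loss_fn)(phases)
    updates, opt_state = optimizer.update(grads, opt_state, phases)
    return optax.apply_updates(phases, updates), opt_state, loss

# Example initialization for a four-layer design on a 256x256 grid (assumption):
# phases = jnp.zeros((4, 256, 256)); opt_state = optimizer.init(phases)
# rng = np.random.default_rng(0)   # drives the cv2-based augmentation
```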
Code and Data Availability

All the data and methods needed to evaluate the conclusions of this work are presented in the main text and the Supplementary Material. Additional data can be requested from the corresponding author. The codes used in this work employ standard libraries and scripts that are publicly available in JAX.

Author Contributions

A.O. conceived and initiated the research; G.M., C.S., J.L., L.H., Ç.I., F.O., and A.O. conducted the simulations. All the authors contributed to the preparation of this paper. A.O. supervised the research.
“Silicon Mie resonators for highly directional light emission from monolayer ,”
Nat. Photonics, 12 284 https://doi.org/10.1038/s41566-018-0155-y NPAHBY 1749-4885
(2018).
Google Scholar
P. P. Iyer et al.,
“Unidirectional luminescence from InGaN/GaN quantum-well metasurfaces,”
Nat. Photonics, 14 543 https://doi.org/10.1038/s41566-020-0641-x NPAHBY 1749-4885
(2020).
Google Scholar
Y. Shi, Z. Yu and S. Fan,
“Limitations of nonlinear optical isolators due to dynamic reciprocity,”
Nat. Photonics, 9 388 https://doi.org/10.1038/nphoton.2015.79 NPAHBY 1749-4885
(2015).
Google Scholar
D. L. Sounas and A. Alù,
“Non-reciprocal photonics based on time modulation,”
Nat. Photonics, 11 774 https://doi.org/10.1038/s41566-017-0051-x NPAHBY 1749-4885
(2017).
Google Scholar
J. D. Adam et al.,
“Ferrite devices and materials,”
IEEE Trans. Microwave Theory Tech., 50 721 https://doi.org/10.1109/22.989957
(2002).
Google Scholar
H. Dötsch et al.,
“Applications of magneto-optical waveguides in integrated optics: review,”
JOSA B, 22 240 https://doi.org/10.1364/JOSAB.22.000240
(2005).
Google Scholar
D. L. Sounas and A. Alù,
“Time-reversal symmetry bounds on the electromagnetic response of asymmetric structures,”
Phys. Rev. Lett., 118 154302 https://doi.org/10.1103/PhysRevLett.118.154302 PRLTAO 0031-9007
(2017).
Google Scholar
J. Li et al.,
“Unidirectional imaging using deep learning–designed materials,”
Sci. Adv., 9 eadg1505 https://doi.org/10.1126/sciadv.adg1505 STAMCV 1468-6996
(2023).
Google Scholar
D. Frese et al.,
“Nonreciprocal asymmetric polarization encryption by layered plasmonic metasurfaces,”
Nano Lett., 19 3976 https://doi.org/10.1021/acs.nanolett.9b01298 NALEFD 1530-6984
(2019).
Google Scholar
Q. Sun et al.,
“Asymmetric transmission and wavefront manipulation toward dual-frequency meta-holograms,”
ACS Photonics, 6 1541 https://doi.org/10.1021/acsphotonics.9b00303
(2019).
Google Scholar
B. Yao et al.,
“Dual-layered metasurfaces for asymmetric focusing,”
Photonics Res., 8 830 https://doi.org/10.1364/PRJ.387672
(2020).
Google Scholar
B. Bai et al.,
“Pyramid diffractive optical networks for unidirectional image magnification and demagnification,”
Light Sci. Appl., 13 178 https://doi.org/10.1038/s41377-024-01543-w
(2024).
Google Scholar
O. Mudanyali et al.,
“Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,”
Lab. Chip, 10 1417 https://doi.org/10.1039/c000453g LCAHAM 1473-0197
(2010).
Google Scholar
S. O. Isikman, W. Bishara and A. Ozcan,
“Partially coherent lensfree tomographic microscopy,”
Appl. Opt., 50 H253 https://doi.org/10.1364/AO.50.00H253 APOPAI 0003-6935
(2011).
Google Scholar
D. Tseng et al.,
“Lensfree microscopy on a cellphone,”
Lab. Chip, 10 1787 https://doi.org/10.1039/c003477k LCAHAM 1473-0197
(2010).
Google Scholar
J. A. Rodrigo and T. Alieva,
“Rapid quantitative phase imaging for partially coherent light microscopy,”
Opt. Express, 22 13472 https://doi.org/10.1364/OE.22.013472 OPEXFF 1094-4087
(2014).
Google Scholar
Y. Peng et al.,
“Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration,”
Sci. Adv., 7 eabg5040 https://doi.org/10.1126/sciadv.abg5040 STAMCV 1468-6996
(2021).
Google Scholar
Y. Chen, F. Wang and Y. Cai,
“Partially coherent light beam shaping via complex spatial coherence structure engineering,”
Adv. Phys. X, 7 2009742 https://doi.org/10.1080/23746149.2021.2009742
(2022).
Google Scholar
L. Liu et al.,
“Ultra-robust informational metasurfaces based on spatial coherence structures engineering,”
Light Sci. Appl., 13 131 https://doi.org/10.1038/s41377-024-01485-3
(2024).
Google Scholar
X. Lin et al.,
“All-optical machine learning using diffractive deep neural networks,”
Science, 361 1004 https://doi.org/10.1126/science.aat8084 SCIEAS 0036-8075
(2018).
Google Scholar
J. Hu et al.,
“Diffractive optical computing in free space,”
Nat. Commun., 15 1525 https://doi.org/10.1038/s41467-024-45982-w NCAOBW 2041-1723
(2024).
Google Scholar
D. Mengu et al.,
“Analysis of diffractive optical neural networks and their integration with electronic neural networks,”
IEEE J. Sel. Top. Quantum Electron., 26 1 https://doi.org/10.1109/JSTQE.2019.2921376 IJSQEN 1077-260X
(2020).
Google Scholar
B. Bai et al.,
“To image, or not to image: class-specific diffractive cameras with all-optical erasure of undesired objects,”
eLight, 2 14 https://doi.org/10.1186/s43593-022-00021-3
(2022).
Google Scholar
O. Korotkova, Random Light Beams: Theory and Applications, CRC Press,
(2013). Google Scholar
Y. Luo et al.,
“Computational imaging without a computer: seeing through random diffusers at the speed of light,”
eLight, 2 4 https://doi.org/10.1186/s43593-022-00012-4
(2022).
Google Scholar
Y. Li et al.,
“Quantitative phase imaging (QPI) through random diffusers using a diffractive optical network,”
Light Adv. Manuf., 4 1 https://doi.org/10.37188/lam.2023.017
(2023).
Google Scholar
J. W. Goodman, Statistical Optics, John Wiley & Sons,
(2015). Google Scholar
S. Lowenthal and D. Joyeux,
“Speckle removal by a slowly moving diffuser associated with a motionless diffuser,”
J. Opt. Soc. Am., 61 847 https://doi.org/10.1364/JOSA.61.000847 JOSAAH 0030-3941
(1971).
Google Scholar
M. W. Hyde,
“Controlling the spatial coherence of an optical source using a spatial filter,”
Appl. Sci., 8 1465 https://doi.org/10.3390/app8091465
(2018).
Google Scholar
G. Cohen et al.,
“EMNIST: extending MNIST to handwritten letters,”
in 2017 Int. Jt. Conf. Neural Networks IJCNN,
2921
–2926
(2017). Google Scholar
O. Kulce et al.,
“All-optical synthesis of an arbitrary linear transformation using diffractive surfaces,”
Light Sci. Appl., 10 196 https://doi.org/10.1038/s41377-021-00623-5
(2021).
Google Scholar
O. Kulce et al.,
“All-optical information-processing capacity of diffractive surfaces,”
Light Sci. Appl., 10 25 https://doi.org/10.1038/s41377-020-00439-9
(2021).
Google Scholar
M. S. S. Rahman et al.,
“Universal linear intensity transformations using spatially incoherent diffractive processors,”
Light Sci. Appl., 12 195 https://doi.org/10.1038/s41377-023-01234-y
(2023).
Google Scholar
Y. Li, J. Li and A. Ozcan,
“Nonlinear encoding in diffractive information processing using linear optical materials,”
Light Sci. Appl., 13 173 https://doi.org/10.1038/s41377-024-01529-8
(2024).
Google Scholar
|