Open Access | 17 September 2024

Multifocal microscopy for functional imaging of neural systems
Nizan Meitav, Inbar Brosh, Limor Freifeld, Shy Shoham
Abstract

Significance

Rapid acquisition of large imaging volumes with microscopic resolution is an essential unmet need in biological research, especially for monitoring rapid dynamical processes such as fast activity in distributed neural systems.

Aim

We present a multifocal strategy for fast, volumetric, diffraction-limited resolution imaging over relatively large and scalable fields of view (FOV) using single-camera exposures.

Approach

Our multifocal microscopy approach leverages diffraction to image multiple focal depths simultaneously. It is based on a custom-designed diffractive optical element suited to low magnification and large FOV applications and customized prisms for chromatic correction, allowing for wide bandwidth fluorescence imaging. We integrate this system within a conventional microscope and demonstrate that our design can be used flexibly with a variety of magnification/numerical aperture (NA) objectives.

Results

We first experimentally and numerically validate this system for large FOV microscope imaging (three orders-of-magnitude larger volumes than previously shown) at resolutions compatible with cellular imaging. We then demonstrate the utility of this approach by visualizing high-resolution, three-dimensional (3D) distributed neural network activity at volume rates up to 100 Hz. These demonstrations use genetically encoded Ca2+ indicators for functional neural imaging both in vitro and in vivo. Finally, we explore its potential in other important applications, including blood flow visualization and real-time, microscopic, volumetric rendering.

Conclusions

Our study demonstrates the advantage of diffraction-based multifocal imaging techniques for 3D imaging of mm-scale objects from a single-camera exposure, with important applications in functional neural imaging and other areas benefiting from volumetric imaging.

1. Introduction

Imaging is inherently a two-dimensional (2D) process; however, many biological samples of interest are three-dimensional (3D). This presents a challenge for traditional microscopy when fast, dynamic processes such as neuronal activity are the object of study. With these methods, volumetric imaging rates are hampered by the relatively slow movement of high-inertia focusing mechanisms.1–3 Recent technological advances have improved the speed of these focal plane shifts through rapid movement of the objective using piezo drives, "remotely" focusing the microscope with low-inertia devices, or completely inertia-less defocusing methods.4,5 Still, volumetric rates over relevant tissue volumes remain inadequate for most fast dynamical processes.5 Together with the need for imaging large numbers of neurons over large fields of view (FOV), dynamic volumetric imaging is a challenge that is not fully met by these "serial" acquisition methods.

To further advance volumetric imaging rates, other recent light sheet6–8 and light field-based methods leverage the incredible advances in imaging sensor technology for large-scale parallelism in the spatial domain and fast acquisition speeds. Light field microscopy (LFM)9,10 is one such spatially parallelized volumetric imaging technique that enables volumetric imaging of fluorescent and non-fluorescent samples. By measuring the light pattern at the focal plane of a lenslet array, both spatial and angular information about the object are gathered, and 3D information can be reconstructed.10 However, because the quality of the volumetric reconstruction depends on the number of lenslets, there is a fundamental trade-off between resolution and depth: as fewer camera pixels capture light from the object at a specific angular position when the lenslets are small, the spatial resolution of the LFM image is compromised relative to conventional 2D imaging and is not uniform across depths.10 Although benefiting from high-resolution camera sensors, LFM does not optimally use the high pixel counts of modern sCMOS sensors, and the time-consuming, computation-intensive reconstruction process10–12 severely underutilizes the real-time visualization capabilities theoretically afforded by high frame rate cameras.

Multifocus microscopy (MFM) techniques offer a possible solution to these primary limitations of LFM. These methods reassign depth information from multiple focal planes into a tiled 2D array that can be imaged simultaneously on a camera.13 Sub-cellular resolution can be easily maintained even over large volumes by leveraging modern, large-format imaging sensors, in a more efficient manner than LFM, with speeds limited only by the camera frame rate. These techniques have been used for decades in various forms, including using beam-splitting strategies with single14,15 or multiple cameras,15,16 multiple imaging lenses of varied focal lengths,17 and diffractive optical elements (DOEs).18–21 These methods also have the advantage of directly acquiring adjacent focal planes without the need for computational reconstruction, enabling real-time volumetric visualization. The DOE approach in particular is promising due to its simplicity, scalability to many simultaneous focal planes, and long history of use.13 Nevertheless, these methods are typically used to image tiny volumes for particle tracking applications.18–21 Although they have been demonstrated for fluorescence functional neuroimaging19 of a small volume (40×40×16 μm), they have not found widespread use in the large FOV volumetric fluorescence imaging typically used in systems neuroscience.5

Here, we present a powerful adaptation of the multifocal DOE approach for real-time acquisition of a large number of focal planes simultaneously, with diffraction-limited spatial resolution and, importantly, over large (mm-scale) volumetric FOVs. Similar to previous work,18 our technique achromatically diverts multiple focal plane images into different offset positions on the camera sensor so that they can be acquired in a single, fast (up to 10 ms) camera exposure [Fig. 1(a)]. The uniqueness of this work is its extension to low-magnification, low-NA configurations, enabling the extension of MFM to rapid, 3D functional imaging of neural activity across FOVs containing hundreds of neurons. The versatility of our implementation also enables other important applications, such as volumetric visualization of single blood cell flow in Zebrafish larvae and one-shot rendering of microscopic 3D objects.

Fig. 1

MFG-based multifocal imaging. (a) Imaging concept: 3D imaging is achieved by transforming the different focal plane images into a mosaic of images on the camera sensor. (b) Schematic of the optical design. The multiple-depth optical imaging setup is based on three customized optical elements. The first element (MFG) is a diffraction grating responsible for splitting the microscope image into N×N different images (diffraction orders) while adding to each order a phase curvature that refocuses a different focal depth. The CCG and prism correct the chromatic dispersion caused by the MFG. (c) Zoomed view of the customized elements described in panel (b). Both the CCG and prism are made of nine different panels, each corresponding to a different direction of the MFG diffracted light. The central panel is a plain optical window. (d) The mosaic of images on the camera sensor for the case of nine focal planes, with their associated diffraction orders from the grating. (e) Simulation validation of the nine PSFs obtained from a single point source at the focal plane. (f) Experimental validation of the method for the 10× microscope objective. The two extreme defocus cases are shown. (g) The stage shift distance versus the best focus of each sub-image. The experimental depth separation (graph slope, 16.46 μm) corresponds well to the original design (17 μm) and is well within the error bar range (the objective depth of field). (h) Zoomed view of the USAF resolution target, verifying that the spatial resolution was not reduced and remains better than 2 μm/cycle.


2. Materials and Methods

Similar to earlier work in this field, our multifocal system is added to a Nikon TE2000-U inverted microscope and consists of three customized elements that form N×N different images on the camera sensor (Andor Zyla 4.2 sCMOS), each from a different depth in the sample. The first element, a multifocal grating (MFG), is a 2D diffractive grating with two roles: splitting the light into N×N different orders and adding an inverse phase curvature to each order to compensate for the defocus at each imaged depth. The splitting of light into N² directions is achieved with a binary phase grating designed by a pixel-flip phase retrieval algorithm, adapted from previous work18 for an N×N grid of point sources. To obtain large FOV images at each depth, we designed our element to have nine different sub-images (N=3) with a light efficiency of 60%, after optimizing for multifocal performance factors such as the focal plane spacing and uniform brightness across sub-images [Fig. 1(b)]. This gives three orders of diffraction along both the x and y axes (m_x, m_y = 0, ±1). The inverse defocus phase compensation for each diffraction order is achieved by a geometrical distortion along each axis of the grating13 according to

Eq. (1)

$$\delta_x(x_p, y_p) = \frac{d}{\lambda}\, n\, \Delta z \sqrt{1 - \frac{x_p^2 + y_p^2}{(n f_{\rm obj})^2}}, \qquad \delta_y(x_p, y_p) = N \times \delta_x(x_p, y_p),$$

where (x_p, y_p) are the objective pupil plane coordinates, d is the grating period, n is the refractive index of the medium between the sample and the objective, λ is the wavelength, Δz is the object depth separation, and f_obj is the focal length of the microscope objective. Applying this distortion to the grating imparts a focus shift of Δz × (m_x + N × m_y) to each (m_x, m_y) diffraction order. Consequently, the MFG forms a matrix of images on the camera sensor, each at a different focal depth [Fig. 1(d)]. The MFG is relayed to the objective pupil plane to create a conjugate image of the object at the camera [Fig. 1(b)].
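For concreteness, the following minimal NumPy sketch evaluates the Eq. (1) distortion over the pupil and the resulting per-order focal shifts. The grating period, focal length, and other parameter values here are illustrative assumptions, not the manufactured design values.

```python
# A minimal sketch of the Eq. (1) grating distortion (assumed parameters).
import numpy as np

d = 30e-6        # grating period [m] (assumed)
lam = 520e-9     # central wavelength [m]
n = 1.0          # refractive index between sample and objective (air, assumed)
dz = 17e-6       # designed focal-plane separation [m]
f_obj = 20e-3    # objective focal length [m] (assumed)
N = 3            # 3x3 grid of diffraction orders

# Pupil-plane coordinates (x_p, y_p); pupil radius = NA * f_obj for NA 0.25.
r_pupil = 0.25 * f_obj
x = np.linspace(-r_pupil, r_pupil, 512)
xp, yp = np.meshgrid(x, x)

# Eq. (1): geometrical distortion of the grating lines that adds a defocus
# phase curvature to each diffraction order.
delta_x = (d / lam) * n * dz * np.sqrt(1 - (xp**2 + yp**2) / (n * f_obj)**2)
delta_y = N * delta_x

# Each order (m_x, m_y) then picks up a focal shift of dz * (m_x + N * m_y),
# i.e. nine equally spaced planes for m_x, m_y in {-1, 0, +1}.
for my in (-1, 0, 1):
    for mx in (-1, 0, 1):
        print(f"order ({mx:+d},{my:+d}) -> focal shift {dz*(mx + N*my)*1e6:+.0f} um")
```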

MFGs serve as an excellent solution for multiple-depth imaging of monochromatic light. However, for polychromatic light, such as the fluorescence emission from common fluorophores, the linear dependence of the diffraction angle on wavelength in a diffraction grating22 produces chromatic dispersion that distorts the image. To overcome this problem, we designed a chromatic correction grating (CCG) consisting of a customized blazed grating and prism. The MFG and CCG were custom-manufactured by Holo/Or. The blazed grating is made of N×N different panels, each designed to compensate for the chromatic dispersion of an individual MFG diffraction order. This is done by deflecting each MFG order (at the central wavelength) by the inverse of its diffraction angle. However, correcting the chromatic dispersion of the MFG in this way also cancels the angular separation of the focal planes, leading to overlapping images at the camera sensor. To compensate for this effect, a customized prism was added to deflect each order back to the angular direction originally imparted by the MFG.18 Both the chromatic correction grating and the prism are made of N² panels, each oriented according to the expected diffraction angle from the MFG (the central panel, corresponding to light that was not diffracted by the MFG, is a plain optical window). Figure 1(c) shows a drawing of each of the customized elements.
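As a toy illustration of why the CCG is needed, the sketch below evaluates the grating equation across the 520±20 nm emission band for an assumed MFG period; the numbers are ours, not the manufactured design.

```python
# A toy calculation (assumed period, not the actual design) of the MFG's
# chromatic dispersion: the diffraction angle grows linearly with wavelength,
# smearing a 40-nm fluorescence band across the sensor.
import numpy as np

d = 30e-6                                         # MFG period [m] (assumed)
wavelengths = np.array([500e-9, 520e-9, 540e-9])  # 520 +/- 20 nm band

for m in (0, 1):                                  # diffraction order, one axis
    theta = np.arcsin(m * wavelengths / d)        # grating equation
    spread = np.degrees(theta - theta[1])         # deviation from band center
    print(f"m={m}: angles {np.degrees(theta).round(3)} deg, "
          f"spread {spread[0]:.4f}..{spread[-1]:.4f} deg")

# Each CCG blazed panel cancels the central-wavelength deflection of its
# order; the downstream prism panel then restores that separation so the
# sub-images do not overlap, leaving only the small residual within-band
# dispersion.
```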

Unlike previous multifocal microscopes, which were aimed at high magnification via a specific high-power objective, our goal was to image large numbers of depths over large FOVs with the flexibility to use different objectives of various magnifications. The condition for using different objectives with the same setup is found by calculating the MFG size:

Eq. (2)

$$D_{\rm MFG} = M_{\rm obj} \cdot D_{\rm obj} = \frac{f_{\rm tube}}{f_1} \times (2\,{\rm NA} \times f_{\rm obj}),$$

where M_obj is the objective magnification, D_obj is the pupil diameter, f_tube is the microscope tube lens focal length, f_1 is the first relay lens focal length, and NA is the objective numerical aperture. Because the microscope tube lens and relay lens focal lengths are fixed and known, Eq. (2) can be written as a linear function of the NA and objective magnification:

Eq. (3)

$$D_{\rm MFG} = 2c\,{\rm NA} \times f_{\rm obj} = 2c\,{\rm NA} \times \frac{f_{\rm tube}}{M_{\rm obj}},$$
where c is a constant factor equal to f_tube/f_1. Taking into account that f_tube is also constant, different objectives can be used as long as the ratio NA/M_obj does not change. In our design, we used a 10×, NA 0.25 objective and a 4×, NA 0.1 objective (Olympus). Higher-NA objectives that do not satisfy the NA/M_obj constraint could be utilized, but this would alter the pupil size and require a redesigned MFG. Alternatively, the pupil could be demagnified with additional optics, at the expense of a change in the designed focal spacing. For the 10× objective, the focal depth separation between planes was designed to be 17 μm, giving a total depth of field of 8×17=136 μm, whereas for the 4× objective, the depth separation was restricted by Eq. (1) to 115 μm (total depth of 920 μm). To avoid overlap between different depths, the depth separation was chosen to be well above the depth of field of the objectives. The FOV of each sub-image on the sensor is 0.5×0.5 mm² for the 10× objective and 1.2×1.2 mm² for the 4× objective. The chromatic correction of our setup enables a bandwidth of 520±20 nm, which suits various genetically encoded fluorescent indicators (GCaMPs) as well as bright-field imaging applications.
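This selection rule is easy to check numerically. The sketch below (with assumed tube and relay focal lengths, since the actual values are not given above) verifies that the 10×/0.25 and 4×/0.1 objectives share the same NA/M_obj ratio and hence the same MFG-plane pupil diameter per Eqs. (2) and (3).

```python
# A minimal sketch of the objective selection rule of Eqs. (2)-(3).
# f_tube and f_1 are illustrative assumptions, not the system's values.
f_tube = 200e-3   # microscope tube lens focal length [m] (assumed)
f_1 = 100e-3      # first relay lens focal length [m] (assumed)
c = f_tube / f_1

def mfg_diameter(NA, M):
    # Eq. (3): D_MFG = 2 c NA f_obj, with f_obj = f_tube / M
    return 2 * c * NA * (f_tube / M)

for NA, M in [(0.25, 10), (0.10, 4)]:
    print(f"{M}x / NA {NA}: NA/M = {NA/M:.3f}, "
          f"D_MFG = {mfg_diameter(NA, M)*1e3:.1f} mm")
# Both objectives give NA/M = 0.025, so the same MFG can be reused; only the
# axial plane spacing changes with the objective (17 um -> 115 um above).
```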

3. Results

We first validated the physical characteristics of our method with a series of optical simulations. We verified the design of our MFG by simulating nine different point sources at the depths that correspond to the desired depth corrections. The point spread function obtained at each depth demonstrates the depth correction [Fig. 1(e)]. The validation of the chromatic correction, as well as the agreement with the camera sensor dimensions (to avoid overlap between the sub-images), was done by ray transfer matrix analysis, which gave us both the transverse and angular positions of each ray at each element position (see Fig. S1 in the Supplementary Material for the chromatic correction validation). To experimentally validate our multiple-depth capability, we imaged a 2D USAF resolution target while gradually moving the microscope out of focus, bringing each sub-image into focus in turn [Fig. 1(f)]. By measuring the vertical shift of the microscope stage from its focus plane, we can measure the focal depth separation between the planes of each sub-image. This analysis shows excellent agreement with our design [Fig. 1(g)], as well as the method's excellent spatial resolution [Fig. 1(h)].
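For readers unfamiliar with ray transfer matrix analysis, the toy sketch below propagates a single ray through an assumed final imaging lens to show how an angular deflection from one MFG order becomes a lateral sub-image offset on the sensor; the focal length and angle are illustrative, not the actual system values.

```python
# A bare-bones ray-transfer (ABCD) matrix sketch of the kind of check
# described above. All values are illustrative assumptions.
import numpy as np

def free_space(L):
    # Propagation over distance L: height += L * angle.
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    # Thin lens of focal length f: angle -= height / f.
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f2 = 150e-3               # final imaging lens focal length [m] (assumed)
theta = np.radians(1.0)   # angular kick imparted by one MFG order (assumed)

# The MFG sits in a pupil-conjugate plane one focal length before the lens;
# the camera sits one focal length after it, so an angular deflection at the
# MFG maps to a lateral sub-image offset of f2 * theta on the sensor.
system = free_space(f2) @ thin_lens(f2) @ free_space(f2)
height, angle = system @ np.array([0.0, theta])   # ray = [height, angle]
print(f"sub-image offset: {height*1e3:.2f} mm (expected {f2*theta*1e3:.2f} mm)")
```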

To test our system's ability to visualize large-scale neural networks, we first imaged 3D neural "Optonet" cultures,23–25 an optically accessible and sensitive bioengineered cortical neural network exhibiting spontaneous neural activity. Imaging of these samples was done using a 10× objective at a 5 Hz frame rate. To further improve the in-plane resolution, we deconvolved each of the frames with the corresponding 2D PSF, which was measured in advance with a sub-diffraction-limit bead.26 Applying the multiple-depth technique to the Optonet cultures enables the observation of different cells over the entire volume while preserving effective resolution, further allowing for the visualization of axons between cells at different depths [Fig. 2(a)]. We also measured the spontaneous neural activity of an Optonet culture expressing the genetically encoded Ca2+ indicator (GECI) GCaMP6m [Fig. 2(b)], demonstrating that sub-cellularly resolved, independent neural activity can be observed throughout a large volume. Figure 2(c) shows the spontaneous activity of two cells in the volume, exhibiting both synchronized and independent Ca2+ traces.
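A minimal sketch of this per-frame deconvolution step follows, using Richardson-Lucy iteration (Lucy, ref. 26) as implemented in scikit-image; here a synthetic Gaussian PSF and a random point image stand in for the measured bead PSF and the actual data.

```python
# Richardson-Lucy deconvolution sketch; synthetic stand-ins for real data.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

rng = np.random.default_rng(0)

# Sparse "cell" image blurred by a Gaussian PSF plus noise.
truth = np.zeros((128, 128))
truth[rng.integers(0, 128, 20), rng.integers(0, 128, 20)] = 1.0
psf = np.zeros((15, 15)); psf[7, 7] = 1.0
psf = gaussian_filter(psf, 2.0)
psf /= psf.sum()                                  # normalize the PSF
blurred = gaussian_filter(truth, 2.0) + 0.01 * rng.standard_normal((128, 128))
blurred = np.clip(blurred, 0, None)

# Iterative Richardson-Lucy deconvolution (ref. 26); in the paper the PSF is
# measured with a sub-diffraction bead rather than synthesized. The keyword
# is `num_iter` in recent scikit-image releases (`iterations` in older ones).
deconvolved = restoration.richardson_lucy(blurred, psf, num_iter=30)
```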

Fig. 2

Experimental results for the neural Optonet culture. (a) The different depth images of the Optonet, as captured in a single image of the multiple-depth microscope. The different cells in focus at different layers demonstrate the optical sectioning of this method. Note the axon in panel 6 that connects two cells at different depths (denoted by an arrow). Scale bar: 50 μm. (b) An example of two different depth images captured simultaneously for measurement of fast neural activity. (c) Spontaneous neural activity of two different cells [denoted by corresponding colors in panel (b)]. The results show both correlated and independent activity of the cells. Scale bar: 30 μm.


We next used our multifocal strategy in Tg(HuC:GCaMP5G)27 Zebrafish larvae expressing GCaMP5G pan-neuronally, in which network hyperactivity was induced by the application of 0.25 mM 4-AP, a potassium channel inhibitor. The larval zebrafish is a popular model for linking large-scale neural activity to simple behaviors. To better differentiate the neural activity from the background image, we subtracted the average image from each of the frames and visualized the Ca2+ events in pseudocolor [Fig. 3(a) and Video 1]. To further reduce the crosstalk effect of scattered light across depths, we weakly thresholded the activity pattern (passing 95% of the detected signal at each depth, after first applying Gaussian spatial smoothing). The obtained neuronal activity pattern shows peaks at different sub-regions in different planes, as well as the temporal progression of network activity through the volume at single-cell resolution [Figs. 3(a)–3(d) and Video 1]. Figure 3(b) shows the activity of two different areas inside the volume, demonstrating both synchronized and non-synchronized activity in those areas.
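One plausible reading of this processing chain (mean-image subtraction, Gaussian spatial smoothing, then a weak threshold passing 95% of the detected signal) is sketched below on synthetic data; the parameter values are assumptions, not the ones used for Fig. 3.

```python
# Activity-map preprocessing sketch for one depth plane (assumed parameters).
import numpy as np
from scipy.ndimage import gaussian_filter

# Stand-in movie for a single depth plane, shape (time, y, x).
movie = np.random.rand(200, 256, 256).astype(np.float32)

# Background subtraction: remove the time-averaged image from each frame.
df = movie - movie.mean(axis=0, keepdims=True)

# Spatial-only Gaussian smoothing (sigma=0 along the time axis).
df = gaussian_filter(df, sigma=(0, 1.5, 1.5))

# Weak threshold: keep the strongest 95% of the positive (detected) signal.
thresh = np.percentile(df[df > 0], 5)
activity = np.where(df > thresh, df, 0.0)
```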

Fig. 3

Simultaneous multiple-depth calcium imaging in vivo. (a) Demonstration of neural activity imaging at different depths of the larval zebrafish. Spatially and temporally distributed neural activity is shown across the midbrain and part of the forebrain with single-cell resolution. Scale bar: 100 μm, Δz=17 μm. (b) Neural activity of two single subregions [denoted by colored rectangles in panel (a)], demonstrating both independent and correlated neural activity. Inset: image showing putative single-cell-resolved functional activity. Scale bar: 50 μm.


In addition to its application in multiple-depth visualizations of neural networks and their activity, the fast acquisition rate of this technique, limited only by the camera's frame rate, can potentially be used in other interesting applications, such as visualizing 3D blood flow. To test this possibility, we imaged an agarose-embedded Zebrafish larva's tail at 100 frames per second (the camera readout limit), using brightfield imaging with the 4× objective. We confirmed that this enabled us to discern the flow of single blood cells at different depths, including the observation of high-velocity flow in veins as well as slower flow in capillaries (Video 2) at almost 1 mm depth. Finally, we also used our technique to surface-render a continuous 3D outline of a mesoscopic object from the nine independent focal depths. This was done by removing the out-of-focus information at each depth with a semi-automatic image-processing algorithm that seeks sharp edges to estimate the in-focus region, as sketched below. We demonstrate this for an ant head [Figs. 4(a) and 4(b); see also Video 3]. The 3D rendering was done using a commercial software package (Imaris, Oxford Instruments, UK).
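Since the semi-automatic algorithm itself is not detailed here, the sketch below substitutes a common shape-from-focus measure (local variance of the Laplacian) to build a per-pixel depth map and in-focus composite from a nine-plane stack; it is one reasonable stand-in, not the authors' method.

```python
# Shape-from-focus sketch: keep, per pixel, only the sharpest of the nine
# focal planes. Synthetic stack stands in for the nine sub-images.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

stack = np.random.rand(9, 512, 512)   # nine focal-plane sub-images (stand-in)

# Focus score per plane: locally averaged squared Laplacian (edge sharpness).
sharpness = np.stack([uniform_filter(laplace(p)**2, size=15) for p in stack])

# Per-pixel index of the sharpest plane; this is the recoverable depth map.
best = np.argmax(sharpness, axis=0)

# All-in-focus composite: pick each pixel from its sharpest plane.
in_focus = np.take_along_axis(stack, best[None], axis=0)[0]
```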

Fig. 4

3D reconstruction from a single multiple-depth image. (a) An ant head at nine different focal planes, taken in a single camera exposure. (b) Zoomed view of four different sub-images showing how different parts of the ant head are in focus at different depths. Scale bar: 75 μm. (c) 3D reconstruction of the head after removal of the out-of-focus data from (b).


4. Discussion and Conclusion

In this study, we demonstrated a technique that enables rapid imaging of large volumes at microscopic resolution, using both fluorescence and bright-field modalities. Inspired by previous work on diffractive multi-focus microscopes,18–21 the main advance in this work is the transition to the low-magnification/large-FOV domain, which now enables covering large (millimeter-scale) areas in each sub-image. This development moves multifocal techniques away from the small FOV, high NA, particle-tracking regime and into the realm of biological systems and large-scale functional neurophotonics, both of which benefit from volumetric imaging. This application domain requires both high spatial resolution and a large FOV, in addition to excellent temporal resolution. As an additional improvement, we also show the versatility of this approach across different microscope objectives by setting a selection rule for the objectives and demonstrating our approach at two different magnifications. These capabilities are becoming increasingly desirable in neuroscience to take advantage of next-generation GECIs and voltage indicators,28,29 which require extremely high volumetric rates for accurate sampling.

In comparison with LFM, which also enables scan-less volumetric imaging but at the price of lower spatial resolution, the trade-off in our approach is a limited field of view rather than reduced resolution. However, this limitation is generally much easier to overcome, as it only requires a larger imaging sensor. In addition, this multiple-depth technique has other distinct advantages: the extraction of the volume information is immediate, all depths have the same resolution (the native microscope resolution), no prior calibration of the setup is needed, and it is simple to use. At the theoretical level, one of the major advantages of our method over microscopic light field imaging is that the point spread function is space-invariant, as in most imaging techniques. This allows us to use conventional 3D deconvolution algorithms to reduce the out-of-focus crosstalk between the different planes. In future studies, we intend to investigate different algorithms for this goal.30 Although our method in its current implementation is most suitable for transparent or semi-transparent samples, in many cases, the effect of out-of-focus crosstalk can be reduced computationally (as described) or optically by designing a setup with a depth separation much larger than the depth of field of the microscope objective. Moreover, further development of near-infrared functional indicators will further mitigate these issues,31 especially in shallow neocortical structures.

In addition to the important applications of this approach in the field of neuroscience,29 the ability to measure large volumes rapidly without degradation of resolution has promising applications in the study of fluidics (see Video 2). It can also be advantageous in essentially all cases in which scan-based microscopic techniques are not fast enough to capture a dynamic process in all three dimensions. Moreover, the simplicity of this method, which enables scan-less 3D microscopic imaging in a single camera exposure, may promote it as a favorable alternative even in cases in which high temporal resolution is not essential (e.g., Fig. 4).

5. Appendix: Supplemental Videos

Disclosures

The authors declare that they have no competing interests.

Code and Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We thank Sara Abrahamsson for a helpful introduction to multifocal microscopy, Lior Appelbaum for providing the Zebrafish, and Justin Little for helping to edit and prepare the manuscript.

References

1. W. Denk, J. H. Strickler, and W. W. Webb, "Two-photon laser scanning fluorescence microscopy," Science 248, 73–76 (1990). https://doi.org/10.1126/science.2321027

2. P. J. Keller et al., "Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy," Science 322, 1065–1069 (2008). https://doi.org/10.1126/science.1162493

3. S. W. Paddock, "Confocal laser scanning microscopy," Biotechniques 27, 992–1004 (1999). https://doi.org/10.2144/99275ov01

4. J. Lecoq, N. Orlova, and B. F. Grewe, "Wide, fast, deep: recent advances in multiphoton microscopy of in vivo neuronal activity," J. Neurosci. 39, 9042–9052 (2019). https://doi.org/10.1523/JNEUROSCI.1527-18.2019

5. J. Wu, N. Ji, and K. K. Tsia, "Speed scaling in multiphoton fluorescence microscopy," Nat. Photonics 15, 800–812 (2021). https://doi.org/10.1038/s41566-021-00881-0

6. O. E. Olarte et al., "Decoupled illumination detection in light sheet microscopy for fast volumetric imaging," Optica 2, 702–705 (2015). https://doi.org/10.1364/OPTICA.2.000702

7. S. Quirin et al., "Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy," Opt. Lett. 41, 855–858 (2016). https://doi.org/10.1364/OL.41.000855

8. R. Tomer et al., "SPED light sheet microscopy: fast mapping of biological system structure and function," Cell 163, 1796–1806 (2015). https://doi.org/10.1016/j.cell.2015.11.061

9. T. Nöbauer and A. Vaziri, Light Field Microscopy for In Vivo Ca2+ Imaging, CRC Press (2020).

10. M. Broxton et al., "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21, 25418–25439 (2013). https://doi.org/10.1364/OE.21.025418

11. T. Nöbauer et al., "Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy," Nat. Methods 14, 811–818 (2017). https://doi.org/10.1038/nmeth.4341

12. R. Prevedel et al., "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy," Nat. Methods 11, 727–730 (2014). https://doi.org/10.1038/nmeth.2964

13. P. M. Blanchard and A. H. Greenaway, "Simultaneous multiplane imaging with a distorted diffraction grating," Appl. Opt. 38, 6692–6699 (1999). https://doi.org/10.1364/AO.38.006692

14. S. Xiao et al., "High-contrast multifocus microscopy with a single camera and z-splitter prism," Optica 7, 1477–1486 (2020). https://doi.org/10.1364/OPTICA.404678

15. S. Geissbuehler et al., "Live-cell multiplane three-dimensional super-resolution optical fluctuation imaging," Nat. Commun. 5, 5830 (2014). https://doi.org/10.1038/ncomms6830

16. L. Sacconi et al., "KHz-rate volumetric voltage imaging of the whole zebrafish heart," Biophys. Rep. 2, 100046 (2022). https://doi.org/10.1016/j.bpr.2022.100046

17. J. N. Hansen et al., "Multifocal imaging for precise, label-free tracking of fast biological processes in 3D," Nat. Commun. 12, 4574 (2021). https://doi.org/10.1038/s41467-021-24768-4

18. S. Abrahamsson et al., "Fast multicolor 3D imaging using aberration-corrected multifocus microscopy," Nat. Methods 10, 60–63 (2013). https://doi.org/10.1038/nmeth.2277

19. S. Abrahamsson et al., "Multifocus microscopy with precise color multi-phase diffractive optics applied in functional neuronal imaging," Biomed. Opt. Express 7, 855–869 (2016). https://doi.org/10.1364/BOE.7.000855

20. P. A. Dalgarno et al., "Multiplane imaging and three dimensional nanoscale particle tracking in biological microscopy," Opt. Express 18, 877–884 (2010). https://doi.org/10.1364/OE.18.000877

21. B. Hajj et al., "Whole-cell, multicolor superresolution imaging using volumetric multifocus microscopy," Proc. Natl. Acad. Sci. U. S. A. 111, 17480–17485 (2014). https://doi.org/10.1073/pnas.1412396111

22. A. Lipson, S. G. Lipson, and H. Lipson, Optical Physics, Cambridge University Press (2010).

23. H. Dana et al., "Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks," Nat. Commun. 5, 3997 (2014). https://doi.org/10.1038/ncomms4997

24. A. Marom et al., "Microfluidic chip for site-specific neuropharmacological treatment and activity probing of 3D neuronal 'Optonet' cultures," Adv. Healthcare Mater. 4, 1478–1483 (2015). https://doi.org/10.1002/adhm.201400643

25. A. Marom et al., "Spontaneous activity characteristics of 3D 'optonets'," Front. Neurosci. 10, 602 (2017). https://doi.org/10.3389/fnins.2016.00602

26. L. B. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79, 745 (1974). https://doi.org/10.1086/111605

27. V. Pérez-Schuster et al., "Sustained rhythmic brain activity underlies visual motion perception in zebrafish," Cell Rep. 17, 1098–1112 (2016). https://doi.org/10.1016/j.celrep.2016.09.065

28. A. S. Abdelfattah et al., "Neurophotonic tools for microscopic measurements and manipulation: status report," Neurophotonics 9, 013001 (2022). https://doi.org/10.1117/1.NPh.9.S1.013001

29. Y. Zhang et al., "Fast and sensitive GCaMP calcium indicators for imaging neural populations," Nature 615, 884–891 (2023). https://doi.org/10.1038/s41586-023-05828-9

30. P. Sarder and A. Nehorai, "Deconvolution methods for 3-D fluorescence microscopy images," IEEE Signal Process. Mag. 23, 32–45 (2006). https://doi.org/10.1109/MSP.2006.1628876

31. Y. Qian et al., "A genetically encoded near-infrared fluorescent calcium ion indicator," Nat. Methods 16, 171–174 (2019). https://doi.org/10.1038/s41592-018-0294-6

Biography

Nizan Meitav is a senior product engineer at Nvidia. He received his MSc and PhD degrees in physics from Technion, Israel Institute of Technology, and was a post-doctoral fellow in Technion’s Biomedical Engineering Department. His research interests include digital image processing, 3D microscopic imaging and novel imaging techniques.

Inbar Brosh is a lab manager in the Biomedical Engineering Department at the Technion, Israel. She received her MSc and PhD degrees in physiology from the Faculty of Health Sciences at Ben Gurion University and completed postdoctoral training at Haifa University. Her research interests are using fluorescent microscopy and electrophysiology for the development of medical tools.

Limor Freifeld is a senior lecturer of Biomedical Engineering at the Technion, Israel, and a Zuckerman faculty scholar. Her lab develops and applies microscopy technologies to reveal new structure-function relations in the nervous system. She received BSc and MSc degrees in biomedical engineering from Tel-Aviv University, both summa cum laude, and a PhD in electrical engineering from Stanford University. Limor was a postdoc at the Simons Center for the Social Brain at MIT.

Shy Shoham is a professor of neuroscience and ophthalmology at NYU School of Medicine, and co-director of NYU Tech4Health Institute. His lab develops photonic, acoustic and computational tools for neural interfacing. He holds a BSc in physics (Tel-Aviv University), a PhD in bioengineering (University of Utah), and was a Lewis-Thomas postdoctoral fellow at Princeton University. He serves on the editorial boards of SPIE Neurophotonics and the Journal of Neural Engineering, and he co-edited the Handbook of Neurophotonics. He is a fellow of SPIE.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Nizan Meitav, Inbar Brosh, Limor Freifeld, and Shy Shoham "Multifocal microscopy for functional imaging of neural systems," Neurophotonics 11(S1), S11515 (17 September 2024). https://doi.org/10.1117/1.NPh.11.S1.S11515
Received: 5 January 2024; Accepted: 22 August 2024; Published: 17 September 2024
KEYWORDS
Imaging systems

Biological imaging

Microscopes

Microscopy

Cameras

Objectives

3D image processing
