1. Introduction

As light from objects enters the eye (Fig. 1), it undergoes refraction, governed by Snell's law, at the transition between the outer air and the anterior surface of the cornea. The light front undergoes further refraction as it passes through the posterior surface of the cornea and enters the anterior chamber of the eye. The light front then passes through the pupillary opening of the iris to enter the eye lens (crystalline lens), within which several additional refractions take place. Upon exiting the eye lens, the light front travels through the vitreous cavity and ultimately strikes the retina of the eye (thereby forming an upside-down, left-right inverted, and warped retinal image), where it is received by the photoreceptors—the rods and cones—and converted into electrochemical signals. These electrochemical signals are subsequently processed and compressed by the neural network cascade of the retina, at which point the "retinal image" no longer exists as an image but rather as a spatiotemporal neural spike pattern. This spatiotemporal neural spike pattern is then transmitted via the optic nerve to the visual cortex, where it is further processed at multiple levels and merged with other sensory inputs, ultimately leading to what we would call "visual perception" in a rather abstract (virtual) manner [see, for example, Chaps. 1 to 5 in Ref. 1 for an interesting (popular) description of the concept of visual perception (cognition)].

We would therefore like to emphasize that visual perception is profoundly different from retinal images [and from point-spread functions (PSFs), for that matter, which are a useful metric for optical engineering], and that we do not attempt to simulate the retinal processing cascade, let alone the visual cortex, in this work. After all, even a successful simulation of these processing schemes would yield only a spatiotemporal neural spike pattern, not a visible image that would resemble "visual perception." The visual perception simulation technique discussed here is admittedly not biologically motivated. However, it is a rather straightforward technique that yields results people can relate to, judging from their own personal experience (e.g., subjects who have one emmetropic eye and one eye with a certain defect can confirm the visual experience by viewing the original image source and the simulated outcome with the respective other eye).

Computer ray-tracing permits simulations of the optical properties of the human eye (e.g., Refs. 2, 3, 4, 5, 6, 7) and of visual perception under various eye defects.4 Three-dimensional (3D) scenes or two-dimensional patterns of point sources, including digital photographs (images), serve as "light-giving" input objects for a ray-tracing simulation. In ophthalmic ray-tracing, the path of the light rays is calculated between these input objects and the retina of a computer eye model (e.g., Gullstrand's exact schematic eye model,8, 9 Fig. 2, or more elaborate eye models, e.g., Refs. 10, 11, 12, 13, 14) using Snell's law of refraction. The image formed on the retina is upside down, left-right inverted, and warped due to the curvature of the retina.
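As a brief worked illustration of the refraction step underlying such ray tracing (the 30-deg incidence angle is an assumed example value; the refractive indices for air and for the corneal stroma are the commonly quoted Gullstrand values, not parameters of the simulations below), Snell's law relates the angles of incidence and refraction at the air-cornea interface:

```latex
% Snell's law at the air--cornea interface (illustrative values only):
% n_1 \approx 1.000 (air), n_2 \approx 1.376 (corneal stroma, Gullstrand model)
n_1 \sin\theta_1 = n_2 \sin\theta_2
\quad\Longrightarrow\quad
\sin\theta_2 = \frac{1.000}{1.376}\,\sin 30^\circ \approx 0.363,
\qquad \theta_2 \approx 21.3^\circ .
```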
To obtain a first-order visual perception without having to simulate the neural function (processing) of the retina and the visual cortex, Fink4 devised a back-projection method in which the retinal image is projected back through an idealized eye (i.e., an emmetropic eye with only minor aberrations but no additional eye defects) onto an output screen of the same dimensions and at the same distance from the eye as the input object (Fig. 3). We have previously used Gullstrand's exact schematic eye model with improved parameters (Figs. 2 and 3),4 with six spherical refractive surfaces, a spherical retina, and an added iris (for determining the degree of both on-axis and off-axis line-of-sight aberrations to be simulated), to study visual perception under various eye conditions such as myopia (nearsightedness), hyperopia (farsightedness), cataract caused by microvacuoles, dislocated intraocular lens after cataract surgery, and refractive scotomata (visual field defects) caused by the usage of correction lenses in automated perimetry (a visual field testing method).4, 15, 16, 17, 18 While Gullstrand's exact schematic eye model is a qualitatively valuable, analytically calculable, and successful test environment for studying visual perception, it also has its limitations, predominantly because of the sphericity of its surfaces and, as a consequence, its very limited customizability. To obtain more realistic and quantitative results, aspheric surfaces must be considered (e.g., Refs. 10, 11, 12). Ray tracing with aspheric surfaces can still be analytically calculable: Langenbucher et al.,19 for example, report an algebraic method for ray tracing through the optical system of an eye with aspheric surfaces; their method is restricted to second-order surfaces (quadric surfaces).

In Sec. 2, we introduce a new ophthalmic ray-tracing tool for the simulation of visual perception, termed simEye,20 and discuss its underlying mathematical framework, which uses Zernike polynomials2, 21, 22, 23 to extend the ray-tracing process to "arbitrary," nonspherical surfaces. In Sec. 3, we give examples of simulations, obtained with simEye, of visual perception under instances of emmetropia, regular astigmatism, irregular astigmatism, and (central symmetric) keratoconus—eye conditions that are characterized by aspheric corneal surfaces.

2. Methods

2.1. Surface Modeling

Zernike polynomials are a set of orthogonal polynomials used in geometrical optics for representing different kinds of eye aberrations.2 More recently, their use for modeling or fitting the refractive surfaces responsible for the aberrations has been proposed (e.g., Ref. 23). The Zernike polynomials can be defined in cylindrical coordinates as a function of $r$ (radius), $\varphi$ (polar angle), and $z$ (height),2 with each polynomial given by the product of a radial polynomial and a harmonic term,

$$Z_i(r, \varphi) = R_n^{|m|}(r) \begin{cases} \cos(m\varphi), & m \ge 0, \\ \sin(|m|\varphi), & m < 0, \end{cases}$$

where $R_n^{|m|}(r)$ is the radial Zernike polynomial of radial order $n$ and azimuthal frequency $m$, and the double index $(n, m)$ is mapped onto a single index $i$.

We assume that cylindrical coordinates are provided for a set of points on the surface to be fitted; hence, each point on this surface is specified by a radius $r_j$, an angle $\varphi_j$, and a height $z_j$. In the case of eye surfaces, such surface data can originate either from biometric measurements performed on real eyes or from model eyes. The goal is to obtain a surface that best approximates the surface defined by the input points. This is done by calculating the height from the Zernike polynomial fit with $r$ and $\varphi$ as input parameters,

$$z(r_j, \varphi_j) = \sum_{i=0}^{p} c_i\, Z_i(r_j, \varphi_j), \qquad j = 1, \ldots, n,$$

where $n$ is the number of input points, $p$ is the number of polynomials (from index 0 to index $p$), $z_j$ is the height of the input point at $(r_j, \varphi_j)$, and $c_i$ is the coefficient of the $i$'th Zernike polynomial.
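To make the surface representation concrete, the following minimal C sketch evaluates a Zernike-based surface height $z(r,\varphi) = \sum_i c_i Z_i(r,\varphi)$ for the first six polynomials in a common single-index ordering. The ordering, the number of terms, the coefficient values, and all function names are illustrative assumptions and are not taken from the simEye implementation.

```c
#include <math.h>
#include <stdio.h>

/* First six Zernike polynomials in a common single-index ordering
 * (piston, tilt x/y, defocus, astigmatism 0/45 deg); the ordering is
 * an assumption for illustration, not the one used inside simEye.   */
static double zernike(int i, double r, double phi)
{
    switch (i) {
    case 0: return 1.0;                                   /* piston             */
    case 1: return r * cos(phi);                          /* tilt (x)           */
    case 2: return r * sin(phi);                          /* tilt (y)           */
    case 3: return 2.0 * r * r - 1.0;                     /* defocus            */
    case 4: return r * r * cos(2.0 * phi);                /* astigmatism 0 deg  */
    case 5: return r * r * sin(2.0 * phi);                /* astigmatism 45 deg */
    default: return 0.0;
    }
}

/* Surface height z(r, phi) = sum_i c_i * Z_i(r, phi) for p+1 coefficients. */
static double surface_height(const double *c, int p, double r, double phi)
{
    double z = 0.0;
    for (int i = 0; i <= p; ++i)
        z += c[i] * zernike(i, r, phi);
    return z;
}

int main(void)
{
    /* Hypothetical coefficients (mm) for a gently curved, astigmatic surface. */
    double c[6] = { 0.0, 0.0, 0.0, 0.25, 0.05, 0.0 };
    printf("z(0.5, 0) = %f mm\n", surface_height(c, 5, 0.5, 0.0));
    return 0;
}
```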
We perform a least-squares minimization of the average distance between the surface to be fitted and the fitting surface created by the Zernike polynomials (e.g., Ref. 22), expressed as the mean squared error

$$\mathrm{MSE} = \frac{1}{n} \sum_{j=1}^{n} \left[ z_j - \sum_{i=0}^{p} c_i\, Z_i(r_j, \varphi_j) \right]^2 .$$

The optimal fit (minimal MSE) is obtained when all partial derivatives with respect to the coefficients vanish,

$$\frac{\partial\, \mathrm{MSE}}{\partial c_k} = 0, \qquad k = 0, \ldots, p .$$

From this system of equations, we can extract the values of all the coefficients $c_i$ for the Zernike polynomials via matrix inversion and multiplication (e.g., Refs. 22, 23, 24, 25).

This mathematical formulation has been implemented as a software package, termed simEye, in standard American National Standards Institute (ANSI) C and runs on UNIX platforms such as Linux and Mac OS X. The surface input data are arranged in three columns (one column for each coordinate) that define the surface in cylindrical coordinates. simEye then attempts to fit the given surface, starting with the user-specified number of Zernike polynomials. The measure chosen for evaluating the quality of the surface fit is the root-mean-square error (RMSE), defined as

$$\mathrm{RMSE} = \sqrt{\mathrm{MSE}} = \sqrt{ \frac{1}{n} \sum_{j=1}^{n} \left[ z_j - \sum_{i=0}^{p} c_i\, Z_i(r_j, \varphi_j) \right]^2 }$$

(other error measures may be applied as well). If the RMSE between the given surface and the calculated fitting surface exceeds a maximum accuracy error prespecified by the user, the number of polynomials is incremented by one, and the software tries to fit the surface with the new, extended set of polynomials. This loop is repeated until the RMSE reaches the user-defined accuracy threshold, at which point the coefficients for each polynomial of the best surface fit are returned to the user. It should be cautioned that noise fitting (overfitting) can occur if the accuracy threshold is chosen too aggressively; this manifests itself in numerical instabilities and inefficient convergence behavior of the fitting procedure. In general, the accuracy threshold should be governed by the magnitude of the visual effect to be studied [e.g., the visual effect of a laser-assisted in situ keratomileusis (LASIK) ablation pattern versus the visual effect of astigmatism].

2.2. Computer Ray-Tracing

In the ray-tracing procedure used here (for more details, see Ref. 4), light rays are propagated from an input object (e.g., a digital image) through the eye under investigation (i.e., the eye with modeled eye defects), taking Snell's law of refraction into account, to form a retinal image that is upside down, left-right inverted, and warped (due to the retinal curvature). This is evidently not how we visually experience (perceive) the world. To obtain a first-order realistic visual perception without having to simulate the neural function (processing) of the retina and the visual cortex, as mentioned in Sec. 1, this retinal image is then back-propagated through a defect-free, idealized eye (e.g., Gullstrand's exact schematic eye with no additional eye defects, Fig. 2) to an output screen of the same dimensions and at the same distance from the eye as the input object (Fig. 3).4, 15 This ray-tracing procedure unwarps the retinal image, flips it right-side up, and, from a visual perception perspective, projects the image to where it originates (i.e., "we see things where they are").
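For completeness, the following is a sketch of the "matrix inversion and multiplication" step referred to above; the symbols $A$, $\mathbf{b}$, and $\mathbf{c}$ are introduced here purely for illustration. Setting each partial derivative of the mean squared error to zero yields a linear system (the normal equations) in the coefficients:

```latex
% Normal equations of the least-squares Zernike fit (derivation sketch).
\frac{\partial\,\mathrm{MSE}}{\partial c_k}
  = -\frac{2}{n}\sum_{j=1}^{n}
    \Bigl[ z_j - \sum_{i=0}^{p} c_i\, Z_i(r_j,\varphi_j) \Bigr] Z_k(r_j,\varphi_j) = 0
  \qquad (k = 0,\dots,p)
\;\;\Longrightarrow\;\;
\sum_{i=0}^{p}
  \underbrace{\Bigl[\sum_{j=1}^{n} Z_k(r_j,\varphi_j)\,Z_i(r_j,\varphi_j)\Bigr]}_{A_{ki}}\, c_i
  = \underbrace{\sum_{j=1}^{n} z_j\, Z_k(r_j,\varphi_j)}_{b_k},
\quad\text{i.e.,}\quad A\,\mathbf{c} = \mathbf{b},\qquad \mathbf{c} = A^{-1}\mathbf{b}.
```

In practice, solving $A\,\mathbf{c} = \mathbf{b}$ with a standard linear solver is numerically preferable to forming $A^{-1}$ explicitly.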
With the geometric-optics ray tracing described above, it is not sufficient to simply flip the retinal image both vertically and horizontally in order to obtain a first-order simulation of visual perception, for the following reasons: (1) the retinal image is warped due to the curvature of the retina (eyeball), in contrast to our experience of visual perception; and (2) while the source image is, for computer-simulation purposes, a digital image and hence consists of well-defined, discrete, and equidistant individual pixels, the computer-simulated retinal image is characterized by a subpixel resolution and therefore no longer adheres to well-defined, equidistant individual pixels. A horizontal and vertical flip operation on the retinal image would only be possible if one were to "average out" the subpixel resolution in order to arrive at a pixel-based (i.e., grid-based) retinal image. Such an averaging procedure would produce artifacts and, in addition, would not remove the warping of the retinal image; in other words, it would not produce a realistic experience of visual perception.

Because light rays could fail to hit the output image due to refraction while propagating through both the eye with modeled defects and the defect-free eye used for back-propagation, the path of the light rays is reversed for practical purposes, making the output image the starting point for the ray-tracing procedure (Fig. 3).4, 15 This means that the light rays are propagated from the pixels of the output image through Gullstrand's exact schematic eye (i.e., the defect-free eye) and subsequently through the eye with the modeled eye defects toward the input source image. This guarantees that all pixels of the output image will be filled with color information from the input source image, thereby reducing void light-ray calculations. To describe the trajectory of an individual light ray, several mathematical and optical calculations must be performed for each refractive surface encountered along its path. These can be summarized as follows:
2.2.1. Calculation of the intersection point between the light ray and the corresponding eye surface

A light ray is described in a linear algebraic, parameterized form with a point of origin $\mathbf{o}$ and a (normalized) direction vector $\mathbf{d}$ multiplied by a scalar parameter $t$:

$$\mathbf{x}(t) = \mathbf{o} + t\,\mathbf{d}.$$

Different values of $t$ define all points along the trajectory of the light ray. Recalling the mathematical formulation of the eye surfaces, a surface based on Zernike polynomials is defined as follows:

$$z(r, \varphi) = \sum_{i=0}^{p} c_i\, Z_i(r, \varphi),$$

or, in vector form,

$$\mathbf{s}(r, \varphi) = \bigl( r\cos\varphi,\ r\sin\varphi,\ z(r, \varphi) \bigr).$$

To determine the intersection point between the light ray and the eye surface, the following expression must be solved for $(t, r, \varphi)$ (details are given in Ref. 26):

$$\mathbf{o} + t\,\mathbf{d} = \mathbf{s}(r, \varphi).$$

This can be accomplished numerically with the 3D Newton-Raphson method.25, 26

2.2.2. Calculation of the corresponding surface normal at the intersection point

With the intersection point determined, we proceed to calculate the normal of the corresponding surface at this point. This is necessary to obtain the new direction vector of the refracted light ray in the next step. The surface normal is defined as

$$\mathbf{n} = \frac{\partial \mathbf{s}}{\partial r} \times \frac{\partial \mathbf{s}}{\partial \varphi},$$

where $\partial \mathbf{s}/\partial r$ is the partial derivative of the fitting surface $\mathbf{s}(r, \varphi)$ with respect to $r$, and $\partial \mathbf{s}/\partial \varphi$ is the partial derivative of the fitting surface with respect to $\varphi$. These partial derivatives can be obtained analytically by differentiating Eqs. 1, 2 with respect to $r$ and $\varphi$.

2.2.3. Calculation of the direction of the refracted light ray by applying Snell's law of refraction

To determine the direction of the refracted light ray, we apply Snell's law of refraction,

$$n_1 \sin\theta_1 = n_2 \sin\theta_2,$$

where $\theta_1$ is the angle between the original light ray (before refraction) and the surface normal at the intersection point, $\theta_2$ is the angle between the refracted light ray and the normal, and $n_1$ and $n_2$ are the refraction indices on either side of the refracting surface. Since the surface normal is already known, one needs only to calculate the new refracted angle $\theta_2$ and, from that, the new direction vector for the light ray (details are given in Ref. 26). Once the new direction vector is obtained, it only remains to replace the previous one with the new one and to set the point of origin of the light ray to the intersection point with the last surface.
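To illustrate the three steps above, the following self-contained C sketch traces a single ray through one Zernike-described surface: it finds the ray-surface intersection with a simple one-dimensional Newton iteration in the ray parameter t (a simplification of the three-dimensional Newton-Raphson formulation used in simEye), builds the surface normal from numerically estimated partial derivatives, and applies the vector form of Snell's law. All function and variable names, the example coefficients, and the numerical tolerances are illustrative assumptions, not the simEye implementation.

```c
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3   add(Vec3 a, Vec3 b)     { return (Vec3){a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3   scale(Vec3 a, double s) { return (Vec3){a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   normalize(Vec3 a)       { double l = sqrt(dot(a, a)); return scale(a, 1.0 / l); }

/* Illustrative surface: z(r,phi) = c3*(2r^2 - 1) + c4*r^2*cos(2*phi) (defocus + astigmatism). */
static double surface_z(double r, double phi)
{
    const double c3 = 0.25, c4 = 0.05;                /* hypothetical coefficients (mm) */
    return c3 * (2.0 * r * r - 1.0) + c4 * r * r * cos(2.0 * phi);
}

/* Signed distance between the ray point at parameter t and the surface. */
static double F(Vec3 o, Vec3 d, double t)
{
    Vec3 p = add(o, scale(d, t));
    double r = sqrt(p.x * p.x + p.y * p.y);
    double phi = atan2(p.y, p.x);
    return p.z - surface_z(r, phi);
}

int main(void)
{
    Vec3 o = {0.1, 0.0, -1.0};                        /* ray origin               */
    Vec3 d = normalize((Vec3){0.05, 0.0, 1.0});       /* ray direction            */

    /* Step 1: intersection via 1-D Newton iteration in t (numerical derivative). */
    double t = 0.0;
    for (int it = 0; it < 50; ++it) {
        double f  = F(o, d, t);
        double fp = (F(o, d, t + 1e-6) - f) / 1e-6;
        double dt = f / fp;
        t -= dt;
        if (fabs(dt) < 1e-10) break;
    }
    Vec3 p = add(o, scale(d, t));                     /* intersection point       */

    /* Step 2: surface normal from numerically estimated partials of z(x,y). */
    double h = 1e-6;
    double zx = (surface_z(hypot(p.x + h, p.y), atan2(p.y, p.x + h)) -
                 surface_z(hypot(p.x - h, p.y), atan2(p.y, p.x - h))) / (2.0 * h);
    double zy = (surface_z(hypot(p.x, p.y + h), atan2(p.y + h, p.x)) -
                 surface_z(hypot(p.x, p.y - h), atan2(p.y - h, p.x))) / (2.0 * h);
    Vec3 n = normalize((Vec3){-zx, -zy, 1.0});        /* normal of z = z(x,y)     */
    if (dot(n, d) > 0.0) n = scale(n, -1.0);          /* make normal face the ray */

    /* Step 3: Snell's law in vector form: n1 sin(theta1) = n2 sin(theta2). */
    const double n1 = 1.000, n2 = 1.376;              /* e.g., air -> cornea      */
    double eta  = n1 / n2;
    double cos1 = -dot(n, d);
    double k    = 1.0 - eta * eta * (1.0 - cos1 * cos1);
    if (k < 0.0) { puts("total internal reflection"); return 0; }
    Vec3 refr = add(scale(d, eta), scale(n, eta * cos1 - sqrt(k)));

    printf("hit  = (%.4f, %.4f, %.4f)\n", p.x, p.y, p.z);
    printf("refr = (%.4f, %.4f, %.4f)\n", refr.x, refr.y, refr.z);
    return 0;
}
```

In the full procedure, the refracted direction and the intersection point simply become the origin and direction of the ray for the next surface, and the three steps are repeated surface by surface.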
3. Results

We have performed computer ray-tracing simulations, using simEye,20 of visual perception under instances of the following four eye conditions: emmetropia, regular astigmatism, irregular astigmatism, and (central symmetric) keratoconus. All four eye conditions are characterized by aspheric corneal surfaces. To create an eye model to be used with the simEye ray-tracing procedure, we have, without loss of generality, rebuilt Gullstrand's exact schematic eye model (Fig. 2). We have fitted its spherical refractive surfaces (with the exception of the anterior corneal surface, which is modeled according to the respective eye condition) and the spherical retina with respective sets of Zernike polynomials, both for distance viewing (see Table 1 for surface parameters8) and for maximal accommodation (see Table 2 for surface parameters8). We would like to emphasize that any of these surfaces can be replaced by surfaces fitted to more elaborate and realistic eye models (e.g., Refs. 10, 11, 12, 13), to otherwise modeled data, or to actual biometric data27 (see also Sec. 4).

Table 1. Gullstrand exact schematic eye parameters for focus at infinity (≥5 m) (Ref. 8).
Table 2. Gullstrand exact schematic eye parameters for maximal accommodation (10.23 cm) (Ref. 8).
It is important to note that, in the following, the correct viewing distance for the simulated visual perceptions depicted in Figs. 4, 5, 6, 7, 8 is a few centimeters from the picture plane with one eye covered (i.e., monocular viewing). The reason for this is that in the following simulations (see Table 3) the source image has a height of and a width of and is viewed from a distance of (distance viewing) with the eye to be simulated (i.e., the eye with the eye defect). Therefore, we simulate a visual field of about radially, which allows for both on-axis and off-axis line-of-sight aberrations. The dimension of all simulated images is , including the original image source (Fig. 4, top). The simulation results for maximal accommodation are not shown. The diameter of the pupil is user-adjustable in simEye and was set to for all depicted simulation results (see Table 3).
Table 3. Parameters and data for surface fitting (columns 1 and 2) and ray tracing with simEye (columns 3 to 6). The computation times are based on an Apple PowerMac G5 Dual 2 GHz with 8 GB of RAM running Mac OS X Tiger. Only one CPU was used per ray-tracing simulation with simEye.
Table 3 summarizes the parameters, data, and results for surface fitting and ray tracing with simEye for all the eye conditions simulated above.

4. Discussion

The computer ray-tracing tool simEye presented here permits simulations of the optical properties of the human eye (both on-axis and off-axis line-of-sight aberrations). Furthermore, it allows for a first-order approximation of the visual perception under various eye defects without the need to simulate the neural processing of the retina and the visual cortex. This is accomplished by back-projecting, through an idealized eye, the retinal image of an object or scene produced by an eye with a certain eye defect. Obviously, the choice of an unimpaired Gullstrand exact schematic eye model as the back-projecting idealized eye introduces spherical aberrations in addition to the visual effects produced by the eye with eye defects. However, simulations using a Gullstrand eye with an aspheric cornea [see case 1(b) in Sec. 3] as the back-projecting idealized eye show that this is a minor effect that does not impact the overall visual perception, in particular in the central visual field. A more sophisticated eye model with reduced spherical aberration could be employed to further reduce this side effect.

simEye, in contrast to earlier ray-tracing simulations,4, 15, 16, 17, 18 permits the introduction of arbitrary surfaces (e.g., aspheric surfaces), represented or fitted by a set of Zernike polynomials. Furthermore, the usefulness of Zernike polynomials for fitting actual surface data, in addition to wave-front data (optical aberrations),21, 22 is demonstrated, in agreement with Carvalho's findings.23 It should be pointed out that, although the corneal surfaces considered in this study are mathematically modeled (i.e., somewhat artificial), the presented Zernike-based surface-fitting framework is equally applicable to realistic, biometrically measured data of any surface within the eye under investigation (e.g., Ref. 27). Furthermore, surfaces that do not exhibit symmetries with respect to the optical axis, such as tilted or laterally dislocated surfaces, can be fitted as well by expanding the surface-fitting framework to incorporate rotation and translation matrices (e.g., Ref. 24) that operate on the sets of Zernike polynomials used for the fits. This would allow, for example, for the simulation of off-center, asymmetric keratoconus conditions. Moreover, additional surfaces can be introduced, as necessary for more elaborate and realistic eye models (e.g., Refs. 11, 12). This would allow, for example, for the simulation of a multishell crystalline lens with varying refractive index.

The Stiles-Crawford effect28 is currently not considered in simEye, since the neural function of the retinal receptors is not modeled. The Stiles-Crawford effect describes the angular dependence of retinal sensitivity: light rays that enter the pupil near its center (axial light), and are thus nearly parallel to the retinal receptors, are more effective than oblique rays that enter the pupil near its margins (off-axis light). Therefore, light passing through the periphery of the pupil is less efficient at stimulating vision than light passing near the center of the pupil (i.e., axial light forms sharper images than off-axis light), which effectively increases the depth of focus. Since the Stiles-Crawford effect is not modeled, the spherical and aspherical aberrations exhibited in the periphery (i.e., at high eccentricities) of the simulated visual perceptions may be overestimated.
One feasible way to mitigate the absence of the Stiles-Crawford effect within simEye is to choose a small pupil diameter for the simulations, because the Stiles-Crawford effect, like most aberrations, becomes more pronounced with larger pupils. Similarly, the contrast sensitivity function (CSF), sometimes also called visual acuity, is currently not considered in simEye. The CSF describes how sensitive we are to the various spatial frequencies of visual stimuli: if the spatial frequency of a stimulus is too high, we are no longer able to recognize the stimulus pattern because of the limited number of photoreceptors in the retina. Since the density of photoreceptors in the retina drops exponentially toward higher eccentricities, the effect of the CSF, akin to the Stiles-Crawford effect, can be mitigated in simEye by again using a small pupil size for the simulations. Furthermore, the chromatic aberration of the human eye is currently not considered in simEye.

The ray-tracing procedure outlined here is ideally suited for parallel processing on a cluster computer (or on computers accessible via the Internet, akin to, e.g., SETI@home, http://setiathome.ssl.berkeley.edu/), since each light ray can be calculated independently of the others. Thus, an almost perfectly linear speedup with the number of available central processing units (CPUs) can be expected for each still image to be generated with simEye. The same holds true for the generation of ray-traced movies,20 where each frame of a movie can be calculated independently of the others. Using simEye as a front end, we have been able to create an optimized software package that allows for the generation of animated movies at a rate of about , that is, one movie frame every , enabling near real-time performance.

simEye may have a wide range of applications in science, optics, and education. This tool may help educate (train) both the lay public, for example, patients before undergoing eye surgery, and medical personnel, such as medical students and professionals. Moreover, simEye may help simulate and investigate optical lens systems, such as cameras, telescopes, microscopes, and robotic vision systems. Furthermore, it may help study the visual perception through multifocal intraocular lenses and through intracorneal lenses. Finally, simEye may be used as a scientific research tool to investigate visual perception under a variety of eye conditions, in addition to the ones presented here, and after various ophthalmic surgical procedures such as cataract surgery and LASIK (e.g., Refs. 29, 30).

Acknowledgments

The authors would like to thank Vincent McKoy for a critical review of the manuscript and Wayne Waller for invaluable discussions about the complexity and concept of visual perception. One of the authors (WF) would like to thank Erich W. Schmid for having introduced him to the field of ray tracing in ophthalmology. One of the authors (DM) would like to acknowledge the support of the Summer Undergraduate Research Fellowship (SURF) program at Caltech. This research and the fellowship were supported by National Science Foundation Grant No. EEC-0310723.
References

1. J. Hawkins and S. Blakeslee, On Intelligence (2004).
2. M. Born and E. Wolf, Principles of Optics, 5th ed., Pergamon Press, Oxford (1975).
3. J. E. Greivenkamp, J. Schwiegerling, and J. M. Miller, "Visual acuity modeling using optical raytracing of schematic eyes," Am. J. Ophthalmol. 120(2), 227–240 (1995).
4. W. Fink, A. Frohn, U. Schiefer, E. W. Schmid, and N. Wendelstein, "A ray tracer for ophthalmological applications," Ger. J. Ophthalmol. 5, 118–125 (1996).
5. I. Escudero-Sanz and R. Navarro, "Off-axis aberrations of a wide-angle schematic eye model," J. Opt. Soc. Am. A 16, 1881–1891 (1999).
6. J. Y. Huang and D. Moore, "Computer simulated human eye modeling with GRIN incorporated in the crystalline lens," J. Vision 4(11), 58a (2004).
7. R. Navarro, L. González, and J. L. Hernández, "Optics of the average normal cornea from general and canonical representations of its surface topography," J. Opt. Soc. Am. A 23, 219–232 (2006).
8. A. Gullstrand, Handbuch der Physiologischen Optik, Vol. 1, p. 226, 3rd ed., L. Voss, Hamburg/Leipzig (1909).
9. W. Trendelenburg, M. Monjé, I. Schmidt, and E. Schütz, Der Gesichtssinn, 2nd ed., Springer, Berlin/Göttingen/Heidelberg (1961).
10. W. Lotmar, "Theoretical eye model with aspherics," J. Opt. Soc. Am. 61, 1522–1529 (1971).
11. R. Navarro, J. Santamaria, and J. Bescos, "Accommodation-dependent model of the human eye with aspherics," J. Opt. Soc. Am. A 2, 1273–1281 (1985).
12. H.-L. Liou and N. A. Brennan, "Anatomically accurate, finite model eye for optical modeling," J. Opt. Soc. Am. A 14, 1684–1695 (1997).
13. A. Popiolek-Masajada and H. Kasprzak, "Model of the optical system of the human eye during accommodation," Ophthalmic Physiol. Opt. 22(3), 201 (2002).
14. M. F. Deering, "Perception: A photon accurate model of the human eye," pp. 649–658 (2005).
15. W. Fink, A. Frohn, U. Schiefer, E. W. Schmid, N. Wendelstein, and E. Zrenner, "Visuelle Wahrnehmung bei hohen Ametropien—Computergestützte Simulation mittels strahlenoptischer Rechnungen," Klin. Monatsbl. Augenheilkd. 208, 472–476 (1996).
16. W. Fink, U. Schiefer, and E. W. Schmid, "Effect of dislocated and tilted correction glasses on perimetric outcome—a simulation using ray-tracing," pp. 201–204 (1997).
17. W. Fink, A. Frohn, and E. W. Schmid, "Dislocation of intraocular lens analysed by means of ray tracing," Invest. Ophthalmol. Visual Sci. 37(3), 770 (1996).
18. W. Fink, "Project Eyemovie: motion visualization of eye defects," Invest. Ophthalmol. Visual Sci. 42(4), S705 (2001); see also Project "Eyemovie," http://www.eyemovie.org
19. A. Langenbucher, A. Viestenz, A. Viestenz, H. Brünner, and B. Seitz, "Ray tracing through a schematic eye containing second-order (quadric) surfaces using matrix notation," Ophthalmic Physiol. Opt. 26(2), 180–188 (2006).
20. D. Micol and W. Fink, "SIMEYE: computer-based simulation of visual perception under various eye defects," Invest. Ophthalmol. Visual Sci. 47, 578 (2006); see also http://autonomy.caltech.edu/biomedicine/project_simeye.html
21. D. Malacara, J. M. Carpio-Valadéz, and J. J. Sánchez-Mondragón, "Wavefront fitting with discrete orthogonal polynomials in a unit radius circle," Opt. Eng. 29, 672 (1990). https://doi.org/10.1117/1.2168920
22. J. L. Rayces, "Least-squares fitting of orthogonal polynomials for the wave-aberration function," Appl. Opt. 31, 2223–2228 (1992).
23. L. A. Carvalho, "Accuracy of Zernike polynomials in characterizing optical aberrations and the corneal surface of the eye," Invest. Ophthalmol. Visual Sci. 46(6), 1915–1926 (2005).
24. D. F. Rogers and J. A. Adams, Mathematical Elements for Computer Graphics, 2nd ed., McGraw-Hill, New York (1990).
25. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, pp. 286–289, Cambridge University Press, Cambridge (1991).
26. W. Fink, "Refractive correction method for digital charge-coupled device-recorded Scheimpflug photographs by means of ray tracing," J. Biomed. Opt. 10(2), 024003 (2005). https://doi.org/10.1117/1.1899683
27. K. D. Singh, N. S. Logan, and B. Gilmartin, "Three-dimensional modeling of the human eye based on magnetic resonance imaging," Invest. Ophthalmol. Visual Sci. 47(6), 2272–2279 (2006).
28. W. S. Stiles and B. H. Crawford, "The luminous efficiency of rays entering the eye pupil at different points," Proc. R. Soc. London, Ser. B 112, 428–450 (1933).
29. D. Ortiz, J. M. Saiz, J. M. Gonzalez, J. N. Fernandez del Cotero, and F. Moreno, "Geometric ray tracing for design of customized ablation in laser in situ keratomileusis," J. Refract. Surg. 18(3 Suppl.), S327–S331 (2002).
30. D. Ortiz, J. M. Saiz, J. M. Gonzalez, J. I. Velarde, J. N. Fernandez del Cotero, and F. Moreno, "Optimization of an individualized LASIK surgery: geometric ray tracing model," Arch. Soc. Esp. Oftalmol. 78(8), 443–449 (2003).