1. Introduction

Rapid advancement in virtual reality (VR) technology has broadened its implementation beyond entertainment1 toward education2 and a wide variety of medical applications.3–5 Clinically, VR has been investigated for the visualization of two-dimensional (2D) and three-dimensional (3D) medical images6,7 and for preoperative surgical planning8 in an immersive environment. It has also shown potential for patient-facing vision therapy such as amblyopia treatment9–11 and pain management.12 Technically, to create an immersive user experience, VR head-mounted displays (HMDs) present a pair of virtual images on two eyepieces, generating a stereoscopic visualization of a virtual scene. Unlike conventional flat-panel displays, the display hardware and optics of the two eyepieces are independent. At the same time, image rendering and processing techniques interact interocularly along the display pipeline, followed by binocular fusion by the user or the patient. Therefore, the visual experience and clinical effectiveness of VR HMDs are affected by both monocular and binocular image quality. On each HMD eyepiece, technical challenges in optics, display, and sensor technologies limit the monocular image quality in both the spatial and temporal domains.13 Specifically, the pixel resolution of VR HMDs is typically limited to approximately 20 cycles per degree (cpd), which is well below the resolution limit of human vision. The image resolution can also be affected by software and rendering techniques such as anti-aliasing,14 image warping,15 foveated rendering,16 and trade-offs between spatial resolution and latency.17 In addition, optical aberration of the VR lenses can substantially degrade the image contrast and resolution on each eyepiece, especially at the periphery of the display field of view (FoV) when the optical axes of the human eye and the VR lens are misaligned.18,19 For instance, as illustrated in Fig.
1(a), two example medical images, i.e., axial slices of a brain MR image (left) and a segmented vertebra microstructure image (right), are rendered on a VR HMD at the center of the display FoV shown in Fig. 1(b). These two images are transformed into the Fourier domain as shown in the bottom row of Fig. 1. The vertebra microstructure contains more high-spatial-frequency content. However, image quality degradation through the VR display pipeline, including the rendering engine, display pixelation, and optical aberration by the VR lens, dramatically suppresses the spatial resolution [see the Fourier-domain vertebra image in Fig. 1(b) captured by a front-aperture camera]. Note that this is under the condition that the entrance pupil of the camera was placed at the eye point of the HMD and that the image was centered to ensure optimal image quality, with the user’s interpupillary distance (IPD) or the camera’s entrance pupil position matching the physical IPD setting of the VR HMD. Although most VR HMDs enable physical IPD adjustment, the adjustable IPD range may be limited, which is challenging for users or patients with small (e.g., pediatric patients) or large IPDs. Monocular image quality can be further degraded if the IPD of the user does not match that of the HMD. For instance, as shown in Fig. 1(c), the camera was shifted laterally by 5 mm toward the temporal direction, emulating an IPD mismatch of 1 cm. At the same time, the medical images were placed at the periphery of the display FoV (rotated in azimuth toward the left-hand side of the user). In this case, the optical axes of the lens and the camera are misaligned, resulting in a more substantial reduction in spatial resolution, as shown in Fig. 1(c), in comparison with the central views with the correct IPD setting in Fig. 1(b). It is clearly visualized that image quality is not identical on both eyepieces.
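The Fourier-domain comparison described above can be reproduced with a standard 2D fast Fourier transform. The following minimal Python sketch (using synthetic stand-ins for the medical images, since the actual data are not available here) shows how fine texture concentrates spectral energy away from the center of the shifted spectrum:

```python
import numpy as np

def log_spectrum(image):
    """Centered log-magnitude 2D Fourier spectrum of a gray-scale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

# Synthetic stand-ins: a fine checkerboard (vertebra-like microstructure)
# versus a smooth horizontal ramp (low-frequency, MR-like content).
fine = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

# The checkerboard places strong energy at the edge of the shifted spectrum
# (high spatial frequency); the ramp concentrates energy near the center.
```

Comparing `log_spectrum(fine)` and `log_spectrum(smooth)` visualizes the same qualitative difference shown in the bottom row of Fig. 1.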
More specifically, in this setup, with a temporal eye rotation plus a pupil position translation, image resolution degradation due to optical aberration is further amplified on the left eyepiece [see the comparison between the left- and right-eye visualizations shown as the left and right half images in Fig. 1(c)], leading to a binocular image quality discrepancy. This effect is not only visible in high-spatial-frequency content such as bone microstructure but also affects the perception of low-contrast content such as the brain MR image. Image quality degradation and binocular inconsistency can potentially affect the clinical effectiveness of VR for medical applications. However, the evaluation of binocular image quality on VR HMDs is technically challenging without established methods or standards. Recent work has evaluated the impact of binocular inconsistency on VR HMDs for applications such as gaming.20,21 However, the current evaluation of image quality on VR HMDs is still generally based on monocular optical bench measurements with test methods established in the International Electrotechnical Commission (IEC) 63145-20 standard22 and the Information Display Measurements Standard.23 It remains uncertain whether these measurements adequately capture the binocular image quality experienced by users in medical applications. In vision science, a number of binocular summation models have been studied by fitting contrast perception and phase data from perceptual experiments.24–30 However, it is technically challenging to combine a complicated binocular model (except for the oversimplified “√2” model31), which operates primarily in the image domain, with optical bench testing results. In addition, specific requirements on the light measuring devices (LMDs) and bench setup have been recommended in the standards to emulate the eye anatomy and rotation mechanism to investigate gaze-dependent monocular image quality on VR HMDs.
Unfortunately, the experimental setup required to enable an eye rotation geometry is nontrivial; e.g., 5-degree-of-freedom (DoF) translational and rotational stages are recommended in the IEC 63145-20-10 standard.32 At the same time, the LMD should be compact enough to fit the bench setup and carefully calibrated to yield accurate luminance and spatial measurement results.33 In this study, as an alternative to optical bench testing, we present a software platform to evaluate binocular image quality on VR HMDs across the display FoV by performing human contrast perception experiments to measure the contrast sensitivity response (CSR). The perceptual experiments aim to bypass the limitations of bench testing for more efficient image quality evaluation without advanced lab equipment. We compare the monocular and binocular contrast detection results to emphasize the image quality discrepancies interocularly and between monocular and binocular perception using different IPD configurations.

2. Methods

2.1. Head-Mounted Display

A Meta Quest 2 HMD was used for perceptual experiments to measure the monocular and binocular CSRs of the participants. The Meta Quest 2 HMD is based on a fast-switching liquid crystal display (LCD) backplane with a pair of Fresnel lenses. The pixel resolution of the display backplane is 1832 (horizontal) × 1920 (vertical) pixels per eye, refreshing at 90 Hz. The Fresnel lenses magnify the virtual images to provide a wide horizontal FoV. The HMD offers three physical IPD settings, i.e., 58 mm (small), 63 mm (nominal), and 68 mm (large), providing users with options to customize the HMD IPD for optimal viewing comfort and visual performance. The Meta Quest 2 HMD was selected to validate the test platform as described in the section below. There are many different optical and display designs for VR HMDs, such as pancake lenses or microdisplays, that may vary the perceptual test results.
The goal of this paper is to develop the test method rather than to compare perceptual experimental results across various HMDs and display technologies.

2.2. WebXR Target Detection Platform

We developed a WebXR platform to perform perceptual experiments by determining the CSR, defined as the reciprocal of the threshold contrast (C_th) that a participant can detect on a VR HMD. Note that the CSR measured in this study is different from the contrast sensitivity function (CSF) of the naked human eyes without HMD modulation.34 In other words, the CSR incorporates the HMD optical aberration, whereas the CSF describes the human contrast perception capability without the HMD. WebXR is an application programming interface for developing and hosting VR and AR web-based applications on compatible mixed-reality headsets. The WebXR application was built using the A-Frame library and was executed on an Oculus Browser that displayed Gabor stimuli for detection. The Gabor target is a 2D sinusoidal pattern with a predefined contrast (C_in), whose dimension is determined by the standard deviation of a Gaussian envelope (σ) and the spatial frequency (f). The Gabor target centered at location (x0, y0) can be expressed as

GL(x, y) = GL_b + GL_m · exp[−r² / (2σ²)] · sin(2π f r), (1)

where GL_b and GL_m are the background display gray level (GL) and the modulation of the Gabor target determined by the display GL, respectively, and the exponential term describes the Gaussian envelope. r is the radial distance with respect to the center of the stimulus, given by

r = √[(x − x0)² + (y − y0)²], (2)

where (x0, y0) is associated with the angular location of the stimulus. The standard deviation (σ) of the Gaussian envelope is preset to a fixed angular radius. The spatial frequency of the Gabor target varied from 0.52 to 5.72 cycles per degree, covering the most sensitive frequency range of the human eyes. As shown in Fig.
2(a), the Gabor target was placed at nine locations across the FoV following the IEC 63145-20-20 standard22 at a long distance of 150 m away such that the target aligns with the optical axis of the eyepiece lenses when placed at the center (i.e., azimuth and elevation are zero). The target spans a fixed angular diameter. We admit that the long viewing distance may cause visual discomfort due to the vergence–accommodation conflict.35 The impact of VR viewing distance on contrast sensitivity should be evaluated in future work. Table 1 summarizes the 3D coordinates of the target at the various locations. For example, moving from the center to the right, the target was laterally translated by 25 m, corresponding to an azimuth angular rotation of arctan(25/150) ≈ 9.5 deg. At the top-right position, an additional 25-m translation was added vertically, yielding azimuth and elevation rotations of approximately 9.5 deg each, corresponding to an angle of approximately 13.3 deg diagonally. This angle ensures that the target is within the participant’s FoV for all IPD conditions in the perceptual experiments described in Sec. 2.3. At each location, the orientation of the target was modified such that the sinusoidal contrast pattern is always circular (or perpendicular to the radial direction) with respect to the target location. It has been shown that the circular orientation of the pattern is suitable to capture the contrast degradation by optical aberration.19

Table 1. Physical parameters of the Gabor target.
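For illustration, the circular Gabor stimulus described in Sec. 2.2 and the placement angles derived from the Table 1 geometry can be sketched in Python. The pixel grid, angular span, and default parameter values below are illustrative assumptions, not the platform’s actual A-Frame implementation:

```python
import math
import numpy as np

def gabor_ring(size_px=256, span_deg=10.0, freq_cpd=2.0,
               gl_b=128.0, c_in=0.5, sigma_deg=2.0):
    """Gray-level map of a circular (ring) Gabor target, following
    GL = GL_b + GL_m * exp(-r^2 / (2 sigma^2)) * sin(2 pi f r),
    with r the angular distance (deg) from the stimulus center and
    GL_m = c_in * GL_b (input contrast C_in = GL_m / GL_b).
    All default parameter values are illustrative assumptions."""
    gl_m = c_in * gl_b
    deg = np.linspace(-span_deg / 2, span_deg / 2, size_px)
    x, y = np.meshgrid(deg, deg)
    r = np.hypot(x, y)  # radial distance from the stimulus center (deg)
    envelope = np.exp(-r**2 / (2 * sigma_deg**2))
    return gl_b + gl_m * envelope * np.sin(2 * np.pi * freq_cpd * r)

# Placement geometry: a 25-m lateral offset at a 150-m viewing distance
# corresponds to an azimuth rotation of ~9.5 deg; the diagonal (top-right)
# position corresponds to ~13.3 deg.
azimuth_deg = math.degrees(math.atan2(25.0, 150.0))
diagonal_deg = math.degrees(math.atan2(math.hypot(25.0, 25.0), 150.0))
```

The returned map stays within GL_b ± GL_m, so the input Michelson contrast of the rendered pattern never exceeds the requested C_in.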
2.3. Perceptual Experiments

The Food and Drug Administration (FDA) Institutional Review Board (IRB) reviewed and approved the study protocol (IRB 2022-CDRH-038), and all human participant studies were conducted under the approved protocol. We recruited nine human participants (ages between 21 and 47 years, average age of 27.7; three females and six males) with normal or corrected-to-normal vision (wearing refractive glasses inside the HMD when needed) to participate in the perceptual experiments. Therefore, the potential impacts of visual acuity, astigmatism, and age (presbyopia) on the perceptual experimental results are minimized. In addition, the evaluated spatial frequency range is up to 5.72 cycles per degree, which is primarily limited by the HMD resolution. Such spatial frequencies are much lower than the resolution limit of the human visual system. Therefore, visual acuity variation among the participants should not substantially affect the test results. The IPD of each participant was recorded. Among the nine participants, three have small IPDs of less than 61 mm (mean IPD of 58.3 mm), corresponding to the small HMD IPD setting of 58 mm. Three participants with large IPDs beyond 66 mm (mean IPD of 70.3 mm) agree with the 68-mm IPD setting of the HMD. The remaining three participants have IPDs within the range of 61 to 66 mm (mean IPD of 63.3 mm), in alignment with the nominal HMD IPD setting of 63 mm. Perceptual experiments were conducted in three groups on each combination of participant IPD and HMD hardware setting: group 1, in which the HMD IPD setting matched the participant’s IPD; group 2, in which the participant’s IPD was smaller than the HMD IPD setting; and group 3, in which the participant’s IPD was larger than the HMD IPD setting.
Each human observer experiment contains 45 trials (nine target locations × five Gabor spatial frequencies). Prior to the experiment, the physical IPD setting of the HMD was adjusted. During each trial, the participant was instructed to find the threshold contrast of the target shown on the Quest 2 HMD by adjusting the contrast of the Gabor target using a controller or a Bluetooth keyboard, with a minimum adjustable contrast step of 0.004. Once the participant determined the threshold contrast (C_th), the contrast sensitivity, i.e., the reciprocal of the measured threshold contrast, at the specified location and spatial frequency was recorded. For each participant, the above experiment was repeated three times: binocularly with both eyes open and monocularly with only the left or right eye, wearing an eye patch on the other eye underneath the headset. The result of each experiment was saved into a JavaScript Object Notation (JSON) file on the headset. We extracted the files and then obtained the HMD-modulated CSR over the spatial frequencies. Although the user-controlled determination of the threshold contrast enables fast experiments for the evaluation of multiple spatial frequencies and IPD settings, it introduces potential bias and random error from the participant. In future work, a staircase procedure with a binary forced-choice task (i.e., a yes–no detection task) can be investigated to eliminate potential bias from human participants.36

2.4. Impact of Display Luminance Response on Contrast Perception

As illustrated in Sec. 2.2, the WebXR engine defines the input contrast (C_in) based on the digital content (i.e., display GLs),

C_in = (GL_peak − GL_min) / (GL_peak + GL_min), (3)

where GL_peak and GL_min are the peak (bright lines) and minimum (dark lines) GLs of the Gabor target. If we define GL_m as the amplitude of the signal (sinusoidal wave) in GLs, then GL_peak = GL_b + GL_m and GL_min = GL_b − GL_m, i.e., C_in = GL_m / GL_b. However, it should be clarified that the input contrast for the display differs from the perceptual contrast for an observer.
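Once the display gamma is known, the gray-level contrast can be mapped to the luminance-domain contrast derived in Sec. 2.4. A minimal Python sketch (using the gamma of 2.1 fitted for the evaluated HMD; this conversion is a post-processing step, not part of the WebXR platform, which operates on GLs only):

```python
def gl_to_luminance_contrast(c_in, gamma=2.1):
    """Convert input Michelson contrast C_in = GL_m / GL_b to the
    luminance-domain contrast, assuming L ~ (GL / GL_max)^gamma with
    negligible dark luminance.

    For small C_in the result approaches gamma * C_in; as C_in -> 1,
    both contrasts converge to 1.
    """
    hi = (1.0 + c_in) ** gamma  # luminance at GL_b + GL_m (normalized)
    lo = (1.0 - c_in) ** gamma  # luminance at GL_b - GL_m (normalized)
    return (hi - lo) / (hi + lo)
```

For example, a threshold contrast of 0.01 in GLs corresponds to a luminance contrast of approximately 0.021 under this model.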
The perceptual contrast (C_p) is generally determined by the luminance difference of the target, which can be expressed as

C_p = (L_peak − L_min) / (L_peak + L_min), (4)

where L_peak and L_min are the peak and minimum luminance of the Gabor target. C_p equals C_in if the display luminance is linear in the display GL. Otherwise, the display luminance response, also known as the gamma curve, needs to be taken into account when computing the perceptual contrast. If the dark luminance of the display is ignored, a simplified display luminance response can be expressed as

L(GL) = L_max (GL / GL_max)^γ, (5)

where L(GL) is the display luminance at gray level GL, and L_max and GL_max are the maximum luminance and GL of the display. Figure 3 shows the measured display luminance response of the evaluated Meta Quest 2 HMD on a logarithmic scale, with the display gamma extracted as γ = 2.1 by fitting the display luminance at various GLs. The luminance measurement was performed using a calibrated imaging photometer (LMK 6, TechnoTeam, Ilmenau, Germany). The display luminance response can be used to convert the display GLs to luminance. Therefore, by substituting Eqs. (3) and (5) into Eq. (4), the input contrast can be converted to the perceptual contrast by

C_p = [(1 + C_in)^γ − (1 − C_in)^γ] / [(1 + C_in)^γ + (1 − C_in)^γ]. (6)

If the detected threshold contrast is small, i.e., C_in << 1, then C_p ≈ γ C_in. On the other hand, if the image quality is poor, leading to a large threshold contrast, i.e., C_in → 1, then C_p → 1. This indicates that the nonlinear display luminance response leads to different threshold contrasts computed from the GL modulation (C_GL) and the luminance modulation (C_L). Therefore, we distinguish the CSRs computed using the input display GL and the luminance as CSR_GL and CSR_L in the presented results, respectively.

3. Results

3.1. Monocular and Binocular Contrast Perception for Participants with Different IPDs

3.1.1. Monocular contrast perception

Group 1 experiments: First, we focus on monocular contrast sensitivity (see the red and yellow lines in Figs. 4–6). For participants using the appropriate IPD configuration on the HMD (group 1 participants), as shown in Fig.
4(a), the CSR is optimized at the center of the display FoV, yielding improved contrast perception at spatial frequencies greater than 4 cycles per degree compared with peripheral target locations. It has been reported that optical aberration of the HMD lens degrades the contrast and effective resolution at the periphery of the VR display FoV19 due to the shift of the visual point on the eyepiece. Group 2 experiments: IPD misalignment, where the IPD of the participant does not match the physical lens displacement of the HMD, negatively affects the monocular image quality. For participants with a smaller IPD than that of the HMD (group 2 participants), Fig. 5(a) shows no obvious difference between nasal (e.g., the right eye gazing toward the left) and temporal (e.g., the right eye gazing toward the right) eye rotations, nor a major interocular contrast perception difference between the left and right eyes. We suspect this is because the shifts of the visual point and the visual axis compensate for each other, with details and an experimental validation provided in Sec. 3.3. Group 3 experiments: Figure 6(a) shows that the interocular variation is substantially more pronounced for participants with a larger IPD than that of the HMD (group 3 participants). The CSR drops dramatically if the eye is rotated in the temporal direction (e.g., the right eye rotating toward the right). As illustrated in Sec. 3.3, this is because both the visual point shift and the visual axis angle favor nasal eye rotation.

3.1.2. Binocular contrast perception

As shown by the blue curves in Figs. 4(a)–6(a), binocular perception on a VR HMD is primarily dominated by the eye with superior contrast sensitivity. This observation is consistent with the finding of interocular contrast difference by Wang et al.37 in an augmented reality setup. Binocular image quality on VR HMDs, as measured by the CSR, does not always equal the monocular perception. The difference is particularly enhanced for a misaligned IPD between the human participant and the HMD optics (e.g., see Fig. 6).
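For reference, the simple quadratic (“√2”) binocular summation rule31, which predicts binocular sensitivity from the two monocular sensitivities, can be sketched as follows (an illustrative baseline only; this study does not fit it to the data):

```python
import math

def binocular_csr_quadratic(csr_left, csr_right):
    """Quadratic (sqrt(2)) binocular summation: S_bin = sqrt(S_L^2 + S_R^2).

    With equal eyes this predicts a sqrt(2) binocular advantage; with one
    eye strongly degraded, the better eye dominates, qualitatively matching
    the observation that binocular perception follows the superior eye.
    """
    return math.hypot(csr_left, csr_right)
```

Note that when one monocular sensitivity is much larger than the other, the prediction reduces to the better eye’s sensitivity, consistent with the dominance behavior described above.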
A simple “√2” model31 may be used to compute the binocular CSR from monocular perceptual data. However, this model may be oversimplified for the complex human visual system. Recent work has investigated interocular contrast and color differences in binocular AR displays using binocular summation models in the image domain.37,38

3.2. Comparison Between CSR Computed by Input Display Gray Level and Display Luminance

As illustrated in Sec. 2.4, the nonlinear display luminance response results in a difference between the threshold contrasts computed using display GLs (C_GL) and display luminance (C_L). As shown in Figs. 4(b)–6(b), the ratio of CSR_GL to CSR_L equals C_L/C_GL, which approaches γ = 2.1 when the threshold contrast is small, i.e., for higher CSR values at low spatial frequencies. On the other hand, the ratio drops to 1, indicating that C_L approaches C_GL, when the threshold contrast is close to 1, i.e., very poor contrast detectability at high spatial frequencies, especially for IPD-mismatched conditions. It should be emphasized that the WebXR rendering engine does not involve display luminance calibration. Therefore, the contrast and CSR obtained from the WebXR platform are computed based on the display input GLs. To convert the threshold contrast and CSR into the luminance domain, the display luminance response, as shown in Fig. 3, should be obtained.

3.3. Impacts of IPD Misalignment and Eye Rotation Geometry

To better understand the perceptual results shown in Figs. 4–6, we investigate the impact of IPD misalignment and eye rotation geometry. Specifically, we believe two geometrical parameters affect the image quality on the VR HMD: the spatial variation of the lateral shift of the visual point on the HMD eyepiece, denoted as Δd, and the angle between the optical axes of the eye and the HMD lens, defined as Δφ. For group 1 participants, as illustrated in Fig. 7(a), without IPD misalignment, Δd is approximately angularly symmetric on the right eyepiece but increases radially, indicating increased aberration.
Comparing the monocular CSRs between the left and right eyes with the appropriate IPD settings shows that contrast detection on the Quest 2 HMD slightly favors nasal eye rotation, i.e., the left eye gazing toward the right or the right eye toward the left (see the CSR results in Fig. 4). We suspect that this variation of monocular vision is mainly associated with the angular rotation between the optical axes of the eye (involving the angle kappa of the human visual system39,40) and the HMD eyepiece lens [see Δφ shown in Fig. 7(b) for the right eye]. As illustrated in Fig. 7(a), angle kappa is defined as the angle between the pupillary axis (i.e., the optical axis of the eye perpendicular to the cornea) and the visual axis (the axis that intercepts the fixation point and the fovea).39 In other words, angular misalignment between the eye and HMD lenses, as quantified by Δφ, can affect image quality. To validate this hypothesis, we performed optical bench measurements of the modulation transfer function (MTF) on the evaluated Meta Quest 2 HMD using an eye rotation geometry22,41 with an additional rotation equal to an estimated angle kappa of 4 deg. Figure 8(a) shows, on the right eyepiece, that the MTF measured at an azimuth angle of −9 deg (left, nasal rotation) is superior to the result at 9 deg (right, temporal rotation). For the group 2 experiments, Fig. 5 shows that the monocular CSRs of the left and right eyes are not substantially different. This can be illustrated by Figs. 7(c) and 7(d), in which Δd and Δφ individually favor temporal and nasal eye rotation, respectively. Therefore, Δd and Δφ compensate for the effect of each other. In the MTF measurement shown in Fig. 8(b), with an IPD mismatch emulated by laterally shifting the camera position, the temporal eye rotation (e.g., the right eye gazing right, yellow curve) shows a slightly higher MTF than the nasal eye rotation. However, the difference may not be substantial enough to be visualized in the logarithmic-scale plots in Fig.
5, which can potentially be mitigated by increasing the number of group 2 participants in future work. Compared with the CSRs measured using the appropriate IPD setting in Fig. 4, participants with a smaller IPD than the HMD show slightly enhanced contrast sensitivity for temporal eye rotation (e.g., the right eye gazing toward the right). A similar trend is shown in the MTF measurements in Figs. 8(a) and 8(b). For the group 3 experiments, the perceptual experiment shows a much higher CSR for nasal eye rotation (e.g., the right eye gazing toward the left). This is consistent with the MTF measurements shown in Fig. 8(c) with an IPD mismatch of 1 cm. The interocular discrepancy in CSRs can be explained by Figs. 7(c) and 7(e), in which both Δd and Δφ favor nasal eye rotation. Smaller values of Δd and Δφ indicate that (1) the IPD misalignment is minimized and (2) the optical axis of the HMD lens aligns with the pupillary axis of the eye, resulting in superior image quality on the HMD for nasal eye rotation.

4. Conclusion

We developed a WebXR test platform to evaluate monocular and binocular contrast perception on a VR HMD for human participants with various IPDs compared with the physical IPD setting of the HMD. For monocular perception, the CSR decreases at the periphery of the display FoV. It is illustrated that the interocular contrast sensitivity variation is associated with the shift of the visual point on the HMD eyepiece (Δd), which is determined by the participant’s IPD, the HMD’s IPD setting, and the gaze location. In addition, monocular vision is modulated by the angle between the optical axes of the eye and the HMD eyepiece lens, namely, Δφ. For participants with a smaller IPD than the HMD, Δd and Δφ compensate for each other, resulting in similar CSRs interocularly. On the other hand, for participants with a larger IPD than the HMD optics, both Δd and Δφ favor nasal eye rotation, leading to a substantial difference in CSRs between the left and right eyes. Binocular vision is dominated by the eye with superior image quality.
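The qualitative behavior of Δd and Δφ described in Sec. 3.3 can be illustrated with a toy single-pivot eye-rotation model in Python. The pivot-to-lens distance, the 4-deg angle kappa, and the sign conventions below are illustrative assumptions, not measured quantities:

```python
import math

def visual_point_shift(theta_nasal_deg, delta_mm, z_mm=25.0):
    """|Delta d| (mm): lateral offset of the visual point from the lens axis.

    theta_nasal_deg: gaze rotation, positive toward the nose;
    delta_mm: per-eye IPD mismatch, positive when the participant's IPD is
    larger than the HMD setting; z_mm: assumed pivot-to-lens distance.
    """
    return abs(z_mm * math.tan(math.radians(theta_nasal_deg)) - delta_mm)

def axis_misalignment(theta_nasal_deg, kappa_deg=4.0):
    """|Delta phi| (deg) between the eye's pupillary axis and the lens axis,
    with angle kappa biasing the alignment in favor of nasal gaze."""
    return abs(theta_nasal_deg - kappa_deg)

# Larger participant IPD (delta > 0): both quantities favor nasal rotation.
# Smaller participant IPD (delta < 0): Delta d favors temporal rotation
# while Delta phi still favors nasal rotation, so the two compensate.
```

Under these assumptions, the model reproduces the trends reported for groups 2 and 3: compensation between Δd and Δφ for smaller IPDs, and a pronounced nasal advantage for larger IPDs.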
Note that the results presented in this work should only be applied to the evaluated VR HMD (Meta Quest 2) or other HMDs with very similar optical and display designs. On the other hand, the test platform and perceptual experimental method are generalizable to other VR HMDs with WebXR compatibility. Limitations of this work include variations in visual capability among the human participants, random error when determining the threshold contrast during the experiments, and the limited number of evaluated HMDs. In addition, unbalanced vision between the left and right eyes (e.g., a dominant eye or amblyopia) may affect the monocular and binocular contrast perception results. To address these limitations, in future work, adding more participants can potentially reduce the statistical error. Binocular contrast perceptual performance may be compared across different HMDs to evaluate the impact of display and optics design on contrast sensitivity. Implementing a staircase yes–no detection method in the experiments can potentially minimize the participant input to reduce the random error.36 Finally, sophisticated binocular summation models24,27,42 may need to be investigated to bridge the gap between monocular and binocular contrast perception.

Disclosures

The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services. This is a contribution of the US Food and Drug Administration and is not subject to copyright.

Human Participants

All human perceptual studies were conducted under a study protocol (IRB 2022-CDRH-038) approved by the FDA IRB.

Code and Data Availability

This paper does not have associated code. Contrast sensitivity data from an individual human participant cannot be provided per FDA IRB protocol 2022-CDRH-038.
Acknowledgments

Khushi Bhansali was supported by an appointment to the Research Participation Program at the Center for Devices and Radiological Health administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the US Department of Energy and the US Food and Drug Administration.

References

1. T. Hartmann and J. Fox,
“Entertainment in virtual reality and beyond: the influence of embodiment, co-location, and cognitive distancing on users’ entertainment experience,” in The Oxford Handbook of Entertainment Theory (2021).
2. D. Kamińska et al., “Virtual reality and its applications in education: survey,” Information 10, 318, https://doi.org/10.3390/info10100318 (2019).
3. J. T. Verhey et al., “Virtual, augmented, and mixed reality applications in orthopedic surgery,” Int. J. Med. Rob. Comput. Assist. Surg. 16(2), e2067, https://doi.org/10.1002/rcs.2067 (2020).
4. R. Eastgate et al., “Modified virtual reality technology for treatment of amblyopia,” Eye 20(3), 370–374, https://doi.org/10.1038/sj.eye.6701882 (2006).
5. K. E. Laver et al., “Virtual reality for stroke rehabilitation,” Cochr. Database Syst. Rev. 11(11), CD008349, https://doi.org/10.1002/14651858.CD008349.pub4 (2017).
6. J. Sutherland et al., “Applying modern virtual and augmented reality technologies to medical images and models,” J. Digital Imaging 32, 38–53, https://doi.org/10.1007/s10278-018-0122-7 (2019).
7. F. Pires, C. Costa, and P. Dias, “On the use of virtual reality for medical imaging visualization,” J. Digital Imaging 34, 1034–1048, https://doi.org/10.1007/s10278-021-00480-z (2021).
8. C. Boedecker et al., “Using virtual 3D-models in surgical planning: workflow of an immersive virtual reality application in liver surgery,” Langenbeck’s Arch. Surg. 406, 911–915, https://doi.org/10.1007/s00423-021-02127-7 (2021).
9. S. Xiao et al., “Randomized controlled trial of a dichoptic digital therapeutic for amblyopia,” Ophthalmology 129(1), 77–85, https://doi.org/10.1016/j.ophtha.2021.09.001 (2022).
10. M. B. Coco-Martin et al., “The potential of virtual reality for inducing neuroplasticity in children with amblyopia,” J. Ophthalmol. 2020, 7067846, https://doi.org/10.1155/2020/7067846 (2020).
11. P. Žiak et al., “Amblyopia treatment of adults with dichoptic training using the virtual reality Oculus Rift head mounted display: preliminary results,” BMC Ophthalmol. 17(105), 1–8, https://doi.org/10.1186/s12886-017-0501-8 (2017).
12. A. Pourmand et al., “Virtual reality as a clinical tool for pain management,” Curr. Pain Headache Rep. 22(53), 1–6, https://doi.org/10.1007/s11916-018-0708-2 (2018).
13. R. Beams et al., “Evaluation challenges for the application of extended reality devices in medicine,” J. Digital Imaging 35(5), 1409–1418, https://doi.org/10.1007/s10278-022-00622-x (2022).
14. M. Johnson et al., “Quantifying the optical and rendering pipeline contributions to spatial resolution in augmented reality displays,” J. Soc. Inf. Disp. 32(8), 555–567, https://doi.org/10.1002/jsid.1297 (2024).
15. J. Wang et al., “Omnidirectional virtual visual acuity: a user-centric visual clarity metric for virtual reality head-mounted displays and environments,” IEEE Trans. Vis. Comput. Graphics 30, 2033–2043, https://doi.org/10.1109/TVCG.2024.3372127 (2024).
16. A. Patney et al., “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graphics 35(6), 1–12, https://doi.org/10.1145/2980179.2980246 (2016).
17. R. Albert et al., “Latency requirements for foveated rendering in virtual reality,” ACM Trans. Appl. Percept. 14(4), 1–13, https://doi.org/10.1145/3127589 (2017).
18. R. Beams et al., “Angular dependence of the spatial resolution in virtual reality displays,” in IEEE Conf. Virtual Reality and 3D User Interfaces (VR), 836–841 (2020).
19. C. Zhao, R. Beams, and A. Badano, “Radially variant contrast measurement in virtual reality headsets using circular concentric ring patterns,” J. Soc. Inf. Disp. 31(5), 387–397, https://doi.org/10.1002/jsid.1208 (2023).
20. A. M. Aizenman et al., “The statistics of eye movements and binocular disparities during VR gaming: implications for headset design,” ACM Trans. Graphics 42(1), 1–15, https://doi.org/10.1145/3549529 (2023).
21. S. Martín et al., “Evaluation of a virtual reality implementation of a binocular imbalance test,” PLoS One 15(8), e0238047, https://doi.org/10.1371/journal.pone.0238047 (2020).
22. IEC 63145-20-20, “Eyewear display – Part 20-20: Fundamental measurement methods – Image quality” (2019).
23. “Information display measurements standard” (2023).
24. J. Ding, S. A. Klein, and D. M. Levi, “Binocular combination of phase and contrast explained by a gain-control and gain-enhancement model,” J. Vision 13(2), 13, https://doi.org/10.1167/13.2.13 (2013).
25. J. Ding and D. M. Levi, “A unified model for binocular fusion and depth perception,” Vision Res. 180, 11–36, https://doi.org/10.1016/j.visres.2020.11.009 (2021).
26. J. Ding and D. M. Levi, “Binocular combination of luminance profiles,” J. Vision 17(13), 4, https://doi.org/10.1167/17.13.4 (2017).
27. J. Ding and G. Sperling, “A gain-control theory of binocular combination,” Proc. Natl. Acad. Sci. U. S. A. 103(4), 1141–1146, https://doi.org/10.1073/pnas.0509629103 (2006).
28. C.-B. Huang et al., “Contrast and phase combination in binocular vision,” PLoS One 5(12), e15075, https://doi.org/10.1371/journal.pone.0015075 (2010).
29. A. W. Freeman, “Multistage model for binocular rivalry,” J. Neurophysiol. 94(6), 4412–4420, https://doi.org/10.1152/jn.00557.2005 (2005).
30. D. H. Baker et al., “Nonlinearities in the binocular combination of luminance and contrast,” Vision Res. 56(1), 1–9, https://doi.org/10.1016/j.visres.2012.01.008 (2012).
31. D. H. Baker et al., “Binocular summation revisited: beyond √2,” Psychol. Bull. 144(11), 1186, https://doi.org/10.1037/bul0000163 (2018).
32. IEC 63145-20-10, “Eyewear display – Part 20-10: Fundamental measurement methods – Optical properties” (2019).
33. D. Winters et al., “82-3: optical quality requirements for accurate MTF/CTF measurements on near-eye displays,” SID Symp. Digest Tech. Pap. 55, 1147–1150, https://doi.org/10.1002/sdtp.17742 (2024).
34. P. G. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, Washington (1999).
35. D. M. Hoffman et al., “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vision 8(3), 33, https://doi.org/10.1167/8.3.33 (2008).
36. J. Rousson et al., “Contrast sensitivity function in stereoscopic viewing of Gabor patches on a medical polarized three-dimensional stereoscopic display,” J. Electron. Imaging 25(2), 023014, https://doi.org/10.1117/1.JEI.25.2.023014 (2016).
37. M. Wang et al., “The effect of interocular contrast differences on the appearance of augmented reality imagery,” ACM Trans. Appl. Percept. 21(1), 1–23, https://doi.org/10.1145/3617684 (2023).
38. M. Wang et al., “16-1: a model for the appearance of interocular colorimetric differences in binocular XR displays,” SID Symp. Digest Tech. Pap. 55, 177–181, https://doi.org/10.1002/sdtp.17483 (2024).
39. M. Moshirfar, R. N. Hoggan, and V. Muthappan, “Angle kappa and its importance in refractive surgery,” Oman J. Ophthalmol. 6(3), 151, https://doi.org/10.4103/0974-620X.122268 (2013).
40. H. Basmak et al., “Measurement of angle kappa with synoptophore and Orbscan II in a normal population,” J. Refract. Surg. 23(5), 456–460, https://doi.org/10.3928/1081-597X-20070501-06 (2007).
41. C. Zhao et al., “Integrating eye rotation and contrast sensitivity into image quality evaluation of virtual reality head-mounted displays,” Opt. Express 32(14), 24968–24984, https://doi.org/10.1364/OE.527660 (2024).
42. R. Blake and H. Wilson, “Binocular vision,” Vision Res. 51(7), 754–770, https://doi.org/10.1016/j.visres.2010.10.009 (2011).
Keywords: head-mounted displays; virtual reality; eye; image quality; visualization; contrast sensitivity; medical imaging