Screening training efficiency relies heavily on appropriate interaction and feedback provided during training.1,2 A dedicated screening workstation and dedicated viewing software are de rigueur for UK breast cancer screener training. These workstations and their viewing software are mainly manufactured by leading international vendors, which do not divulge the critical technical details that would allow third-party screener training solutions to be integrated. A non-wearable AR approach has previously been developed and its accuracy quantitatively established. As a follow-up, this study refactors the previous approach for a wearable platform, the HoloLens. Wearable AR solutions are considerably more user-friendly in terms of freedom of movement, although, being seamlessly integrated devices, they are less customisable. To the authors’ knowledge, no room-scale AR approach suitable for screening has yet been developed and adopted, whereas wearable AR techniques offer relatively sophisticated apparatus developed for personal use. In this study, the HoloLens is adopted and the difficulties of employing wearable AR techniques for screening training are systematically addressed. It is found that the infrared sensors of wearable AR devices cannot retrieve spatial data correctly in the real world when the detected object is a monitor screen or another object that interferes with infrared sensing. Moreover, the HoloLens has difficulty detecting large objects, and both its interaction range and its visible range are quite limited. An alternative method is therefore developed for the HoloLens which is fully functional for screening training.
Digital Breast Tomosynthesis (DBT) has several advantages over traditional 2D mammography. However, the cost-effectiveness of implementing the DBT modality in breast screening programmes is still under investigation. The DBT modality has been integrated into a regional breast screening programme in Italy for several years. The purpose of this study is to examine experienced Italian DBT readers’ visual search behaviour and summarise their visual inspection patterns. Seven experienced radiologists took part in the study, reading a set of DBT cases comprising a mixture of normal and abnormal cases whilst their eye movement data were recorded. They read the cases following a fixed procedure, starting with a 2D overview and then going through the DBT view of each breast. It was found that the experienced readers tended to perform a global-focal scan of the 2D view to detect an abnormality and then ‘drilled’ through the DBT slices to interpret the details of the feature. Reading speed was also investigated to determine whether expert radiologists take different lengths of time to examine normal and abnormal cases; the results showed no significant difference in reading time between the two. The eye movement patterns revealed that experienced DBT readers covered more area on the 2D view and fixated longer and with more dwells inside the area of interest (AOI) in the 3D view. Based on these findings, it is hoped that understanding the visual search patterns of experienced DBT radiologists could help DBT trainees to develop more efficient interpretation approaches.
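The dwell and fixation measures mentioned above can be illustrated with a short sketch. This is not the study’s analysis code: the `Fixation` structure, the rectangular AOI and all numerical values below are hypothetical, and serve only to show how dwell time and fixation count inside an AOI might be computed from recorded eye movement data.

```python
# Illustrative sketch only; the study does not publish its analysis code.
# Assumes fixations are available with screen-pixel coordinates and durations,
# and that an area of interest (AOI) is approximated by a rectangle.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # horizontal screen position (pixels)
    y: float           # vertical screen position (pixels)
    duration_ms: float  # fixation duration (milliseconds)

def aoi_metrics(fixations, aoi):
    """Return total dwell time (ms) and fixation count inside a rectangular AOI.

    `aoi` is (x_min, y_min, x_max, y_max) in the same pixel coordinate frame.
    """
    x_min, y_min, x_max, y_max = aoi
    inside = [f for f in fixations
              if x_min <= f.x <= x_max and y_min <= f.y <= y_max]
    dwell_ms = sum(f.duration_ms for f in inside)
    return dwell_ms, len(inside)

# Hypothetical usage with made-up values:
fixations = [Fixation(512, 600, 220), Fixation(530, 615, 340), Fixation(90, 80, 150)]
lesion_aoi = (480, 560, 640, 720)
dwell, count = aoi_metrics(fixations, lesion_aoi)
print(f"Dwell inside AOI: {dwell} ms over {count} fixations")
```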
A dedicated workstation and its corresponding viewing software are essential requirements in breast screener training. A major challenge in developing further generic screener training technology (in particular, for mammographic interpretation training) is that high-resolution radiological images must be displayed on dedicated workstations whilst actual reporting of the images is generally completed on individual standard workstations. For commercial reasons, dedicated clinical workstations manufactured by leading international vendors tend not to have the critical technical details divulged which would facilitate further integration of third-party generic screener training technology. With standard workstations, conventional screener training depends heavily on manual transcription, so traditional training methods can be deficient in terms of real-time feedback and interaction. Augmented reality (AR) provides the ability to operate across both real and virtual environments, and can therefore supplement conventional training with registered virtual objects and actions. As a result, realistic screener training can incorporate rich feedback and interaction in real time. Previous work1 has shown that it is feasible to employ an AR approach to deliver workstation-independent radiological screening training by superimposing appropriate feedback coupled with the use of interaction interfaces. The previous study addressed presence issues and provided an AR-recognisable stylus which allowed drawing interaction. As a follow-up, this study extends the AR method and investigates realistic effects and the impact of environmental illumination, application performance and transcription. A robust stylus calibration method is introduced to address environmental changes over time. Moreover, this work introduces a complete AR workflow which allows real-time recording, computer-analysable training data and recoverable transcription during post-training review. Quantitative evaluation shows that more than 80% of user-drawn points are located within a pixel distance of 5.
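As an illustration of the point-accuracy figure reported above, the sketch below computes the fraction of user-drawn points falling within a 5-pixel distance of a set of reference points. The matching rule, the function `within_tolerance_ratio` and all coordinates are assumptions made for illustration; the paper does not specify exactly how the comparison was performed.

```python
# Hedged sketch of a point-accuracy metric: fraction of drawn points within a
# given pixel tolerance of the nearest reference point. Values are hypothetical.
import math

def within_tolerance_ratio(drawn_points, reference_points, tolerance_px=5.0):
    """Fraction of drawn points lying within `tolerance_px` of any reference point."""
    if not drawn_points:
        return 0.0
    hits = 0
    for dx, dy in drawn_points:
        nearest = min(math.hypot(dx - rx, dy - ry) for rx, ry in reference_points)
        if nearest <= tolerance_px:
            hits += 1
    return hits / len(drawn_points)

# Hypothetical example: 4 of the 5 drawn points fall within 5 px of a reference point.
drawn = [(100, 100), (102, 101), (250, 250), (251, 248), (400, 10)]
reference = [(101, 100), (250, 249)]
print(within_tolerance_ratio(drawn, reference))  # 0.8
```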
Appropriate feedback plays an important role in optimising mammographic interpretation training whilst also ensuring good interpretation performance. The traditional keyboard, mouse and workstation approach is critically limited in its ability to provide supplementary image-related information and complex feedback in real time. Augmented Reality (AR) offers a potentially superior approach in this situation, as feedback can be overlaid directly on the displayed mammographic images, making the approach generic and vendor neutral. In this study, radiological feedback was dynamically remapped into the real world using perspective transformation, in order to provide a richer user experience in mammographic interpretation training. This is an initial attempt at an AR approach which dynamically superimposes pre-defined feedback information for a DICOM image on top of a radiologist’s view whilst the radiologist is examining images on a clinical workstation. The study demonstrates the feasibility of the approach, although there are limitations on interactive operations due to the hardware used. The results of this fully functional approach show appropriate feedback/image correspondence in a simulated mammographic interpretation environment. Thus, it is argued that employing AR is a feasible way to provide rich feedback in the delivery of mammographic interpretation training.
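The perspective remapping described above can be sketched as a homography between the DICOM image plane and the observer’s view of the monitor, as in the example below. The use of OpenCV, the corner coordinates and the annotation points are all assumptions made for illustration; the paper does not state which library, camera setup or calibration procedure was used.

```python
# Minimal sketch of remapping a pre-defined annotation from DICOM image
# coordinates into the observer's view of the monitor via a homography.
# All coordinates below are hypothetical.
import cv2
import numpy as np

# Corners of the displayed DICOM image in its own pixel space (width x height).
dicom_corners = np.float32([[0, 0], [2560, 0], [2560, 3328], [0, 3328]])

# The same four corners as detected in the camera/AR view of the monitor.
view_corners = np.float32([[412, 180], [1510, 205], [1498, 1620], [395, 1602]])

# Homography mapping DICOM image coordinates into the observer's view.
H = cv2.getPerspectiveTransform(dicom_corners, view_corners)

# A pre-defined feedback annotation (e.g. a lesion bounding outline) in DICOM coordinates.
lesion_outline = np.float32([[[1200, 900]], [[1350, 900]], [[1350, 1100]], [[1200, 1100]]])

# Remap the annotation so it can be drawn overlaying the image as seen in the real world.
overlay_points = cv2.perspectiveTransform(lesion_outline, H)
print(overlay_points.reshape(-1, 2))
```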