Multivariate time series data are ubiquitous in the real world, and their modeling and analysis is a popular research topic in meteorology, transportation, finance, and other fields. Classical statistical methods are aimed primarily at single time series, while deep learning demonstrates the power to mine patterns from massive amounts of data. A major application of these studies is to analyze collected historical sequences in order to predict future behavior. Recurrent neural network-based and temporal convolution-based models have achieved predictive power on multivariate time series, but these deep models perform mediocrely on long-sequence prediction tasks, partly because of the accumulation of errors and partly because the collected sequences contain large amounts of high-frequency noise. To improve prediction accuracy and mine more valuable features from the series, we propose ADWT, a novel framework for multivariate time series prediction. By designing an adaptive filtering module that operates on the frequency-domain characteristics of the signal, our model removes noise from the time series and fuses the filter with a deep learning prediction module into an end-to-end framework. Experimental results show that our model effectively improves the prediction accuracy of multivariate time series, performs competitively with the latest spatial-temporal series prediction models on three benchmark datasets, and offers good interpretability.
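The abstract does not specify the exact design of the adaptive filtering module, but the general idea of removing high-frequency noise in a transform domain can be illustrated with a minimal sketch: a one-level Haar wavelet transform with soft thresholding of the detail (high-frequency) coefficients. The function name `haar_denoise` and the fixed threshold are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet denoising with soft thresholding (illustrative sketch).

    Splits the signal into approximation (low-frequency) and detail
    (high-frequency) coefficients, shrinks the detail coefficients
    toward zero, then reconstructs the signal.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2              # truncate to an even length
    even, odd = x[:n:2], x[1:n:2]
    approx = (even + odd) / np.sqrt(2)   # low-frequency content
    detail = (even - odd) / np.sqrt(2)   # high-frequency content
    # soft-threshold the detail coefficients: small (mostly noisy) ones vanish
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # inverse Haar transform
    out = np.empty(n)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

In an end-to-end framework such as the one described, a module like this would be made differentiable and its threshold learned jointly with the downstream prediction network, rather than fixed by hand.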
KEYWORDS: Cancer, RGB color model, Endoscopy, Imaging systems, Image contrast enhancement, Computing systems, Medical research, Decision support systems, Visualization, Video
While white light imaging (WLI) endoscopy is the gold standard for screening and detecting oesophageal squamous cell cancer (SCC), the early signs of SCC are often missed (1 in 4) because early-onset SCC produces only subtle changes. This study first enhances the colour contrast of each of over 600 WLI images and their accompanying narrow band imaging (NBI) images by applying the CIE colour appearance model CIECAM02. These augmented data, together with the original images, are then used to train a deep learning based system for classification of low grade dysplasia (LGD), high grade dysplasia (HGD) and SCC. As a result, the average colour difference (∆E) between suspected regions and their normal neighbours, measured in CIEL*a*b*, increased from 11.60 to 14.46 for WLI and from 17.52 to 32.53 for NBI. When the deep learning system is trained with added contrast-enhanced WLI images, the sensitivity, specificity and accuracy for LGD increase by 10.87%, 4.95% and 6.76% respectively. When it is trained with both enhanced WLI and enhanced NBI images, these measures for LGD increase by 14.83%, 4.89% and 7.97% respectively, the biggest improvement among the three classes of SCC, HGD and LGD. On average, the sensitivity, specificity and accuracy across these three classes are 88.26%, 94.44% and 92.63% respectively, comparable to or exceeding existing published work.
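The ∆E values above quantify perceptual colour difference in CIEL*a*b* space. The abstract does not state which ∆E formula was used, but assuming the simple CIE76 definition (Euclidean distance between two L*a*b* triples), it can be sketched as follows:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIEL*a*b* colours.

    Each argument is an (L*, a*, b*) triple. Larger values mean the two
    colours are easier to tell apart; a ∆E around 2.3 is often cited as
    a just-noticeable difference.
    """
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```

Under this reading, raising the average ∆E between a suspected region and its normal neighbours (e.g. from 17.52 to 32.53 for NBI) roughly doubles how far apart the two regions sit in perceptual colour space, which is what makes the lesions easier to distinguish after enhancement.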