This paper discusses recurrent back-propagation and Newton algorithms for an important class of recurrent networks, together with their convergence properties. To ensure proper convergence behavior, the recurrent connections must be suitably constrained during learning. Simulation results demonstrate that the algorithms perform substantially better when the suggested constraint is imposed.
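The abstract's key point, that recurrent connections must be constrained during learning for the algorithms to converge, can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is a hypothetical one-unit linear recurrent model y_t = w_r·y_{t-1} + w_x·x_t trained by real-time recurrent gradient descent, with the recurrent weight w_r clipped into the stability region |w_r| < 1 after every update. The model, learning rate, and clipping bound are all illustrative assumptions.

```python
def train(xs, targets, epochs=1000, lr=0.05, bound=0.95):
    """Fit y_t = w_r * y_{t-1} + w_x * x_t by gradient descent,
    constraining the recurrent weight to |w_r| <= bound (assumption:
    a simple clipping constraint, used here only to illustrate why
    constraining recurrent connections aids convergence)."""
    w_r, w_x = 0.0, 0.0
    for _ in range(epochs):
        # Forward pass, accumulating recurrent sensitivities dy/dw
        # alongside the state (real-time recurrent learning style).
        y, dy_dwr, dy_dwx = 0.0, 0.0, 0.0
        g_wr = g_wx = 0.0
        for x, t in zip(xs, targets):
            # d y_t / d w depends on d y_{t-1} / d w through w_r;
            # here y and dy_* still hold their time t-1 values.
            dy_dwr = y + w_r * dy_dwr
            dy_dwx = x + w_r * dy_dwx
            y = w_r * y + w_x * x
            err = y - t
            g_wr += err * dy_dwr
            g_wx += err * dy_dwx
        w_r -= lr * g_wr
        w_x -= lr * g_wx
        # The constraint: keep the recurrent weight inside the
        # stability region so the recursion cannot diverge.
        w_r = max(-bound, min(bound, w_r))
    return w_r, w_x
```

Without the final clipping step, a large gradient early in training can push w_r past 1, after which the state and the sensitivities grow without bound and the iteration diverges; the constraint keeps every iterate in the region where the recursion is stable.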
Chung-Ming Kuan, Kurt Hornik, and Tung Liu,
"Recurrent back-propagation and Newton algorithms for training recurrent neural networks", Proc. SPIE 2093, Substance Identification Analytics, (1 February 1994); https://doi.org/10.1117/12.172502