Paper
Neural nets for massively parallel optimization
1 July 1992
Laurence C. W. Dixon, David Mills
Abstract
To apply massively parallel processing systems to the solution of large-scale optimization problems, it is desirable to be able to evaluate any function f(z), z ∈ R^n, in a parallel manner. The theorem of Cybenko, Hecht-Nielsen, Hornik, Stinchcombe and White, and Funahashi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the hidden layer to achieve a given accuracy in the function and gradient values at all points within a given n-dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described, and a strategy for obtaining accurate minimal networks is presented.
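For illustration of the one-hidden-layer form the abstract refers to, the sketch below builds a network f_hat(z) = sum_i c_i tanh(a_i·z + b_i) and evaluates both its value and its analytic gradient. It is a minimal assumption-laden demo, not the construction described in the paper: the smooth tanh activation, random hidden weights, node count, sample points, and least-squares fit of the output weights are all choices made only for this example.

```python
# Minimal sketch (assumptions: tanh activation, random hidden weights,
# 50 nodes, least-squares fit of output weights); not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 50                      # input dimension, hidden nodes (arbitrary)
A = rng.normal(size=(m, n))       # hidden-layer weights a_i
b = rng.normal(size=m)            # hidden-layer biases b_i

def f(Z):                         # example target function on R^n
    return np.sin(Z[:, 0]) * np.cos(Z[:, 1])

# Fit the output weights c by least squares on samples from [-1, 1]^n.
Z = rng.uniform(-1.0, 1.0, size=(500, n))
H = np.tanh(Z @ A.T + b)          # hidden activations, shape (500, m)
c, *_ = np.linalg.lstsq(H, f(Z), rcond=None)

def f_hat(z):                     # network value at a single point z
    return np.tanh(A @ z + b) @ c

def grad_f_hat(z):                # gradient: sum_i c_i * (1 - tanh^2) * a_i
    s = np.tanh(A @ z + b)
    return A.T @ (c * (1.0 - s**2))

z0 = np.array([0.3, -0.5])
print(f_hat(z0), grad_f_hat(z0))
```

Because each hidden node depends on the input only through a_i·z + b_i, the per-node terms in the value, gradient, and Hessian can be computed independently, which is the property that makes such networks attractive for massively parallel evaluation.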
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Laurence C. W. Dixon and David Mills "Neural nets for massively parallel optimization", Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); https://doi.org/10.1117/12.140088
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Neural networks
Artificial neural networks
Matrices
Data modeling
Tolerancing
Parallel processing
Array processing
