Full metadata record

dc.contributor.author: Likas, A. (en)
dc.contributor.author: Stafylopatis, A. (en)
dc.rights: Default Licence
dc.title: Training the random neural network using quasi-Newton methods (en)
heal.type.en: Journal article (en)
heal.type.el: Journal article (el)
heal.recordProvider: University of Ioannina. School of Sciences. Department of Computer Science and Engineering (el)
heal.abstract: Training the random neural network (RNN) is generally formulated as the minimization of an appropriate error function with respect to the parameters of the network (the weights corresponding to positive and negative connections). We propose here a technique for error minimization based on quasi-Newton optimization methods. Such methods exploit the gradient information more thoroughly than simple gradient descent, but are computationally more expensive and harder to implement. In this work we specify the details needed to apply quasi-Newton methods to training the RNN, and provide comparative experimental results from applying these methods to some well-known test problems, which confirm the superiority of the approach. (C) 2000 Elsevier Science B.V. All rights reserved. (en)
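The abstract does not reproduce the paper's RNN gradient formulas, so as an illustration here is a generic quasi-Newton loop of the kind the abstract describes: a BFGS update of an inverse-Hessian approximation, with a backtracking line search, applied to a toy objective standing in for the network error function. The function names, line-search constants, and test objective are choices made for this sketch, not taken from the paper.

```python
import numpy as np

def bfgs_minimize(f, grad, w0, max_iter=500, tol=1e-8):
    """Minimize f with the BFGS quasi-Newton method.

    H approximates the inverse Hessian and is refined from gradient
    differences; a backtracking (Armijo) line search keeps each step
    a sufficient decrease of f.
    """
    w = np.asarray(w0, dtype=float)
    n = w.size
    H = np.eye(n)                      # initial inverse-Hessian guess
    g = grad(w)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        alpha = 1.0                    # backtracking line search
        while f(w + alpha * p) > f(w) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        s = alpha * p
        w_new = w + s
        g_new = grad(w_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        w, g = w_new, g_new
    return w

# Toy objective (Rosenbrock) standing in for the RNN error function.
f = lambda w: (1 - w[0])**2 + 100 * (w[1] - w[0]**2)**2
grad = lambda w: np.array([
    -2 * (1 - w[0]) - 400 * w[0] * (w[1] - w[0]**2),
    200 * (w[1] - w[0]**2),
])
w_star = bfgs_minimize(f, grad, [-1.2, 1.0])
```

The curvature check (`sy > 1e-12`) is what distinguishes a safe BFGS implementation from naive Newton-like schemes: updates that would destroy positive definiteness of H are simply skipped, so the search direction remains a descent direction.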
heal.journalName: European Journal of Operational Research (en)
heal.journalType: peer reviewed
Appears in Collections: Articles in scientific journals (Open)

Files in This Item:
File: Likas-2000-Training the random neural network using quasi-Newton methods.pdf
Size: 116.85 kB
Format: Adobe PDF

This item is licensed under a Creative Commons License.