
Evolutionary optimization of neural networks with heterogeneous computation: study and implementation

RiuNet: Institutional Repository of the Universidad Politécnica de Valencia


dc.contributor.author Fe, Jorge Deolindo es_ES
dc.contributor.author Aliaga Varea, Ramón José es_ES
dc.contributor.author Gadea Gironés, Rafael es_ES
dc.date.accessioned 2016-05-17T10:42:40Z
dc.date.available 2016-05-17T10:42:40Z
dc.date.issued 2015-08
dc.identifier.issn 0920-8542
dc.identifier.issn 1573-0484
dc.identifier.uri http://hdl.handle.net/10251/64230
dc.description.abstract In the optimization of artificial neural networks (ANNs) via evolutionary algorithms and the implementation of the necessary training for the objective function, there is often a trade-off between efficiency and flexibility. Pure software solutions on general-purpose processors tend to be slow because they do not take advantage of the inherent parallelism, whereas hardware realizations usually rely on optimizations that reduce the range of applicable network topologies, or they attempt to increase processing efficiency by means of low-precision data representation. This paper presents, first, a study that shows the need for a heterogeneous platform (CPU–GPU–FPGA) to accelerate the optimization of ANNs using genetic algorithms and, second, an implementation of a platform based on embedded systems with hardware accelerators implemented in a Field Programmable Gate Array (FPGA). The implementation of the individuals on a remote low-cost Altera FPGA allowed us to obtain a 3x–4x acceleration compared with a 2.83 GHz Intel Xeon Quad-Core and a 6x–7x acceleration compared with a 2.2 GHz AMD Opteron Quad-Core 2354. es_ES
dc.description.sponsorship The translation of this paper was funded by the Universitat Politecnica de Valencia, Spain. en_EN
dc.language English es_ES
dc.publisher Springer Netherlands es_ES
dc.relation.ispartof The Journal of Supercomputing es_ES
dc.rights All rights reserved es_ES
dc.subject Evolutionary computation es_ES
dc.subject Embedded system es_ES
dc.subject FPGA es_ES
dc.subject Neural networks es_ES
dc.subject.classification ELECTRONIC TECHNOLOGY es_ES
dc.title Evolutionary optimization of neural networks with heterogeneous computation: study and implementation es_ES
dc.type Article es_ES
dc.identifier.doi 10.1007/s11227-015-1419-7
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Departamento de Ingeniería Electrónica - Departament d'Enginyeria Electrònica es_ES
dc.description.bibliographicCitation Fe, JD.; Aliaga Varea, RJ.; Gadea Gironés, R. (2015). Evolutionary optimization of neural networks with heterogeneous computation: study and implementation. The Journal of Supercomputing. 71(8):2944-2962. doi:10.1007/s11227-015-1419-7 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion http://dx.doi.org/10.1007/s11227-015-1419-7 es_ES
dc.description.upvformatpinicio 2944 es_ES
dc.description.upvformatpfin 2962 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 71 es_ES
dc.description.issue 8 es_ES
dc.relation.senia 301857 es_ES
dc.contributor.funder Universitat Politècnica de València es_ES
dc.description.references Farmahini-Farahani A, Vakili S, Fakhraie SM, Safari S, Lucas C (2010) Parallel scalable hardware implementation of asynchronous discrete particle swarm optimization. Eng Appl Artif Intell 23(2):177–187 es_ES
dc.description.references Curteanu S, Cartwright H (2011) Neural networks applied in chemistry. I. Determination of the optimal topology of multilayer perceptron neural networks. J Chemom 25(10):527–549. doi: 10.1002/cem.1401 es_ES
dc.description.references Islam MM, Sattar MA, Amin MF, Yao X, Murase K (2009) A new adaptive merging and growing algorithm for designing artificial neural networks. IEEE Trans Syst Man Cybern Part B Cybern 39(3):705–722 es_ES
dc.description.references Han KH, Kim JH (2004) Quantum-inspired evolutionary algorithms with a new termination criterion, h-epsilon gate, and two-phase scheme. IEEE Trans Evol Comput 8(2):156–169 es_ES
dc.description.references Leung FHF, Lam HK, Ling SH, Tam PKS (2003) Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Trans Neural Netw 14(1):79–88 es_ES
dc.description.references Tsai JT, Chou JH, Liu TK (2006) Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm. IEEE Trans Neural Netw 17(1):69–80 es_ES
dc.description.references Ludermir TB, Yamazaki A, Zanchettin C (2006) An optimization methodology for neural network weights and architectures. IEEE Trans Neural Netw 17(6):1452–1459 es_ES
dc.description.references Palmes PP, Hayasaka T, Usui S (2005) Mutation-based genetic neural network. IEEE Trans Neural Netw 16(3):587–600. doi: 10.1109/TNN.2005.844858 es_ES
dc.description.references Mu T, Jiang J, Wang Y, Goulermas JY (2012) Adaptive data embedding framework for multiclass classification. IEEE Trans Neural Netw Learn Syst 23(8):1291–1303 es_ES
dc.description.references Lu T-C, Yu G-R, Juang J-C (2013) Quantum-based algorithm for optimizing artificial neural networks. IEEE Trans Neural Netw Learn Syst 24(8):1266–1278 es_ES
dc.description.references Yao X (1999) Evolving artificial neural networks. Proc IEEE 87(9):1423–1447 es_ES
dc.description.references Yao X, Liu Y (1997) A new evolutionary system for evolving artificial neural networks. IEEE Trans Neural Netw 8(3):694–713 es_ES
dc.description.references Mateo F, Sovilj D, Gadea-Gironés R (2010) Approximate k-NN delta test minimization method using genetic algorithms: application to time series. Neurocomputing 73(10–12):2017–2029 es_ES
dc.description.references Hawkins S, He H, Williams G, Baxter R (2002) Outlier detection using replicator neural networks. In: Proceedings of the 5th international conference on data warehousing and knowledge discovery (DaWaK02), pp 170–180 es_ES
dc.description.references Fe J, Aliaga RJ, Gironés RG (2013) Experimental platform for accelerate the training of ANNs with genetic algorithm and embedded system on FPGA. In: IWINAC (2), pp 413–420 es_ES
dc.description.references Prechelt L (1994) Proben1—a set of neural network benchmark problems and benchmarking rules. Technical report es_ES
dc.description.references Abbass HA (2002) An evolutionary artificial neural networks approach for breast cancer diagnosis. Artif Intell Med 25:265–281 es_ES
dc.description.references Ahmad F, Isa NAM, Hussain Z, Sulaiman SN (2013) A genetic algorithm-based multi-objective optimization of an artificial neural network classifier for breast cancer diagnosis. Neural Comput Appl 23(5):1427–1435 es_ES
dc.description.references Sankaradas M, Jakkula V, Cadambi S, Chakradhar S, Durdanovic I, Cosatto E, Graf H (2009) A massively parallel coprocessor for convolutional neural networks. In: 2009 20th IEEE international conference on application-specific systems, architectures and processors (ASAP 2009), July, pp 53–60 es_ES
dc.description.references Prado R, Melo J, Oliveira J, Neto A (2012) FPGA based implementation of a fuzzy neural network modular architecture for embedded systems. In: The 2012 international joint conference on neural networks (IJCNN), June, pp 1–7 es_ES
dc.description.references Çavuşlu M, Karakuzu C, Sahin S, Yakut M (2011) Neural network training based on FPGA with floating point number format and its performance. Neural Comput Appl 20:195–202. doi: 10.1007/s00521-010-0423-3 es_ES
dc.description.references Wu G-D, Zhu Z-W, Lin B-W (2011) Reconfigurable back propagation based neural network architecture. In: 2011 13th international symposium on integrated circuits (ISIC), Dec, pp 67–70 es_ES
dc.description.references Pinjare SL, Kumar A (2012) Implementation of neural network back propagation training algorithm on FPGA. Int J Comput Appl 52(6):1–7, August, published by Foundation of Computer Science, New York, USA es_ES
dc.description.references http://www.altera.com es_ES
dc.description.references Aliaga R, Gadea R, Colom R, Cerda J, Ferrando N, Herrero V (2009) A mixed hardware–software approach to flexible artificial neural network training on FPGA. In: 2009 international symposium on systems, architectures, modeling, and simulation (SAMOS ’09), July, pp 1–8 es_ES
dc.description.references http://www.matlab.com es_ES
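The abstract above describes a genetic algorithm that evolves ANN candidates while the costly part of the fitness function, training each candidate network, is offloaded to remote hardware accelerators. The following is a minimal sketch of that general scheme, not the authors' implementation: the topology encoding, the train_and_score stand-in for FPGA-hosted training, and all parameter values are illustrative assumptions. In the paper's setting, each evaluation dispatched by the worker pool would correspond to sending one individual to a remote FPGA board and reading back its validation error; here that step is simulated locally.

# Sketch of GA-driven ANN topology search with fitness evaluation offloaded
# to a pool of (possibly remote) accelerator workers. All names and parameter
# values are illustrative assumptions, not the paper's code.
import random
from concurrent.futures import ThreadPoolExecutor

def train_and_score(hidden_layers, seed=0):
    """Stand-in for training one candidate ANN on an accelerator (e.g. an FPGA
    board reached over the network) and returning its validation error.
    Replaced here by a toy cost that favours small topologies."""
    random.seed(seed + sum(hidden_layers))
    complexity = sum(hidden_layers) / 100.0
    return complexity + random.uniform(0.0, 0.1)   # lower is better

def random_individual(max_layers=3, max_neurons=32):
    """An individual encodes only the hidden-layer sizes of an MLP."""
    return [random.randint(1, max_neurons)
            for _ in range(random.randint(1, max_layers))]

def crossover(a, b):
    cut = random.randint(0, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(ind, max_neurons=32, rate=0.2):
    return [random.randint(1, max_neurons) if random.random() < rate else n
            for n in ind]

def evolve(pop_size=16, generations=20, workers=4):
    population = [random_individual() for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:   # one worker per accelerator
        for gen in range(generations):
            # Evaluate the whole generation concurrently on the worker pool.
            scores = list(pool.map(train_and_score, population))
            ranked = sorted(zip(scores, population), key=lambda p: p[0])
            elite = [ind for _, ind in ranked[:pop_size // 2]]
            # Refill the population from the elite via crossover and mutation.
            population = elite + [
                mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(pop_size - len(elite))
            ]
            print(f"gen {gen:02d}  best error {ranked[0][0]:.4f}  topology {ranked[0][1]}")
    return ranked[0][1]

if __name__ == "__main__":
    best = evolve()
    print("best topology found:", best)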

