Towards a Classifier to Recognize Emotions Using Voice to Improve Recommendations

RiuNet: Repositorio Institucional de la Universidad Politécnica de Valencia

dc.contributor.author Fuentes-López, José Manuel es_ES
dc.contributor.author Taverner-Aparicio, Joaquín José es_ES
dc.contributor.author Rincón Arango, Jaime Andrés es_ES
dc.contributor.author Botti Navarro, Vicente Juan es_ES
dc.date.accessioned 2021-12-27T08:37:09Z
dc.date.available 2021-12-27T08:37:09Z
dc.date.issued 2020-10-09 es_ES
dc.identifier.isbn 978-3-030-51999-5 es_ES
dc.identifier.uri http://hdl.handle.net/10251/178898
dc.description.abstract [EN] Recognizing emotions from the tone of voice is currently a tool with high potential for making recommendations, since it makes it possible to personalize recommendations using the user's mood as information. However, recognizing emotions from the tone of voice is a complex task, since the signal must be pre-processed before the emotion can be recognized. Most current proposals use recurrent networks based on sequences with a temporal relationship. The disadvantage of these networks is their high runtime, which makes them difficult to use in real-time applications. Moreover, when defining this type of classifier, culture and language must be taken into account, since the tone of voice for the same emotion can vary depending on these cultural factors. In this work we propose a culturally adapted model for recognizing emotions from the tone of voice using convolutional neural networks. This type of network has a relatively short execution time, which allows its use in real-time applications. The results we obtained improve on the current state of the art, reaching a 93.6% success rate on the validation set. es_ES
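The abstract describes classifying emotions from the tone of voice with a convolutional network applied to a pre-processed audio signal. The sketch below illustrates that kind of pipeline only; it is not the authors' model. The log-mel-spectrogram front end, the four-emotion label set, and all layer sizes are assumptions for illustration, using librosa and PyTorch.

    # Minimal illustrative sketch (not the authors' architecture): audio is
    # converted to a fixed-size log-mel spectrogram and fed to a small 2D CNN.
    import numpy as np
    import librosa
    import torch
    import torch.nn as nn

    EMOTIONS = ["neutral", "happy", "sad", "angry"]  # hypothetical label set

    def audio_to_melspec(path, sr=16000, n_mels=64, max_frames=128):
        """Load a clip and return a log-mel spectrogram of shape (1, n_mels, max_frames)."""
        y, _ = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        logmel = librosa.power_to_db(mel, ref=np.max)
        # Pad or truncate along the time axis so every clip has the same shape.
        if logmel.shape[1] < max_frames:
            logmel = np.pad(logmel, ((0, 0), (0, max_frames - logmel.shape[1])))
        else:
            logmel = logmel[:, :max_frames]
        return torch.tensor(logmel, dtype=torch.float32).unsqueeze(0)

    class EmotionCNN(nn.Module):
        """Small 2D CNN over (mel bands x time frames); convolutions avoid the
        sequential processing cost that makes recurrent models slow at inference."""
        def __init__(self, n_classes=len(EMOTIONS)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.classifier = nn.Linear(32 * 4 * 4, n_classes)

        def forward(self, x):  # x: (batch, 1, n_mels, frames)
            h = self.features(x)
            return self.classifier(h.flatten(1))

    # Usage: logits = EmotionCNN()(audio_to_melspec("clip.wav").unsqueeze(0))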
dc.description.sponsorship This work is partially supported by the Spanish Government project TIN2017-89156-R, GVA-CEICE project PROMETEO/2018/002, Generalitat Valenciana and European Social Fund FPI grant ACIF/2017/085, Universitat Politecnica de Valencia research grant (PAID-10-19), and by the Spanish Government (RTI2018-095390-B-C31). es_ES
dc.language English es_ES
dc.publisher Springer es_ES
dc.relation.ispartof Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness. The PAAMS Collection es_ES
dc.relation.ispartofseries Communications in Computer and Information Science;1233 es_ES
dc.rights All rights reserved es_ES
dc.subject Emotion recognition es_ES
dc.subject Voice analysis es_ES
dc.subject Recommendation system es_ES
dc.subject.classification LENGUAJES Y SISTEMAS INFORMATICOS es_ES
dc.title Towards a Classifier to Recognize Emotions Using Voice to Improve Recommendations es_ES
dc.type Conference paper es_ES
dc.type Book chapter es_ES
dc.identifier.doi 10.1007/978-3-030-51999-5_18 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016/TIN2017-89156-R/ES/AGENTES INTELIGENTES PARA ASESORAR EN PRIVACIDAD EN REDES SOCIALES/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement///ACIF%2F2017%2F085//AYUDA PREDOCTORAL CONSELLERIA-TAVERNER APARICIO/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-095390-B-C31/ES/HACIA UNA MOVILIDAD INTELIGENTE Y SOSTENIBLE SOPORTADA POR SISTEMAS MULTI-AGENTES Y EDGE COMPUTING/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement///PROMETEO%2F2018%2F002//TECNOLOGIES PER ORGANITZACIONS HUMANES EMOCIONALS/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/UPV-VIN//PAID-10-19//Redes de sensores inteligentes en el entorno de las Smart Cities./ es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació es_ES
dc.description.bibliographicCitation Fuentes-López, JM.; Taverner-Aparicio, JJ.; Rincón Arango, JA.; Botti Navarro, VJ. (2020). Towards a Classifier to Recognize Emotions Using Voice to Improve Recommendations. Springer. 218-225. https://doi.org/10.1007/978-3-030-51999-5_18 es_ES
dc.description.accrualMethod S es_ES
dc.relation.conferencename 18th International Conference on Practical Applications of Agents and Multiagent Systems (PAAMS 2020). Workshops es_ES
dc.relation.conferencedate October 07-09, 2020 es_ES
dc.relation.conferenceplace L'Aquila, Italy es_ES
dc.relation.publisherversion https://doi.org/10.1007/978-3-030-51999-5_18 es_ES
dc.description.upvformatpinicio 218 es_ES
dc.description.upvformatpfin 225 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.relation.pasarela S\415772 es_ES
dc.contributor.funder European Social Fund es_ES
dc.description.references Balakrishnan, A., Rege, A.: Reading emotions from speech using deep neural networks. Technical report, Stanford University, Computer Science Department (2017) es_ES
dc.description.references Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997) es_ES
dc.description.references Kerkeni, L., Serrestou, Y., Mbarki, M., Raoof, K., Mahjoub, M.: Speech emotion recognition: methods and cases study, pp. 175–182 (2018) es_ES
dc.description.references McCluskey, K.W., Albas, D.C., Niemi, R.R., Cuevas, C., Ferrer, C.: Cross-cultural differences in the perception of the emotional content of speech: a study of the development of sensitivity in Canadian and Mexican children. Dev. Psychol. 11(5), 551 (1975) es_ES
dc.description.references Paliwal, K.K.: Spectral subband centroid features for speech recognition. In: Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing. ICASSP 1998 (Cat. No. 98CH36181), vol. 2, pp. 617–620. IEEE (1998) es_ES
dc.description.references Paulmann, S., Uskul, A.K.: Cross-cultural emotional prosody recognition: evidence from Chinese and British listeners. Cogn. Emot. 28(2), 230–244 (2014) es_ES
dc.description.references Pépiot, E.: Voice, speech and gender: male-female acoustic differences and cross-language variation in English and French speakers. Corela Cogn. Représent. Lang. (HS-16) (2015) es_ES
dc.description.references Picard, R.W., et al.: Affective computing. Perceptual Computing Section, Media Laboratory, Massachusetts Institute of Technology (1995) es_ES
dc.description.references Rincon, J., de la Prieta, F., Zanardini, D., Julian, V., Carrascosa, C.: Influencing over people with a social emotional model. Neurocomputing 231, 47–54 (2017) es_ES
dc.description.references Russell, J.A., Lewicka, M., Niit, T.: A cross-cultural study of a circumplex model of affect. J. Pers. Soc. Psychol. 57(5), 848 (1989) es_ES
dc.description.references Schuller, B., Rigoll, G., Lang, M.: Hidden Markov model-based speech emotion recognition, vol. 2, pp. 401–404 (2003) es_ES
dc.description.references Schuller, B., Villar, R., Rigoll, G., Lang, M.: Meta-classifiers in acoustic and linguistic feature fusion-based affect recognition, vol. 1, pp. 325–328 (2005) es_ES
dc.description.references Thompson, W., Balkwill, L.-L.: Decoding speech prosody in five languages. Semiotica 2006, 407–424 (2006) es_ES
dc.description.references Tyagi, V., Wellekens, C.: On desensitizing the Mel-cepstrum to spurious spectral components for robust speech recognition. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. ICASSP 2005, vol. 1, pp. I–529. IEEE (2005) es_ES
dc.description.references Ueda, M., Morishita, Y., Nakamura, T., Takata, N., Nakajima, S.: A recipe recommendation system that considers user’s mood. In: Proceedings of the 18th International Conference on Information Integration and Web-based Applications and Services, pp. 472–476. ACM (2016) es_ES
dc.description.references Zhang, B., Quan, C., Ren, F.: Study on CNN in the recognition of emotion in audio and images. In: 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), pp. 1–5, June 2016 es_ES

