Show simple item record
dc.contributor.author | Ward, Nigel G. | es_ES |
dc.contributor.author | Werner, Steven D. | es_ES |
dc.contributor.author | García-Granada, Fernando | es_ES |
dc.contributor.author | Sanchís Arnal, Emilio | es_ES |
dc.date.accessioned | 2016-06-20T15:25:49Z | |
dc.date.available | 2016-06-20T15:25:49Z | |
dc.date.issued | 2015-04 | |
dc.identifier.issn | 0167-6393 | |
dc.identifier.uri | http://hdl.handle.net/10251/66181 | |
dc.description.abstract | Search in audio archives is a challenging problem. Using prosodic information to help find relevant content has been proposed as a complement to word-based retrieval, but its utility has been an open question. We propose a new way to use prosodic information in search, based on a vector-space model, where each point in time maps to a point in a vector space whose dimensions are derived from numerous prosodic features of the local context. Point pairs that are close in this vector space are frequently similar, not only in terms of the dialog activities, but also in topic. Using proximity in this space as an indicator of similarity, we built support for a query-by-example function. Searchers were happy to use this function, and it provided value on a large testset. Prosody-based retrieval did not perform as well as word-based retrieval, but the two sources of information were often non-redundant and in combination they sometimes performed better than either separately. | es_ES |
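The abstract describes retrieval by proximity in a prosody-derived vector space: each point in time maps to a feature vector, and nearby points are treated as similar for query-by-example search. A minimal sketch of that query-by-example step, assuming Euclidean distance and hypothetical per-frame vectors (the paper derives its dimensions from numerous prosodic features of the local context, reduced with principal components analysis):

```python
import math

def query_by_example(features, query_idx, top_k=3):
    """Return the indices of the top_k frames closest to the query frame.

    `features` is a list of prosody-derived vectors, one per point in
    time (hypothetical values here; the paper builds its dimensions via
    PCA over local prosodic measurements). Proximity in this space is
    used as the indicator of similarity.
    """
    query = features[query_idx]
    # Rank every frame by Euclidean distance to the query frame.
    ranked = sorted(range(len(features)),
                    key=lambda i: math.dist(query, features[i]))
    # Drop the query frame itself and keep the top_k nearest.
    return [i for i in ranked if i != query_idx][:top_k]

# Toy example: five frames in a 2-D prosody space.
frames = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.0, 0.2), (5.1, 4.9)]
nearest = query_by_example(frames, query_idx=0, top_k=2)
```

Here frames 1 and 3 sit close to the query frame 0, so they are returned first; frames 2 and 4 form a distant cluster and are excluded.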
dc.description.sponsorship | We thank Martha Larson, Alejandro Vega, Steve Renals, Khiet Truong, Olac Fuentes, David Novick, Shreyas Karkhedkar, Luis F. Ramirez, Elizabeth E. Shriberg, Catharine Oertel, Louis-Philippe Morency, Tatsuya Kawahara, Mary Harper, and the anonymous reviewers. This work was supported in part by the National Science Foundation under Grants IIS-0914868 and IIS-1241434 and by the Spanish MEC under contract TIN2011-28169-C05-01. | en_EN |
dc.language | English | es_ES |
dc.publisher | Elsevier | es_ES |
dc.relation.ispartof | Speech Communication | es_ES |
dc.rights | Attribution - NonCommercial - NoDerivatives (by-nc-nd) | es_ES |
dc.subject | Search | es_ES |
dc.subject | Speech | es_ES |
dc.subject | Audio | es_ES |
dc.subject | Similarity judgments | es_ES |
dc.subject | Similarity metrics | es_ES |
dc.subject | Principal components analysis | es_ES |
dc.subject.classification | LENGUAJES Y SISTEMAS INFORMATICOS | es_ES |
dc.title | A prosody-based vector-space model of dialog activity for information retrieval | es_ES |
dc.type | Article | es_ES |
dc.identifier.doi | 10.1016/j.specom.2015.01.004 | |
dc.relation.projectID | info:eu-repo/grantAgreement/NSF//0914868/US/RI:Small Time-Based Language Modeling/ | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/NSF//1241434/US/UTEP Summer Program in Applied Intelligent Systems/ | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/MICINN//TIN2011-28169-C05-01/ES/TIMPANO-UPV: TECNOLOGIAS PARA LA INTERACCION CONVERSACIONAL COMPLEJA PERSONA-MAQUINA CON APRENDIZAJE DINAMICO/ | es_ES |
dc.rights.accessRights | Open access | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació | es_ES |
dc.description.bibliographicCitation | Ward, NG.; Werner, SD.; García-Granada, F.; Sanchís Arnal, E. (2015). A prosody-based vector-space model of dialog activity for information retrieval. Speech Communication. 68:85-96. https://doi.org/10.1016/j.specom.2015.01.004 | es_ES |
dc.description.accrualMethod | S | es_ES |
dc.relation.publisherversion | http://dx.doi.org/10.1016/j.specom.2015.01.004 | es_ES |
dc.description.upvformatpinicio | 85 | es_ES |
dc.description.upvformatpfin | 96 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.volume | 68 | es_ES |
dc.relation.senia | 292770 | es_ES |
dc.identifier.eissn | 1872-7182 | |
dc.contributor.funder | Ministerio de Ciencia e Innovación | es_ES |
dc.contributor.funder | National Science Foundation, USA | es_ES |