Show simple item record
dc.contributor.author | Stepanov, Evgeny A. | es_ES |
dc.contributor.author | Chowdhury, Shammur Absar | es_ES |
dc.contributor.author | Bayer, Ali Orkan | es_ES |
dc.contributor.author | Ghosh, Arindam | es_ES |
dc.contributor.author | Klasinas, Ioannis | es_ES |
dc.contributor.author | Calvo Lance, Marcos | es_ES |
dc.contributor.author | Sanchís Arnal, Emilio | es_ES |
dc.contributor.author | Riccardi, Giuseppe | es_ES |
dc.date.accessioned | 2020-07-07T03:33:14Z | |
dc.date.available | 2020-07-07T03:33:14Z | |
dc.date.issued | 2018-03 | es_ES |
dc.identifier.issn | 1574-020X | es_ES |
dc.identifier.uri | http://hdl.handle.net/10251/147536 | |
dc.description.abstract | [EN] Modern data-driven spoken language systems (SLS) require manual semantic annotation for training spoken language understanding parsers. Multilingual porting of SLS demands significant manual effort and language resources, as this manual annotation has to be replicated. Crowdsourcing is an accessible and cost-effective alternative to traditional methods of collecting and annotating data. The application of crowdsourcing to simple tasks has been well investigated. However, complex tasks, like cross-language semantic annotation transfer, may generate low judgment agreement and/or poor performance. The most serious issue in cross-language porting is the absence of reference annotations in the target language; thus, crowd quality control and the evaluation of the collected annotations are difficult. In this paper we investigate targeted crowdsourcing for semantic annotation transfer, which delegates to the crowd a complex task, such as the segmentation and labeling of concepts taken from a domain ontology, and evaluates it using source-language annotations. To test the applicability and effectiveness of the crowdsourced annotation transfer, we have considered the cases of a close and a distant language pair: Italian-Spanish and Italian-Greek. The corpora annotated via crowdsourcing are evaluated against source- and target-language expert annotations. We demonstrate that the two evaluation references (source and target) highly correlate with each other, thus drastically reducing the need for target-language reference annotations. | es_ES |
dc.description.sponsorship | This research is partially funded by the EU FP7 PortDial Project No. 296170, FP7 SpeDial Project No. 611396, and Spanish contract TIN2014-54288-C4-3-R. The work presented in this paper was carried out while the author was affiliated with Universitat Politecnica de Valencia. | es_ES |
dc.language | English | es_ES |
dc.publisher | Springer-Verlag | es_ES |
dc.relation.ispartof | Language Resources and Evaluation | es_ES |
dc.rights | All rights reserved | es_ES |
dc.subject | Crowdsourcing | es_ES |
dc.subject | Evaluation | es_ES |
dc.subject | Semantic annotation | es_ES |
dc.subject | Cross-language transfer | es_ES |
dc.subject.classification | LENGUAJES Y SISTEMAS INFORMATICOS | es_ES |
dc.title | Cross-language transfer of semantic annotation via targeted crowdsourcing: task design and evaluation | es_ES |
dc.type | Article | es_ES |
dc.identifier.doi | 10.1007/s10579-017-9396-5 | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/FP7/296170/EU/Language Resources for Portable Multilingual Spoken Dialogue Systems/ | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/MINECO//TIN2014-54288-C4-3-R/ES/PROCESADO DE AUDIO, HABLA Y LENGUAJE PARA ANALISIS DE INFORMACION MULTIMEDIA/ | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/FP7/611396/EU/Spoken Dialogue Analytics/ | es_ES |
dc.rights.accessRights | Closed | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació | es_ES |
dc.description.bibliographicCitation | Stepanov, EA.; Chowdhury, SA.; Bayer, AO.; Ghosh, A.; Klasinas, I.; Calvo Lance, M.; Sanchís Arnal, E.... (2018). Cross-language transfer of semantic annotation via targeted crowdsourcing: task design and evaluation. Language Resources and Evaluation. 52(1):341-364. https://doi.org/10.1007/s10579-017-9396-5 | es_ES |
dc.description.accrualMethod | S | es_ES |
dc.relation.publisherversion | http://doi.org/10.1007/s10579-017-9396-5 | es_ES |
dc.description.upvformatpinicio | 341 | es_ES |
dc.description.upvformatpfin | 364 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.volume | 52 | es_ES |
dc.description.issue | 1 | es_ES |
dc.relation.pasarela | S\361928 | es_ES |
dc.contributor.funder | Ministerio de Economía y Empresa | es_ES |