Peris Abril, Á.; Domingo-Ballester, M.; Casacuberta Nolla, F. (2017). Interactive neural machine translation. Computer Speech and Language. 1-20. https://doi.org/10.1016/j.csl.2016.12.003
Please use this identifier to cite or link to this item: http://hdl.handle.net/10251/83641
Title: Interactive neural machine translation
Author: Peris Abril, Álvaro; Domingo-Ballester, Miguel; Casacuberta Nolla, Francisco
UPV Unit: Universitat Politècnica de València. Escola Tècnica Superior d'Enginyeria Informàtica
Issued date:
Abstract:
Despite the promising results achieved in recent years by statistical machine translation, and more precisely by neural machine translation systems, this technology is still not error-free. The outputs of a machine translation system must be corrected by a human agent in a post-editing phase. Interactive protocols foster a human-computer collaboration in order to increase productivity. In this work, we integrate neural machine translation into the interactive machine translation framework. Moreover, we propose new interactivity protocols in order to provide the user with an enhanced experience and higher productivity. Results obtained over a simulated benchmark show that interactive neural systems can significantly outperform the classical phrase-based approach in an interactive-predictive machine translation scenario.
© 2016 Elsevier Ltd. All rights reserved.
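The sketch below is a rough illustration, not the authors' code, of the prefix-based interactive-predictive loop the abstract refers to: the system proposes a full translation, a simulated user corrects the first wrong word, and the decoder regenerates the suffix compatible with the validated prefix. The names simulate_session and decode_with_prefix are hypothetical; the latter stands in for an NMT decoder constrained to continue a given prefix.

from typing import Callable, List, Tuple

def simulate_session(
    source: List[str],
    reference: List[str],
    decode_with_prefix: Callable[[List[str], List[str]], List[str]],
) -> Tuple[List[str], int]:
    """Simulate one interactive-predictive session and count user corrections."""
    prefix: List[str] = []  # words already validated by the user
    corrections = 0
    while True:
        # The NMT system proposes a hypothesis that starts with the validated prefix.
        hypothesis = decode_with_prefix(source, prefix)
        if hypothesis == reference:
            return hypothesis, corrections
        # The simulated user locates the first word that differs from the
        # translation they have in mind (here played by the reference).
        i = 0
        while (i < len(hypothesis) and i < len(reference)
               and hypothesis[i] == reference[i]):
            i += 1
        # The user keeps the correct prefix and types the next word; the
        # system re-predicts everything after it in the next iteration.
        prefix = reference[: i + 1]
        corrections += 1
        if len(prefix) >= len(reference):
            # The user has typed the last word; the sentence is complete.
            return list(reference), corrections

In the simulated setting the reference plays the role of the user; this stub only counts whole-word corrections, whereas finer-grained effort measures (e.g. keystrokes) would require a character-level loop.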
Subjects: Neural machine translation, Interactive-predictive machine translation, Recurrent neural networks
Copyright: All rights reserved
Source: Computer Speech and Language (ISSN: 0885-2308)
DOI: 10.1016/j.csl.2016.12.003
Publisher: Elsevier
Publisher version: http://dx.doi.org/10.1016/j.csl.2016.12.003
Project ID: info:eu-repo/grantAgreement/GVA//PROMETEOII%2F2014%2F030/
Description:
This is the author's version of a work that was accepted for publication in Computer Speech & Language. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Computer Speech & Language 00 (2016) 1-20. DOI 10.1016/j.csl.2016.12.003.
Thanks:
The authors wish to thank the anonymous reviewers for their careful reading and in-depth criticisms and suggestions. This work was partially funded by the project ALMAMATER (PrometeoII/2014/030). We also acknowledge NVIDIA for the donation of the GPU used in this work.
Type: Article