
Siamese hierarchical attention networks for extractive summarization

RiuNet: Institutional Repository of the Universitat Politècnica de València



dc.contributor.author González-Barba, José Ángel es_ES
dc.contributor.author Segarra Soriano, Encarnación es_ES
dc.contributor.author García-Granada, Fernando es_ES
dc.contributor.author Sanchís Arnal, Emilio es_ES
dc.contributor.author Hurtado Oliver, Lluis Felip es_ES
dc.date.accessioned 2020-03-26T06:39:29Z
dc.date.available 2020-03-26T06:39:29Z
dc.date.issued 2019 es_ES
dc.identifier.issn 1064-1246 es_ES
dc.identifier.uri http://hdl.handle.net/10251/139459
dc.description.abstract [EN] In this paper, we present an extractive approach to document summarization based on Siamese Neural Networks. Specifically, we propose the use of Hierarchical Attention Networks to select the most relevant sentences of a text to compose its summary. We train Siamese Neural Networks using document-summary pairs to determine whether the summary is appropriate for the document or not. By means of a sentence-level attention mechanism, the most relevant sentences in the document can be identified. Hence, once the network is trained, it can be used to generate extractive summaries. The experimentation carried out using the CNN/DailyMail summarization corpus shows the adequacy of the proposal. In summary, we propose a novel end-to-end neural network to address extractive summarization as a binary classification problem, which obtains promising results in line with the state of the art on the CNN/DailyMail corpus. es_ES
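The extractive step described in the abstract, i.e. scoring sentences with a sentence-level attention mechanism and keeping the highest-weighted ones, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embeddings, dimensions, and attention context vectors below are toy random values standing in for learned parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(vectors, context):
    """Dot-product attention: weight each row of `vectors` against a context
    vector, returning the weights and the weighted-sum representation."""
    weights = softmax(vectors @ context)
    return weights, weights @ vectors

# Toy document: 3 sentences x 4 words, each word a 5-dim embedding
# (in the paper these would come from trained word embeddings).
rng = np.random.default_rng(0)
doc = rng.normal(size=(3, 4, 5))
word_ctx = rng.normal(size=5)   # word-level attention context (learned in practice)
sent_ctx = rng.normal(size=5)   # sentence-level attention context (learned in practice)

# Word-level attention collapses each sentence into one vector.
sent_vecs = np.stack([attend(words, word_ctx)[1] for words in doc])

# Sentence-level attention yields a document vector plus a relevance
# weight per sentence; these weights drive the extractive selection.
sent_weights, doc_vec = attend(sent_vecs, sent_ctx)

# Extractive summary: keep the top-weighted sentence(s).
summary_idx = np.argsort(sent_weights)[::-1][:1]
```

In the full model, `doc_vec` and an analogous summary encoding would feed the Siamese comparison that is trained on document-summary pairs; here only the attention-and-select mechanics are shown.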
dc.description.sponsorship This work has been partially supported by the Spanish MINECO and FEDER funds under project AMIC (TIN2017-85854-C4-2-R). The work of José Ángel González is also financed by the Universitat Politècnica de València under grant PAID-01-17. es_ES
dc.language English es_ES
dc.publisher IOS Press es_ES
dc.relation.ispartof Journal of Intelligent & Fuzzy Systems es_ES
dc.rights All rights reserved es_ES
dc.subject Siamese neural networks es_ES
dc.subject Hierarchical attention networks es_ES
dc.subject Automatic text summarization es_ES
dc.subject.classification LENGUAJES Y SISTEMAS INFORMATICOS es_ES
dc.title Siamese hierarchical attention networks for extractive summarization es_ES
dc.type Article es_ES
dc.identifier.doi 10.3233/JIFS-179011 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/UPV//PAID-01-17/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016/TIN2017-85854-C4-2-R/ES/AMIC-UPV: ANALISIS AFECTIVO DE INFORMACION MULTIMEDIA CON COMUNICACION INCLUSIVA Y NATURAL/ es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació es_ES
dc.description.bibliographicCitation González-Barba, JÁ.; Segarra Soriano, E.; García-Granada, F.; Sanchís Arnal, E.; Hurtado Oliver, LF. (2019). Siamese hierarchical attention networks for extractive summarization. Journal of Intelligent & Fuzzy Systems. 36(5):4599-4607. https://doi.org/10.3233/JIFS-179011 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion https://doi.org/10.3233/JIFS-179011 es_ES
dc.description.upvformatpinicio 4599 es_ES
dc.description.upvformatpfin 4607 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 36 es_ES
dc.description.issue 5 es_ES
dc.relation.pasarela S\388533 es_ES
dc.contributor.funder Agencia Estatal de Investigación es_ES
dc.contributor.funder Universitat Politècnica de València es_ES
dc.description.references N. Begum, M. Fattah, and F. Ren. Automatic text summarization using support vector machine, 5(7) (2009), 1987–1996. es_ES
dc.description.references J. Cheng and M. Lapata. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers, 2016. es_ES
dc.description.references K.M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend, CoRR, abs/1506.03340, 2015. es_ES
dc.description.references D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. es_ES
dc.description.references Lloret, E., & Palomar, M. (2011). Text summarisation in progress: a literature review. Artificial Intelligence Review, 37(1), 1-41. doi:10.1007/s10462-011-9216-z es_ES
dc.description.references Louis, A., & Nenkova, A. (2013). Automatically Assessing Machine Summary Content Without a Gold Standard. Computational Linguistics, 39(2), 267-300. doi:10.1162/coli_a_00123 es_ES
dc.description.references Miao, Y., & Blunsom, P. (2016). Language as a Latent Variable: Discrete Generative Models for Sentence Compression. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. doi:10.18653/v1/d16-1031 es_ES
dc.description.references R. Mihalcea and P. Tarau. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004. es_ES
dc.description.references T. Mikolov, K. Chen, G.S. Corrado, and J. Dean. Efficient estimation of word representations in vector space, CoRR, abs/1301.3781, 2013. es_ES
dc.description.references Minaee, S., & Liu, Z. (2017). Automatic question-answering using a deep similarity neural network. 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP). doi:10.1109/globalsip.2017.8309095 es_ES
dc.description.references R. Paulus, C. Xiong, and R. Socher. A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304, 2017. es_ES
dc.description.references Schuster, M., & Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11), 2673-2681. doi:10.1109/78.650093 es_ES
dc.description.references See, A., Liu, P. J., & Manning, C. D. (2017). Get To The Point: Summarization with Pointer-Generator Networks. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). doi:10.18653/v1/p17-1099 es_ES
dc.description.references Takase, S., Suzuki, J., Okazaki, N., Hirao, T., & Nagata, M. (2016). Neural Headline Generation on Abstract Meaning Representation. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. doi:10.18653/v1/d16-1112 es_ES
dc.description.references G. Tur and R. De Mori. Spoken language understanding: Systems for extracting semantic information from speech, John Wiley & Sons, 2011. es_ES

