
SceneFND: Multimodal Fake News Detection by Modeling Scene Context Information

RiuNet: Institutional Repository of the Universidad Politécnica de Valencia

dc.contributor.author Zhang, Guobiao es_ES
dc.contributor.author Giachanou, Anastasia es_ES
dc.contributor.author Rosso, Paolo es_ES
dc.date.accessioned 2023-07-13T18:02:25Z
dc.date.available 2023-07-13T18:02:25Z
dc.date.issued 2022-04 es_ES
dc.identifier.issn 0165-5515 es_ES
dc.identifier.uri http://hdl.handle.net/10251/194936
dc.description.abstract [EN] Fake news is a threat to society and can create a lot of confusion among people regarding what is true and what is not. Fake news usually contains manipulated content, such as text or images, that attracts the interest of readers with the aim of convincing them of its truthfulness. In this article, we propose SceneFND (Scene Fake News Detection), a system that combines textual, contextual scene and visual representations to address the problem of multimodal fake news detection. The textual representation is based on word embeddings that are passed into a bidirectional long short-term memory network. Both the contextual scene and the visual representations are based on the images contained in the news post. The place, weather and season scenes are extracted from the image. Our statistical analysis on the scenes showed that there are statistically significant differences regarding their frequency in fake and real news. In addition, our experimental results on two real-world datasets show that the integration of the contextual scenes is effective for fake news detection. In particular, SceneFND improved the performance of the textual baseline by 3.48% on the PolitiFact dataset and by 3.32% on the GossipCop dataset. Finally, we show the suitability of the scene information for the task and present some examples to explain its effectiveness in capturing the relevance between images and text. (A minimal sketch of this multimodal fusion follows the record below.) es_ES
dc.description.sponsorship The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: The work of Anastasia Giachanou is funded by the Dutch Research Council (grant VI.Vidi.195.152). The work of Paolo Rosso was in the framework of the Iberian Digital Media Research and Fact-Checking Hub (IBERIFIER) funded by the European Digital Media Observatory (2020-EU-IA0252), and of the XAI-DisInfodemics research project on eXplainable AI for disinformation and conspiracy detection during infodemics, funded by the Spanish Ministry of Science and Innovation (PLEC2021-007681). es_ES
dc.language English es_ES
dc.publisher SAGE Publications es_ES
dc.relation.ispartof Journal of Information Science es_ES
dc.rights All rights reserved es_ES
dc.subject Fake news detection es_ES
dc.subject Multimodal feature fusion es_ES
dc.subject Social media es_ES
dc.subject Visual scene information es_ES
dc.subject.classification LENGUAJES Y SISTEMAS INFORMATICOS es_ES
dc.title SceneFND: Multimodal Fake News Detection by Modeling Scene Context Information es_ES
dc.type Article es_ES
dc.identifier.doi 10.1177/01655515221087683 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/ //INEA%2FCEF%2FICT%2FA2020%2F2381931//Iberian Digital Media Research and Fact-Checking Hub/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/AEI//PLEC2021-007681//IA EXPLICABLE PARA DESINFORMACIÓN Y DETECCIÓN DE CONSPIRACIÓN DURANTE INFODEMIAS (XAI-DISINFODEMICS)/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/NWO//VI.Vidi.195.152/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/EC//2020-EU-IA0252/ es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Escola Tècnica Superior d'Enginyeria Informàtica es_ES
dc.description.bibliographicCitation Zhang, G.; Giachanou, A.; Rosso, P. (2022). SceneFND: Multimodal Fake News Detection by Modeling Scene Context Information. Journal of Information Science. 1-13. https://doi.org/10.1177/01655515221087683 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion https://doi.org/10.1177/01655515221087683 es_ES
dc.description.upvformatpinicio 1 es_ES
dc.description.upvformatpfin 13 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.relation.pasarela S\488735 es_ES
dc.contributor.funder European Commission es_ES
dc.contributor.funder UNIVERSIDAD DE NAVARRA es_ES
dc.contributor.funder AGENCIA ESTATAL DE INVESTIGACION es_ES
dc.contributor.funder Netherlands Organization for Scientific Research es_ES
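
The abstract above describes the model at a high level: word embeddings fed into a bidirectional LSTM for the text, plus scene (place, weather, season) and visual features extracted from the post image, fused for fake/real classification. The sketch below illustrates one way such a late-fusion model could be wired up in PyTorch. It is not the authors' implementation: the class name SceneFNDSketch, all layer sizes, the mean pooling over the BiLSTM output, and the feature dimensions (e.g. 365 place categories, 2 weather and 4 season scores, a 2048-dimensional image feature) are assumptions made only for illustration.

# Minimal, hypothetical sketch of the fusion described in the abstract.
# All dimensions and names below are assumptions, not the authors' code.
import torch
import torch.nn as nn

class SceneFNDSketch(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, hidden=128,
                 scene_dim=365 + 2 + 4,   # assumed: place + weather + season scene scores
                 visual_dim=2048):        # assumed: e.g. a CNN image feature vector
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden + scene_dim + visual_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 2),            # fake vs. real
        )

    def forward(self, token_ids, scene_feats, visual_feats):
        emb = self.embedding(token_ids)   # (B, T, emb_dim)
        out, _ = self.bilstm(emb)         # (B, T, 2 * hidden)
        text_repr = out.mean(dim=1)       # simple pooling over time steps
        fused = torch.cat([text_repr, scene_feats, visual_feats], dim=-1)
        return self.classifier(fused)

# Usage with random tensors, just to show the expected shapes.
model = SceneFNDSketch()
logits = model(torch.randint(0, 20000, (4, 32)),
               torch.rand(4, 371), torch.rand(4, 2048))
print(logits.shape)  # torch.Size([4, 2])

The only design choice taken from the abstract is the late concatenation of the textual BiLSTM representation with the scene and visual vectors; how the scene and visual features are actually extracted and weighted in SceneFND is described in the article itself.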

