@ AAAI 2024

Third Workshop on Multimodal Fact Checking and Hate Speech Detection
February, 2024


Combating fake news is one of the most pressing societal challenges. It is difficult to expose false claims before they cause significant damage. Automatic fact/claim verification has recently become a topic of interest among diverse research communities. Research efforts and datasets exist for text-based fact verification, but multimodal or cross-modal fact verification has received far less attention. This workshop encourages researchers from interdisciplinary domains working on multimodality and/or fact checking to come together and work on multimodal (images, memes, videos) fact checking. At the same time, multimodal hate speech detection is an important but under-explored problem. Lastly, learning joint modalities is of interest to both the Natural Language Processing (NLP) and Computer Vision (CV) communities.

During the last decade, both fields of study, NLP and CV, have made significant progress thanks to the success stories of neural networks. Multimodal tasks such as visual question answering (VQA), image captioning, video captioning, and caption-based image retrieval have moved into the spotlight in both NLP and CV forums. Multimodality is the next big leap for the AI community. De-Factify is a dedicated forum to discuss challenges related to multimodal fake news and hate speech. We also encourage discussion on multimodal tasks in general.

Link to previous year's workshop : Defactify @ AAAI 2023

★ Shared Tasks Released

  • System description paper : All teams/participants will be invited to submit a paper describing their system. Accepted papers will be published in formal proceedings.
  • Paper submission instructions : TBD
  1. Fact Checking:

    Social media as a channel for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid circulation of information lead people to consume news from social media. On the other hand, it enables the wide spread of fake news, i.e., low-quality news containing false information. Fake news affects everyone, including government, media, individuals, health, law and order, and the economy. Therefore, fake news detection on social media has recently become an appealing research topic. We encourage solutions such as automated fact checking at scale and early detection of fake news.

    • An image purportedly shows US President Donald Trump in his younger days shaking hands with global terrorist Osama Bin Laden. It went viral during the 2020 US presidential election. The picture also has a quote superimposed on it, attributed to Trump, praising Bin Laden.
    • "US President Joe Biden has announced that Americans who have not taken Covid vaccines will be put in quarantine camps and detained indefinitely till they take their shots." - this is absolutely a false claim.
    • A morphed picture of Prime Minister Narendra Modi went viral on social platforms such as Facebook and WhatsApp. On July 11, Narendra Modi was in Turkmenistan, where he visited the Mausoleum of the First President of Turkmenistan in Ashgabat. A picture taken during the visit shows Modi standing with other religious and political leaders of Turkmenistan. While those leaders are seen raising their hands for dua (the Islamic way of prayer), Modi stands with folded hands. A morphed version of this picture is being shared on social media.
    • Several people claimed on their social media accounts that the CEO of Pfizer had to cancel a planned trip to Israel because he was not fully vaccinated. - the claim is not true.
  2. DE:HATE

    We present a pioneering challenge focused on the automatic blurring of offensive segments within a hateful image. Given multimodal content (comprising both text and image) infused with hate, the objective is to pinpoint the hateful portions of the image and apply blurring techniques to obfuscate them, thereby mitigating the antagonism conveyed by the media.
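The shared task's actual pipeline is not specified here, but the blurring step it describes can be sketched as follows. This is a minimal illustration, assuming a binary mask of the hateful region is already available (e.g., produced by a segmentation model); localizing the hateful region is the hard, task-specific part and is not shown.

```python
# Minimal sketch: blur only the masked region of an image (numpy only).
# The mask is assumed given; producing it is the actual challenge.
import numpy as np

def box_blur(arr: np.ndarray, k: int = 5) -> np.ndarray:
    """Naive k x k box blur over an H x W x C uint8 image."""
    pad = k // 2
    padded = np.pad(arr.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(arr.shape, dtype=np.float64)
    h, w = arr.shape[:2]
    for dy in range(k):            # accumulate the k x k neighborhood
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return (out / (k * k)).astype(arr.dtype)

def blur_masked_region(arr: np.ndarray, mask: np.ndarray, k: int = 5) -> np.ndarray:
    """Blur only the pixels where mask is True; leave the rest untouched."""
    blurred = box_blur(arr, k)
    return np.where(mask[..., None], blurred, arr)  # broadcast mask over channels

# Toy usage: blur the top-left quadrant of a striped test image.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[::2] = 255                        # horizontal stripes
mask = np.zeros((64, 64), dtype=bool)
mask[:32, :32] = True                 # region flagged as hateful (hypothetical)
result = blur_masked_region(img, mask)
```

In practice a segmentation model would supply the mask, and a stronger blur (e.g., Gaussian) or pixelation could replace the box filter; the masking logic stays the same.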



Topics of Interest

This workshop is a forum to draw attention to collecting, measuring, managing, mining, and understanding multimodal disinformation, misinformation, and malinformation data from social media. It covers (but is not limited to) the following topics:

  • Development of corpora and annotation guidelines for multimodal fact checking.
  • Computational models for multimodal fact checking.
  • Development of corpora and annotation guidelines for multimodal hate speech detection and classification.
  • Computational models for multimodal hate speech detection and classification.
  • Analysis of the diffusion of multimodal fake news and hate speech in social networks.
  • Understanding the impact of hate content on specific (e.g., targeted) groups.
  • Fake news and hate speech detection in low-resource languages.
  • Hate speech normalization.
  • Case studies and/or surveys related to multimodal fake news or hate speech.
  • Analyzing the behavior and psychology of multimodal hate speech/fake news propagators.
  • Real-world/applied tool development for multimodal hate speech/fake news detection.
  • Early detection of multimodal fake news/hate speech.
  • Use of modalities other than text and images (e.g., audio, video).
  • Evolution of multimodal fake news and hate speech.
  • Information extraction, ontology design, and knowledge graphs for multimodal hate speech and fake news.
  • Cross-lingual, code-mixed, and code-switched multimodal fake news/hate speech analysis.
  • Computational social science.

Submission Instructions:

  • Long papers: Novel, unpublished, high-quality research papers. 10 pages excluding references.
  • Short papers: 5 pages excluding references.
  • Previously rejected papers: You may attach the reviews of a previously rejected paper (e.g., AAAI, NeurIPS) along with a 1-page cover letter explaining the changes made.
  • Extended abstracts: 2 pages excluding references. Non-archival; may be previously published papers or work in progress.
  • All papers must be submitted via our EasyChair submission page.
  • Regular papers will go through a double-blind peer-review process. Extended abstracts may be either single blind (i.e., reviewers are blind, authors have names on submission) or double blind (i.e., authors and reviewers are blind). Only manuscripts in PDF or Microsoft Word format will be accepted.
  • Paper template: http://ceur-ws.org/Vol-XXX/CEURART.zip or https://www.overleaf.com/read/gwhxnqcghhdt

Paper Submission Link : EasyChair

Important Dates :

  • 10 January 2024: Papers due at 11:59 PM UTC-12
  • 20 January 2024: Notification of papers
  • 25 January 2024: Camera ready submission due of accepted papers at 11:59 PM UTC-12

Shared tasks

Factify5WQA - Please visit this link for details.

DE:HATE - Please visit this link for details.

Shared Task Important Dates:

  • 13 October 2023 : Release of the training set.
  • 8 November 2023 : Release of the test set.
  • 30 November 2023 : Deadline for submitting the final results.
  • 12 December 2023 : Announcement of the results.
  • 05 January 2024 : System paper submission deadline (All teams are invited to submit a paper).
  • 20 January 2024 : Notification of system papers.
  • 25 January 2024 : Camera ready submission.

Accepted Papers

Workshop Regular Papers

  • Bias Detection in Text with Dual Transformer Classifier. Shaina Raza, Shardul Ghuge, Fatemeh Tavakoli, Sana Ayromlou and Syed Raza Bashir

  • Ontology Enhanced Claim Detection. Z. Melce Hüsünbeyi and Tatjana Scheffler

  • Detecting and Correcting Hate Speech in Multimodal Memes with Large Visual Language Model. Minh-Hao Van and Xintao Wu

Factify 5WQA Shared Task Papers

  • Team Trifecta at Factify5WQA: Setting the Standard in Fact Verification with Fine-Tuning. Shang-Hsuan Chiang, Ming-Chih Lo, Lin-Wei Chao and Wen-Chih Peng

  • SRLFactQA at Factify5WQA: Composite Claim-Evidence Consistency Aware Semantic Role Labelling based Question-Answering Entailment. Hariram Veeramani, Surendrabikram Thapa, Rajaraman Kanagasabai and Usman Naseem

De:Hate Shared Task Papers

  • UniteToModerate at DeHate: The Winning Approach for Segmentation-based Content Moderation with Vision-Text-Mask Modality Fused Large Multimodal Models. Hariram Veeramani, Surendrabikram Thapa, Rajaraman Kanagasabai and Usman Naseem


Dr. Amitava Das:

Dr. Amitava Das is a Core Faculty member and Research Associate Professor at the Artificial Intelligence Institute, University of South Carolina, and an Advisory Scientist to Wipro AI.

Research interests : Code-Mixing and Social Computing.

Organizing Activities [selective] : • Memotion @SemEval2020 • SentiMix @SemEval2020 • Computational Approaches to Linguistic Code-Switching @LREC 2020 • CONSTRAINT @AAAI2021

Dr. Amit Sheth:

Dr. Amit Sheth is the founding Director of the Artificial Intelligence Institute and a CSE Professor at the University of South Carolina.

Research interests : Knowledge Graph, NLP, Analysing Social Media

Organizing Activities [selective] : • Cysoc2021 @ ICWSM2021 • Emoji2021 @ICWSM2021 • KiLKGC 2021 @KGC21

Dr. Asif Ekbal:

Dr. Asif Ekbal is an Associate Professor of CSE at IIT Patna, India.

Research interests : NLP, Code-Mixing and Social Computing.

Organizing Activities [selective] : • CONSTRAINT @AAAI2021

Aman Chadha

Aman Chadha is an Applied Science Manager at Amazon Alexa AI and a Researcher at Stanford AI.

Research interests : Multimodal AI, On-device AI, and Human-Centered AI.


Parth Patwa
University of California Los Angeles (UCLA)

Megha Chakraborty

Suryavardan Suresh
New York University

Anku Rani
University of South Carolina, USA


Jinendra Malekar