Call for Papers

In this workshop, we aim to bring together a multidisciplinary, diverse team of researchers, journalists, and stakeholders by focusing on the broader topic of harmful content. We welcome research contributions related to (but not limited to) the following topics:
  • Studying different types of harmful content
  • Computational fact-checking & Misinformation Detection
  • Role of Generative AI in Mitigating Harmful Content
  • Harassment, Bullying, and Hate Speech Detection
  • Explainable AI for Harmful Content Analysis
  • Multimodal and Multilingual Harmful Content Detection (e.g., fake news, spam, and troll detection)
  • Deepfake and Synthetic Media
  • Ethical & Societal Implications of AI in Content Moderation
  • Both Qualitative and Quantitative studies on harmful content
  • Psychological effects of harmful content, such as impacts on mental health
  • Approaches for data collection and annotation of harmful content using large multimodal models
  • User studies on the effects of harmful content on people
Submission Guidelines: Submissions must be written in English, in ACM MM (double-column) format, and must adhere to the ACM MM submission policies. The recommended setting for LaTeX is:
\documentclass[sigconf, screen, review, anonymous]{acmart}.
We welcome researchers, practitioners, and industry professionals to submit 6-8 page papers detailing original work, innovative ideas, or case studies in line with any of the workshop topics. All submissions will undergo peer review, and accepted papers will be invited for oral presentation and included in the workshop proceedings. Each submission must be a single PDF file, limited to 6-8 content pages, excluding references.
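For orientation, a minimal skeleton using the recommended acmart setting could look as follows; the title, placeholder author block, and bibliography file name (references.bib) are illustrative only, not prescribed by the workshop:

  % Minimal sketch of an anonymous submission; placeholders are illustrative only
  \documentclass[sigconf, screen, review, anonymous]{acmart}

  \begin{document}

  \title{Your Paper Title}
  % The "anonymous" option suppresses author details in the compiled PDF;
  % the placeholders below simply keep the file compilable.
  \author{Anonymous Author(s)}
  \affiliation{%
    \institution{Anonymous Institution}
    \city{City}
    \country{Country}}

  \begin{abstract}
    Abstract text goes here.
  \end{abstract}

  \maketitle

  \section{Introduction}
  Paper content, limited to 6-8 content pages, excluding references.

  \bibliographystyle{ACM-Reference-Format}
  \bibliography{references} % assumes a references.bib file

  \end{document}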
Authorship. Papers will be double-blind peer-reviewed by at least two reviewers. Please remove author names, affiliations, email addresses, etc. from the paper, and remove personal acknowledgments.
Reviewing. Submissions will be peer-reviewed by members of the Resource Committee based on originality, significance, quality, and clarity. The review process will be double-blind.
Presentation. All accepted papers will be presented as oral presentations, and some may be selected for posters depending on schedule constraints.
Instructions for Camera-Ready Papers: Your camera-ready submission must follow the same guidelines as the main conference of ACM MM 2025. In addition, authors are required to include a proper classification of the paper according to the ACM Computing Classification System (CCS) and keywords. You should submit a single ZIP file containing all your source files (e.g., *.tex, *.bib, *.sty, and all figures) for LaTeX, or a .docx file for Word users.
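As an illustration, CCS concepts and keywords are specified in acmart before \maketitle; the concept entry and keywords below are placeholders to be replaced with the output of the ACM CCS tool at https://dl.acm.org/ccs:

  % Placeholder CCS concepts and keywords; generate the real entries with the ACM CCS tool
  \begin{CCSXML}
  <ccs2012>
    <concept>
      <concept_id>REPLACE-WITH-CONCEPT-ID</concept_id>
      <concept_desc>Replace~With your concept path</concept_desc>
      <concept_significance>500</concept_significance>
    </concept>
  </ccs2012>
  \end{CCSXML}
  \ccsdesc[500]{Replace~With your concept path}

  \keywords{harmful content, misinformation detection, content moderation}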
We will share your information with ACM for the proceedings, and they will contact you (the corresponding author) for the source files and the copyright form.
Proceedings. Accepted papers will be presented at the workshop and published in the official ACM workshop proceedings within ACM Multimedia 2025.

Submission Link: https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/DHOW
Submission Due: July 18, 2025, Anywhere on Earth (AoE)


Accepted Papers

  • A Perturbation-Theoretic Model for Fact-Checker Deployment in Dynamic Disinformation Networks
    Spyridon Evangelatos, Mariza Konidi, Eleni Veroni, Karagiorgou Sophia, Christos D. Nikolopoulos
  • Cross-modal Consistency Reasoning with Large Language Models for Short Video-based Fake news Detection
    Qingyan Wang, Lianwei Wu, Yuanxia Zeng, Linyong Wang, wangkang, Yaxiong Wang, Chao Gao
  • Dehumanising Language as a Tool of Ingroup Cohesion – Case of Wartime Dehumanisation in Russian and Ukrainian Telegram
    Elizaveta Chernenko
  • Temperature Matters: Enhancing Watermark Robustness Against Paraphrasing Attacks
    Badr Youbi Idrissi, Monica Millunzi, Amelia Sorrenti, Lorenzo Baraldi, Daryna Dementieva
  • Specializing general-purpose LLM embeddings for implicit hate speech detection across datasets
    Vassiliy Cheremetiev, Quang Long Ho Ngo, Chau Ying Kot, Alina Elena Baia, Andrea Cavallaro
  • Improving Generalization in Deepfake Detection with Face Foundation Models and Metric Learning
    Stelios Mylonas, Symeon Papadopoulos
  • "Humor, Art, or Misinformation?": A Multimodal Dataset for Intent-Aware Synthetic Image Detection
    Anastasios Skoularikis, Stefanos-Iordanis Papadopoulos, Symeon Papadopoulos, Panagiotis C. Petrantonakis
  • Seeing Isn't Believing: Addressing the Societal Impact of Deepfakes in Low-Tech Environments
    Azmine Toushik Wasi, Rahatun Nesa Priti, Mahir Absar Khan, Abdur Rahman, Mst Rafia Islam
  • Beyond Text: Leveraging Vision-Language Models for Misinformation Detection
    Parminder Kaur Grewal, Marina Ernst, Frank Hopfgartner
  • Caste-Based Hate Speech Detection in Low-resource Hindi Language
    Sakshi Gupta, Shunmuga Priya Muthusamy Chinnan, Saranya Rajiakodi, Ratnavel Rajalakshmi, Rahul Ponnusamy, Bharathi Raja Chakravarthi
  • Integrating Semantic, Sentiment, and Object-Level Cues for Multimodal Video-Based Fake News Detection
    Ruby Ham, Yuchen Zhang, Zeyu Fu
  • Global Claims: A Multilingual Dataset of Fact-Checked Claims with Veracity, Topic, and Salience Annotations
    Ana Vranić, José M. Reis, Íris Damião, Paulo David Dias Almeida, Joana G Sa

Registration

Register for DHOW Workshop 2025 through the ACM MM 2025 registration portal.