With the advancement of digital technologies and devices, online content has become easily accessible, and harmful content spreads alongside it. Such content appears on many platforms and in multiple languages. The topic of harmful content is broad and spans several research directions, yet from the user's perspective all of them cause harm. Research has often addressed each phenomenon in isolation, such as misinformation or hate speech, and has typically focused on a single platform, a single language, and a particular issue. This allows spreaders of harmful content to switch platforms and languages to keep reaching their audience. Harmful content is not limited to social media; it also circulates in news media, and spreaders share it through posts, news articles, comments, and hyperlinks. There is therefore a need to study harmful content by combining cross-platform, cross-lingual, and multimodal data across topics.
We aim to bring research on harmful content under one umbrella so that work on different topics (hate speech, misinformation, disinformation, self-harm, offensive content, etc.) can contribute novel methods and recommendations for users, leveraging text analysis together with image, audio, and video recognition to detect harmful content in diverse formats. The workshop will also cover ongoing issues such as wars and the elections taking place in 2024.
We believe this workshop will provide a unique opportunity for researchers and practitioners to exchange ideas, share the latest developments, and collaborate on addressing the challenges associated with harmful content spread across the Web. We expect the workshop to generate insights and discussions that help advance the field of societal artificial intelligence (AI) for the development of a safer internet. In addition to attracting high-quality research contributions, one of the aims of the workshop is to mobilise researchers working in related areas to form a community.
Register for DHOW Workshop 2024 through the registration portal.
Session 1 (Chair: Haiming Liu)
09:00 - 09:15 | Opening Remarks
09:15 - 10:15 | Keynote #1 by Stefano Cresci: From Detection to Intervention: Insights into Harmful Content Diffusion and Content Moderation Practices
10:15 - 10:35 | Long Paper #1: Sexism Detection on a Data Diet
10:35 - 11:00 | Coffee Break

Session 2 (Chair: Gautam Kishore Shahi)
11:00 - 11:20 | Long Paper #2: Towards Safer Online Spaces: Deep Learning for Hate Speech Detection in Code-Mixed Social Media Conversations (virtual)
11:20 - 11:40 | Long Paper #3: Towards a crowdsourced framework for online hate speech moderation - a case study in the Indian political scenario (virtual)
11:40 - 12:00 | Long Paper #4: COVID-19 Fake News: A Systematic Literature Review using "SmartLitReview" (virtual)
12:00 - 12:15 | Short Paper #1: Understanding Influence Operations via Images
12:15 - 12:30 | Short Paper #2: What we can learn from TikTok through its Research API
12:30 - 14:00 | Lunch

Session 3 (Chair: Thomas Mandl)
14:00 - 15:00 | Keynote #2 by Ralph Ewerth: The Challenge of Multimodality in Harmful Content Detection: Analyzing Cross-modal Consistency and Multimodal Narrative Patterns
15:00 - 15:20 | Long Paper #5: Great Ban: Efficacy and Unintended Consequences of a Massive Deplatforming Operation on Reddit
15:20 - 15:35 | Short Paper #3: On the Moral Intuitions of Fake News Spreaders (virtual)
15:35 - 16:00 | Coffee Break

Session 4 (Chair: Durgesh Nandini)
16:00 - 17:00 | Keynote #3 by Richard Rogers: Machinic critique: A history of algorithmic auditing
17:00 - 17:15 | Closing
Stefano Cresci
Institute of Informatics and Telematics of the National Research Council, Italy

Richard Rogers
University of Amsterdam, Netherlands

Ralph Ewerth
TIB - Leibniz Information Centre for Science and Technology, Leibniz Universität Hannover, Germany