@@ -13,7 +13,7 @@ blurb: "Missing Pieces and Misinformation: Identifying social media posts with i
 <!-- # please respect the structure below-->
 *See the [MediaEval 2026 webpage](https://multimediaeval.github.io/editions/2026/) for information on how to register and participate.*
 
-## Task Description
+#### Task Description
 
 The goal of this task is to develop AI models that are capable of detecting implicit arguments
 (enthymemes) in tweets. The dataset contains tweets and their annotations, and also includes
@@ -55,9 +55,8 @@ understanding of implicit argumentation, but do not necessarily present complete
 Example questions for "Quest for Insight" papers include: How do different annotators interpret
 implicit premises? What linguistic features best signal the presence of enthymemes?
 
----
 
-## Motivation and Background
+#### Motivation and Background
 
 Enthymemes—arguments with missing components (premises or conclusions)—represent a fundamental
 challenge in understanding persuasive discourse and argumentation. These implicit arguments are
@@ -94,9 +93,8 @@ variation as signal rather than noise. This resource builds on an existing dataset
 investigating enthymemes in controversial political discourse, enabling research into how discourse
 characteristics of enthymemes can improve their detection with NLP methods.
 
----
 
-## Target Group
+#### Target Group
 
 This task will appeal to anyone interested in text analysis. We expect it to attract
 people working in areas such as natural language processing, argument mining, computational
@@ -110,9 +108,8 @@ persuasion, shapes political discourse, and affects the processes by which audie
 reason about controversial topics. Explicit structural modeling, linguistic
 feature-based approaches, and even rule-based systems are encouraged.
 
----
 
-## Data
+#### Data
 
 The dataset consists of tweets that have been annotated by multiple annotators who judged whether
 or not the tweet contains an enthymeme. For each enthymeme, the annotators also propose a
@@ -136,9 +133,8 @@ The data will be released in three parts:
 > ⚠️ Participants should be aware that the data contains language hurtful towards immigrants and
 > should be prepared for this when reading the data.
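Since the release format of the annotated tweets is not spelled out here, the following is a minimal sketch of what one multi-annotator record and a simple label aggregation could look like. All field and label names are hypothetical, not taken from the task description.

```python
# Hypothetical record shape for one annotated tweet. The actual release
# format and field names are assumptions for illustration only.
from collections import Counter

record = {
    "tweet_id": "12345",                    # placeholder identifier
    "text": "Example tweet text.",          # placeholder tweet
    "annotations": [                        # one entry per annotator
        {"label": "enthymeme", "reconstruction": "Implicit premise A."},
        {"label": "enthymeme", "reconstruction": "Implicit premise B."},
        {"label": "no-enthymeme", "reconstruction": None},
    ],
}

# Majority vote over annotator labels; ties fall to the first-counted label.
labels = [a["label"] for a in record["annotations"]]
majority_label, votes = Counter(labels).most_common(1)[0]
print(majority_label, votes)  # -> enthymeme 2
```

Keeping every annotator's label and reconstruction, rather than a single adjudicated one, is what allows label variation to be studied as signal.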
 
----
 
-## Evaluation Methodology
+#### Evaluation Methodology
 
 **Task 1:** Since this is a label prediction task, we will evaluate using F1 with respect to the
 presence or absence of enthymemes. Three labels are considered in the basic setting:
@@ -149,9 +145,8 @@ used to compare the reconstructions provided by the annotators with the proposit
 the participants. Second, a subset of the test set will be sampled and evaluated manually by
 experienced human annotators.
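As a rough illustration of the Task 1 scoring, here is a minimal per-label F1 computation with macro averaging. The label names and the averaging mode are illustrative assumptions; the task's official label set and averaging may differ.

```python
# Sketch of label-level F1 scoring for Task 1 (assumed macro averaging,
# placeholder label names).

def f1_for_label(gold, pred, label):
    """F1 for a single label, treating it as the positive class."""
    tp = sum(g == label and p == label for g, p in zip(gold, pred))
    fp = sum(g != label and p == label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def macro_f1(gold, pred, labels):
    """Unweighted mean of per-label F1 scores."""
    return sum(f1_for_label(gold, pred, lab) for lab in labels) / len(labels)

# Toy run with hypothetical labels.
LABELS = ["enthymeme", "no-enthymeme", "unclear"]
gold = ["enthymeme", "enthymeme", "no-enthymeme", "unclear"]
pred = ["enthymeme", "no-enthymeme", "no-enthymeme", "unclear"]
print(round(macro_f1(gold, pred, LABELS), 4))  # -> 0.7778
```

Macro averaging weights all three labels equally regardless of how often they occur, which matters if one label dominates the data.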
 
----
 
-## Quest for Insight
+#### Quest for Insight
 
 - What systematic patterns emerge in label variation across easy-medium-hard cases, and do they
   reveal distinct interpretative frameworks?
@@ -167,18 +162,16 @@ experienced human annotators.
 - What is the most effective way to leverage annotator reconstructions to evaluate implicit
   proposition generation performance?
 
----
 
-## Task Organizers
+#### Task Organizers
 
 - **Martial Pastor**, Radboud University — martial.pastor@ru.nl
 - **Nelleke Oostdijk**, Radboud University — nelleke.oostdijk@ru.nl
 
 *Data will be made available as of the 1st of March.*
 
----
 
-## References
+#### References
 
 [1] Aroyo, L., & Welty, C. (2015). Truth is a lie: Crowd truth and the seven myths of human annotation. *AI Magazine, 36*(1), 15–24.
 