Registration to the website is open. After registering, you gain access to the data and can submit your predictions on the development and test sets. The proceedings, including the lab overview and the descriptions of the participating systems, are now available.

We offer a shared task on the detection of persuasion techniques in multilingual online news. This task is part of the CheckThat! Lab 2024 edition and is a follow-up to SemEval 2023 Task 3, to which it adds several new elements.

The participants are provided with training data in various languages (see table below), i.e., news articles annotated with persuasion techniques. The task consists of building models capable of detecting 23 persuasion techniques at the text-span level (see image below) in news in English and in four new languages: Portuguese, Slovenian, Bulgarian, and Arabic.

Example of annotation
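
To make the span-level output concrete, here is a minimal Python sketch of how the annotations for one article might be represented. The exact file format is specified in the readme on your team page; the field names and example labels below are illustrative assumptions, not the official schema.

    from dataclasses import dataclass

    @dataclass
    class SpanPrediction:
        """One detected persuasion technique inside an article.

        Field names and labels are illustrative assumptions; the
        authoritative format is defined in the task readme.
        """
        article_id: str
        start: int      # character offset where the span begins
        end: int        # character offset where the span ends (exclusive)
        technique: str  # one of the 23 fine-grained technique labels

    # Spans may overlap, and an article may carry several techniques,
    # since the task is multi-label sequence tagging.
    predictions = [
        SpanPrediction("article_001", 12, 48, "Loaded_Language"),
        SpanPrediction("article_001", 30, 95, "Appeal_to_Fear-Prejudice"),
    ]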

Submissions may be made for any number of languages (even just one), and systems may be trained on any other annotated data available for the task. The data we provide is split into training, development, and test sets, but not all three sets are available for every language (see table below). In the first phase we open a leaderboard for each language for which a development set is provided (the leaderboard is updated in real time). In the second phase we open a leaderboard for each language for which a test set is provided (in this case the leaderboard becomes visible only at the end of the test phase). Note that the official ranking is based only on the results on the test set, and thus only on English, Arabic, Portuguese, Slovenian, and Bulgarian.

Language     Training set    Development set    Test set
English      X               X                  X
French       X               X
Italian      X               X
German       X               X
Russian      X               X
Polish       X               X
Spanish                      X
Greek                        X
Georgian                     X
Arabic                                          X
Portuguese                                      X
Slovenian                                       X
Bulgarian                                       X

All technical details about the task are given in the readme, accessible from your team page after registering an account on this website. We also share the annotation guidelines, which give more detailed definitions, with examples, of the output classes for the task.

We provide a training set for building your systems locally, a development set (without annotations), and an online submission website for scoring your systems. A public leaderboard shows the participants' progress on the task.

Evaluation

Upon registration, participants will have access to their team page, where they can also download the scoring scripts for the task. Below is a brief description of the evaluation measures the scorers compute.

The task is a multi-label, multi-class sequence-tagging task. We modify the standard micro-averaged F1 to account for partial matches between spans. In addition, an F1 value is computed for each persuasion technique. In a nutshell, a strong partial overlap is given full credit (a lenient approach), while weaker overlaps receive partial credit proportional to the intersection of the two spans, normalized by the length of the ground-truth span. The official score shown on the leaderboard is computed using the 23 fine-grained persuasion technique labels. On top of this, an evaluation at the coarse-grained level is also computed, i.e., mapping the labels to the 6 persuasion technique categories (see above), and this is communicated to the participating teams.
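
As a rough illustration of the partial-matching idea, and not the official scorer (which you can download from your team page), the Python sketch below gives a predicted span full credit when it covers a large fraction of a same-technique gold span, and proportional credit otherwise. The 0.9 threshold, the tuple representation, and the normalization used for precision are simplifying assumptions made for illustration.

    def overlap(a, b):
        # Length of the intersection of two (start, end) intervals.
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    def span_f1(gold, pred, full_credit_ratio=0.9):
        """Micro-averaged F1 with partial credit for overlapping spans.

        gold, pred: lists of (start, end, technique) tuples.
        A prediction earns full credit when it covers at least
        full_credit_ratio of a same-technique gold span; otherwise it
        earns credit proportional to the intersection, normalized by
        the gold span's length. The threshold and the normalization
        of precision by the raw prediction count are assumptions;
        the official scorer is authoritative.
        """
        credit = 0.0
        for gs, ge, gt in gold:
            best = 0.0
            for ps, pe, pt in pred:
                if pt != gt:
                    continue  # techniques must match to earn credit
                ratio = overlap((gs, ge), (ps, pe)) / max(1, ge - gs)
                best = max(best, 1.0 if ratio >= full_credit_ratio else ratio)
            credit += best
        precision = credit / len(pred) if pred else 0.0
        recall = credit / len(gold) if gold else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Example: one partial same-technique match plus one spurious span.
    gold = [(10, 50, "Loaded_Language")]
    pred = [(20, 50, "Loaded_Language"), (0, 5, "Doubt")]
    print(round(span_f1(gold, pred), 3))  # 0.5

The real scorer also reports a per-technique F1 and the coarse-grained score obtained by mapping the 23 labels onto the 6 categories; the sketch above is meant only to convey the intuition behind partial matching.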

How to Participate

  • Register on the main CLEF website.
  • Ask to participate on the registration page of this website. Once your account is approved, you will be able to access the data and submit your predictions.
  • After we manually verify your account, you will get an email with your team passcode. If you do not receive the email, check your spam folder, then send us an email. We recommend that you write down the passcode (and bookmark your team page).
    We will use your email only to send you updates on the corpus or to let you know if we organise any event on the topic; we promise.
  • Use the passcode in the box at the top right to enter your team page. There you can download the data and submit your runs.
  • Phase 1. Submit your predictions on the development set to track how your performance evolves. You will get immediate feedback for each submission, and you can check the other participants' performance.
    Do not make an excessive number of submissions in an attempt to guess the gold labels.
    Manual predictions are forbidden; the whole process must be automatic.
  • Phase 2. Once the test set is available, you will be able to submit your predictions on it, but you won't get any feedback until the end of the evaluation phase.
    You can make as many submissions as you like, but we will evaluate only the latest one.
  • The dataset may include content protected by third-party copyright. It may be used only in the context of this shared task, and only for scientific research purposes. The dataset may not be redistributed or shared, in part or in full, with any third party. You may not share your passcode with others or give unauthorised users access to the dataset. Any other use is explicitly prohibited.
    To help disseminate the results, we give participants the chance to share a link to a paper or a website describing their system.

Contact

We have created a Google group for the task. Join it to ask questions and to interact with other participants.

Follow us on Twitter to get the latest updates on the data and the competition!

If you need to contact the organisers only, send us an email.

The task is the result of the efforts of:

  • Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence, UAE
  • Jakub Piskorski, Polish Academy of Sciences, Poland
  • Nicolas Stefanovitch, European Commission Joint Research Centre, Italy
  • Giovanni Da San Martino, University of Padova, Italy
  • Elisa Sartori, University of Padova, Italy
  • Ricardo Campos, University of Beira Interior - Covilhã and INESC TEC - Porto, Portugal
  • Senja Pollak, Jozef Stefan Institute, Slovenia
  • Dimitar Dimitrov, Sofia University, Bulgaria
  • Firoj Alam, Qatar Computing Research Institute, HBKU, Qatar
  • Alípio Jorge, University of Porto - Porto and INESC TEC - Porto, Portugal
  • Purificação Silvano, University of Porto - Porto and CLUP - Porto, Portugal
  • Nuno Guimarães, University of Porto - Porto and INESC TEC - Porto, Portugal
  • Ana Filipa Pacheco, University of Porto, Portugal
  • Nana Yu, University of Porto, Portugal
  • Ana Zwitter Vitez, University of Ljubljana, Slovenia
  • Zoran Fijavž, Peace Institute, Slovenia
  • Nikolay Ribin, Sofia University, Bulgaria
  • Ivan Koychev, Sofia University, Bulgaria
  • Ivanka Mavrodieva, Sofia University "St. Kliment Ohridski", Sofia, Bulgaria
  • Desislava Angelova, Sofia University "St. Kliment Ohridski", Sofia, Bulgaria
  • Maram Hasanain, Qatar Computing Research Institute, HBKU, Qatar
  • Fatema Ahmed, Qatar Computing Research Institute, HBKU, Qatar
  • Nikolaos Nikolaidis, European Commission Joint Research Centre, Italy

 
 