Multilingual Semantic Search

Covid-19 MLIA Eval

Task Description

The goal of the Multilingual Semantic Search task is to collect relevant information for the community, the general public, and other stakeholders when they search for health content in different languages and with different levels of knowledge about the specific topic.
There will be two sub-tasks: subtask 1 is a classic ad-hoc multilingual search task focused on high precision; subtask 2 is oriented towards high-recall systems, such as Technology Assisted Review (TAR) systems.

To participate in the Multilingual Semantic Search task, the groups need to register at the following link:

Register

Important Dates - Round 1

Round starts: October 23, 2020

Corpora and topics released: October 23, 2020

Runs due from participants: November 27, 2020

Ground-truth released and runs scored: December 15, 2020

Rolling report submission deadline (preliminary version): December 23, 2020

Rolling report submission deadline (camera ready): January 8, 2021

Slot for a virtual meeting to discuss the results: January 11-15, 2021

Round ends: January 15, 2021

Participation Guidelines

The overall organization will follow a CLEF-style evaluation process with a shared dataset composed of: a collection of documents, a set of topics, and a set of relevance assessments.

The languages of the collections for the task are (ISO 639-1 codes within parentheses):

Topics will be available in the above languages and, in addition:

In the first round, we plan to evaluate the systems in a classic multilingual lexical search fashion. In the subsequent rounds, the multilingual semantic search aspects will become more prominent: systems will be asked, for example, to highlight technical concepts in the topics as well as in the documents and to show the contextual meaning of each concept, in order to improve the readability of the documents and the effectiveness of the system.

For each of the two subtasks described in the following, we welcome two types of submissions:

Subtask 1 - High Precision:

In this subtask, participants are required to build systems that help the general public to efficiently retrieve the most relevant documents on the Web concerning COVID-19. The main focus of this subtask is on the top-ranked documents; evaluation measures like Precision at 5 and 10 documents as well as Normalized Discounted Cumulative Gain will be used to compare systems.
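
For illustration, here is a minimal Python sketch of how the precision-oriented measures could be computed for a single topic from a ranked list of document identifiers and a set of binary relevance judgements; the function names and the data are made up and are not part of the official evaluation tooling.

import math

def precision_at_k(ranked_docs, relevant_docs, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked_docs[:k] if d in relevant_docs) / k

def ndcg_at_k(ranked_docs, relevant_docs, k):
    """nDCG@k with binary gains: DCG of the run divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked_docs[:k]) if d in relevant_docs)
    ideal_hits = min(len(relevant_docs), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: five retrieved documents, two of them relevant.
ranked = ["doc-12", "doc-07", "doc-33", "doc-90", "doc-41"]
relevant = {"doc-07", "doc-41", "doc-99"}
print(precision_at_k(ranked, relevant, 5))  # 0.4
print(ndcg_at_k(ranked, relevant, 5))       # about 0.48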

Subtask 2 - High Recall:

In this subtask, the focus is on finding as many relevant documents as possible with the least effort. Given a limited amount of resources, such as time limits and expert availability in a time of crisis, there will be a limit on the maximum number of documents that can be retrieved in order to build a set of relevant documents to deliver to the general public. Evaluation measures like Recall@k and the Area Under the ROC Curve will be used to compare the systems. In the first round, the systems will work without relevance information. From the second round onwards, participants can use the released relevance assessments to optimize their systems.
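
As a complementary sketch for the recall-oriented view, the snippet below computes Recall@k from a ranked list and, assuming scikit-learn is available, the Area Under the ROC Curve from the system scores and binary relevance labels; names and data are again hypothetical.

from sklearn.metrics import roc_auc_score

def recall_at_k(ranked_docs, relevant_docs, k):
    """Fraction of all relevant documents that appear in the top-k results."""
    if not relevant_docs:
        return 0.0
    return sum(1 for d in ranked_docs[:k] if d in relevant_docs) / len(relevant_docs)

# Hypothetical ranked output as (document id, system score) pairs.
run = [("doc-07", 4.2), ("doc-12", 3.9), ("doc-41", 3.1), ("doc-33", 2.0), ("doc-90", 0.5)]
relevant = {"doc-07", "doc-41", "doc-99"}

print(recall_at_k([d for d, _ in run], relevant, 3))   # 2 of 3 relevant found

# AUC treats the run scores as a binary classifier over the retrieved documents.
labels = [1 if d in relevant else 0 for d, _ in run]
scores = [s for _, s in run]
print(roc_auc_score(labels, scores))                   # about 0.83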

Corpora:

Topics:

The topics have been created by selecting 1) a subset of the queries created for the TREC-COVID Task (courtesy of the TREC-COVID Task organizers) and 2) a selection of queries made available in the Bing search dataset for Coronavirus Intent, which includes queries from all over the world that had an explicit or implicit intent related to the Coronavirus or Covid-19.
Topics are structured in the following way:


<topic number="topic identifier" xml:lang="ISO 639-1 code">
	<keyword>keyword based query</keyword>
	<conversational>the query as a question posed by the user</conversational>
	<explanation>a more detailed explanation of what the set of retrieved documents should look like</explanation>
</topic>
   						

The keyword field represents the “traditional” way a user performs a search on a Web search engine: a short set of keywords, e.g. "surgical mask protection".
The conversational field expresses the same information need as a question asked in natural language, e.g. "does a surgical mask protect from covid-19?"
The explanation field provides information to the assessors when performing the relevance assessments, e.g. "The documents retrieved should contain information about …".
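
As a rough sketch of how a topic file in this format could be read, assuming the topics are wrapped in a single root element (the wrapper element and the sample content below are assumptions), Python's standard xml.etree.ElementTree module is sufficient; note that the xml:lang attribute has to be addressed through its expanded namespace name.

import xml.etree.ElementTree as ET

sample = """
<topics>
  <topic number="1" xml:lang="en">
    <keyword>surgical mask protection</keyword>
    <conversational>does a surgical mask protect from covid-19?</conversational>
    <explanation>The documents retrieved should contain information about surgical masks.</explanation>
  </topic>
</topics>
"""

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"  # expanded name of xml:lang

root = ET.fromstring(sample)
for topic in root.iter("topic"):
    print(topic.get("number"), topic.get(XML_LANG))
    print("  keyword:", topic.findtext("keyword"))
    print("  conversational:", topic.findtext("conversational"))
    print("  explanation:", topic.findtext("explanation"))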

Please find below the links to download the topics for each round.

Relevance Judgements:

After participants submit their runs, a subset of the documents retrieved by each run will be pooled for each topic in order to obtain a sample of documents to judge.
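
Purely as an illustration of the idea, the sketch below builds a depth-k pool: for every topic, the union of the top-k documents of every submitted run. The depth of 10 and the in-memory representation of the runs are assumptions, not the organizers' actual pooling protocol.

from collections import defaultdict

def build_pool(runs, depth=10):
    """For each topic, take the union of the top-`depth` documents of every run.

    `runs` is a list of rankings, each mapping a topic id to the list of
    document ids in ranked order (an assumed in-memory format).
    """
    pool = defaultdict(set)
    for run in runs:
        for topic_id, ranked_docs in run.items():
            pool[topic_id].update(ranked_docs[:depth])
    return pool

# Two hypothetical runs for a single topic.
run_a = {"30": ["doc-1", "doc-2", "doc-3"]}
run_b = {"30": ["doc-2", "doc-4", "doc-5"]}
print(dict(build_pool([run_a, run_b], depth=2)))  # {'30': {'doc-1', 'doc-2', 'doc-4'}} (set order may vary)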

Please find below the links to download the relevance judgements for each round.

Participant Repository:

Participants are provided with a single repository for all the tasks they take part in. The repository contains the runs, resources, code, and report of each participant.

The repository is organised as follows:

Covid-19 MLIA Eval consists of three tasks run in three rounds. Therefore, the submission and score folders are organized into sub-folders for each task and round as follows:

Participants who do not take part in a given task or round can simply delete the corresponding sub-folders.

The goal of Covid-19 MLIA Eval is to speed up the creation of multilingual information access systems and (language) resources for Covid-19 and to share these systems and resources as openly as possible. Therefore, participants are more than encouraged to share their code and any additional (language) resources they have used or created.

All the contents of these repositories are released under the Creative Commons Attribution-ShareAlike 4.0 International License.

Task Repository:

Organizers share contents common to all participants through the Multilingual Semantic Search task repository.

The repository is organised as follows:

Covid-19 MLIA Eval runs in three rounds. Therefore, the topics and ground-truth folders are organized into sub-folders for each round, i.e. round1, round2, and round3.

All the contents of this repository are released under the Creative Commons Attribution-ShareAlike 4.0 International License.

Rolling Technical Report:

The rolling technical report should be formatted according to the Springer LNCS format, using either the LaTeX template or the Word template. LaTeX is the preferred format.

Submission Guidelines

Participating teams should satisfy the following guidelines:

Submission for subtask 1 - High precision

Each run must retrieve at most 1,000 documents per topic. Therefore, the file of the run submitted for this subtask must contain no more than 30,000 lines. Any additional retrieved documents will not be considered in the evaluation.

Submission for subtask 2 - High recall

For this subtask, there is no limit on the number of documents retrieved per topic. However, the maximum number of retrieved documents allowed in a run is 6,000. Each run can therefore have a variable number of documents per topic (200 per topic on average), but the file of the run submitted for this subtask must contain no more than 6,000 lines. Any additional retrieved documents will not be considered in the evaluation.
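
Both limits can be checked locally before uploading. The sketch below counts the lines of a TREC-format run file per topic and compares them against the subtask 1 per-topic cap of 1,000 documents or the subtask 2 overall cap of 6,000 lines; the helper and the file names in the usage comments are ours, not part of any official tooling.

from collections import Counter

def check_run_limits(path, per_topic_limit=None, total_limit=None):
    """Count retrieved documents per topic in a TREC-format run file and
    report any violation of the given caps."""
    per_topic = Counter()
    with open(path) as run_file:
        for line in run_file:
            if line.strip():
                per_topic[line.split()[0]] += 1  # first column is the topic id
    total = sum(per_topic.values())
    if per_topic_limit is not None:
        for topic_id, count in per_topic.items():
            if count > per_topic_limit:
                print(f"topic {topic_id}: {count} documents exceed the limit of {per_topic_limit}")
    if total_limit is not None and total > total_limit:
        print(f"run has {total} lines, more than the allowed {total_limit}")
    return per_topic, total

# Hypothetical usage:
# check_run_limits("subtask1_run.txt", per_topic_limit=1000)
# check_run_limits("subtask2_run.txt", total_limit=6000)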

TREC Format:

Runs should be submitted with the following format:


30 Q0 ZF08-175-870  0 4238 prise1
30 Q0 ZF08-306-044  1 4223 prise1
30 Q0 ZF09-477-757  2 4207 prise1
30 Q0 ZF08-312-422  3 4194 prise1
30 Q0 ZF08-013-262  4 4189 prise1
...
   						
where the six whitespace-separated columns are, in order: the topic identifier, the literal Q0, the document identifier, the rank of the document, the score assigned by the system, and the run tag identifying the submission. It is important to include all the columns and to use a whitespace delimiter between them.
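
The following Python sketch (function and variable names are ours) writes a ranked result list in this format, sorting the documents of each topic by descending score and numbering the ranks from 0 as in the example above.

def write_trec_run(results, run_tag, path):
    """Write a run in TREC format: topic, Q0, document id, rank, score, run tag.

    `results` maps each topic id to a list of (document id, score) pairs.
    """
    with open(path, "w") as out:
        for topic_id, scored_docs in results.items():
            ranked = sorted(scored_docs, key=lambda pair: pair[1], reverse=True)
            for rank, (doc_id, score) in enumerate(ranked):
                out.write(f"{topic_id} Q0 {doc_id} {rank} {score} {run_tag}\n")

# Hypothetical usage with two documents for topic 30:
write_trec_run({"30": [("ZF08-175-870", 4238), ("ZF08-306-044", 4223)]},
               run_tag="prise1", path="sample_run.txt")
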
Please find below a list of examples of valid document identifiers:

Submission Upload:

Runs should be uploaded in the repository provided by the organizers. Following the repository structure discussed above, for example, a run submitted for the first round of the Multilingual Semantic Search task should be included in submission/task2/round1. In particular:

Runs should be uploaded following the naming convention <teamname>_task2<N>_<round>_<language>_<freefield>, where:

For example, a complete run identifier may look like unipd_task21_round1_bili-it2sv_bm25, where unipd is the team name, task21 indicates subtask 1 of the Multilingual Semantic Search task (task 2), round1 is the evaluation round, bili-it2sv indicates a bilingual run from Italian to Swedish, and bm25 is the free field describing the retrieval approach.
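
A run identifier following this convention can be checked, for instance, with a regular expression; the character classes allowed for each field below are an assumption, since only the overall pattern is prescribed.

import re

# <teamname>_task2<N>_<round>_<language>_<freefield>
RUN_NAME = re.compile(
    r"^(?P<team>[A-Za-z0-9\-]+)"
    r"_task2(?P<subtask>[12])"
    r"_(?P<round>round[1-3])"
    r"_(?P<language>[A-Za-z0-9\-]+)"
    r"_(?P<freefield>[A-Za-z0-9\-]+)$"
)

match = RUN_NAME.match("unipd_task21_round1_bili-it2sv_bm25")
if match:
    print(match.groupdict())
    # {'team': 'unipd', 'subtask': '1', 'round': 'round1',
    #  'language': 'bili-it2sv', 'freefield': 'bm25'}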

Performance scores for the submitted runs will be returned by the organizers in the score folder, which follows the same structure as the submission folder.

The rolling technical report has to be uploaded and kept up to date by participants in the report folder.

Here, you can find a sample participant repository to get a better idea of its layout.

Evaluation:

The effectiveness of the submitted runs will be evaluated with the following measures (a computation sketch follows the list):

  • Precision at 5 (P@5)
  • Average Precision (AP)
  • normalized Discounted Cumulated Gain (nDCG)
  • R-Precision (RPrec)
  • Recall
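
In practice these measures can be computed with trec_eval-compatible tooling; the sketch below assumes the third-party pytrec_eval package and uses made-up qrels and run data, with result keys following trec_eval's naming (e.g. P_5, map, ndcg, Rprec, recall_1000).

import pytrec_eval

# Made-up relevance judgements: topic -> {document id: relevance grade}.
qrels = {"30": {"ZF08-175-870": 1, "ZF08-306-044": 0, "ZF09-477-757": 1}}
# Made-up run: topic -> {document id: system score}.
run = {"30": {"ZF08-175-870": 4238.0, "ZF08-306-044": 4223.0, "ZF09-477-757": 4207.0}}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"P", "map", "ndcg", "Rprec", "recall"})
results = evaluator.evaluate(run)
print(results["30"]["map"], results["30"]["ndcg"], results["30"]["P_5"])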

Organizers

Giorgio Maria Di Nunzio, University of Padua, Italy
dinunzio@dei.unipd.it

Maria Eskevich, CLARIN ERIC
maria@clarin.eu