Introduction to the CoNLL-2003 Shared Task:
Language-Independent Named Entity Recognition

Erik F. Tjong Kim Sang and Fien De Meulder
CNTS - Language Technology Group
University of Antwerp
{erikt,fien.demeulder}@uia.ua.ac.be

Abstract

We describe the CoNLL-2003 shared task: language-independent named entity recognition. We give background information on the data sets (English and German) and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.
1 Introduction
Named entities are phrases that contain the names of persons, organizations and locations. Example:

[ORG U.N. ] official [PER Ekeus ] heads for [LOC Baghdad ] .

This sentence contains three named entities: Ekeus is a person, U.N. is an organization and Baghdad is a location. Named entity recognition is an important task of information extraction systems. There has been a lot of work on named entity recognition, especially for English (see Borthwick (1999) for an overview). The Message Understanding Conferences (MUC) have offered developers the opportunity to evaluate systems for English on the same data in a competition. They have also produced a scheme for entity annotation (Chinchor et al., 1999). More recently, there have been other system development competitions which dealt with different languages (IREX and CoNLL-2002).
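In the released data, such bracketed annotations are represented as one tag per token. As a minimal illustration, assuming the IOB-style convention used in the CoNLL data (I-TYPE for tokens inside an entity, O for tokens outside any entity, B-TYPE opening an entity that directly follows another of the same type), the example sentence and a small decoder might look as follows in Python:

# The example sentence rendered as (token, tag) pairs; tag names follow
# the I-TYPE convention of the CoNLL data, O marks tokens outside entities.
example = [
    ("U.N.", "I-ORG"),
    ("official", "O"),
    ("Ekeus", "I-PER"),
    ("heads", "O"),
    ("for", "O"),
    ("Baghdad", "I-LOC"),
    (".", "O"),
]

def entities(tagged):
    """Collect (type, phrase) pairs from a tagged sentence."""
    spans, current = [], None
    for tok, tag in tagged:
        if tag == "O":
            current = None
            continue
        prefix, etype = tag.split("-", 1)
        # B-TYPE always starts a new entity; I-TYPE starts one only when the
        # previous token was not part of an entity of the same type.
        if prefix == "B" or current is None or current[0] != etype:
            current = (etype, [tok])
            spans.append(current)
        else:
            current[1].append(tok)
    return [(etype, " ".join(toks)) for etype, toks in spans]

print(entities(example))
# [('ORG', 'U.N.'), ('PER', 'Ekeus'), ('LOC', 'Baghdad')]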
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The shared task of CoNLL-2002 dealt with named entity recognition for Spanish and Dutch (Tjong Kim Sang, 2002). The participants of the 2003 shared task have been offered training and test data for two other European languages: English and German. They have used the data for developing a named-entity recognition system that includes a machine learning component. The shared task organizers were especially interested in approaches that made use of resources other than the supplied training data, for example gazetteers and unannotated data.

2 Data and Evaluation
In this section we discuss the sources of the data |
that were used in this shared task, the preprocessing |
steps we have performed on the data, the format of |
the data and the method that was used for evaluating |
the participating systems. |
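As in CoNLL-2002, the participating systems are compared on entity-level precision, recall and the F(beta=1) rate, where a predicted entity counts as correct only if it matches an annotated entity exactly in both type and boundaries. A minimal Python sketch of this computation, assuming entities are represented as (type, start, end) triples:

def f_score(gold, predicted):
    """Entity-level precision, recall and F(beta=1).
    gold, predicted: sets of (type, start, end) triples."""
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: two of three predicted entities match the annotation exactly;
# the PER entity is counted as wrong because its boundaries differ.
gold = {("ORG", 0, 0), ("PER", 2, 2), ("LOC", 5, 5)}
pred = {("ORG", 0, 0), ("PER", 2, 3), ("LOC", 5, 5)}
print(f_score(gold, pred))  # (0.666..., 0.666..., 0.666...)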
2.1 Data
The CoNLL-2003 named entity data consists of eight files covering two languages: English and German [1]. For each of the languages there is a training file, a development file, a test file and a large file with unannotated data. The learning methods were trained with the training data. The development data could be used for tuning the parameters of the learning methods. The challenge of this year's shared task was to incorporate the unannotated data in the learning process in one way or another. When the best parameters were found, the method could be trained on the training data and tested on the test data. The results of the different learning methods on the test sets are compared in the evaluation of the shared task. The split between development data and test data was chosen to avoid systems being tuned to the test data.
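The annotated files follow the usual CoNLL column format: one token per line with its annotations in whitespace-separated columns, an empty line between sentences, and -DOCSTART- lines marking article boundaries. A minimal Python reader under these assumptions (the file names eng.train, eng.testa and eng.testb refer to the released English training, development and test files):

def read_conll(path):
    """Yield sentences as lists of column tuples from a CoNLL-format file:
    one token per line, whitespace-separated columns, blank line between
    sentences; -DOCSTART- lines mark article boundaries and are skipped."""
    sentence = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                if sentence:
                    yield sentence
                    sentence = []
            elif not line.startswith("-DOCSTART-"):
                sentence.append(tuple(line.split()))
    if sentence:
        yield sentence

# Hypothetical usage mirroring the shared task protocol: fit on the training
# file, tune parameters against the development file, report on the test file.
train = list(read_conll("eng.train"))
dev = list(read_conll("eng.testa"))
test = list(read_conll("eng.testb"))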
The English data was taken from the Reuters Corpus [2]. This corpus consists of Reuters news stories between August 1996 and August 1997.
[1] Data files (except the words) can be found on http://lcg-www.uia.ac.be/conll2003/ner/
[2] http://www.reuters.com/researchandstandards/
English data       Articles   Sentences
Training set            946      14,987
Development set         216       3,466
Test set                231       3,684