The participants were given access to the corpus after some linguistic preprocessing had been done: for
all data, a tokenizer, part-of-speech tagger, and
chunker were applied to the raw text. We created
two basic language-specific tokenizers for this shared
task. The English data was tagged and chunked by
the memory-based MBT tagger (Daelemans et al.,
2002). The German data was lemmatized, tagged
and chunked by the decision tree tagger Treetagger
(Schmid, 1995).
Named entity tagging of the English and German
training, development, and test data was done by
hand at the University of Antwerp. Mostly, MUC
conventions were followed (Chinchor et al., 1999).
An extra named entity category called MISC was
added to denote all names which are not already in
the other categories. This includes adjectives, like
Italian, and events, like 1000 Lakes Rally, making it
a very diverse category.
3 http://www.ldc.upenn.edu/
2.3
Data format
All data files contain one word per line with empty
lines representing sentence boundaries. At the end
of each line there is a tag which states whether the
current word is inside a named entity or not. The
tag also encodes the type of named entity. Here is
an example sentence:
U.N.     NNP  I-NP  I-ORG
official NN   I-NP  O
Ekeus    NNP  I-NP  I-PER
heads    VBZ  I-VP  O
for      IN   I-PP  O
Baghdad  NNP  I-NP  I-LOC
.        .    O     O
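Assuming whitespace-separated fields, a file in this one-word-per-line format can be read with a short sketch like the following (the function name is ours, not part of the shared task software):

```python
def read_sentences(lines):
    """Yield sentences as lists of (word, pos_tag, chunk_tag, ne_tag) tuples.

    `lines` is any iterable of lines in the one-word-per-line format;
    empty lines mark sentence boundaries.
    """
    sentence = []
    for line in lines:
        fields = line.split()
        if not fields:              # empty line: sentence boundary
            if sentence:
                yield sentence
                sentence = []
        else:
            word, pos, chunk, ne = fields
            sentence.append((word, pos, chunk, ne))
    if sentence:                    # last sentence, if no trailing empty line
        yield sentence
```

A data file would then be processed as `read_sentences(open(path))`; the actual file names are not specified here.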
Each line contains four fields: the word, its part-of-speech tag, its chunk tag and its named entity
tag. Words tagged with O are outside of named entities and the I-XXX tag is used for words inside a
named entity of type XXX. Whenever two entities of
type XXX are immediately next to each other, the
first word of the second entity will be tagged B-XXX
in order to show that it starts another entity. The
data contains entities of four types: persons (PER),
organizations (ORG), locations (LOC) and miscellaneous names (MISC). This tagging scheme is the
IOB scheme originally put forward by Ramshaw and
Marcus (1995). We assume that named entities are
non-recursive and non-overlapping. When a named
entity is embedded in another named entity, usually
only the top level entity has been annotated.
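The tagging scheme described above can be decoded into entity spans with a small sketch (a helper of our own, handling both the I-XXX continuation and the B-XXX boundary case):

```python
def iob_spans(tags):
    """Return (entity_type, start, end) spans from a sequence of IOB tags.

    A span covers tokens [start, end).  An entity ends at an O tag, at a
    tag with a different type, or at a B-XXX tag marking a new adjacent
    entity of the same type.
    """
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag == "O":
            if etype is not None:       # close the open entity
                spans.append((etype, start, i))
                etype = None
        else:
            prefix, label = tag.split("-", 1)
            if etype is None or label != etype or prefix == "B":
                if etype is not None:   # close previous entity first
                    spans.append((etype, start, i))
                start, etype = i, label
    if etype is not None:               # entity running to end of sentence
        spans.append((etype, start, len(tags)))
    return spans
```

On the example sentence, the tag column `I-ORG O I-PER O O I-LOC O` yields one ORG, one PER, and one LOC span.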
Table 2 contains an overview of the number of
named entities in each data file.
2.4
Evaluation
The performance in this task is measured with Fβ=1
rate:
Fβ = ((β² + 1) × precision × recall) / (β² × precision + recall)    (1)
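With β = 1, equation (1) reduces to the harmonic mean of precision and recall. A minimal sketch of the computation (our own helper; the zero-denominator guard is an assumption, not part of the original definition):

```python
def f_beta(precision, recall, beta=1.0):
    """F-measure as in equation (1); beta = 1 weights precision and recall equally."""
    b2 = beta * beta
    denominator = b2 * precision + recall
    if denominator == 0.0:              # guard: undefined when both are zero
        return 0.0
    return (b2 + 1) * precision * recall / denominator
```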
Florian
Chieu
Klein
Zhang
Carreras (a)
Curran
Mayfield
Carreras (b)
McCallum
Bender