---
dataset_info:
  features:
  - name: sent_text
    dtype: string
  - name: sent_id
    dtype: string
  - name: token_ids
    sequence: string
  - name: token_forms
    sequence: string
  - name: token_lemmas
    sequence: string
  - name: token_upos
    sequence: string
  - name: token_xpos
    sequence: string
  - name: token_feats
    sequence: string
  - name: token_head
    sequence: int32
  - name: token_deprels
    sequence: string
  - name: token_deps
    sequence: string
  - name: token_miscs
    sequence: string
  - name: sent_metadata
    dtype: string
  - name: corpus
    dtype: string
  splits:
  - name: train
    num_bytes: 57596921
    num_examples: 27219
  - name: validation
    num_bytes: 7132032
    num_examples: 2595
  - name: test
    num_bytes: 6901121
    num_examples: 2763
  download_size: 9544323
  dataset_size: 71630074
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: cc
language:
- grc
---
# Aggregated Universal Dependencies for Ancient Greek
Based on the r2.15 GitHub release of all Ancient Greek corpora, namely PROIEL, Perseus, and PTNK. Parsed with the `conllu` library.
To ensure compatibility with Hugging Face `datasets` and the underlying Arrow format, some quirks are present in the dataset:

- the ID field (`token_ids`) is a string rather than an integer. This is needed because in rare cases the ID can be a token range like `1-2`;
- because features can contain arbitrary content that is only semi-structured (not consistent across tokens), they are dumped as a JSON string for each token. So the `token_feats` field is a list of strings where each string should be JSON-decoded before use (see the sketch after this list);
- the same is true for `token_miscs`;
- and also for `sent_metadata`, which contains the sentence metadata apart from the text and ID.
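A minimal sketch of decoding these JSON-encoded fields after loading. The repository ID below is a placeholder, and it is an assumption that empty values are encoded as JSON `null` (which `json.loads` turns into `None`):

```python
import json

from datasets import load_dataset

# Placeholder repository ID -- substitute the actual dataset name.
ds = load_dataset("your-username/aggregated-ud-ancient-greek", split="train")

example = ds[0]

# token_feats and token_miscs hold one JSON string per token.
token_feats = [json.loads(f) for f in example["token_feats"]]
token_miscs = [json.loads(m) for m in example["token_miscs"]]

# sent_metadata is a single JSON string for the whole sentence.
sent_metadata = json.loads(example["sent_metadata"])

for token_id, form, feats in zip(example["token_ids"], example["token_forms"], token_feats):
    print(token_id, form, feats)
```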
## License
Note that not all corpora have the same license. Only use those that you can comply with, filtering as necessary on the `corpus` field (see the sketch after this list):

- "PROIEL": "cc-by-nc-sa-4.0"
- "Perseus": "cc-by-nc-sa-2.5"
- "PTNK": "cc-by-sa-4.0"