---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: token-classification
widget:
- text: "Kysymys: Onko tuo kissa? Vastaus: En osaa sanoa."
---
|
### xlm-roberta-base for token classification, fine-tuned for Finnish question-answer extraction

This is `xlm-roberta-base` fine-tuned on manually annotated Finnish data, ChatGPT-annotated data, and a semi-synthetic dataset derived from the LFQA dataset.
|
### Hyperparameters

```
batch_size = 8
epochs = 10 (trained for fewer epochs)
base_LM_model = "xlm-roberta-base"
max_seq_len = 512
learning_rate = 1e-5
```
|
### Performance

```
Accuracy = 0.85
Question F1 = 0.82
Answer F1 = 0.75
```

### Usage

For the best question-answer pairs, use the Hugging Face token-classification pipeline with no aggregation strategy and post-process the output, for example as in this [script](https://github.com/TurkuNLP/register-qa/blob/main/token-classification/scripts/extract_qa_fi_no_entropy.py).
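A minimal sketch of the post-processing step: merging consecutive token-level predictions that share a label into question and answer spans. The label names (`QUESTION`, `ANSWER`) and the mocked pipeline output below are assumptions for illustration; check this model's `id2label` mapping and the linked script for the actual label scheme and post-processing details.

```python
# With the real model you would obtain predictions like this (model id is a
# placeholder -- use this model's Hub identifier):
#   from transformers import pipeline
#   tagger = pipeline("token-classification", model="<this-model>",
#                     aggregation_strategy="none")
#   preds = tagger(text)

def group_spans(predictions, text):
    """Merge adjacent same-label token predictions into (label, text) spans."""
    spans = []
    for pred in predictions:
        label = pred["entity"]
        # Extend the current span if the label matches and the token is
        # adjacent (allowing a single separating character such as a space).
        if spans and spans[-1][0] == label and pred["start"] <= spans[-1][2] + 1:
            spans[-1][2] = pred["end"]
        else:
            spans.append([label, pred["start"], pred["end"]])
    return [(label, text[start:end]) for label, start, end in spans]

text = "Kysymys: Onko tuo kissa? Vastaus: En osaa sanoa."
# Mocked pipeline output: entity/start/end fields as returned with no aggregation.
preds = [
    {"entity": "QUESTION", "start": 0, "end": 8},
    {"entity": "QUESTION", "start": 9, "end": 13},
    {"entity": "QUESTION", "start": 14, "end": 17},
    {"entity": "QUESTION", "start": 18, "end": 24},
    {"entity": "ANSWER", "start": 25, "end": 33},
    {"entity": "ANSWER", "start": 34, "end": 36},
    {"entity": "ANSWER", "start": 37, "end": 41},
    {"entity": "ANSWER", "start": 42, "end": 48},
]
print(group_spans(preds, text))
# → [('QUESTION', 'Kysymys: Onko tuo kissa?'), ('ANSWER', 'Vastaus: En osaa sanoa.')]
```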

### Citing

Citation information coming soon!