Add ACL citation, specify macro-F1
README.md CHANGED
@@ -22,7 +22,7 @@ In all cases, this model was finetuned for specific downstream tasks.

 ## NER

-Mean F1 scores were used to evaluate performance. Datasets used: [hr500k](https://huggingface.co/datasets/classla/hr500k), [ReLDI-sr](https://huggingface.co/datasets/classla/reldi_sr), [ReLDI-hr](https://huggingface.co/datasets/classla/reldi_hr), and [SETimes.SR](https://huggingface.co/datasets/classla/setimes_sr).
+Mean macro-F1 scores were used to evaluate performance. Datasets used: [hr500k](https://huggingface.co/datasets/classla/hr500k), [ReLDI-sr](https://huggingface.co/datasets/classla/reldi_sr), [ReLDI-hr](https://huggingface.co/datasets/classla/reldi_hr), and [SETimes.SR](https://huggingface.co/datasets/classla/setimes_sr).

 | system | dataset | F1 score |
 |:-----------------------------------------------------------------------|:--------|---------:|
@@ -103,31 +103,25 @@ The procedure is explained in greater detail in the dedicated [benchmarking repo

 # Citation

-<!---The following paper has been submitted for review:
-
-```
-@misc{ljubesic2024language,
-  author = "Ljube\v{s}i\'{c}, Nikola and Suchomel, Vit and Rupnik, Peter and Kuzman, Taja and van Noord, Rik",
-  title = "Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining",
-  howpublished = "Submitted for review",
-  year = "2024",
-}
-```
---->
-
 Please cite the following paper:
 ```
-
-title=
-
-
-
-
-
-
-
-
+@inproceedings{ljubesic-etal-2024-language,
+    title = "Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining",
+    author = "Ljube{\v{s}}i{\'c}, Nikola and
+      Suchomel, V{\'\i}t and
+      Rupnik, Peter and
+      Kuzman, Taja and
+      van Noord, Rik",
+    editor = "Melero, Maite and
+      Sakti, Sakriani and
+      Soria, Claudia",
+    booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
+    month = may,
+    year = "2024",
+    address = "Torino, Italia",
+    publisher = "ELRA and ICCL",
+    url = "https://aclanthology.org/2024.sigul-1.23",
+    pages = "189--203",
 }

 ```
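As a footnote to the metric rename in the first hunk: below is a minimal Python sketch of how a "mean macro-F1" figure of this kind can be computed. This is not the project's benchmarking code (that lives in the dedicated benchmarking repo the README links to); the helper names, the toy data, and the reading of "mean" as an average over repeated fine-tuning runs are all assumptions.

```python
# A minimal sketch, not the repository's benchmarking code.
# Assumptions: tags are token-level NER labels, and "mean" averages
# macro-F1 over repeated fine-tuning runs on one dataset.
from sklearn.metrics import f1_score

def macro_f1(gold_sentences, pred_sentences):
    """Macro-averaged F1 over token-level NER tags for one run."""
    gold = [tag for sent in gold_sentences for tag in sent]
    pred = [tag for sent in pred_sentences for tag in sent]
    return f1_score(gold, pred, average="macro")

def mean_macro_f1(runs):
    """Mean of per-run macro-F1 scores (hypothetical helper)."""
    scores = [macro_f1(gold, pred) for gold, pred in runs]
    return sum(scores) / len(scores)

# Toy example: two runs on the same dataset.
runs = [
    ([["B-PER", "O"]], [["B-PER", "O"]]),  # run 1: perfect predictions
    ([["O", "B-LOC"]], [["O", "O"]]),      # run 2: misses the LOC entity
]
print(mean_macro_f1(runs))  # average of the two per-run macro-F1 scores
```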