Dataset schema (column, type, and observed minimum / maximum from the source listing):

| Column | Type | Min | Max |
| - | - | - | - |
| modelId | string (lengths) | 5 | 139 |
| author | string (lengths) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-04 18:27:18 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (468 classes) | | |
| tags | sequence (lengths) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-04 18:26:45 |
| card | string (lengths) | 11 | 1.01M |
- modelId: mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili
- author: mbeukman
- last_modified: 2021-11-25T09:05:15Z
- downloads: 5
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "sw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: token-classification
- createdAt: 2022-03-02T23:29:05Z
---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---

# xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili

This is a token classification (specifically, NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-yoruba](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part. More information, and other similar models, can be found in the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (by aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [GitHub](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper that introduced it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and outright performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
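For concreteness, the fine-tuning recipe described under About could be reproduced along the following lines. This is a minimal sketch, not the authoritative training script (which lives in the GitHub repository); it assumes the Hub's `masakhaner` dataset with the `swa` config mirrors the GitHub data, and the usual first-subword label alignment.

```python
# Sketch of the recipe above: 50 epochs, max length 200, batch size 32, LR 5e-5.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

data = load_dataset("masakhaner", "swa")
labels = data["train"].features["ner_tags"].feature.names

base = "Davlan/xlm-roberta-base-finetuned-yoruba"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=len(labels))

def tokenize(batch):
    # Align word-level NER tags with subwords; label only each word's first subword.
    enc = tokenizer(batch["tokens"], is_split_into_words=True,
                    truncation=True, max_length=200)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, lab = None, []
        for w in enc.word_ids(batch_index=i):
            lab.append(-100 if w is None or w == prev else tags[w])
            prev = w
        all_labels.append(lab)
    enc["labels"] = all_labels
    return enc

tokenized = data.map(tokenize, batched=True,
                     remove_columns=data["train"].column_names)

args = TrainingArguments(
    output_dir="xlmr-yoruba-ner-swahili",
    num_train_epochs=50,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    seed=42,  # the authors repeated this over 5 seeds and kept the best test F1
)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=DataCollatorForTokenClassification(tokenizer)).train()
```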
## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained on does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without any verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language-adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score across all NER categories. These metrics are on the test set for MasakhaNER, whose distribution is similar to the training set's, so these results do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four break performance down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

| Abbreviation | Description |
| - | - |
| O | Outside of a named entity |
| B-DATE | Beginning of a DATE entity right after another DATE entity |
| I-DATE | DATE entity |
| B-PER | Beginning of a person’s name right after another person’s name |
| I-PER | Person’s name |
| B-ORG | Beginning of an organisation right after another organisation |
| I-ORG | Organisation |
| B-LOC | Beginning of a location right after another location |
| I-LOC | Location |

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| - | - | - | - | - | - | - | - | - | - |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) (This model) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."

ner_results = nlp(example)
print(ner_results)
```
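The pipeline above returns one prediction per subword token, with the B-/I- prefixes intact. If you want merged, entity-level spans instead, recent versions of `transformers` can aggregate them for you (a small sketch; the exact keyword has varied across versions, with older releases using `grouped_entities=True`):

```python
from transformers import pipeline

# Merge subword predictions into whole entity spans (e.g. one LOC for "Tanzania").
nlp = pipeline("ner",
               model="mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili",
               aggregation_strategy="simple")
print(nlp("Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 "
          "zaidi wamepata maambukizi ya Covid - 19 ."))
```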
- modelId: mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof
- author: mbeukman
- last_modified: 2021-11-25T09:05:13Z
- downloads: 4
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "wo", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: token-classification
- createdAt: 2022-03-02T23:29:05Z
---
language:
- wo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "SAFIYETU BÉEY Céy Koronaa !"
---

# xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof

This is a token classification (specifically, NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-wolof](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Wolof part. More information, and other similar models, can be found in the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (by aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [GitHub](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper that introduced it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and outright performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
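The MasakhaNER splits described under Data are also mirrored on the Hugging Face Hub, so the Wolof portion can be loaded directly (a sketch; it assumes the Hub's `masakhaner` dataset with the `wol` config matches the GitHub data):

```python
from datasets import load_dataset

# Wolof portion of MasakhaNER; other configs include "swa", "yor", "kin", "luo", "hau".
masakhaner = load_dataset("masakhaner", "wol")
print(masakhaner)           # train / validation / test splits
example = masakhaner["train"][0]
print(example["tokens"])    # word-level tokens
print(example["ner_tags"])  # integer labels in the O / B-* / I-* scheme shown below
```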
## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained on does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without any verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language-adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score across all NER categories. These metrics are on the test set for MasakhaNER, whose distribution is similar to the training set's, so these results do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four break performance down by category.
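The aggregate and per-category F1, precision and recall figures below are entity-level scores of the kind computed by the `seqeval` library, which the common `datasets`/`evaluate` NER metrics use under the hood (a toy sketch, assuming `seqeval` is installed; these are not the model's actual predictions):

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Toy example: gold vs. predicted tag sequences for two sentences.
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "O", "O"]]
print(f1_score(y_true, y_pred))         # aggregate F1 over all entity categories
print(precision_score(y_true, y_pred))
print(recall_score(y_true, y_pred))
```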
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

| Abbreviation | Description |
| - | - |
| O | Outside of a named entity |
| B-DATE | Beginning of a DATE entity right after another DATE entity |
| I-DATE | DATE entity |
| B-PER | Beginning of a person’s name right after another person’s name |
| I-PER | Person’s name |
| B-ORG | Beginning of an organisation right after another organisation |
| I-ORG | Organisation |
| B-LOC | Beginning of a location right after another location |
| I-LOC | Location |

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| - | - | - | - | - | - | - | - | - | - |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof) (This model) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | wol | 69.02 | 67.60 | 70.51 | 30.00 | 84.00 | 44.00 | 71.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | wol | 69.01 | 73.25 | 65.23 | 27.00 | 85.00 | 52.00 | 67.00 |
| [xlm-roberta-base-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-wolof) | [base](https://huggingface.co/xlm-roberta-base) | wol | 66.12 | 69.46 | 63.09 | 30.00 | 84.00 | 54.00 | 59.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "SAFIYETU BÉEY Céy Koronaa !"

ner_results = nlp(example)
print(ner_results)
```
- modelId: mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili
- author: mbeukman
- last_modified: 2021-11-25T09:05:10Z
- downloads: 10
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "sw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: token-classification
- createdAt: 2022-03-02T23:29:05Z
---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---

# xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili

This is a token classification (specifically, NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-wolof](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part. More information, and other similar models, can be found in the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (by aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [GitHub](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper that introduced it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and outright performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained on does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without any verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language-adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score across all NER categories. These metrics are on the test set for MasakhaNER, whose distribution is similar to the training set's, so these results do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four break performance down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

| Abbreviation | Description |
| - | - |
| O | Outside of a named entity |
| B-DATE | Beginning of a DATE entity right after another DATE entity |
| I-DATE | DATE entity |
| B-PER | Beginning of a person’s name right after another person’s name |
| I-PER | Person’s name |
| B-ORG | Beginning of an organisation right after another organisation |
| I-ORG | Organisation |
| B-LOC | Beginning of a location right after another location |
| I-LOC | Location |

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| - | - | - | - | - | - | - | - | - | - |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) (This model) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."

ner_results = nlp(example)
print(ner_results)
```
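The label scheme from the table above is also stored in the model's configuration, so it can be checked programmatically rather than copied from documentation (a small sketch):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili")
# Maps class indices to the O / B-* / I-* tags listed in the label table.
print(config.id2label)
```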
- modelId: mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba
- author: mbeukman
- last_modified: 2021-11-25T09:05:08Z
- downloads: 8
- likes: 1
- library_name: transformers
- tags: [ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "yo", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: token-classification
- createdAt: 2022-03-02T23:29:05Z
---
language:
- yo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Kò sí ẹ̀rí tí ó fi ẹsẹ̀ rinlẹ̀ ."
---

# xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba

This is a token classification (specifically, NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Yoruba part. More information, and other similar models, can be found in the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (by aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [GitHub](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper that introduced it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and outright performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained on does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without any verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language-adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score across all NER categories. These metrics are on the test set for MasakhaNER, whose distribution is similar to the training set's, so these results do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four break performance down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

| Abbreviation | Description |
| - | - |
| O | Outside of a named entity |
| B-DATE | Beginning of a DATE entity right after another DATE entity |
| I-DATE | DATE entity |
| B-PER | Beginning of a person’s name right after another person’s name |
| I-PER | Person’s name |
| B-ORG | Beginning of an organisation right after another organisation |
| I-ORG | Organisation |
| B-LOC | Beginning of a location right after another location |
| I-LOC | Location |

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| - | - | - | - | - | - | - | - | - | - |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | yor | 80.29 | 78.34 | 82.35 | 77.00 | 82.00 | 73.00 | 86.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | yor | 83.68 | 79.92 | 87.82 | 78.00 | 86.00 | 74.00 | 92.00 |
| [xlm-roberta-base-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-yoruba) | [base](https://huggingface.co/xlm-roberta-base) | yor | 78.22 | 77.21 | 79.26 | 77.00 | 80.00 | 71.00 | 82.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kò sí ẹ̀rí tí ó fi ẹsẹ̀ rinlẹ̀ ."

ner_results = nlp(example)
print(ner_results)
```
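For research use, such as the interpretability work mentioned under Intended Use, it can help to skip the pipeline and run the forward pass directly. A minimal sketch of token-level decoding, assuming PyTorch is installed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

inputs = tokenizer("Kò sí ẹ̀rí tí ó fi ẹsẹ̀ rinlẹ̀ .", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```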
- modelId: mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo
- author: mbeukman
- last_modified: 2021-11-25T09:04:58Z
- downloads: 14
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "luo", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: token-classification
- createdAt: 2022-03-02T23:29:05Z
---
language:
- luo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
---

# xlm-roberta-base-finetuned-swahili-finetuned-ner-luo

This is a token classification (specifically, NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luo part. More information, and other similar models, can be found in the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (by aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [GitHub](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper that introduced it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and outright performance are limited.
In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.

## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained on does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without any verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language-adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score across all NER categories. These metrics are on the test set for MasakhaNER, whose distribution is similar to the training set's, so these results do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four break performance down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

| Abbreviation | Description |
| - | - |
| O | Outside of a named entity |
| B-DATE | Beginning of a DATE entity right after another DATE entity |
| I-DATE | DATE entity |
| B-PER | Beginning of a person’s name right after another person’s name |
| I-PER | Person’s name |
| B-ORG | Beginning of an organisation right after another organisation |
| I-ORG | Organisation |
| B-LOC | Beginning of a location right after another location |
| I-LOC | Location |

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| - | - | - | - | - | - | - | - | - | - |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | luo | 78.13 | 77.75 | 78.52 | 65.00 | 82.00 | 61.00 | 89.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | luo | 78.71 | 78.91 | 78.52 | 72.00 | 84.00 | 59.00 | 87.00 |
| [xlm-roberta-base-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luo) | [base](https://huggingface.co/xlm-roberta-base) | luo | 75.99 | 76.18 | 75.80 | 71.00 | 76.00 | 62.00 | 85.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"

ner_results = nlp(example)
print(ner_results)
```
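As noted under Training Resources, a batch size of 32 needs roughly 14GB of GPU memory, while batch size 1 fits in about 6.5GB. On a small GPU, the usual workaround is gradient accumulation, which emulates the larger batch; a sketch of the relevant `TrainingArguments` (other arguments omitted):

```python
from transformers import TrainingArguments

# Emulate an effective batch size of 32 (1 x 32) on a roughly 6.5GB GPU.
args = TrainingArguments(
    output_dir="ner-luo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    learning_rate=5e-5,
    num_train_epochs=50,
)
```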
- modelId: mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda
- author: mbeukman
- last_modified: 2021-11-25T09:04:53Z
- downloads: 6
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "rw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: token-classification
- createdAt: 2022-03-02T23:29:05Z
---
language:
- rw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
---

# xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda

This is a token classification (specifically, NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Kinyarwanda part. More information, and other similar models, can be found in the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (by aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main GitHub repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [GitHub](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper that introduced it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would be needed to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into, e.g., interpretability or transfer learning. Using this model in production is not supported, as its generalisability and outright performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained on does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain-adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have appeared in the training data (due to, e.g., different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without any verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language-adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score across all NER categories. These metrics are on the test set for MasakhaNER, whose distribution is similar to the training set's, so these results do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four break performance down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)): Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location | Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) | | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | | [xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | kin | 76.31 | 72.64 | 80.37 | 70.00 | 76.00 | 75.00 | 84.00 | | [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | kin | 79.55 | 75.56 | 83.99 | 69.00 | 79.00 | 77.00 | 90.00 | | [xlm-roberta-base-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda) | [base](https://huggingface.co/xlm-roberta-base) | kin | 74.59 | 72.17 | 77.17 | 70.00 | 75.00 | 70.00 | 82.00 | ## Usage To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)): ``` from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ." ner_results = nlp(example) print(ner_results) ```
Manyman3231/lowlight-enhancement
Manyman3231
2021-11-25T09:04:50Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
```
    # (tail of the enhancement function; its opening lines are truncated in this card)
    return im

def main():
    st.title("Lowlight Enhancement")
    st.write("This is a simple lowlight enhancement app with great performance and does not require paired images to train.")
    st.write("The model runs at 1000/11 FPS on single GPU/CPU on images with a size of 1200*900*3")
    uploaded_file = st.file_uploader("Lowlight Image")
    if uploaded_file:
        data_lowlight = Image.open(uploaded_file)
        col1, col2 = st.columns(2)
        col1.write("Original (Lowlight)")
        col1.image(data_lowlight, caption="Lowlight Image", use_column_width=True)
```
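The snippet above is cut off before the enhanced image is displayed. Below is a minimal, runnable sketch of how the rest of the app plausibly looks; the `enhance` helper is a hypothetical stand-in for the actual model call, which is not visible in the source:

```
import streamlit as st
from PIL import Image

def enhance(im: Image.Image) -> Image.Image:
    # Hypothetical placeholder: the real app would run its lowlight
    # enhancement model here and return the brightened image.
    return im

def main():
    st.title("Lowlight Enhancement")
    uploaded_file = st.file_uploader("Lowlight Image")
    if uploaded_file:
        data_lowlight = Image.open(uploaded_file)
        col1, col2 = st.columns(2)
        col1.write("Original (Lowlight)")
        col1.image(data_lowlight, caption="Lowlight Image", use_column_width=True)
        col2.write("Enhanced")
        col2.image(enhance(data_lowlight), caption="Enhanced Image", use_column_width=True)

if __name__ == "__main__":
    main()
```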
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa
mbeukman
2021-11-25T09:04:48Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "ha", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language:
- ha
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "A saurari cikakken rahoton wakilin Muryar Amurka Ibrahim Abdul'aziz"
---

# xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa

This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Hausa part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. It is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources that would be needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.

## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded, and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, for example, names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.

Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score over all NER categories. These metrics are on the test set for MasakhaNER, whose data distribution is similar to that of the training set, so these results do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details for this specific model, compared to the others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four give performance broken down by category.

This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE|Beginning of a DATE entity right after another DATE entity
I-DATE|DATE entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organisation right after another organisation
I-ORG|Organisation
B-LOC|Beginning of a location right after another location
I-LOC|Location

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| ---------- | -------------- | ------------------------------- | ----- | --------- | ------ | --------- | -------- | -------- | -------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | hau | 89.14 | 87.18 | 91.20 | 82.00 | 93.00 | 76.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | hau | 92.27 | 90.46 | 94.16 | 85.00 | 95.00 | 80.00 | 97.00 |
| [xlm-roberta-base-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-hausa) | [base](https://huggingface.co/xlm-roberta-base) | hau | 89.94 | 87.74 | 92.25 | 84.00 | 94.00 | 74.00 | 93.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "A saurari cikakken rahoton wakilin Muryar Amurka Ibrahim Abdul'aziz"

ner_results = nlp(example)
print(ner_results)
```
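For more than one sentence, the pipeline also accepts a list of strings, which is usually faster and tidier than looping manually. A short sketch (the second sentence here is only a placeholder):

```
from transformers import pipeline

nlp = pipeline("ner", model="mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa")
sentences = [
    "A saurari cikakken rahoton wakilin Muryar Amurka Ibrahim Abdul'aziz",
    "Wani rahoto daban",  # placeholder second sentence
]
# A list input yields one list of entity dicts per input sentence.
for sentence, entities in zip(sentences, nlp(sentences)):
    print(sentence)
    for ent in entities:
        print(" ", ent["entity"], ent["word"])
```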
mbeukman/xlm-roberta-base-finetuned-ner-yoruba
mbeukman
2021-11-25T09:04:45Z
4
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "yo", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language:
- yo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Kò sí ẹ̀rí tí ó fi ẹsẹ̀ rinlẹ̀ ."
---

# xlm-roberta-base-finetuned-ner-yoruba

This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Yoruba part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. It is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources that would be needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.

## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded, and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, for example, names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.

Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score over all NER categories. These metrics are on the test set for MasakhaNER, whose data distribution is similar to that of the training set, so these results do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details for this specific model, compared to the others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four give performance broken down by category.

This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE|Beginning of a DATE entity right after another DATE entity
I-DATE|DATE entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organisation right after another organisation
I-ORG|Organisation
B-LOC|Beginning of a location right after another location
I-LOC|Location

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| ---------- | -------------- | ------------------------------- | ----- | --------- | ------ | --------- | -------- | -------- | -------- |
| [xlm-roberta-base-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-yoruba) (This model) | [base](https://huggingface.co/xlm-roberta-base) | yor | 78.22 | 77.21 | 79.26 | 77.00 | 80.00 | 71.00 | 82.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-yoruba) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | yor | 80.29 | 78.34 | 82.35 | 77.00 | 82.00 | 73.00 | 86.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-yoruba) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | yor | 83.68 | 79.92 | 87.82 | 78.00 | 86.00 | 74.00 | 92.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-yoruba'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kò sí ẹ̀rí tí ó fi ẹsẹ̀ rinlẹ̀ ."

ner_results = nlp(example)
print(ner_results)
```
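The label table above is also stored in the model's configuration, so it can be read programmatically instead of copied by hand; for example:

```
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "mbeukman/xlm-roberta-base-finetuned-ner-yoruba"
)
# Maps class indices to the BIO tags listed above, and back again.
print(model.config.id2label)
print(model.config.label2id)
```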
mbeukman/xlm-roberta-base-finetuned-ner-luo
mbeukman
2021-11-25T09:04:35Z
7
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "luo", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language:
- luo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
---

# xlm-roberta-base-finetuned-ner-luo

This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luo part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. It is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources that would be needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.

## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded, and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, for example, names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.

Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score over all NER categories. These metrics are on the test set for MasakhaNER, whose data distribution is similar to that of the training set, so these results do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details for this specific model, compared to the others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four give performance broken down by category.

This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE|Beginning of a DATE entity right after another DATE entity
I-DATE|DATE entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organisation right after another organisation
I-ORG|Organisation
B-LOC|Beginning of a location right after another location
I-LOC|Location

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| ---------- | -------------- | ------------------------------- | ----- | --------- | ------ | --------- | -------- | -------- | -------- |
| [xlm-roberta-base-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luo) (This model) | [base](https://huggingface.co/xlm-roberta-base) | luo | 75.99 | 76.18 | 75.80 | 71.00 | 76.00 | 62.00 | 85.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | luo | 78.71 | 78.91 | 78.52 | 72.00 | 84.00 | 59.00 | 87.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | luo | 78.13 | 77.75 | 78.52 | 65.00 | 82.00 | 61.00 | 89.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-luo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"

ner_results = nlp(example)
print(ner_results)
```
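Inference runs on CPU by default. Given the GPU figures in the Training Resources section, you may want to place the pipeline on a GPU, which is a single argument:

```
from transformers import pipeline

# device=0 uses the first CUDA GPU; device=-1 (the default) stays on CPU.
nlp = pipeline(
    "ner",
    model="mbeukman/xlm-roberta-base-finetuned-ner-luo",
    device=0,
)
print(nlp("Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach"))
```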
mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda
mbeukman
2021-11-25T09:04:30Z
8
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "rw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language:
- rw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
---

# xlm-roberta-base-finetuned-ner-kinyarwanda

This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Kinyarwanda part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. It is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources that would be needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.

## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded, and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, for example, names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.

Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score over all NER categories. These metrics are on the test set for MasakhaNER, whose data distribution is similar to that of the training set, so these results do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details for this specific model, compared to the others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four give performance broken down by category.

This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE|Beginning of a DATE entity right after another DATE entity
I-DATE|DATE entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organisation right after another organisation
I-ORG|Organisation
B-LOC|Beginning of a location right after another location
I-LOC|Location

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| ---------- | -------------- | ------------------------------- | ----- | --------- | ------ | --------- | -------- | -------- | -------- |
| [xlm-roberta-base-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda) (This model) | [base](https://huggingface.co/xlm-roberta-base) | kin | 74.59 | 72.17 | 77.17 | 70.00 | 75.00 | 70.00 | 82.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | kin | 79.55 | 75.56 | 83.99 | 69.00 | 79.00 | 77.00 | 90.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | kin | 76.31 | 72.64 | 80.37 | 70.00 | 76.00 | 75.00 | 84.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."

ner_results = nlp(example)
print(ner_results)
```
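The F1, precision and recall figures above are entity-level scores of the kind computed by the `seqeval` package (which the MasakhaNER evaluation code also builds on). A minimal sketch of reproducing such a score on toy tag sequences, assuming `seqeval` is installed:

```
from seqeval.metrics import classification_report, f1_score

# One list of BIO tags per sentence, gold and predicted.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]

print(f1_score(y_true, y_pred))               # aggregate F1 over all categories
print(classification_report(y_true, y_pred))  # per-category breakdown
```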
mbeukman/xlm-roberta-base-finetuned-ner-igbo
mbeukman
2021-11-25T09:04:28Z
7
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "ig", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language:
- ig
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwụla Ekweremmadụ"
---

# xlm-roberta-base-finetuned-ner-igbo

This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Igbo part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).

## About

This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).

This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. It is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Contact & More information

For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.

### Training Resources

In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources that would be needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.

## Data

The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for using this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used. It comes from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).

## Intended Use

This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.

## Limitations

This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.

Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), that model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded, and having subpar results on other languages (possibly due to unbalanced training data).

As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, for example, names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.

Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if it is used without verification that it does what it is supposed to.

### Privacy & Ethical Considerations

The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model.

## Metrics

The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score over all NER categories. These metrics are on the test set for MasakhaNER, whose data distribution is similar to that of the training set, so these results do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.

The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.

## Caveats and Recommendations

In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data.

## Model Structure

Here are some performance details for this specific model, compared to the others we trained. All of these metrics were calculated on the test set, and the seed chosen was the one that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four give performance broken down by category.

This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):

Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE|Beginning of a DATE entity right after another DATE entity
I-DATE|DATE entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organisation right after another organisation
I-ORG|Organisation
B-LOC|Beginning of a location right after another location
I-LOC|Location

| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| ---------- | -------------- | ------------------------------- | ----- | --------- | ------ | --------- | -------- | -------- | -------- |
| [xlm-roberta-base-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-igbo) (This model) | [base](https://huggingface.co/xlm-roberta-base) | ibo | 86.06 | 85.20 | 86.94 | 76.00 | 86.00 | 90.00 | 87.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | ibo | 88.39 | 87.08 | 89.74 | 74.00 | 91.00 | 90.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | ibo | 84.93 | 83.63 | 86.26 | 70.00 | 88.00 | 89.00 | 84.00 |

## Usage

To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):

```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-igbo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwụla Ekweremmadụ"

ner_results = nlp(example)
print(ner_results)
```
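One thing to keep in mind when reading the raw output: XLM-R tokenises text into SentencePiece sub-word pieces, so a single word can be split across several predictions. You can inspect the segmentation directly:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mbeukman/xlm-roberta-base-finetuned-ner-igbo")
# Word-initial pieces are prefixed with '▁'; the rest are continuations.
print(tokenizer.tokenize("Ike ịda jụụ otụ nkeji banyere oke ogbugbu"))
```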
mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili
mbeukman
2021-11-25T09:04:12Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "sw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sw tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." --- # xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-luganda](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part. More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). ## About This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set). This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Contact & More information For more information about the models, including training scripts, detailed results and further resources, you can visit the the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository. ### Training Resources In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB's of VRAM when using a batch size of 1. ## Data The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for the use of this data is that it is the "first large, publicly available, high­ quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811). ## Intended Use This model are intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and downright performance is limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next. 
## Limitations This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer. Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data). As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words and entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to. ### Privacy & Ethical Considerations The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model. ## Metrics The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories. These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable. The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes. ## Caveats and Recommendations In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data. ## Model Structure Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category. 
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)): Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location | Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) | | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | | [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) (This model) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 | | [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 | | [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 | | [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 | | [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 | | [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 | | [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 | | 
[xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 | ## Usage To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)): ``` from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline model_name = 'mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." ner_results = nlp(example) print(ner_results) ```
mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili
mbeukman
2021-11-25T09:04:07Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "sw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sw tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." --- # xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-kinyarwanda](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). ## About This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set). This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Contact & More information For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository. ### Training Resources In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1. ## Data The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for the use of this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811). ## Intended Use This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next. 
## Limitations This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly or in an unfair/biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer. Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), the limitations of that model can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data). As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards missing, for example, people's names consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may surface if it is used without verification that it does what it is supposed to do. ### Privacy & Ethical Considerations The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model. ## Metrics The language-adaptive models achieve (mostly) superior performance compared to starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories. These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set's; these results therefore do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable. The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes. ## Caveats and Recommendations In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data. ## Model Structure Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the last four provide performance broken down by category. 
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)): Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location | Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) | | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | | [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) (This model) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 | | [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 | | [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 | | [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 | | [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 | | [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 | | [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 | | 
[xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 | ## Usage To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)): ``` from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline model_name = 'mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." ner_results = nlp(example) print(ner_results) ```
mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili
mbeukman
2021-11-25T09:04:02Z
6
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "sw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sw tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." --- # xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-igbo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). ## About This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set). This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Contact & More information For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository. ### Training Resources In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1. ## Data The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for the use of this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811). ## Intended Use This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next. 
## Limitations This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly or in an unfair/biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer. Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), the limitations of that model can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data). As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards missing, for example, people's names consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may surface if it is used without verification that it does what it is supposed to do. ### Privacy & Ethical Considerations The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model. ## Metrics The language-adaptive models achieve (mostly) superior performance compared to starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories. These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set's; these results therefore do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable. The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes. ## Caveats and Recommendations In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data. ## Model Structure Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the last four provide performance broken down by category. 
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)): Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location | Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) | | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | | [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) (This model) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 | | [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 | | [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 | | [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 | | [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 | | [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 | | [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 | | 
[xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 | ## Usage To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)): ``` from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline model_name = 'mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." ner_results = nlp(example) print(ner_results) ```
mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili
mbeukman
2021-11-25T09:03:58Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "sw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sw tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." --- # xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-hausa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). ## About This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set). This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Contact & More information For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository. ### Training Resources In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1. ## Data The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for the use of this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are further reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811). ## Intended Use This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next. 
## Limitations This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly or in an unfair/biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer. Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), the limitations of that model can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data). As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards missing, for example, people's names consisting of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may surface if it is used without verification that it does what it is supposed to do. ### Privacy & Ethical Considerations The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model. ## Metrics The language-adaptive models achieve (mostly) superior performance compared to starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories. These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set's; these results therefore do not directly indicate how well the models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable. The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes. ## Caveats and Recommendations In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, that might need to be taken into account and addressed, for example by collecting and annotating more data. ## Model Structure Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the last four provide performance broken down by category. 
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)): Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location | Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) | | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | | [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) (This model) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 | | [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 | | [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 | | [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 | | [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 | | [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 | | [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 | | 
[xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 | ## Usage To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)): ``` from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline model_name = 'mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." ner_results = nlp(example) print(ner_results) ```
jpabbuehl/distilbert-base-uncased-finetuned-cola
jpabbuehl
2021-11-25T08:49:51Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5229586822934302 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7588 - Matthews Correlation: 0.5230 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5261 | 1.0 | 535 | 0.5125 | 0.4124 | | 0.3502 | 2.0 | 1070 | 0.5439 | 0.5076 | | 0.2378 | 3.0 | 1605 | 0.6629 | 0.4946 | | 0.1809 | 4.0 | 2140 | 0.7588 | 0.5230 | | 0.1309 | 5.0 | 2675 | 0.8901 | 0.5056 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
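Since the card leaves intended usage unspecified, here is a minimal sketch for trying the checkpoint (not part of the original card; CoLA is a binary acceptability task, and the label names depend on this checkpoint's config, often the generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

# Load the fine-tuned CoLA classifier through the standard
# text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="jpabbuehl/distilbert-base-uncased-finetuned-cola",
)

# Returns a label and a confidence score for the sentence.
print(classifier("The book was read by the girl."))
```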
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro
DiegoAlysson
2021-11-25T03:08:55Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: opus-mt-en-ro-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 27.9273 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2915 - Bleu: 27.9273 - Gen Len: 34.0935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
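As a minimal usage sketch (not from the original card, and assuming the fine-tuned checkpoint keeps the Marian tokenizer and generation config of its base model):

```python
from transformers import pipeline

# MarianMT checkpoints work with the translation pipeline out of the box.
translator = pipeline(
    "translation_en_to_ro",
    model="DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro",
)

print(translator("The committee approved the proposal on Monday."))
```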
arnolfokam/roberta-base-pcm
arnolfokam
2021-11-24T21:18:39Z
10
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "NER", "pcm", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - pcm tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall license: apache-2.0 widget: - text: "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida." --- # Model description **roberta-base-pcm** is a model based on the fine-tuned RoBERTa base model. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com) #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed in a production system. - The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **roberta-base-pcm**| 88.55 | 82.45 | 85.39 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-pcm") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-pcm") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida." ner_results = nlp(example) print(ner_results) ```
bgoel4132/twitter-sentiment
bgoel4132
2021-11-24T19:39:02Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bgoel4132/autonlp-data-twitter-sentiment", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - bgoel4132/autonlp-data-twitter-sentiment co2_eq_emissions: 186.8637425115097 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 35868888 - CO2 Emissions (in grams): 186.8637425115097 ## Validation Metrics - Loss: 0.2020547091960907 - Accuracy: 0.9233253193796257 - Macro F1: 0.9240407542958707 - Micro F1: 0.9233253193796257 - Weighted F1: 0.921800586774046 - Macro Precision: 0.9432284179846658 - Micro Precision: 0.9233253193796257 - Weighted Precision: 0.9247263361914827 - Macro Recall: 0.9139437626409382 - Micro Recall: 0.9233253193796257 - Weighted Recall: 0.9233253193796257 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bgoel4132/autonlp-twitter-sentiment-35868888 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bgoel4132/autonlp-twitter-sentiment-35868888", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bgoel4132/autonlp-twitter-sentiment-35868888", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
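The Python example above stops at the raw forward pass; a short follow-up sketch (hedged: the label names come from the checkpoint's own `id2label` config, which the card does not document) turns the logits into a predicted class:

```python
import torch

# Continue from `model` and `outputs` in the snippet above.
probs = torch.softmax(outputs.logits, dim=-1)
pred = probs.argmax(dim=-1).item()

# Print the predicted label name and its probability.
print(model.config.id2label[pred], probs[0, pred].item())
```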
castorini/doc2query-t5-large-msmarco
castorini
2021-11-24T19:16:08Z
6
1
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
For more information, check [doc2query.ai](http://doc2query.ai)
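The card itself gives no usage snippet. doc2query models are conventionally used to generate synthetic queries from a passage for document expansion; below is a hedged sketch with the standard T5 classes (the example passage and sampling parameters are illustrative, not taken from the card):

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("castorini/doc2query-t5-large-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/doc2query-t5-large-msmarco")

doc = ("The Manhattan Project was a research and development undertaking "
       "during World War II that produced the first nuclear weapons.")
input_ids = tokenizer(doc, return_tensors="pt").input_ids

# Sample a few candidate queries; top-k sampling is the usual choice
# for doc2query-style document expansion.
outputs = model.generate(
    input_ids, max_length=64, do_sample=True, top_k=10, num_return_sequences=3
)
for o in outputs:
    print(tokenizer.decode(o, skip_special_tokens=True))
```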
Plimpton/distilbert-base-uncased-finetuned-squad
Plimpton
2021-11-24T17:15:45Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 2.4285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5169 | 1.0 | 1642 | 1.6958 | | 1.1326 | 2.0 | 3284 | 2.0009 | | 0.8638 | 3.0 | 4926 | 2.4285 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
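As a minimal usage sketch (not part of the original card): the checkpoint should work with the standard question-answering pipeline, and since it was tuned on squad_v2, `handle_impossible_answer=True` lets it abstain on unanswerable questions:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Plimpton/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased "
            "on the squad_v2 dataset.",
    handle_impossible_answer=True,  # squad_v2 includes unanswerable questions
)
print(result)
```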
sagteam/pharm-relation-extraction
sagteam
2021-11-24T17:12:12Z
8
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "arxiv:2105.00059", "arxiv:1911.02116", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
pharm-relation-extraction === Model trained to recognize 4 types of relationships between significant pharmacological entities in Russian-language reviews: ADR–Drugname, Drugname–Diseasename, Drugname–SourceInfoDrug, Diseasename–Indication. The model takes as input a review text and a pair of entities, and determines whether a relationship holds between them and, if so, which of the 4 types listed above it is. Data ---- The proposed model is trained on a subset of 908 reviews of the [Russian Drug Review Corpus (RDRS)](https://arxiv.org/pdf/2105.00059.pdf). The subset contains pairs of entities marked with the 4 listed types of relationships: - ADR–Drugname — the relationship between the drug and its side effects - Drugname–SourceInfoDrug — the relationship between the medication and the source of information about it (e.g., "was advised at the pharmacy", "the doctor recommended it") - Drugname–Diseasename — the relationship between the drug and the disease - Diseasename–Indication — the connection between the illness and its symptoms (e.g., "cough", "fever 39 degrees") Also, this subset contains pairs of the same entity types between which there is no relationship: for example, a drug and an unrelated side effect that appeared after taking another drug; in other words, this side effect is related to another drug. Model topology and training ---- The proposed model is based on the [XLM-RoBERTa-large](https://arxiv.org/abs/1911.02116) topology. After additional training as a language model on a corpus of unannotated drug reviews, this model was trained as a classification model on 80% of the texts from the subset of the corpus described above. How to use ---- See section "How to use" in [our git repository for the model](https://github.com/sag111/Relation_Extraction) Results ---- Here is the relation-recognition performance on the best fold, measured by the F1-score. | ADR–Drugname | Drugname–Diseasename | Drugname–SourceInfoDrug | Diseasename–Indication | | ------------- | -------------------- | ----------------------- | ---------------------- | | 0.955 | 0.892 | 0.922 | 0.891 | Citation info ---- If you have found our results helpful in your work, feel free to cite our publication as: ``` @article{sboev2021extraction, title={Extraction of the Relations between Significant Pharmacological Entities in Russian-Language Internet Reviews on Medications}, author={Sboev, Alexander and Selivanov, Anton and Moloshnikov, Ivan and Rybka, Roman and Gryaznov, Artem and Sboeva, Sanna and Rylkov, Gleb}, year={2021}, publisher={Preprints} } ```
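Purely as a hedged sketch of what loading the classifier might look like (the exact way the entity pair is encoded into the input is defined in the authors' repository, so the two-segment formatting below is an illustrative assumption, not the documented interface):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sagteam/pharm-relation-extraction")
model = AutoModelForSequenceClassification.from_pretrained(
    "sagteam/pharm-relation-extraction"
)

# Illustrative input only: review text as the first segment, the candidate
# entity pair as the second. See the repository for the real format.
review = "После приёма препарата появилась сильная головная боль."
pair = "препарат; головная боль"  # assumed encoding of the entity pair
inputs = tokenizer(review, pair, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```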
AdapterHub/roberta-base-pf-yelp_polarity
AdapterHub
2021-11-24T16:33:21Z
1
0
adapter-transformers
[ "adapter-transformers", "text-classification", "roberta", "en", "dataset:yelp_polarity", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - roberta - adapter-transformers datasets: - yelp_polarity language: - en --- # Adapter `AdapterHub/roberta-base-pf-yelp_polarity` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-yelp_polarity", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
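A short follow-up sketch to the snippet above, running the activated adapter's classification head (hedged: the polarity label mapping comes from the prediction head's config, not from anything stated in this card):

```python
import torch
from transformers import RobertaTokenizer

# `model` is the AutoModelWithHeads instance from the card's snippet.
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("Great food and friendly staff.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # logits from the loaded prediction head
print(logits.argmax(-1).item())  # index of the predicted polarity class
```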
AdapterHub/roberta-base-pf-wnut_17
AdapterHub
2021-11-24T16:33:15Z
5
0
adapter-transformers
[ "adapter-transformers", "token-classification", "roberta", "en", "dataset:wnut_17", "arxiv:2104.08247", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - token-classification - roberta - adapter-transformers datasets: - wnut_17 language: - en --- # Adapter `AdapterHub/roberta-base-pf-wnut_17` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wnut_17", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-winogrande
AdapterHub
2021-11-24T16:33:06Z
2
0
adapter-transformers
[ "adapter-transformers", "roberta", "adapterhub:comsense/winogrande", "en", "dataset:winogrande", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - roberta - adapterhub:comsense/winogrande - adapter-transformers datasets: - winogrande language: - en --- # Adapter `AdapterHub/roberta-base-pf-winogrande` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-winogrande", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
AdapterHub/roberta-base-pf-trec
AdapterHub
2021-11-24T16:32:34Z
0
0
adapter-transformers
[ "adapter-transformers", "text-classification", "roberta", "en", "dataset:trec", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - roberta - adapter-transformers datasets: - trec language: - en --- # Adapter `AdapterHub/roberta-base-pf-trec` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-trec", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-social_i_qa
AdapterHub
2021-11-24T16:32:05Z
5
0
adapter-transformers
[ "adapter-transformers", "roberta", "en", "dataset:social_i_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - roberta - adapter-transformers datasets: - social_i_qa language: - en --- # Adapter `AdapterHub/roberta-base-pf-social_i_qa` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-social_i_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
AdapterHub/roberta-base-pf-sick
AdapterHub
2021-11-24T16:31:49Z
10
1
adapter-transformers
[ "adapter-transformers", "text-classification", "roberta", "adapterhub:nli/sick", "en", "dataset:sick", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - roberta - adapter-transformers - adapterhub:nli/sick - text-classification datasets: - sick language: - en --- # Adapter `AdapterHub/roberta-base-pf-sick` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sick", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-scicite
AdapterHub
2021-11-24T16:31:36Z
4
0
adapter-transformers
[ "adapter-transformers", "text-classification", "roberta", "en", "dataset:scicite", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - roberta - adapter-transformers datasets: - scicite language: - en --- # Adapter `AdapterHub/roberta-base-pf-scicite` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-scicite", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-rotten_tomatoes
AdapterHub
2021-11-24T16:31:26Z
2
0
adapter-transformers
[ "adapter-transformers", "text-classification", "roberta", "adapterhub:sentiment/rotten_tomatoes", "en", "dataset:rotten_tomatoes", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - roberta - adapterhub:sentiment/rotten_tomatoes - adapter-transformers datasets: - rotten_tomatoes language: - en --- # Adapter `AdapterHub/roberta-base-pf-rotten_tomatoes` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-rotten_tomatoes", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-multirc
AdapterHub
2021-11-24T16:30:32Z
2
0
adapter-transformers
[ "adapter-transformers", "text-classification", "adapterhub:rc/multirc", "roberta", "en", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - adapterhub:rc/multirc - roberta - adapter-transformers language: - en --- # Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-multirc", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-emotion
AdapterHub
2021-11-24T16:29:46Z
9
0
adapter-transformers
[ "adapter-transformers", "text-classification", "roberta", "en", "dataset:emotion", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - roberta - adapter-transformers datasets: - emotion language: - en --- # Adapter `AdapterHub/roberta-base-pf-emotion` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-emotion", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-conll2003_pos
AdapterHub
2021-11-24T16:29:03Z
2
0
adapter-transformers
[ "adapter-transformers", "token-classification", "roberta", "adapterhub:pos/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - token-classification - roberta - adapterhub:pos/conll2003 - adapter-transformers - token-classification datasets: - conll2003 language: - en --- # Adapter `AdapterHub/roberta-base-pf-conll2003_pos` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003_pos", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/roberta-base-pf-conll2000
AdapterHub
2021-11-24T16:28:49Z
5
0
adapter-transformers
[ "adapter-transformers", "token-classification", "roberta", "adapterhub:chunk/conll2000", "en", "dataset:conll2000", "arxiv:2104.08247", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - token-classification - roberta - adapterhub:chunk/conll2000 - adapter-transformers datasets: - conll2000 language: - en --- # Adapter `AdapterHub/roberta-base-pf-conll2000` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2000", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
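Since this adapter ships a tagging head, inference produces one label per (sub)token rather than one per sentence. The following sketch is an assumption-laden illustration, not part of the original card: it tokenizes a made-up sentence, takes the argmax over the tagging head's logits, and prints token/label-index pairs. Converting the indices to CoNLL-2000 chunk tag names depends on the head's label mapping.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# example sentence is made up for illustration
inputs = tokenizer("He reckons the current account deficit will narrow.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # tagging head predicts one label per token position

# one predicted chunk-label index per input token
predictions = outputs.logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, predictions)))
```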
AdapterHub/roberta-base-pf-cola
AdapterHub
2021-11-24T16:27:49Z
11
1
adapter-transformers
[ "adapter-transformers", "text-classification", "roberta", "adapterhub:lingaccept/cola", "en", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - roberta - adapterhub:lingaccept/cola - adapter-transformers language: - en --- # Adapter `AdapterHub/roberta-base-pf-cola` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cola", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/bert-base-uncased-pf-yelp_polarity
AdapterHub
2021-11-24T16:27:20Z
4
0
adapter-transformers
[ "adapter-transformers", "text-classification", "bert", "en", "dataset:yelp_polarity", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - bert - adapter-transformers datasets: - yelp_polarity language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-yelp_polarity` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-yelp_polarity", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/bert-base-uncased-pf-winogrande
AdapterHub
2021-11-24T16:27:05Z
0
0
adapter-transformers
[ "adapter-transformers", "bert", "adapterhub:comsense/winogrande", "en", "dataset:winogrande", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - bert - adapterhub:comsense/winogrande - adapter-transformers datasets: - winogrande language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-winogrande` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-winogrande", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
AdapterHub/bert-base-uncased-pf-scitail
AdapterHub
2021-11-24T16:25:46Z
3
0
adapter-transformers
[ "adapter-transformers", "text-classification", "bert", "adapterhub:nli/scitail", "en", "dataset:scitail", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - bert - adapterhub:nli/scitail - adapter-transformers datasets: - scitail language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-scitail` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scitail", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/bert-base-uncased-pf-quail
AdapterHub
2021-11-24T16:24:59Z
1
0
adapter-transformers
[ "adapter-transformers", "bert", "en", "dataset:quail", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - bert - adapter-transformers datasets: - quail language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-quail` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quail", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
AdapterHub/bert-base-uncased-pf-mnli
AdapterHub
2021-11-24T16:24:19Z
4
0
adapter-transformers
[ "adapter-transformers", "text-classification", "bert", "adapterhub:nli/multinli", "en", "dataset:multi_nli", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - bert - adapterhub:nli/multinli - adapter-transformers datasets: - multi_nli language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-mnli` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mnli", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/bert-base-uncased-pf-emotion
AdapterHub
2021-11-24T16:23:08Z
4
0
adapter-transformers
[ "adapter-transformers", "text-classification", "bert", "en", "dataset:emotion", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - bert - adapter-transformers datasets: - emotion language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-emotion` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emotion", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/bert-base-uncased-pf-emo
AdapterHub
2021-11-24T16:23:01Z
4
0
adapter-transformers
[ "adapter-transformers", "text-classification", "bert", "en", "dataset:emo", "arxiv:2104.08247", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification - bert - adapter-transformers datasets: - emo language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-emo` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emo", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
AdapterHub/bert-base-uncased-pf-cosmos_qa
AdapterHub
2021-11-24T16:22:41Z
7
0
adapter-transformers
[ "adapter-transformers", "bert", "adapterhub:comsense/cosmosqa", "en", "dataset:cosmos_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - bert - adapterhub:comsense/cosmosqa - adapter-transformers datasets: - cosmos_qa language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-cosmos_qa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cosmos_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
AdapterHub/bert-base-uncased-pf-copa
AdapterHub
2021-11-24T16:22:33Z
5
0
adapter-transformers
[ "adapter-transformers", "bert", "adapterhub:comsense/copa", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - bert - adapterhub:comsense/copa - adapter-transformers language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-copa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-copa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
AdapterHub/bert-base-uncased-pf-conll2003_pos
AdapterHub
2021-11-24T16:22:26Z
12
0
adapter-transformers
[ "adapter-transformers", "token-classification", "bert", "adapterhub:pos/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - token-classification - bert - adapterhub:pos/conll2003 - adapter-transformers datasets: - conll2003 language: - en --- # Adapter `AdapterHub/bert-base-uncased-pf-conll2003_pos` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003_pos", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
Lowin/chinese-bigbird-mini-1024
Lowin
2021-11-24T16:05:17Z
127
1
transformers
[ "transformers", "pytorch", "big_bird", "fill-mask", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - zh license: - apache-2.0 --- ```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for text in self.pre_tokenizer(text): if text in self.vocab: split_tokens.append(text) else: split_tokens.extend(super()._tokenize(text)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-mini-1024') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-mini-1024') ``` https://github.com/LowinLi/chinese-bigbird
Lowin/chinese-bigbird-tiny-1024
Lowin
2021-11-24T16:03:15Z
52
2
transformers
[ "transformers", "pytorch", "big_bird", "feature-extraction", "zh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
--- language: - zh license: - apache-2.0 --- ```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for text in self.pre_tokenizer(text): if text in self.vocab: split_tokens.append(text) else: split_tokens.extend(super()._tokenize(text)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-tiny-1024') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-tiny-1024') ``` https://github.com/LowinLi/chinese-bigbird
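As a small follow-up to the snippet in the card (and not part of the original card), a common way to use this feature-extraction checkpoint is to pool the final hidden states into a single sentence vector. The example text and the choice of mean pooling are illustrative assumptions, not a recommendation from the model authors.

```python
import torch

# example sentence is for illustration only
text = "自然语言处理是人工智能的一个重要方向"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# mean-pool the final hidden states into a single sentence embedding
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size)
```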
huggingtweets/cupcakkesays
huggingtweets
2021-11-24T12:56:57Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/cupcakkesays/1637758613095/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1061608813730635776/boCDIPDX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">cupcakKe lyrics</div> <div style="text-align: center; font-size: 14px;">@cupcakkesays</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from cupcakKe lyrics. | Data | cupcakKe lyrics | | --- | --- | | Tweets downloaded | 3200 | | Retweets | 0 | | Short tweets | 44 | | Tweets kept | 3156 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3beoi9ei/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cupcakkesays's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kye6z0e) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kye6z0e/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cupcakkesays') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
arnolfokam/mbert-base-uncased-ner-kin
arnolfokam
2021-11-24T11:57:38Z
14
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "NER", "kin", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - kin tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall license: apache-2.0 widget: - text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n’u Rwanda, bushingiye nanone ku bufatanye hagati y’imigabane ya Afurika n’u Burayi." --- # Model description **mbert-base-uncased-ner-kin** is a model based on the fine-tuned Multilingual BERT base uncased model, previously fine-tuned for Named Entity Recognition using 10 high-resourced languages. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com) #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Kinyarwandan corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system. - The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **mbert-base-uncased-ner-kin**| 81.95 |81.55 |81.75 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Rayon Sports yasinyishije rutahizamu w’Umurundi" ner_results = nlp(example) print(ner_results) ```
arnolfokam/bert-base-uncased-swa
arnolfokam
2021-11-24T11:55:34Z
10
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "NER", "swa", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - swa tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall license: apache-2.0 widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19." --- # Model description **bert-base-uncased-swa** is a model based on the fine-tuned BERT base uncased model. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com) #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into a production system. - The training data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **bert-base-uncased-swa**| 83.38 | 89.32 | 86.26 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-swa") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-swa") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19." ner_results = nlp(example) print(ner_results) ```
arnolfokam/roberta-base-kin
arnolfokam
2021-11-24T11:46:30Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "NER", "kin", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - kin tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall license: apache-2.0 widget: - text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n’u Rwanda, bushingiye nanone ku bufatanye hagati y’imigabane ya Afurika n’u Burayi." --- # Model description **roberta-base-kin** is a model based on the fine-tuned RoBERTa base model. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence in this dataset at 10. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com). #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed in a production system. - The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **roberta-base-kin**| 76.26 | 80.58 | 78.36 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-kin") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-kin") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Rayon Sports yasinyishije rutahizamu w’Umurundi" ner_results = nlp(example) print(ner_results) ```
arnolfokam/roberta-base-swa
arnolfokam
2021-11-24T11:41:03Z
13
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "NER", "swa", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - swa tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall license: apache-2.0 widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19." --- # Model description **roberta-base-swa** is a model based on the fine-tuned RoBERTa base model. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence in this dataset at 10. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com). #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed in a production system. - The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **roberta-base-swa**| 80.58 | 86.79 | 83.57 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-swa") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-swa") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19." ner_results = nlp(example) print(ner_results) ```
arnolfokam/mbert-base-uncased-swa
arnolfokam
2021-11-24T11:35:54Z
14
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "NER", "swa", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - swa tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall license: apache-2.0 widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19." --- # Model description **mbert-base-uncased-swa** is a model based on the fine-tuned Multilingual BERT base uncased model. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence in this dataset at 10. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com). #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed in a production system. - The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **mbert-base-uncased-swa**| 85.59 | 90.80 | 88.12 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-swa") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-swa") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19." ner_results = nlp(example) print(ner_results) ```
team-indain-image-caption/hindi-image-captioning
team-indain-image-caption
2021-11-24T11:22:42Z
23
1
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "endpoints_compatible", "region:us" ]
image-text-to-text
2022-03-02T23:29:05Z
# Hindi Image Captioning Model This is an encoder-decoder image captioning model made with a [ViT](https://huggingface.co/google/vit-base-patch16-224-in21k) encoder and [GPT2-Hindi](https://huggingface.co/surajp/gpt2-hindi) as the decoder. This is a first attempt at using ViT + GPT2-Hindi for an image captioning task. We used the Flickr8k Hindi Dataset available on Kaggle to train the model. This model was trained during the Hugging Face course community week, organized by Hugging Face. ## How to use Here is how to use this model to caption an image of the Flickr8k dataset: ```python import torch import requests from PIL import Image from transformers import ViTFeatureExtractor, AutoTokenizer, \ VisionEncoderDecoderModel if torch.cuda.is_available(): device = 'cuda' else: device = 'cpu' url = 'https://shorturl.at/fvxEQ' image = Image.open(requests.get(url, stream=True).raw) encoder_checkpoint = 'google/vit-base-patch16-224' decoder_checkpoint = 'surajp/gpt2-hindi' model_checkpoint = 'team-indain-image-caption/hindi-image-captioning' feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint) tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint) model = VisionEncoderDecoderModel.from_pretrained(model_checkpoint).to(device) # Inference sample = feature_extractor(image, return_tensors="pt").pixel_values.to(device) clean_text = lambda x: x.replace('<|endoftext|>','').split('\n')[0] caption_ids = model.generate(sample, max_length = 50)[0] caption_text = clean_text(tokenizer.decode(caption_ids)) print(caption_text) ``` ## Training data We used the Flickr8k Hindi Dataset, which is the translated version of the original Flickr8k Dataset, available on Kaggle to train the model. ## Training procedure This model was trained during the Hugging Face course community week, organized by Hugging Face. The training was done on a Kaggle GPU. ## Training Parameters - epochs = 8, - batch_size = 8, - Mixed Precision Enabled ## Team Members - [Sean Benhur](https://www.linkedin.com/in/seanbenhur/) - [Herumb Shandilya](https://www.linkedin.com/in/herumb-s-740163131/)
arnolfokam/mbert-base-uncased-kin
arnolfokam
2021-11-24T11:13:53Z
10
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "NER", "kin", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - kin tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall license: apache-2.0 widget: - text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n’u Rwanda, bushingiye nanone ku bufatanye hagati y’imigabane ya Afurika n’u Burayi." --- # Model description **mbert-base-uncased-kin** is a model based on the fine-tuned multilingual BERT base uncased model. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence in this dataset at 10. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com). #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed in a production system. - The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **mbert-base-uncased-kin**| 81.35 | 83.98 | 82.64 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-kin") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-kin") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Rayon Sports yasinyishije rutahizamu w’Umurundi" ner_results = nlp(example) print(ner_results) ```
jb2k/bert-base-multilingual-cased-language-detection
jb2k
2021-11-24T01:36:01Z
4,142
14
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# bert-base-multilingual-cased-language-detection A model for language detection with support for 45 languages ## Model description This model was created by fine-tuning [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_language) dataset. This dataset has support for 45 languages, which are listed below: ``` Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh ``` ## Evaluation This model was evaluated on the test split of the [common language](https://huggingface.co/datasets/common_language) dataset, and achieved the following metrics: * Accuracy: 97.8%
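A minimal inference sketch for this detector (the example sentence is illustrative; note that if the uploaded config lacks an `id2label` mapping, the pipeline will return generic `LABEL_n` ids that have to be mapped back onto the alphabetical language list above):

```python
from transformers import pipeline

# Load the fine-tuned language detector from the Hub
detector = pipeline(
    "text-classification",
    model="jb2k/bert-base-multilingual-cased-language-detection",
)

# Returns the top predicted language label with a confidence score
print(detector("Bonjour tout le monde, comment allez-vous ?"))
```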
Hellisotherpeople/debate2vec
Hellisotherpeople
2021-11-23T18:45:27Z
34
7
fasttext
[ "fasttext", "text-classification", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - text-classification library_name: fasttext widget: - text: "dialectics" example_title: "dialectics" - text: "schizoanalysis" example_title: "schizoanalysis" - text: "praxis" example_title: "praxis" - text: "topicality" example_title: "topicality" --- # debate2vec Word vectors created from a large corpus of competitive debate evidence, plus data extraction / processing scripts. # Usage ```python import fasttext ft = fasttext.load_model('debate2vec.bin') ft.get_word_vector('dialectics') ``` # Download Link GitHub won't let me store large files in its repos. * [FastText Vectors Here](https://drive.google.com/file/d/1m-CwPcaIUun4qvg69Hx2gom9dMScuQwS/view?usp=sharing) (~260 MB) # About Created from all publicly available Cross Examination competitive debate evidence posted by the community on [Open Evidence](https://openev.debatecoaches.org/) (from 2013-2020). Search through the original evidence by going to [debate.cards](http://debate.cards/). Stats about this corpus: * 222485 unique documents larger than 200 words (DebateSum plus some additional debate docs that weren't well-formed enough for inclusion into DebateSum) * 107555 unique words (showing up more than 10 times in the corpus) * 101 million total words Stats about debate2vec vectors: * 300 dimensions, minimum number of appearances of a word was 10, trained for 100 epochs with lr set to 0.10 using FastText * lowercased (a cased version will be released) * No subword information The corpus includes the following topics: * 2013-2014 Cuba/Mexico/Venezuela Economic Engagement * 2014-2015 Oceans * 2015-2016 Domestic Surveillance * 2016-2017 China * 2017-2018 Education * 2018-2019 Immigration * 2019-2020 Reducing Arms Sales Other topics that this word vector model will handle extremely well: * Philosophy (Especially Left-Wing / Post-modernist) * Law * Government * Politics The initial release is of fastText vectors without subword information. Future releases will include fine-tuned GPT-2 and other high-end models as my GPU compute allows. # Screenshots ![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec.jpg) ![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec2.jpg) ![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec3.jpg)
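The fastText Python API also supports similarity queries over the vocabulary, which is a quick way to sanity-check the vectors; a short sketch (the file path and query words are illustrative):

```python
import fasttext

ft = fasttext.load_model("debate2vec.bin")

# 300-dimensional vector for a debate-jargon term
print(ft.get_word_vector("topicality").shape)  # (300,)

# Nearest neighbours by similarity; out-of-vocabulary words come back
# as zero vectors here, since the model has no subword information
for score, word in ft.get_nearest_neighbors("dialectics", k=5):
    print(f"{score:.3f}  {word}")
```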
AryanLala/autonlp-Scientific_Title_Generator-34558227
AryanLala
2021-11-23T16:51:34Z
8
19
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "en", "dataset:AryanLala/autonlp-data-Scientific_Title_Generator", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: autonlp language: en widget: - text: "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets." datasets: - AryanLala/autonlp-data-Scientific_Title_Generator co2_eq_emissions: 137.60574081887984 --- # Model Trained Using AutoNLP - Model: Google's Pegasus (https://huggingface.co/google/pegasus-xsum) - Problem type: Summarization - Model ID: 34558227 - CO2 Emissions (in grams): 137.60574081887984 - Spaces: https://huggingface.co/spaces/TitleGenerators/ArxivTitleGenerator - Dataset: arXiv Dataset (https://www.kaggle.com/Cornell-University/arxiv) - Data subset used: https://huggingface.co/datasets/AryanLala/autonlp-data-Scientific_Title_Generator ## Validation Metrics - Loss: 2.578599214553833 - Rouge1: 44.8482 - Rouge2: 24.4052 - RougeL: 40.1716 - RougeLsum: 40.1396 - Gen Len: 11.4675 ## Social - LinkedIn: https://www.linkedin.com/in/aryanlala/ - Twitter: https://twitter.com/AryanLala20 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/AryanLala/autonlp-Scientific_Title_Generator-34558227 ```
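The model can also be used locally through the standard seq2seq classes; a minimal sketch (the generation parameters here are illustrative, not the settings behind the reported metrics):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "AryanLala/autonlp-Scientific_Title_Generator-34558227"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

abstract = "Datasets is a community library for contemporary NLP designed to support this ecosystem."
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
# Pegasus emits the title as a one-sentence "summary" of the abstract
title_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```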
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab
Bharathdamu
2021-11-23T09:32:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
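The card omits a usage section; a minimal inference sketch with the generic ASR pipeline (the audio path is a placeholder, and 16 kHz mono input is assumed, as is standard for wav2vec2 checkpoints):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab",
)

# The pipeline resamples common audio formats to 16 kHz via ffmpeg
print(asr("sample_hindi.wav")["text"])
```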
Maltehb/aelaectra-danish-electra-small-uncased
Maltehb
2021-11-23T06:39:20Z
16
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "ælæctra", "danish", "ELECTRA-Small", "replaced token detection", "da", "dataset:DAGW", "arxiv:2003.10555", "arxiv:1810.04805", "arxiv:2005.03521", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: "da" co2_eq_emissions: 4009.5 tags: - ælæctra - pytorch - danish - ELECTRA-Small - replaced token detection license: "mit" datasets: - DAGW metrics: - f1 --- # Ælæctra - A Step Towards More Efficient Danish Natural Language Processing **Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis. Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings! Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.🙂 Here is an example on how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased") model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased") ``` ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased") model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased") ``` ### Evaluation of current Danish Language Models Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated: | Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download | | --- | --- | --- | --- | --- | --- | --- | | Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) | | DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) | | mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) | | mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) | On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'. 
### Pretraining To pretrain Ælæctra it is recommended to build a Docker container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased models. ### Fine-tuning To fine-tune any Ælæctra model, follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). ### References Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555 Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019) Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805 Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565 Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521 #### Acknowledgements As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order. A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020). Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback. Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high! #### Contact For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms: [<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter] [<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin] [<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram] <br /> [twitter]: https://twitter.com/malteH_B [instagram]: https://www.instagram.com/maltemusen/ [linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/
artursz/wav2vec2-large-xls-r-300m-lv-v05
artursz
2021-11-23T02:47:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-lv-v05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-lv-v05 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3862 - Wer: 0.2588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8836 | 2.81 | 400 | 0.8722 | 0.7244 | | 0.5365 | 5.63 | 800 | 0.4622 | 0.4812 | | 0.277 | 8.45 | 1200 | 0.4348 | 0.4056 | | 0.1947 | 11.27 | 1600 | 0.4223 | 0.3636 | | 0.1655 | 14.08 | 2000 | 0.4084 | 0.3465 | | 0.1441 | 16.9 | 2400 | 0.4329 | 0.3497 | | 0.121 | 19.72 | 2800 | 0.4371 | 0.3324 | | 0.1062 | 22.53 | 3200 | 0.4202 | 0.3198 | | 0.0937 | 25.35 | 3600 | 0.4063 | 0.3265 | | 0.0871 | 28.17 | 4000 | 0.4253 | 0.3255 | | 0.0755 | 30.98 | 4400 | 0.4368 | 0.3194 | | 0.0627 | 33.8 | 4800 | 0.4067 | 0.2908 | | 0.0595 | 36.62 | 5200 | 0.3929 | 0.2973 | | 0.0523 | 39.44 | 5600 | 0.3748 | 0.2817 | | 0.0434 | 42.25 | 6000 | 0.3769 | 0.2711 | | 0.0391 | 45.07 | 6400 | 0.3901 | 0.2653 | | 0.0319 | 47.88 | 6800 | 0.3862 | 0.2588 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
huggingtweets/kylelchong
huggingtweets
2021-11-23T01:12:59Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/kylelchong/1637629975064/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1363977743021584394/17Z8FHm2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Kyle L. Chong (he.him.his)</div> <div style="text-align: center; font-size: 14px;">@kylelchong</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Kyle L. Chong (he.him.his). | Data | Kyle L. Chong (he.him.his) | | --- | --- | | Tweets downloaded | 1072 | | Retweets | 213 | | Short tweets | 76 | | Tweets kept | 783 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xlb7d6c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kylelchong's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5bvgy2zz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5bvgy2zz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/kylelchong') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ueb1/IceBERT-finetuned
ueb1
2021-11-23T01:05:03Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: gpl-3.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: IceBERT-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IceBERT-finetuned This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7361 - Accuracy: 0.352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.5309 | 1.0 | 563 | 4.1093 | 0.329 | | 3.9723 | 2.0 | 1126 | 3.8339 | 0.344 | | 3.6949 | 3.0 | 1689 | 3.7490 | 0.346 | | 3.5124 | 4.0 | 2252 | 3.7488 | 0.358 | | 3.3763 | 5.0 | 2815 | 3.7361 | 0.352 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
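The card does not document the task or label set, so any usage sketch is speculative; assuming a standard sequence-classification head, inference would look like this (the Icelandic example is illustrative, and labels may come back as generic `LABEL_n` ids):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ueb1/IceBERT-finetuned")
# Interpret the output with care: the label set is undocumented
print(clf("Þetta er frábært!"))
```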
gayanin/t5-small-mlm-pubmed-45
gayanin
2021-11-22T23:47:01Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-mlm-pubmed-45 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-mlm-pubmed-45 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6395 - Rouge2 Precision: 0.3383 - Rouge2 Recall: 0.2424 - Rouge2 Fmeasure: 0.2753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | 2.519 | 0.75 | 500 | 1.9659 | 0.3178 | 0.1888 | 0.2299 | | 2.169 | 1.51 | 1000 | 1.8450 | 0.3256 | 0.2138 | 0.25 | | 2.0796 | 2.26 | 1500 | 1.7900 | 0.3368 | 0.2265 | 0.2636 | | 1.9978 | 3.02 | 2000 | 1.7553 | 0.3427 | 0.234 | 0.2709 | | 1.9686 | 3.77 | 2500 | 1.7172 | 0.3356 | 0.2347 | 0.2692 | | 1.9142 | 4.52 | 3000 | 1.6986 | 0.3358 | 0.238 | 0.2715 | | 1.921 | 5.28 | 3500 | 1.6770 | 0.3349 | 0.2379 | 0.2709 | | 1.8848 | 6.03 | 4000 | 1.6683 | 0.3346 | 0.2379 | 0.2708 | | 1.8674 | 6.79 | 4500 | 1.6606 | 0.3388 | 0.2419 | 0.2752 | | 1.8606 | 7.54 | 5000 | 1.6514 | 0.3379 | 0.2409 | 0.274 | | 1.8515 | 8.3 | 5500 | 1.6438 | 0.3356 | 0.2407 | 0.2731 | | 1.8403 | 9.05 | 6000 | 1.6401 | 0.3367 | 0.2421 | 0.2744 | | 1.8411 | 9.8 | 6500 | 1.6395 | 0.3383 | 0.2424 | 0.2753 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
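The card does not state the expected input format; assuming the model kept T5's standard sentinel-token span corruption when it was adapted to PubMed text, a gap-filling sketch would be (the clinical sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gayanin/t5-small-mlm-pubmed-45"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# <extra_id_0> marks the masked span, per T5's span-corruption convention
text = "The patient was treated with <extra_id_0> for community-acquired pneumonia."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```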
huggingtweets/dril-horse_ebooks-pukicho
huggingtweets
2021-11-22T22:54:49Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/dril-horse_ebooks-pukicho/1637621684272/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/866045441942487041/xRAnnstd_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1096005346/1_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Pukicho & Horse ebooks</div> <div style="text-align: center; font-size: 14px;">@dril-horse_ebooks-pukicho</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Pukicho & Horse ebooks. | Data | wint | Pukicho | Horse ebooks | | --- | --- | --- | --- | | Tweets downloaded | 3226 | 2989 | 3200 | | Retweets | 466 | 90 | 0 | | Short tweets | 308 | 292 | 421 | | Tweets kept | 2452 | 2607 | 2779 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29iqmln0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-horse_ebooks-pukicho's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29cfj39j) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29cfj39j/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-horse_ebooks-pukicho') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gayanin/t5-small-mlm-pubmed-35
gayanin
2021-11-22T22:24:30Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-mlm-pubmed-35 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-mlm-pubmed-35 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1101 - Rouge2 Precision: 0.4758 - Rouge2 Recall: 0.3498 - Rouge2 Fmeasure: 0.3927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 1.8404 | 0.75 | 500 | 1.5005 | 0.4265 | 0.2786 | 0.3273 | | 1.6858 | 1.51 | 1000 | 1.4216 | 0.4318 | 0.2946 | 0.3404 | | 1.6071 | 2.26 | 1500 | 1.3777 | 0.4472 | 0.3148 | 0.3598 | | 1.5551 | 3.02 | 2000 | 1.3360 | 0.4406 | 0.3168 | 0.3586 | | 1.5116 | 3.77 | 2500 | 1.3128 | 0.4523 | 0.3234 | 0.3671 | | 1.4837 | 4.52 | 3000 | 1.2937 | 0.4477 | 0.3215 | 0.3645 | | 1.4513 | 5.28 | 3500 | 1.2766 | 0.4511 | 0.3262 | 0.3689 | | 1.4336 | 6.03 | 4000 | 1.2626 | 0.4548 | 0.3283 | 0.3718 | | 1.4149 | 6.79 | 4500 | 1.2449 | 0.4495 | 0.3274 | 0.3687 | | 1.3977 | 7.54 | 5000 | 1.2349 | 0.4507 | 0.3305 | 0.3712 | | 1.3763 | 8.3 | 5500 | 1.2239 | 0.4519 | 0.3266 | 0.3688 | | 1.371 | 9.05 | 6000 | 1.2171 | 0.4546 | 0.3305 | 0.3727 | | 1.3501 | 9.8 | 6500 | 1.2080 | 0.4575 | 0.3329 | 0.3755 | | 1.3443 | 10.56 | 7000 | 1.2017 | 0.4576 | 0.3314 | 0.3742 | | 1.326 | 11.31 | 7500 | 1.1926 | 0.4578 | 0.333 | 0.3757 | | 1.3231 | 12.07 | 8000 | 1.1866 | 0.4606 | 0.3357 | 0.3782 | | 1.3089 | 12.82 | 8500 | 1.1816 | 0.4591 | 0.3338 | 0.3765 | | 1.3007 | 13.57 | 9000 | 1.1764 | 0.4589 | 0.3361 | 0.3777 | | 1.2943 | 14.33 | 9500 | 1.1717 | 0.4641 | 0.3382 | 0.3811 | | 1.2854 | 15.08 | 10000 | 1.1655 | 0.4617 | 0.3378 | 0.38 | | 1.2777 | 15.84 | 10500 | 1.1612 | 0.464 | 0.3401 | 0.3823 | | 1.2684 | 16.59 | 11000 | 1.1581 | 0.4608 | 0.3367 | 0.3789 | | 1.2612 | 17.35 | 11500 | 1.1554 | 0.4623 | 0.3402 | 0.3818 | | 1.2625 | 18.1 | 12000 | 1.1497 | 0.4613 | 0.3381 | 0.3802 | | 1.2529 | 18.85 | 12500 | 1.1465 | 0.4671 | 0.3419 | 0.3848 | | 1.2461 | 19.61 | 13000 | 1.1431 | 0.4646 | 0.3399 | 0.3824 | | 1.2415 | 20.36 | 13500 | 1.1419 | 0.4659 | 0.341 | 0.3835 | | 1.2375 | 21.12 | 14000 | 1.1377 | 0.4693 | 0.3447 | 0.3873 | | 1.2315 | 21.87 | 14500 | 1.1353 | 0.4672 | 0.3433 | 0.3855 | | 1.2263 | 22.62 | 15000 | 1.1333 | 0.467 | 0.3433 | 0.3854 | | 1.2214 | 23.38 | 15500 | 1.1305 | 0.4682 | 0.3446 | 0.3869 | | 1.2202 | 24.13 | 16000 | 1.1291 | 0.4703 | 0.3465 | 0.3888 | | 1.2155 | 24.89 | 16500 | 1.1270 | 0.472 | 0.348 | 0.3903 | | 1.2064 | 25.64 | 17000 | 1.1261 | 0.4724 | 0.3479 | 0.3905 | | 1.2173 | 26.4 | 17500 | 1.1236 | 0.4734 | 0.3485 | 0.3912 | | 1.1994 | 27.15 | 18000 | 1.1220 | 0.4739 | 0.3486 | 0.3915 | | 1.2018 | 27.9 | 18500 | 1.1217 | 
0.4747 | 0.3489 | 0.3921 | | 1.2045 | 28.66 | 19000 | 1.1194 | 0.4735 | 0.3488 | 0.3916 | | 1.1949 | 29.41 | 19500 | 1.1182 | 0.4732 | 0.3484 | 0.3911 | | 1.19 | 30.17 | 20000 | 1.1166 | 0.4724 | 0.3479 | 0.3904 | | 1.1932 | 30.92 | 20500 | 1.1164 | 0.4753 | 0.3494 | 0.3924 | | 1.1952 | 31.67 | 21000 | 1.1147 | 0.4733 | 0.3485 | 0.3911 | | 1.1922 | 32.43 | 21500 | 1.1146 | 0.475 | 0.3494 | 0.3923 | | 1.1889 | 33.18 | 22000 | 1.1132 | 0.4765 | 0.3499 | 0.3933 | | 1.1836 | 33.94 | 22500 | 1.1131 | 0.4768 | 0.351 | 0.3939 | | 1.191 | 34.69 | 23000 | 1.1127 | 0.4755 | 0.3495 | 0.3926 | | 1.1811 | 35.44 | 23500 | 1.1113 | 0.4748 | 0.349 | 0.3919 | | 1.1864 | 36.2 | 24000 | 1.1107 | 0.4751 | 0.3494 | 0.3921 | | 1.1789 | 36.95 | 24500 | 1.1103 | 0.4756 | 0.3499 | 0.3927 | | 1.1819 | 37.71 | 25000 | 1.1101 | 0.4758 | 0.35 | 0.3932 | | 1.1862 | 38.46 | 25500 | 1.1099 | 0.4755 | 0.3497 | 0.3926 | | 1.1764 | 39.22 | 26000 | 1.1101 | 0.4759 | 0.3498 | 0.3928 | | 1.1819 | 39.97 | 26500 | 1.1101 | 0.4758 | 0.3498 | 0.3927 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
gayanin/bart-mlm-pubmed-35
gayanin
2021-11-22T21:16:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-mlm-pubmed-35 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-mlm-pubmed-35 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9359 - Rouge2 Precision: 0.5451 - Rouge2 Recall: 0.4232 - Rouge2 Fmeasure: 0.4666 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | 1.4156 | 1.0 | 663 | 1.0366 | 0.5165 | 0.3967 | 0.4394 | | 1.1773 | 2.0 | 1326 | 0.9841 | 0.5354 | 0.4168 | 0.4589 | | 1.0894 | 3.0 | 1989 | 0.9554 | 0.5346 | 0.4133 | 0.4563 | | 0.9359 | 4.0 | 2652 | 0.9440 | 0.5357 | 0.4163 | 0.4587 | | 0.8758 | 5.0 | 3315 | 0.9340 | 0.5428 | 0.4226 | 0.465 | | 0.8549 | 6.0 | 3978 | 0.9337 | 0.5385 | 0.422 | 0.4634 | | 0.7743 | 7.0 | 4641 | 0.9330 | 0.542 | 0.422 | 0.4647 | | 0.7465 | 8.0 | 5304 | 0.9315 | 0.5428 | 0.4231 | 0.4654 | | 0.7348 | 9.0 | 5967 | 0.9344 | 0.5462 | 0.4244 | 0.4674 | | 0.7062 | 10.0 | 6630 | 0.9359 | 0.5451 | 0.4232 | 0.4666 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
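As with the T5 variants above, the input format is undocumented; assuming BART's standard `<mask>` infilling objective was retained for the PubMed adaptation, a sketch would be (the medical sentence is illustrative):

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

model_id = "gayanin/bart-mlm-pubmed-35"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

# BART infills <mask> spans by regenerating the whole sequence
text = "Myocardial infarction is caused by <mask> of a coronary artery."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```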
gayanin/t5-small-mlm-pubmed-15
gayanin
2021-11-22T21:10:30Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-mlm-pubmed-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-mlm-pubmed-15 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5389 - Rouge2 Precision: 0.7165 - Rouge2 Recall: 0.5375 - Rouge2 Fmeasure: 0.5981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 1.1024 | 0.75 | 500 | 0.7890 | 0.6854 | 0.4813 | 0.5502 | | 0.8788 | 1.51 | 1000 | 0.7176 | 0.6906 | 0.4989 | 0.5638 | | 0.8086 | 2.26 | 1500 | 0.6830 | 0.6872 | 0.5052 | 0.5663 | | 0.7818 | 3.02 | 2000 | 0.6650 | 0.6912 | 0.5104 | 0.5711 | | 0.7466 | 3.77 | 2500 | 0.6458 | 0.6965 | 0.5167 | 0.5774 | | 0.731 | 4.52 | 3000 | 0.6355 | 0.6955 | 0.5161 | 0.5763 | | 0.7126 | 5.28 | 3500 | 0.6249 | 0.6924 | 0.517 | 0.576 | | 0.6998 | 6.03 | 4000 | 0.6166 | 0.6995 | 0.5207 | 0.5809 | | 0.6855 | 6.79 | 4500 | 0.6076 | 0.6981 | 0.5215 | 0.5813 | | 0.676 | 7.54 | 5000 | 0.6015 | 0.7003 | 0.5242 | 0.5836 | | 0.6688 | 8.3 | 5500 | 0.5962 | 0.7004 | 0.5235 | 0.583 | | 0.6569 | 9.05 | 6000 | 0.5900 | 0.6997 | 0.5234 | 0.5827 | | 0.6503 | 9.8 | 6500 | 0.5880 | 0.703 | 0.5257 | 0.5856 | | 0.6455 | 10.56 | 7000 | 0.5818 | 0.7008 | 0.5259 | 0.5849 | | 0.635 | 11.31 | 7500 | 0.5796 | 0.7017 | 0.5271 | 0.5861 | | 0.6323 | 12.07 | 8000 | 0.5769 | 0.7053 | 0.5276 | 0.5877 | | 0.6241 | 12.82 | 8500 | 0.5730 | 0.7011 | 0.5243 | 0.5838 | | 0.6224 | 13.57 | 9000 | 0.5696 | 0.7046 | 0.5286 | 0.5879 | | 0.6139 | 14.33 | 9500 | 0.5685 | 0.7047 | 0.5295 | 0.5886 | | 0.6118 | 15.08 | 10000 | 0.5653 | 0.704 | 0.5297 | 0.5886 | | 0.6089 | 15.84 | 10500 | 0.5633 | 0.703 | 0.5272 | 0.5865 | | 0.598 | 16.59 | 11000 | 0.5613 | 0.7059 | 0.5293 | 0.5889 | | 0.6003 | 17.35 | 11500 | 0.5602 | 0.7085 | 0.532 | 0.5918 | | 0.5981 | 18.1 | 12000 | 0.5587 | 0.7106 | 0.5339 | 0.5938 | | 0.5919 | 18.85 | 12500 | 0.5556 | 0.708 | 0.5319 | 0.5914 | | 0.5897 | 19.61 | 13000 | 0.5556 | 0.7106 | 0.5327 | 0.5931 | | 0.5899 | 20.36 | 13500 | 0.5526 | 0.7114 | 0.534 | 0.5939 | | 0.5804 | 21.12 | 14000 | 0.5521 | 0.7105 | 0.5328 | 0.5928 | | 0.5764 | 21.87 | 14500 | 0.5520 | 0.715 | 0.537 | 0.5976 | | 0.5793 | 22.62 | 15000 | 0.5506 | 0.713 | 0.5346 | 0.5951 | | 0.5796 | 23.38 | 15500 | 0.5492 | 0.7124 | 0.5352 | 0.5952 | | 0.5672 | 24.13 | 16000 | 0.5482 | 0.7124 | 0.5346 | 0.5948 | | 0.5737 | 24.89 | 16500 | 0.5470 | 0.7134 | 0.5352 | 0.5956 | | 0.5685 | 25.64 | 17000 | 0.5463 | 0.7117 | 0.5346 | 0.5946 | | 0.5658 | 26.4 | 17500 | 0.5457 | 0.7145 | 0.5359 | 0.5965 | | 0.5657 | 27.15 | 18000 | 0.5447 | 0.7145 | 0.5367 | 0.597 | | 0.5645 | 27.9 | 18500 | 0.5441 | 0.7141 | 
0.5362 | 0.5964 | | 0.565 | 28.66 | 19000 | 0.5436 | 0.7151 | 0.5367 | 0.5972 | | 0.5579 | 29.41 | 19500 | 0.5426 | 0.7162 | 0.5378 | 0.5982 | | 0.563 | 30.17 | 20000 | 0.5424 | 0.7155 | 0.5373 | 0.5977 | | 0.556 | 30.92 | 20500 | 0.5418 | 0.7148 | 0.536 | 0.5966 | | 0.5576 | 31.67 | 21000 | 0.5411 | 0.7141 | 0.5356 | 0.5961 | | 0.5546 | 32.43 | 21500 | 0.5409 | 0.7149 | 0.5364 | 0.5967 | | 0.556 | 33.18 | 22000 | 0.5405 | 0.7143 | 0.5356 | 0.596 | | 0.5536 | 33.94 | 22500 | 0.5401 | 0.7165 | 0.5377 | 0.5982 | | 0.5527 | 34.69 | 23000 | 0.5397 | 0.7188 | 0.5389 | 0.5999 | | 0.5531 | 35.44 | 23500 | 0.5395 | 0.7172 | 0.538 | 0.5989 | | 0.5508 | 36.2 | 24000 | 0.5392 | 0.7166 | 0.538 | 0.5985 | | 0.5495 | 36.95 | 24500 | 0.5391 | 0.7176 | 0.5387 | 0.5993 | | 0.5539 | 37.71 | 25000 | 0.5391 | 0.7169 | 0.5372 | 0.598 | | 0.5452 | 38.46 | 25500 | 0.5390 | 0.7179 | 0.5384 | 0.5991 | | 0.5513 | 39.22 | 26000 | 0.5390 | 0.717 | 0.5377 | 0.5984 | | 0.5506 | 39.97 | 26500 | 0.5389 | 0.7165 | 0.5375 | 0.5981 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
gayanin/bart-mlm-pubmed-15
gayanin
2021-11-22T20:33:06Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-mlm-pubmed-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-mlm-pubmed-15 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4822 - Rouge2 Precision: 0.7578 - Rouge2 Recall: 0.5933 - Rouge2 Fmeasure: 0.6511 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.7006 | 1.0 | 663 | 0.5062 | 0.7492 | 0.5855 | 0.6434 | | 0.5709 | 2.0 | 1326 | 0.4811 | 0.7487 | 0.5879 | 0.6447 | | 0.5011 | 3.0 | 1989 | 0.4734 | 0.7541 | 0.5906 | 0.6483 | | 0.4164 | 4.0 | 2652 | 0.4705 | 0.7515 | 0.5876 | 0.6452 | | 0.3888 | 5.0 | 3315 | 0.4703 | 0.7555 | 0.5946 | 0.6515 | | 0.3655 | 6.0 | 3978 | 0.4725 | 0.7572 | 0.5943 | 0.6516 | | 0.319 | 7.0 | 4641 | 0.4733 | 0.7557 | 0.5911 | 0.6491 | | 0.3089 | 8.0 | 5304 | 0.4792 | 0.7577 | 0.5936 | 0.6513 | | 0.2907 | 9.0 | 5967 | 0.4799 | 0.7577 | 0.5931 | 0.6509 | | 0.275 | 10.0 | 6630 | 0.4822 | 0.7578 | 0.5933 | 0.6511 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
deepklarity/poster2plot
deepklarity
2021-11-22T19:56:30Z
39
4
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "image-classification", "image-captioning", "en", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
language: en
tags:
- image-classification
- image-captioning
---

# Poster2Plot

An image captioning model that generates a movie/TV show plot from its poster. It generates decent plots, but is by no means perfect. We are still working on improving the model.

## Live demo on Hugging Face Spaces: https://huggingface.co/spaces/deepklarity/poster2plot

# Model Details

The base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder.

We used the following models:

* Encoder: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
* Decoder: [gpt2](https://huggingface.co/gpt2)

# Datasets

Publicly available IMDb datasets were used to train the model.

# How to use

## In PyTorch

```python
import torch
import re
import requests
from PIL import Image
from transformers import AutoTokenizer, AutoFeatureExtractor, VisionEncoderDecoderModel

# Pattern used to drop all text after 2 or more consecutive full stops
regex_pattern = "[.]{2,}"

def post_process(text):
    try:
        text = text.strip()
        text = re.split(regex_pattern, text)[0]
    except Exception as e:
        print(e)
    return text

def predict(image, max_length=64, num_beams=4):
    pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)

    with torch.no_grad():
        output_ids = model.generate(
            pixel_values,
            max_length=max_length,
            num_beams=num_beams,
            return_dict_in_generate=True,
        ).sequences

    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    pred = post_process(preds[0])

    return pred

model_name_or_path = "deepklarity/poster2plot"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load model.
model = VisionEncoderDecoderModel.from_pretrained(model_name_or_path)
model.to(device)
print("Loaded model")

feature_extractor = AutoFeatureExtractor.from_pretrained(model.encoder.name_or_path)
print("Loaded feature_extractor")

tokenizer = AutoTokenizer.from_pretrained(model.decoder.name_or_path, use_fast=True)
if model.decoder.name_or_path == "gpt2":
    tokenizer.pad_token = tokenizer.eos_token
print("Loaded tokenizer")

url = "https://upload.wikimedia.org/wikipedia/en/2/26/Moana_Teaser_Poster.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
    pred = predict(image)

print(pred)
```
samantharhay/wav2vec2-base-myst-demo-colab
samantharhay
2021-11-22T18:15:21Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-myst-demo-colab
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-myst-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3125
- eval_wer: 0.3139
- eval_runtime: 57.3226
- eval_samples_per_second: 9.996
- eval_steps_per_second: 1.256
- epoch: 18.68
- step: 17000

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
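A minimal inference sketch (not from the original card): transcribing a local audio file with the ASR pipeline. The file path is a placeholder.

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="samantharhay/wav2vec2-base-myst-demo-colab",
    device=0 if torch.cuda.is_available() else -1,
)
# "sample.wav" is a placeholder; 16 kHz mono audio is the usual wav2vec2 expectation.
print(asr("sample.wav")["text"])
```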
malteos/aspect-cord19-scibert-scivocab-uncased
malteos
2021-11-22T10:13:31Z
7
1
transformers
[ "transformers", "pytorch", "bert", "classification", "similarity", "sci", "en", "dataset:cord19", "arxiv:2010.06395", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - sci - en tags: - classification - similarity license: mit datasets: - cord19 --- # Aspect-based Document Similarity for Research Papers A `scibert-scivocab-uncased` model fine-tuned on the CORD-19 corpus as in [Aspect-based Document Similarity for Research Papers](https://arxiv.org/abs/2010.06395). <img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/docrel.png"> See GitHub for more details: https://github.com/malteos/aspect-document-similarity ## Demo <a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Google Colab"></a> You can try our trained models directly on Google Colab on all papers available on Semantic Scholar (via DOI, ArXiv ID, ACL ID, PubMed ID): <a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/demo.gif" alt="Click here for demo"></a>
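The card itself ships no loading snippet. A minimal sketch, assuming the checkpoint loads as a standard sequence-classification model; the exact input construction and the meaning of the predicted aspect labels are defined in the GitHub repository above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "malteos/aspect-cord19-scibert-scivocab-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical pair of paper titles to compare.
inputs = tokenizer("Paper A title", "Paper B title", truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```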
kssteven/ibert-roberta-base
kssteven
2021-11-22T10:09:32Z
2,805
1
transformers
[ "transformers", "pytorch", "ibert", "fill-mask", "arxiv:1907.11692", "arxiv:2101.01321", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# I-BERT base model

This model, `ibert-roberta-base`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321). I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic. In particular, I-BERT replaces all floating point operations in the Transformer architecture (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations. This can result in up to a 4x inference speed-up compared to the floating-point counterpart when tested on an Nvidia T4 GPU. The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.

## Finetuning Procedure

Finetuning of I-BERT consists of 3 stages: (1) full-precision finetuning from the pretrained model on a downstream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.

### Full-precision finetuning

Full-precision finetuning of I-BERT is similar to RoBERTa finetuning. For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.

```
python examples/text-classification/run_glue.py \
  --model_name_or_path kssteven/ibert-roberta-base \
  --task_name MRPC \
  --do_eval \
  --do_train \
  --evaluation_strategy epoch \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --save_steps 115 \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --output_dir $OUTPUT_DIR
```

### Model Quantization

Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quant_mode` attribute to `true`.

```
{
  "_name_or_path": "kssteven/ibert-roberta-base",
  "architectures": [
    "IBertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "finetuning_task": "mrpc",
  "force_dequant": "none",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "ibert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "quant_mode": true,
  "tokenizer_class": "RobertaTokenizer",
  "transformers_version": "4.4.0.dev0",
  "type_vocab_size": 1,
  "vocab_size": 50265
}
```

Then, your model will automatically run in integer-only mode when you load the checkpoint. Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory. Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.

### Integer-only finetuning (Quantization-aware training)

Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified. Note that the only difference in the example command below is `model_name_or_path`.

```
python examples/text-classification/run_glue.py \
  --model_name_or_path $CHECKPOINT_DIR \
  --task_name MRPC \
  --do_eval \
  --do_train \
  --evaluation_strategy epoch \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --save_steps 115 \
  --learning_rate 1e-6 \
  --num_train_epochs 10 \
  --output_dir $OUTPUT_DIR
```

## Citation info

If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
``` @article{kim2021bert, title={I-BERT: Integer-only BERT Quantization}, author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt}, journal={arXiv preprint arXiv:2101.01321}, year={2021} } ```
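As an alternative to hand-editing `config.json`, the same switch can be flipped programmatically; a minimal sketch (the `quant_mode` attribute is the one shown in the config above, and the checkpoint path is a placeholder):

```python
from transformers import AutoConfig

# checkpoint_dir is the directory produced by full-precision finetuning.
checkpoint_dir = "path/to/checkpoint"  # placeholder
config = AutoConfig.from_pretrained(checkpoint_dir)
config.quant_mode = True
config.save_pretrained(checkpoint_dir)
```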
khalidalt/DeBERTa-v3-large-mnli
khalidalt
2021-11-22T08:38:23Z
54
5
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2006.03654", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie."
---

# DeBERTa-v3-large-mnli

## Model description

This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information. The model used is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-large). DeBERTa v3 outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654).

## Intended uses & limitations

#### How to use the model

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("khalidalt/DeBERTa-v3-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("khalidalt/DeBERTa-v3-large-mnli").to(device)

premise = "The Movie have been criticized for the story. However, I think it is a great movie."
hypothesis = "I liked the movie."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1)

label_names = ["entailment", "neutral", "contradiction"]
print(label_names[prediction.argmax(0).tolist()])
```

### Training data

This model was trained on the MultiNLI dataset, which consists of 392K sentence pairs annotated with textual entailment information.

### Training procedure

DeBERTa-v3-large-mnli was trained using the Hugging Face Trainer with the following hyperparameters.

```
train_args = TrainingArguments(
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    warmup_ratio=0.06,
    weight_decay=0.1,
    fp16=True,
    seed=42,
)
```

### BibTeX entry and citation info

Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and the [MultiNLI dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model, and include a link to this Hugging Face Hub page.
wukevin/tcr-bert-mlm-only
wukevin
2021-11-22T08:32:41Z
4,032
4
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
Pretrained on:

* Masked amino acid modeling

Please see our [main model](https://huggingface.co/wukevin/tcr-bert) for additional details.
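A minimal usage sketch (not from the original card), assuming the default BERT-style mask token and that sequences are given as space-separated single-letter amino acids; check the main model card for the exact input convention:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="wukevin/tcr-bert-mlm-only")

# Hypothetical CDR3-style sequence; the space-separated residue format is an assumption.
seq = f"C A S S {unmasker.tokenizer.mask_token} E T Q Y F"
print(unmasker(seq))
```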
snunlp/KR-Medium
snunlp
2021-11-22T06:19:42Z
173
7
transformers
[ "transformers", "pytorch", "jax", "bert", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- ko
---

# KR-BERT-MEDIUM

A pretrained Korean-specific BERT model developed by the Computational Linguistics Lab at Seoul National University. It is based on our character-level [KR-BERT](https://github.com/snunlp/KR-BERT) model, which utilizes a WordPiece tokenizer. The model name has the suffix 'MEDIUM' because its training data grew beyond KR-BERT's original dataset; we also have an additional model, KR-BERT-EXPANDED, whose training data is expanded even further beyond that of KR-BERT-MEDIUM.

<br>

### Vocab, Parameters and Data

|                |                             Multilingual BERT<br>(Google) |               KorBERT<br>(ETRI) |                      KoBERT<br>(SKT) |                       KR-BERT character |                          KR-BERT-MEDIUM |
| -------------: | ---------------------------------------------: | ---------------------: | ----------------------------------: | -------------------------------------: | -------------------------------------: |
|     vocab size |                                        119,547 |                  30,797 |                                8,002 |                                  16,424 |                                  20,000 |
| parameter size |                                    167,356,416 |             109,973,391 |                           92,186,880 |                              99,265,066 |                             102,015,010 |
|      data size | -<br>(The Wikipedia data<br>for 104 languages) | 23GB<br>4.7B morphemes | -<br>(25M sentences,<br>233M words) | 2.47GB<br>20M sentences,<br>233M words | 12.37GB<br>91M sentences,<br>1.17B words |

<br>

The training data for this model expands that of KR-BERT (texts from Korean Wikipedia and news articles) with legal texts crawled from the National Law Information Center and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). This expansion was made to collect texts from a wider variety of domains than those of KR-BERT. The total data size is about 12.37GB, consisting of 91M sentences and 1.17B words.

The user-generated comment dataset is expected to have stylistic properties similar to the task datasets of NSMC and HSD. Such text includes abbreviations, coinages, emoticons, spacing errors, and typos. Therefore, we added the dataset containing such online properties to our existing formal data, such as news articles and Wikipedia texts, to compose the training data for KR-BERT-MEDIUM. Accordingly, KR-BERT-MEDIUM reported better results in sentiment analysis than other models, and performance improved as the training data became larger and more varied.

The model's vocabulary size is 20,000, with tokens trained on the expanded training data using the WordPiece tokenizer.

KR-BERT-MEDIUM was trained for 2M steps with a maximum sequence length of 128, a training batch size of 64, and a learning rate of 1e-4, taking 22 hours to train using a Google Cloud TPU v3-8.

### Models

#### TensorFlow

* BERT tokenizer, character-based model ([download](https://drive.google.com/file/d/1OWXGqr2Z2PWD6ST3MsFmcjM8c2mr8PkE/view?usp=sharing))

#### PyTorch

* You can import it from Transformers!

```python
# pytorch, transformers
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("snunlp/KR-Medium", do_lower_case=False)

model = AutoModel.from_pretrained("snunlp/KR-Medium")
```

### Requirements

- transformers == 4.0.0
- tensorflow < 2.0

## Downstream tasks

* Movie Review Classification on Naver Sentiment Movie Corpus [(NSMC)](https://github.com/e9t/nsmc)
* Hate Speech Detection [(Moon et al., 2020)](https://github.com/kocohub/korean-hate-speech)

#### tensorflow

* After downloading our pre-trained models, put them in a `models` directory.
* Set the output directory (for fine-tuning) * Select task name: `NSMC` for Movie Review Classification, and `HATE` for Hate Speech Detection ```sh # tensorflow python3 run_classifier.py \ --task_name={NSMC, HATE} \ --do_train=true \ --do_eval=true \ --do_predict=true \ --do_lower_case=False\ --max_seq_length=128 \ --train_batch_size=128 \ --learning_rate=5e-05 \ --num_train_epochs=5.0 \ --output_dir={output_dir} ``` <br> ### Performances TensorFlow, test set performances | | multilingual BERT | KorBERT<br>character | KR-BERT<br>character<br>WordPiece | KR-BERT-MEDIUM | |:-----:|-------------------:|----------------:|----------------------------:|-----------------------------------------:| | NSMC (Acc) | 86.82 | 89.81 | 89.74 | 90.29 | | Hate Speech (F1) | 52.03 | 54.33 | 54.53 | 57.91 | <br> ## Contacts [email protected]
snunlp/KR-BERT-char16424
snunlp
2021-11-22T06:19:20Z
830
8
transformers
[ "transformers", "pytorch", "jax", "bert", "ko", "arxiv:2008.03979", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- ko
---

## KoRean based Bert pre-trained (KR-BERT)

This is a release of Korean-specific, small-scale BERT models with comparable or better performance, developed by the Computational Linguistics Lab at Seoul National University and referenced in [KR-BERT: A Small-Scale Korean-Specific Language Model](https://arxiv.org/abs/2008.03979).

<br>

### Vocab, Parameters and Data

|                |                             Multilingual BERT<br>(Google) |               KorBERT<br>(ETRI) |                      KoBERT<br>(SKT) |                       KR-BERT character |                   KR-BERT sub-character |
| -------------: | ---------------------------------------------: | ---------------------: | ----------------------------------: | -------------------------------------: | -------------------------------------: |
|     vocab size |                                        119,547 |                  30,797 |                                8,002 |                                  16,424 |                                  12,367 |
| parameter size |                                    167,356,416 |             109,973,391 |                           92,186,880 |                              99,265,066 |                              96,145,233 |
|      data size | -<br>(The Wikipedia data<br>for 104 languages) | 23GB<br>4.7B morphemes | -<br>(25M sentences,<br>233M words) | 2.47GB<br>20M sentences,<br>233M words | 2.47GB<br>20M sentences,<br>233M words |

| Model                                        | Masked LM Accuracy |
| -------------------------------------------- | ------------------ |
| KoBERT                                       | 0.750              |
| KR-BERT character BidirectionalWordPiece     | **0.779**          |
| KR-BERT sub-character BidirectionalWordPiece | 0.769              |

<br>

### Sub-character

Korean text is basically represented with Hangul syllable characters, which can be decomposed into sub-characters, or graphemes. To accommodate such characteristics, we trained a new vocabulary and BERT model on two different representations of a corpus: syllable characters and sub-characters.

If you use our sub-character model, you should preprocess your data with the code below.

```python
from transformers import BertTokenizer
from unicodedata import normalize

tokenizer_krbert = BertTokenizer.from_pretrained('/path/to/vocab_file.txt', do_lower_case=False)

# convert a string into sub-char
def to_subchar(string):
    return normalize('NFKD', string)

sentence = '토크나이저 예시입니다.'
print(tokenizer_krbert.tokenize(to_subchar(sentence)))
```

### Tokenization

#### BidirectionalWordPiece Tokenizer

We use the BidirectionalWordPiece model to reduce search costs while maintaining the possibility of choice. This model applies BPE in both forward and backward directions to obtain two candidates and chooses the one that has a higher frequency.
|                                         |     Multilingual BERT     |    KorBERT<br>character   |           KoBERT          | KR-BERT<br>character<br>WordPiece | KR-BERT<br>character<br>BidirectionalWordPiece | KR-BERT<br>sub-character<br>WordPiece | KR-BERT<br>sub-character<br>BidirectionalWordPiece |
| :-------------------------------------: | :-----------------------: | :-----------------------: | :-----------------------: | :------------------------------: | :-------------------------------------------: | :----------------------------------: | :-----------------------------------------------: |
| 냉장고<br>nayngcangko<br>"refrigerator" | 냉#장#고<br>nayng#cang#ko | 냉#장#고<br>nayng#cang#ko | 냉#장#고<br>nayng#cang#ko | 냉장고<br>nayngcangko | 냉장고<br>nayngcangko | 냉장고<br>nayngcangko | 냉장고<br>nayngcangko |
| 춥다<br>chwupta<br>"cold" | [UNK] | 춥#다<br>chwup#ta | 춥#다<br>chwup#ta | 춥#다<br>chwup#ta | 춥#다<br>chwup#ta | 추#ㅂ다<br>chwu#pta | 추#ㅂ다<br>chwu#pta |
| 뱃사람<br>paytsalam<br>"seaman" | [UNK] | 뱃#사람<br>payt#salam | 뱃#사람<br>payt#salam | 뱃#사람<br>payt#salam | 뱃#사람<br>payt#salam | 배#ㅅ#사람<br>pay#t#salam | 배#ㅅ#사람<br>pay#t#salam |
| 마이크<br>maikhu<br>"microphone" | 마#이#크<br>ma#i#khu | 마이#크<br>mai#khu | 마#이#크<br>ma#i#khu | 마이크<br>maikhu | 마이크<br>maikhu | 마이크<br>maikhu | 마이크<br>maikhu |

<br>

### Models

|                                             | TensorFlow |               | PyTorch   |               |
| :-----------------------------------------: | :--------: | :-----------: | :-------: | :-----------: |
|                                             | character  | sub-character | character | sub-character |
| WordPiece <br> tokenizer | [WP char](https://drive.google.com/open?id=1SG5m-3R395VjEEnt0wxWM7SE1j6ndVsX) | [WP subchar](https://drive.google.com/open?id=13oguhQvYD9wsyLwKgU-uLCacQVWA4oHg) | [WP char](https://drive.google.com/file/d/18lsZzx_wonnOezzB5QxqSliA2KL5BF0x/view?usp=sharing) | [WP subchar](https://drive.google.com/open?id=1c1en4AMlCv2k7QapIzqjefnYzNOoh5KZ) |
| Bidirectional <br> WordPiece <br> tokenizer | [BiWP char](https://drive.google.com/open?id=1YhFobehwzdbIxsHHvyFU5okp-HRowRKS) | [BiWP subchar](https://drive.google.com/open?id=12izU0NZXNz9I6IsnknUbencgr7gWHDeM) | [BiWP char](https://drive.google.com/open?id=1C87CCHD9lOQhdgWPkMw_6ZD5M2km7f1p) | [BiWP subchar](https://drive.google.com/file/d/1JvNYFQyb20SWgOiDxZn6h1-n_fjTU25S/view?usp=sharing) |

<br>

### Requirements

- transformers == 2.1.1
- tensorflow < 2.0

<br>

## Downstream tasks

### Naver Sentiment Movie Corpus (NSMC)

* If you want to use the sub-character version of our models, set the `subchar` argument to `True`.
* And you can use the original BERT WordPiece tokenizer by entering `bert` for the `tokenizer` argument, and if you use `ranked` you can use our BidirectionalWordPiece tokenizer. * tensorflow: After downloading our pretrained models, put them in a `models` directory in the `krbert_tensorflow` directory. * pytorch: After downloading our pretrained models, put them in a `pretrained` directory in the `krbert_pytorch` directory. ```sh # pytorch python3 train.py --subchar {True, False} --tokenizer {bert, ranked} # tensorflow python3 run_classifier.py \ --task_name=NSMC \ --subchar={True, False} \ --tokenizer={bert, ranked} \ --do_train=true \ --do_eval=true \ --do_predict=true \ --do_lower_case=False\ --max_seq_length=128 \ --train_batch_size=128 \ --learning_rate=5e-05 \ --num_train_epochs=5.0 \ --output_dir={output_dir} ``` The pytorch code structure refers to that of https://github.com/aisolab/nlp_implementation . <br> ### NSMC Acc. | | multilingual BERT | KorBERT | KoBERT | KR-BERT character WordPiece | KR-BERT<br>character Bidirectional WordPiece | KR-BERT sub-character WordPiece | KR-BERT<br>sub-character Bidirectional WordPiece | |:-----:|-------------------:|----------------:|--------:|----------------------------:|-----------------------------------------:|--------------------------------:|---------------------------------------------:| | pytorch | - | **89.84** | 89.01 | 89.34 | **89.38** | 89.20 | 89.34 | | tensorflow | 87.08 | 85.94 | n/a | 89.86 | **90.10** | 89.76 | 89.86 | <br> ## Citation If you use these models, please cite the following paper: ``` @article{lee2020krbert, title={KR-BERT: A Small-Scale Korean-Specific Language Model}, author={Sangah Lee and Hansol Jang and Yunmee Baik and Suzi Park and Hyopil Shin}, year={2020}, journal={ArXiv}, volume={abs/2008.03979} } ``` <br> ## Contacts [email protected]
kaggleodin/distilbert-base-uncased-finetuned-squad
kaggleodin
2021-11-22T04:08:36Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2291 | 1.0 | 5533 | 1.1581 | | 0.9553 | 2.0 | 11066 | 1.1249 | | 0.7767 | 3.0 | 16599 | 1.1639 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
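A minimal usage sketch (not part of the generated card): extractive QA with the pipeline API.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="kaggleodin/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```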
Ulto/pythonCoPilot3
Ulto
2021-11-22T01:24:16Z
6
1
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: pythonCoPilot3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pythonCoPilot3 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
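A minimal generation sketch (not part of the generated card), assuming the model continues Python source code from a prompt; the prompt is hypothetical:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Ulto/pythonCoPilot3")
print(generator("def fibonacci(n):", max_length=64)[0]["generated_text"])
```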
Ulto/pythonCoPilot
Ulto
2021-11-21T23:49:37Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: pythonCoPilot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pythonCoPilot This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
Abirate/bert_fine_tuned_cola
Abirate
2021-11-21T16:41:00Z
10
1
transformers
[ "transformers", "tf", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
## Pretrained Model: BERT base model (cased)

BERT base model (cased) is a model pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English.

## Pretrained Model Description

BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:

- Masked language modeling (MLM)
- Next sentence prediction (NSP)

## Fine-tuned Model Description: BERT fine-tuned CoLA

The pretrained model can be fine-tuned on other NLP tasks. This BERT model has been fine-tuned on the CoLA dataset from the GLUE BENCHMARK, which is an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in this GLUE BENCHMARK.

By fine-tuning BERT on the CoLA dataset, the model is now able to classify a given sentence as grammatically and semantically acceptable or not acceptable.

## How to use

###### Directly with a pipeline for a text-classification NLP task

```python
from transformers import pipeline

cola = pipeline('text-classification', model='Abirate/bert_fine_tuned_cola')
cola("Tunisia is a beautiful country")
# [{'label': 'acceptable', 'score': 0.989352285861969}]
```

###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
import numpy as np

tokenizer = AutoTokenizer.from_pretrained('Abirate/bert_fine_tuned_cola')
model = TFAutoModelForSequenceClassification.from_pretrained("Abirate/bert_fine_tuned_cola")

text = "Tunisia is a beautiful country."
encoded_input = tokenizer(text, return_tensors='tf')

# The logits
output = model(encoded_input)

# Postprocessing
probas_output = tf.math.softmax(tf.squeeze(output['logits']), axis=-1)
class_preds = np.argmax(probas_output, axis=-1)

# Predicting the class: acceptable or not acceptable
print(model.config.id2label[class_preds])
# Result: 'acceptable'
```
abhibisht89/spanbert-large-cased-finetuned-ade_corpus_v2
abhibisht89
2021-11-21T15:23:59Z
79
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "spanbert", "en", "dataset:ade_corpus_v2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language: en
tags:
- spanbert
datasets:
- ade_corpus_v2
widget:
- text: "Having fever after taking paracetamol."
  example_title: "NER"
- text: "Birth defects associated with thalidomide."
  example_title: "NER"
- text: "Deafness and kidney failure associated with gentamicin (an antibiotic)."
  example_title: "NER"
- text: "Bleeding of the intestine associated with aspirin therapy."
  example_title: "NER"
---

spanbert-large-cased fine-tuned for <b>"Adverse drug reaction"</b> and <b>"Drug"</b> span extraction.

<b>Details of spanbert-large-cased:</b> https://huggingface.co/SpanBERT/spanbert-large-cased

<b>Details of the downstream task (Adverse drug reaction and Drug extraction) - Dataset:</b> https://huggingface.co/datasets/ade_corpus_v2
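A minimal inference sketch (not from the original card), using one of the widget examples above; `aggregation_strategy` merges word pieces into full entity spans:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="abhibisht89/spanbert-large-cased-finetuned-ade_corpus_v2",
    aggregation_strategy="simple",
)
print(ner("Having fever after taking paracetamol."))
```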
huggingtweets/mo_turse
huggingtweets
2021-11-21T11:39:55Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/mo_turse/1637494790715/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1458151390505734144/QnD5NomB_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">⬅️To_Murse💉</div> <div style="text-align: center; font-size: 14px;">@mo_turse</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ⬅️To_Murse💉. | Data | ⬅️To_Murse💉 | | --- | --- | | Tweets downloaded | 3199 | | Retweets | 1128 | | Short tweets | 198 | | Tweets kept | 1873 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18gmbfdi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mo_turse's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/72halqv5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/72halqv5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mo_turse') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
emeraldgoose/bert-base-v1-sports
emeraldgoose
2021-11-21T05:45:05Z
13
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: ko
mask_token: "[MASK]"
widget:
- text: 산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다.
---

## Data-annotation-nlp-10 (BoostCamp AI)

BERT pretraining was performed on sentences obtained while constructing a Wikipedia (sports) dataset.

## How to use

```python
from transformers import AutoTokenizer, BertForMaskedLM

model = BertForMaskedLM.from_pretrained("emeraldgoose/bert-base-v1-sports")
tokenizer = AutoTokenizer.from_pretrained("emeraldgoose/bert-base-v1-sports")

# Korean example, roughly: "Mountain bike racing is a relatively new [MASK], popularized in the 1990s."
text = "산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다."
inputs = tokenizer.encode(text, return_tensors='pt')

model.eval()
outputs = model(inputs)['logits']
predict = outputs.argmax(-1)[0]
print(tokenizer.decode(predict))
```
arvalinno/distilbert-base-uncased-finetuned-indosquad-v2
arvalinno
2021-11-21T04:15:31Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-indosquad-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-indosquad-v2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6650 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.9015 | 1.0 | 9676 | 1.5706 | | 1.6438 | 2.0 | 19352 | 1.5926 | | 1.4714 | 3.0 | 29028 | 1.5253 | | 1.3486 | 4.0 | 38704 | 1.6650 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
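A minimal usage sketch (not part of the generated card), assuming the model answers questions over Indonesian text (the card name suggests an Indonesian SQuAD-style dataset); the question/context pair is hypothetical:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="arvalinno/distilbert-base-uncased-finetuned-indosquad-v2")
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context="Soekarno adalah presiden pertama Indonesia.",
)
print(result["answer"])
```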
Ulto/avengers2
Ulto
2021-11-21T01:13:26Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model-index: - name: avengers2 results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # avengers2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.0131 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 56 | 3.9588 | | No log | 2.0 | 112 | 3.9996 | | No log | 3.0 | 168 | 4.0131 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0 - Datasets 1.2.1 - Tokenizers 0.10.1
arvalinno/distilbert-base-uncased-finetuned-squad
arvalinno
2021-11-20T17:31:23Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.7604 | 1.0 | 6366 | 1.5329 | | 1.4784 | 2.0 | 12732 | 1.3930 | | 1.3082 | 3.0 | 19098 | 1.4232 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
arvalinno/albert-base-v2-finetuned-squad
arvalinno
2021-11-20T12:05:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: albert-base-v2-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1893 | 1.0 | 3052 | 0.2808 | | 0.1209 | 2.0 | 6104 | 0.2787 | | 0.069 | 3.0 | 9156 | 0.3222 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
Sindhu/muril-large-squad2
Sindhu
2021-11-20T09:43:56Z
23
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
# Muril Large Squad2

This model is fine-tuned for the QA task on SQuAD2 from the [MuRIL Large checkpoint](https://huggingface.co/google/muril-large-cased).

## Hyperparameters

```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = google/muril-large-cased
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```

## SQuAD2 evaluation stats

Generated with [the official SQuAD2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/):

```json
{
  "exact": 82.0180240882675,
  "f1": 85.10110304685352,
  "total": 11873,
  "HasAns_exact": 81.6970310391363,
  "HasAns_f1": 87.87203044454981,
  "HasAns_total": 5928,
  "NoAns_exact": 82.3380992430614,
  "NoAns_f1": 82.3380992430614,
  "NoAns_total": 5945
}
```

## Limitations

MuRIL is specifically trained to work on 18 Indic languages and English. This model is not expected to perform well in any other languages. See the MuRIL checkpoint for further details.

For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man).
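A minimal usage sketch (not from the original card); since MuRIL is multilingual, questions and contexts in the supported Indic languages should also work. The example pair is hypothetical:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/muril-large-squad2")
result = qa(
    question="Who developed MuRIL?",
    context="MuRIL is a multilingual language model for Indian languages developed by Google.",
)
print(result["answer"])
```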
overfit/twiner-bert-base-mtl
overfit
2021-11-20T06:52:55Z
6
2
transformers
[ "transformers", "tf", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
## ParsTwiNER: Transformer-based Model for Named Entity Recognition at Informal Persian

An open, broad-coverage corpus and model for informal Persian named entity recognition, collected from Twitter.

Paper presenting ParsTwiNER: [2021.wnut-1.16](https://aclanthology.org/2021.wnut-1.16/)

---

## Results

The following table summarizes the F1 scores obtained by ParsTwiNER on our corpus, compared to ParsBERT as the state of the art for Persian NER.

### Named Entity Recognition on Our Corpus

| Entity Type | ParsTwiNER F1 | ParsBert F1 |
|:-----------:|:-------------:|:-----------:|
|     PER     |      91       |     80      |
|     LOC     |      82       |     68      |
|     ORG     |      69       |     55      |
|     EVE     |      41       |     12      |
|     POG     |      85       |      -      |
|     NAT     |     82.3      |      -      |
|    Total    |     81.5      |    69.5     |

## How to use

### TensorFlow 2.0

```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base-mtl")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base-mtl")
twiner_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
```

## Cite

Please cite the following paper in your publication if you are using [ParsTwiNER](https://aclanthology.org/2021.wnut-1.16/) in your research:

```markdown
@inproceedings{aghajani-etal-2021-parstwiner,
    title = "{P}ars{T}wi{NER}: A Corpus for Named Entity Recognition at Informal {P}ersian",
    author = "Aghajani, MohammadMahdi and Badri, AliAkbar and Beigy, Hamid",
    booktitle = "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.wnut-1.16",
    pages = "131--136",
    abstract = "As a result of unstructured sentences and some misspellings and errors, finding named entities in a noisy environment such as social media takes much more effort. ParsTwiNER contains about 250k tokens, based on standard instructions like MUC-6 or CoNLL 2003, gathered from Persian Twitter. Using Cohen{'}s Kappa coefficient, the consistency of annotators is 0.95, a high score. In this study, we demonstrate that some state-of-the-art models degrade on these corpora, and trained a new model using parallel transfer learning based on the BERT architecture. Experimental results show that the model works well in informal Persian as well as in formal Persian.",
}
```

## Acknowledgments

The authors would like to thank Dr. Momtazi for her support. Furthermore, we would like to acknowledge the accompaniment provided by Mohammad Mahdi Samiei and Abbas Maazallahi.

## Contributors

- Mohammad Mahdi Aghajani: [Linkedin](https://www.linkedin.com/in/mohammadmahdi-aghajani-821843147/), [Github](https://github.com/mmaghajani)
- Ali Akbar Badri: [Linkedin](https://www.linkedin.com/in/aliakbarbadri/), [Github](https://github.com/AliAkbarBadri)
- Dr. Hamid Beigy: [Linkedin](https://www.linkedin.com/in/hamid-beigy-8982604b/)
- Overfit Team: [Github](https://github.com/overfit-ir), [Telegram](https://t.me/nlp_stuff)

## Releases

### Release v1.0.0 (Aug 01, 2021)

This is the first version of ParsTwiNER.
trnt/twitter_emotions
trnt
2021-11-20T04:31:53Z
30
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: twitter_emotions results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9375 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_emotions This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1647 - Accuracy: 0.9375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2486 | 1.0 | 2000 | 0.2115 | 0.931 | | 0.135 | 2.0 | 4000 | 0.1725 | 0.936 | | 0.1041 | 3.0 | 6000 | 0.1647 | 0.9375 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
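A minimal usage sketch (not part of the generated card); the label names come from the `emotion` dataset the model was fine-tuned on:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="trnt/twitter_emotions")
print(classifier("I'm thrilled with how well this model turned out!"))
```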
JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b
JazibEijaz
2021-11-19T20:43:38Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-semeval2020-task4b-append
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-semeval2020-task4b

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset, which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.6760
- Accuracy: 0.8760

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5016        | 1.0   | 688  | 0.3502          | 0.8600   |
| 0.2528        | 2.0   | 1376 | 0.5769          | 0.8620   |
| 0.0598        | 3.0   | 2064 | 0.6720          | 0.8700   |
| 0.0197        | 4.0   | 2752 | 0.6760          | 0.8760   |

### Framework versions

- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
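A minimal usage sketch (not part of the generated card), assuming the checkpoint loads with a standard multiple-choice head and that inputs are (statement, candidate) pairs in ComVE style; the statement and options below are hypothetical:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

statement = "He put an elephant into the fridge."
options = [
    "An elephant is much bigger than a fridge.",
    "Elephants are grey.",
    "A fridge keeps food cold.",
]

# Encode each (statement, option) pair, then add a batch dimension.
enc = tokenizer([statement] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (batch=1, n_options, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(options[int(logits.argmax(-1))])
```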
mmcquade11/autonlp-reuters-summarization-34018133
mmcquade11
2021-11-19T14:45:38Z
6
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "en", "dataset:mmcquade11/autonlp-data-reuters-summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-reuters-summarization
co2_eq_emissions: 286.4350821612984
---

# Model Trained Using AutoNLP

- Problem type: Summarization
- Model ID: 34018133
- CO2 Emissions (in grams): 286.4350821612984

## Validation Metrics

- Loss: 1.1805976629257202
- Rouge1: 55.4013
- Rouge2: 30.8004
- RougeL: 52.57
- RougeLsum: 52.6103
- Gen Len: 15.3458

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-reuters-summarization-34018133
```
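A Python sketch equivalent to the cURL call above, loading the checkpoint locally (the input text is a placeholder):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("mmcquade11/autonlp-reuters-summarization-34018133")
tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-reuters-summarization-34018133")

inputs = tokenizer("Replace this with a news article to summarize.", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```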
alvp/alberti-stanzas
alvp
2021-11-19T13:41:53Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "unk", "dataset:alvp/autonlp-data-alberti-stanza-names", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - alvp/autonlp-data-alberti-stanza-names co2_eq_emissions: 8.612473981829835 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 34318169 - CO2 Emissions (in grams): 8.612473981829835 ## Validation Metrics - Loss: 1.3520570993423462 - Accuracy: 0.6083916083916084 - Macro F1: 0.5420169617715481 - Micro F1: 0.6083916083916084 - Weighted F1: 0.5963328136975058 - Macro Precision: 0.5864033493660455 - Micro Precision: 0.6083916083916084 - Weighted Precision: 0.6364793882921277 - Macro Recall: 0.5545405576555766 - Micro Recall: 0.6083916083916084 - Weighted Recall: 0.6083916083916084 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alvp/autonlp-alberti-stanza-names-34318169 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
readerbench/jurBERT-base
readerbench
2021-11-19T11:56:10Z
61
1
transformers
[ "transformers", "pytorch", "tf", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- ro
---

# jurBERT-base

## Pretrained juridical BERT model for Romanian

A Romanian juridical BERT model trained using masked language modeling (MLM) and next sentence prediction (NSP) objectives. It was introduced in this [paper](https://aclanthology.org/2021.nllp-1.8/). Two BERT models were released, **jurBERT-base** and **jurBERT-large**, both uncased.

| Model          |  Weights  |   L    |   H    |   A    |  MLM accuracy  |  NSP accuracy  |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| *jurBERT-base* |  *111M*   |  *12*  | *768*  |  *12*  |    *0.8936*    |    *0.9923*    |
| jurBERT-large  |   337M    |   24   |  1024  |   24   |     0.9005     |     0.9929     |

All models are available:

* [jurBERT-base](https://huggingface.co/readerbench/jurBERT-base)
* [jurBERT-large](https://huggingface.co/readerbench/jurBERT-large)

#### How to use

```python
# tensorflow
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-base")
model = TFAutoModel.from_pretrained("readerbench/jurBERT-base")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)

# pytorch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-base")
model = AutoModel.from_pretrained("readerbench/jurBERT-base")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```

## Datasets

The model was trained on a private corpus (that can nevertheless be rented for a fee), comprising all the final rulings, in both civil and criminal cases, published by any Romanian civil court between 2010 and 2018. Validation is performed on two other datasets, RoBanking and BRDCases. To build RoBanking, we extracted from RoJur common types of cases pertinent to the banking domain (e.g. administration fee litigations, enforcement appeals), keeping only the summary of the arguments provided by both the plaintiffs and the defendants, together with the final verdict (in the form of a boolean value). BRDCases represents a collection of cases in which BRD Groupe Société Générale Romania was directly involved.

| Corpus    |    Scope     | Entries | Size (GB) |
|-----------|:------------:|:-------:|:---------:|
| RoJur     | pre-training |   11M   |    160    |
| RoBanking |  downstream  |  108k   |     -     |
| BRDCases  |  downstream  |   149   |     -     |

## Downstream performance

We report Mean AUC and Std AUC on the task of predicting the outcome of a case.

### Results on RoBanking using only the plea of the plaintiff.

| Model              | Mean AUC | Std AUC  |
|--------------------|:--------:|:--------:|
| CNN                |  79.60   |    -     |
| BI-LSTM            |  80.99   |   0.26   |
| RoBERT-small       |  70.54   |   0.28   |
| RoBERT-base        |  79.74   |   0.21   |
| RoBERT-base + hf   |  79.82   |   0.11   |
| RoBERT-large       |  76.53   |   5.43   |
| *jurBERT-base*     | **81.47**| **0.18** |
| *jurBERT-base + hf*|  *81.40* |  *0.18*  |
| jurBERT-large      |  78.38   |   1.77   |

### Results on RoBanking using pleas from both the plaintiff and defendant.
| Model               | Mean AUC  | Std AUC  |
|---------------------|:---------:|:--------:|
| BI-LSTM             | 84.60     | 0.59     |
| RoBERT-base         | 84.40     | 0.26     |
| RoBERT-base + hf    | 84.43     | 0.15     |
| *jurBERT-base*      | *86.63*   | *0.18*   |
| *jurBERT-base + hf* | **86.73** | **0.22** |
| jurBERT-large       | 82.04     | 0.64     |

### Results on BRDCases

| Model               | Mean AUC  | Std AUC  |
|---------------------|:---------:|:--------:|
| SVM with SK         | 57.72     | 2.15     |
| RoBERT-base         | 53.24     | 1.76     |
| RoBERT-base + hf    | 55.40     | 0.96     |
| *jurBERT-base*      | *59.65*   | *1.16*   |
| *jurBERT-base + hf* | **61.46** | **1.76** |

For complete results and discussion, please refer to the [paper](https://aclanthology.org/2021.nllp-1.8/).

### BibTeX entry and citation info

```bibtex
@inproceedings{masala2021jurbert,
  title={jurBERT: A Romanian BERT Model for Legal Judgement Prediction},
  author={Masala, Mihai and Iacob, Radu Cristian Alexandru and Uban, Ana Sabina and Cidota, Marina and Velicu, Horia and Rebedea, Traian and Popescu, Marius},
  booktitle={Proceedings of the Natural Legal Language Processing Workshop 2021},
  pages={86--94},
  year={2021}
}
```
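For a downstream use such as RoBanking-style verdict prediction, a minimal sketch (not the paper's exact setup, and distinct from the `+ hf` handcrafted-feature variants) could attach a binary classification head to jurBERT-base; the label convention and example plea below are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-base")
# num_labels=2 for the boolean verdict; the head is randomly initialised
# and must be fine-tuned (e.g. on RoBanking-style data) before use.
model = AutoModelForSequenceClassification.from_pretrained(
    "readerbench/jurBERT-base", num_labels=2
)

plea = "rezumatul argumentelor reclamantului"  # placeholder plea summary
inputs = tokenizer(plea, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities for the two verdict classes
```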
MrBananaHuman/kogpt_6b_fp16
MrBananaHuman
2021-11-19T06:23:58Z
55
4
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
This model is the KoGPT 6B model released by Kakao Brain ('kakaobrain/kogpt'), saved in fp16.

### How to load the Kakao Brain model in fp16

```python
import torch
from transformers import GPTJForCausalLM

model = GPTJForCausalLM.from_pretrained('kakaobrain/kogpt', cache_dir='./my_dir', revision='KoGPT6B-ryan1.5b', torch_dtype=torch.float16)
```

### Generating text after loading the fp16 model

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_rLDzhGohJPbOD5I_eTIOdx4aOTp43uK?usp=sharing)

```python
import torch
from transformers import GPTJForCausalLM, AutoTokenizer

model = GPTJForCausalLM.from_pretrained('MrBananaHuman/kogpt_6b_fp16', low_cpu_mem_usage=True)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained('MrBananaHuman/kogpt_6b_fp16')

input_text = '이순신은'
input_ids = tokenizer(input_text, return_tensors='pt').input_ids.to('cuda')

output = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output[0]))

>>> 이순신은 우리에게 무엇인가? 1. 머리말 이글은 임진왜란 당시 이순인이 보여준
# (roughly: "What is Yi Sun-sin to us? 1. Introduction: this article ... what Yi Sun-sin showed during the Imjin War")
```

### Reference link

https://github.com/kakaobrain/kogpt/issues/6?fbclid=IwAR1KpWhuHnevQvEWV18o16k2z9TLgrXkbWTkKqzL-NDXHfDnWcIq7I4SJXM
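For reference, a checkpoint like this one can presumably be reproduced by loading the original model in fp16 and re-saving it; a sketch under that assumption (the output directory is arbitrary):

```python
import torch
from transformers import GPTJForCausalLM, AutoTokenizer

# Load the original Kakao Brain checkpoint in fp16, then save it locally.
model = GPTJForCausalLM.from_pretrained(
    'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b', torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained('kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b')
model.save_pretrained('./kogpt_6b_fp16')
tokenizer.save_pretrained('./kogpt_6b_fp16')
```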
aozorahime/my-new-model
aozorahime
2021-11-19T03:15:33Z
6
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: my-new-model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my-new-model

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the xsum dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
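As a rough guide, the hyperparameters above correspond to a `TrainingArguments` configuration along the following lines; this is a sketch, not the actual training script, and the output directory is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my-new-model",   # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # linear schedule, as listed above
    num_train_epochs=3.0,
    fp16=True,                   # Native AMP mixed precision
)
```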
kevinzyz/chinese-bert-wwm-ext-finetuned-cola
kevinzyz
2021-11-19T03:13:39Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: chinese-bert-wwm-ext-finetuned-cola
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# chinese-bert-wwm-ext-finetuned-cola

This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5747
- Matthews Correlation: 0.4085

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.5824        | 1.0   | 66375 | 0.5746          | 0.4083               |
| 0.5824        | 2.0   | 66376 | 0.5747          | 0.4085               |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
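For context, the Matthews correlation reported above is the standard MCC; it can be computed with scikit-learn as in this toy sketch (labels and predictions are placeholders, not the actual evaluation data):

```python
from sklearn.metrics import matthews_corrcoef

# Placeholder gold labels and predictions; MCC ranges from -1 to 1.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
print(matthews_corrcoef(y_true, y_pred))  # ~0.33 on this toy example
```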
huggingtweets/pontifex
huggingtweets
2021-11-19T02:50:28Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/pontifex/1637290224841/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/507818066814590976/KNG-IkT9_400x400.jpeg&#39;)">
        </div>
        <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
        </div>
        <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">Pope Francis</div>
    <div style="text-align: center; font-size: 14px;">@pontifex</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Pope Francis.

| Data | Pope Francis |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 48 |
| Tweets kept | 3202 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1k0p2om2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pontifex's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/j7dvalpc) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/j7dvalpc/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/pontifex')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)