pipeline_tag
stringclasses 48
values | library_name
stringclasses 198
values | text
stringlengths 1
900k
| metadata
stringlengths 2
438k
| id
stringlengths 5
122
| last_modified
null | tags
listlengths 1
1.84k
| sha
null | created_at
stringlengths 25
25
| arxiv
listlengths 0
201
| languages
listlengths 0
1.83k
| tags_str
stringlengths 17
9.34k
| text_str
stringlengths 0
389k
| text_lists
listlengths 0
722
| processed_texts
listlengths 1
723
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fill-mask
|
transformers
|
# Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ)
## Model Description
> This is a small language model for the [Kannada](https://en.wikipedia.org/wiki/Kannada) language, trained on 1M data samples taken from the
[OSCAR page](https://traces1.inria.fr/oscar/files/compressed-orig/kn.txt.gz)
## Training params
- **Dataset** - 1M data samples from the [OSCAR page](https://traces1.inria.fr/oscar/) are used to train this model. Even though the full data set is 1.7 GB,
I have picked only 1M samples due to resource constraints for training. If you are interested in collaborating and have the computational resources to train on the full set, you are most welcome to do so.
- **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at the character level, and the vocabulary size is set to 52k as per the standard values given by 🤗
- **Hyperparameters** (see the sketch below) - __ByteLevelBPETokenizer__: vocabulary size = 52_000 and min_frequency = 2
  __Trainer__: num_train_epochs=12 - trained for 12 epochs
  per_gpu_train_batch_size=64 - batch size for the data samples is 64
  save_steps=10_000 - save the model every 10k steps
  save_total_limit=2 - save limit is set to 2
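For reference, below is a minimal sketch of how these hyperparameters map onto the 🤗 `tokenizers` and `transformers` APIs. The input file and output directory names are illustrative placeholders, not the exact training script.
```python
from tokenizers import ByteLevelBPETokenizer
from transformers import TrainingArguments

# Train a byte-level BPE tokenizer with the values listed above
# ("kn_oscar.txt" stands in for the 1M-sample Kannada text file).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["kn_oscar.txt"],
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("KanBERTo")

# Trainer settings corresponding to the listed values
# (per_gpu_train_batch_size is the older name for per_device_train_batch_size).
training_args = TrainingArguments(
    output_dir="KanBERTo",
    num_train_epochs=12,
    per_device_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
)
```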
**Intended uses & limitations**
This is for anyone who wants to make use of Kannada language models for various tasks like language generation, translation, and many more use cases; a quick usage example is shown below.
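A minimal fill-mask sketch with the 🤗 pipeline; the Kannada sentence is only an illustration.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Naveen-k/KanBERTo")

# Build an illustrative Kannada sentence around the model's own mask token
# ("ನಾನು ಪುಸ್ತಕ <mask>." roughly means "I <mask> a book.").
sentence = f"ನಾನು ಪುಸ್ತಕ {fill_mask.tokenizer.mask_token}."
print(fill_mask(sentence))
```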
**Whatever else is helpful!**
If you are interested in collaborating, feel free to reach out to me: [Naveen](mailto:[email protected])
|
{"language": "kn"}
|
Naveen-k/KanBERTo
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"kn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"kn"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #kn #autotrain_compatible #endpoints_compatible #region-us
|
# Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ)
## Model Description
> This is a small language model for the Kannada language, trained on 1M data samples taken from the
OSCAR page
## Training params
- Dataset - 1M data samples from the OSCAR page (URL) are used to train this model. Even though the full data set is 1.7 GB,
I have picked only 1M samples due to resource constraints for training. If you are interested in collaborating and have the computational resources to train on the full set, you are most welcome to do so.
- Preprocessing - ByteLevelBPETokenizer is used to tokenize the sentences at the character level, and the vocabulary size is set to 52k as per the standard values given by
- Hyperparameters - __ByteLevelBPETokenizer__: vocabulary size = 52_000 and min_frequency = 2
  __Trainer__: num_train_epochs=12 - trained for 12 epochs
  per_gpu_train_batch_size=64 - batch size for the data samples is 64
  save_steps=10_000 - save the model every 10k steps
  save_total_limit=2 - save limit is set to 2
Intended uses & limitations
This is for anyone who wants to make use of Kannada language models for various tasks like language generation, translation, and many more use cases.
Whatever else is helpful!
If you are interested in collaborating, feel free to reach out to me: Naveen
|
[
"# Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ)",
"## Model Description\n \n> This is a small language model for Kannada language with 1M data samples taken from\n OSCAR page",
"## Training params \n\n- Dataset - 1M data samples are used to train this model from OSCAR page(URL eventhough data set is of 1.7 GB due to resource constraint to train \nI have picked only 1M data from the total 1.7GB data set. If you are interested in collaboration and have computational resources to train on you are most welcome to do so.\n\n- Preprocessing - ByteLevelBPETokenizer is used to tokenize the sentences at character level and vocabulary size is set to 52k as per standard values given by \n- Hyperparameters - __ByteLevelBPETokenizer__ : vocabulary size = 52_000 and min_frequency = 2\n __Trainer__ : num_train_epochs=12 - trained for 12 epochs\n per_gpu_train_batch_size=64 - batch size for the datasamples is 64\n save_steps=10_000 - save model for every 10k steps\n save_total_limit=2 - save limit is set for 2\n\nIntended uses & limitations\n this is for anyone who wants to make use of kannada language models for various tasks like language generation, translation and many more use cases.\n\nWhatever else is helpful!\n If you are intersted in collaboration feel free to reach me Naveen"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #kn #autotrain_compatible #endpoints_compatible #region-us \n",
"# Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ)",
"## Model Description\n \n> This is a small language model for Kannada language with 1M data samples taken from\n OSCAR page",
"## Training params \n\n- Dataset - 1M data samples are used to train this model from OSCAR page(URL eventhough data set is of 1.7 GB due to resource constraint to train \nI have picked only 1M data from the total 1.7GB data set. If you are interested in collaboration and have computational resources to train on you are most welcome to do so.\n\n- Preprocessing - ByteLevelBPETokenizer is used to tokenize the sentences at character level and vocabulary size is set to 52k as per standard values given by \n- Hyperparameters - __ByteLevelBPETokenizer__ : vocabulary size = 52_000 and min_frequency = 2\n __Trainer__ : num_train_epochs=12 - trained for 12 epochs\n per_gpu_train_batch_size=64 - batch size for the datasamples is 64\n save_steps=10_000 - save model for every 10k steps\n save_total_limit=2 - save limit is set for 2\n\nIntended uses & limitations\n this is for anyone who wants to make use of kannada language models for various tasks like language generation, translation and many more use cases.\n\nWhatever else is helpful!\n If you are intersted in collaboration feel free to reach me Naveen"
] |
text-generation
|
transformers
|
# Marty McFly model
|
{"tags": ["conversational"]}
|
Navigator/DialoGPT-medium-martymcfly
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Marty McFly model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Chandler Bing DialoGPT Model
|
{"tags": ["conversational"]}
|
Navya2608/DialoGPT-medium-chandler
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Chandler Bing DialoGPT Model
|
[
"# Chandler Bing DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Chandler Bing DialoGPT Model"
] |
text-generation
|
transformers
|
# Rachel Green DialoGPT Model
|
{"tags": ["conversational"]}
|
Navya2608/DialoGPT-medium-rachel
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rachel Green DialoGPT Model
|
[
"# Rachel Green DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rachel Green DialoGPT Model"
] |
text-generation
|
transformers
|
# Tony Stark dialoGPT model
|
{"tags": ["conversational"]}
|
Navya2608/DialoGPT-small-tonystarkscript
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tony Stark dialoGPT model
|
[
"# Tony Stark dialoGPT model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tony Stark dialoGPT model"
] |
automatic-speech-recognition
| null |
# Norwegian Wav2Vec2 Model - 1B - Bokmål
This achieves the following results on the test set with a 5-gram KenLM:
- WER: 0.0668
- CER: 0.0256
Without using a language model, we are getting these results:
- WER: ???
- CER: ???
## Model description
This is one of several Wav2Vec-models created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://huggingface.co/datasets/NbAiLab/NPSC) to the 🤗 Dataset format and used that as the main source for training.
We do release all code developed during the event so that the Norwegian NLP community can build upon it to develop even better Norwegian ASR models. The finetuning of these models is not very compute demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from HuggingFace](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh repo. You can then start by copying the files `run.sh` and `run_speech_recognition_ctc.py` from our repo. Running these will create all the other necessary files and should let you reproduce our results. With some tweaks to the hyperparameters, you might be able to build an even better ASR. Good luck!
### Language Model
As you see from the results above, adding even a simple 5-gram language model will significantly improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
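For reference, here is a minimal sketch of wiring a KenLM 5-gram into the processor, following the approach described in that blog. The acoustic-model repo and the ARPA file path are placeholders for your own choices.
```python
from transformers import AutoProcessor, Wav2Vec2ProcessorWithLM
from pyctcdecode import build_ctcdecoder

# Load the processor of the acoustic model you want to boost (repo name is illustrative).
processor = AutoProcessor.from_pretrained("NbAiLab/XLSR-300M-bokmaal")

# Order the vocabulary by token id so the decoder labels line up with the CTC logits.
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

# Build a CTC beam-search decoder around the 5-gram ("5gram.arpa" is a placeholder path).
decoder = build_ctcdecoder(labels=labels, kenlm_model_path="5gram.arpa")

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("wav2vec2-bokmaal-with-lm")
```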
### Parameters
The following parameters were used during training:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="facebook/wav2vec2-xls-r-1b"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="40"
--per_device_train_batch_size="12"
--per_device_eval_batch_size="12"
--gradient_accumulation_steps="2"
--learning_rate="2e-5"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--ctc_zero_infinity=True
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="16"
```
With these settings, the training might take 3-4 days on an average GPU. You should, however, be able to get a decent model and faster results by tweaking these parameters:
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
|
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "xxx-robust-speech-event", false, "nb-NO"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xls-r-1b-npsc-bokmaal-low-27k", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.06686424124625939, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.025697763468940576, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
|
NbAiLab/Wav2Vec-Template
| null |
[
"automatic-speech-recognition",
"NbAiLab/NPSC",
"xxx-robust-speech-event",
"no",
"nb-NO",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nb-NO"
] |
TAGS
#automatic-speech-recognition #NbAiLab/NPSC #xxx-robust-speech-event #no #nb-NO #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #region-us
|
Norwegian Wav2Vec2 Model - 1B - Bokmål
======================================
This achieves the following results on the test set with a 5-gram KenLM:
* WER: 0.0668
* CER: 0.0256
Without using a language model, we are getting these results:
* WER: ???
* CER: ???
Model description
-----------------
This is one of several Wav2Vec-models created during the hosted Robust Speech Event. In parallel with the event, the team also converted the Norwegian Parliamentary Speech Corpus (NPSC) to the Dataset format and used that as the main source for training.
We do release all code developed during the event so that the Norwegian NLP community can build upon it to develop even better Norwegian ASR models. The finetuning of these models is not very compute demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
Team
----
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
Training procedure
------------------
To reproduce these results, we strongly recommend that you follow the instructions from HuggingFace to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh repo. You can then start by copying the files and from our repo. Running these will create all the other necessary files and should let you reproduce our results. With some tweaks to the hyperparameters, you might be able to build an even better ASR. Good luck!
### Language Model
As you see from the results above, adding even a simple 5-gram language model will significantly improve the results. has provided another very nice blog about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.
### Parameters
The following parameters were used during training:
With these settings, the training might take 3-4 days on an average GPU. You should, however, be able to get a decent model and faster results by tweaking these parameters
|
[
"### Language Model\n\n\nAs you see from the results above, adding even a simple 5-gram language will significantly improve the results. has provided another very nice blog about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe following parameters were used during training:\n\n\nFollowing this settings, the training might take 3-4 days on an average GPU. You should however get a decent model and faster results by tweaking these parameters"
] |
[
"TAGS\n#automatic-speech-recognition #NbAiLab/NPSC #xxx-robust-speech-event #no #nb-NO #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #region-us \n",
"### Language Model\n\n\nAs you see from the results above, adding even a simple 5-gram language will significantly improve the results. has provided another very nice blog about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe following parameters were used during training:\n\n\nFollowing this settings, the training might take 3-4 days on an average GPU. You should however get a decent model and faster results by tweaking these parameters"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLSR-1B-bokmaal-low
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1579
- Wer: 0.0722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 34.0
- mixed_precision_training: Native AMP
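As a rough sketch, these values correspond to a `transformers.TrainingArguments` configuration along the lines of the following; only the hyperparameters listed above are shown, everything else is left at its default, and the output directory is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="XLSR-1B-bokmaal-low",
    learning_rate=1.7e-05,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 12 * 2 = 24
    lr_scheduler_type="linear",
    num_train_epochs=34.0,
    fp16=True,                       # mixed precision training (Native AMP)
)
```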
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.434 | 0.24 | 500 | 0.1704 | 0.1378 |
| 0.2833 | 0.48 | 1000 | 0.1638 | 0.1324 |
| 0.2478 | 0.72 | 1500 | 0.1606 | 0.1240 |
| 0.2276 | 0.97 | 2000 | 0.1562 | 0.1212 |
| 0.2208 | 1.21 | 2500 | 0.1576 | 0.1172 |
| 0.2148 | 1.45 | 3000 | 0.1502 | 0.1119 |
| 0.1994 | 1.69 | 3500 | 0.1409 | 0.1110 |
| 0.1932 | 1.93 | 4000 | 0.1432 | 0.1112 |
| 0.2122 | 2.17 | 4500 | 0.1443 | 0.1098 |
| 0.2177 | 2.42 | 5000 | 0.1329 | 0.1102 |
| 0.2058 | 2.66 | 5500 | 0.1403 | 0.1070 |
| 0.2216 | 2.9 | 6000 | 0.1342 | 0.1067 |
| 0.1984 | 3.14 | 6500 | 0.1370 | 0.1030 |
| 0.2056 | 3.38 | 7000 | 0.1371 | 0.1041 |
| 0.1735 | 3.62 | 7500 | 0.1296 | 0.1003 |
| 0.203 | 3.87 | 8000 | 0.1301 | 0.1005 |
| 0.1835 | 4.11 | 8500 | 0.1310 | 0.1004 |
| 0.178 | 4.35 | 9000 | 0.1300 | 0.0959 |
| 0.1585 | 4.59 | 9500 | 0.1277 | 0.0966 |
| 0.1848 | 4.83 | 10000 | 0.1260 | 0.0974 |
| 0.169 | 5.07 | 10500 | 0.1281 | 0.0969 |
| 0.1666 | 5.32 | 11000 | 0.1291 | 0.1003 |
| 0.1552 | 5.56 | 11500 | 0.1271 | 0.0959 |
| 0.2736 | 5.8 | 12000 | 0.1320 | 0.0935 |
| 0.2845 | 6.04 | 12500 | 0.1299 | 0.0921 |
| 0.1536 | 6.28 | 13000 | 0.1282 | 0.0927 |
| 0.1491 | 6.52 | 13500 | 0.1240 | 0.0906 |
| 0.1579 | 6.77 | 14000 | 0.1208 | 0.0921 |
| 0.16 | 7.01 | 14500 | 0.1182 | 0.0903 |
| 0.1367 | 7.25 | 15000 | 0.1214 | 0.0922 |
| 0.1499 | 7.49 | 15500 | 0.1232 | 0.0916 |
| 0.148 | 7.73 | 16000 | 0.1184 | 0.0896 |
| 0.1426 | 7.97 | 16500 | 0.1201 | 0.0889 |
| 0.1471 | 8.22 | 17000 | 0.1256 | 0.0882 |
| 0.1358 | 8.46 | 17500 | 0.1265 | 0.0909 |
| 0.1245 | 8.7 | 18000 | 0.1263 | 0.0886 |
| 0.1407 | 8.94 | 18500 | 0.1226 | 0.0885 |
| 0.1289 | 9.18 | 19000 | 0.1315 | 0.0873 |
| 0.1326 | 9.42 | 19500 | 0.1233 | 0.0868 |
| 0.1305 | 9.67 | 20000 | 0.1237 | 0.0870 |
| 0.1432 | 9.91 | 20500 | 0.1234 | 0.0857 |
| 0.1205 | 10.15 | 21000 | 0.1303 | 0.0858 |
| 0.1248 | 10.39 | 21500 | 0.1252 | 0.0858 |
| 0.1251 | 10.63 | 22000 | 0.1253 | 0.0869 |
| 0.1143 | 10.87 | 22500 | 0.1266 | 0.0860 |
| 0.1155 | 11.12 | 23000 | 0.1219 | 0.0862 |
| 0.1227 | 11.36 | 23500 | 0.1329 | 0.0864 |
| 0.1229 | 11.6 | 24000 | 0.1244 | 0.0855 |
| 0.1112 | 11.84 | 24500 | 0.1356 | 0.0851 |
| 0.2163 | 12.08 | 25000 | 0.1252 | 0.0847 |
| 0.1146 | 12.32 | 25500 | 0.1211 | 0.0837 |
| 0.1058 | 12.57 | 26000 | 0.1247 | 0.0843 |
| 0.1099 | 12.81 | 26500 | 0.1189 | 0.0833 |
| 0.1028 | 13.05 | 27000 | 0.1303 | 0.0815 |
| 0.1092 | 13.29 | 27500 | 0.1305 | 0.0838 |
| 0.1076 | 13.53 | 28000 | 0.1276 | 0.0842 |
| 0.1074 | 13.77 | 28500 | 0.1268 | 0.0844 |
| 0.0971 | 14.02 | 29000 | 0.1322 | 0.0839 |
| 0.1109 | 14.26 | 29500 | 0.1287 | 0.0821 |
| 0.0991 | 14.5 | 30000 | 0.1289 | 0.0831 |
| 0.1095 | 14.74 | 30500 | 0.1273 | 0.0822 |
| 0.1015 | 14.98 | 31000 | 0.1326 | 0.0816 |
| 0.1051 | 15.22 | 31500 | 0.1337 | 0.0814 |
| 0.0894 | 15.47 | 32000 | 0.1331 | 0.0802 |
| 0.1 | 15.71 | 32500 | 0.1304 | 0.0798 |
| 0.0957 | 15.95 | 33000 | 0.1293 | 0.0824 |
| 0.0921 | 16.19 | 33500 | 0.1382 | 0.0808 |
| 0.0986 | 16.43 | 34000 | 0.1301 | 0.0788 |
| 0.098 | 16.67 | 34500 | 0.1305 | 0.0795 |
| 0.0974 | 16.92 | 35000 | 0.1325 | 0.0796 |
| 0.0886 | 17.16 | 35500 | 0.1332 | 0.0796 |
| 0.0892 | 17.4 | 36000 | 0.1327 | 0.0785 |
| 0.0917 | 17.64 | 36500 | 0.1304 | 0.0793 |
| 0.0919 | 17.88 | 37000 | 0.1353 | 0.0791 |
| 0.1007 | 18.12 | 37500 | 0.1340 | 0.0791 |
| 0.0831 | 18.37 | 38000 | 0.1327 | 0.0786 |
| 0.0862 | 18.61 | 38500 | 0.1343 | 0.0792 |
| 0.0837 | 18.85 | 39000 | 0.1334 | 0.0777 |
| 0.0771 | 19.09 | 39500 | 0.1456 | 0.0778 |
| 0.0841 | 19.33 | 40000 | 0.1365 | 0.0784 |
| 0.0874 | 19.57 | 40500 | 0.1379 | 0.0779 |
| 0.0773 | 19.82 | 41000 | 0.1359 | 0.0776 |
| 0.0771 | 20.06 | 41500 | 0.1392 | 0.0776 |
| 0.0861 | 20.3 | 42000 | 0.1395 | 0.0774 |
| 0.0773 | 20.54 | 42500 | 0.1356 | 0.0775 |
| 0.069 | 20.78 | 43000 | 0.1399 | 0.0765 |
| 0.0823 | 21.02 | 43500 | 0.1469 | 0.0774 |
| 0.0747 | 21.27 | 44000 | 0.1415 | 0.0768 |
| 0.0703 | 21.51 | 44500 | 0.1405 | 0.0778 |
| 0.0776 | 21.75 | 45000 | 0.1492 | 0.0778 |
| 0.0833 | 21.99 | 45500 | 0.1448 | 0.0767 |
| 0.0796 | 22.23 | 46000 | 0.1434 | 0.0761 |
| 0.0613 | 22.47 | 46500 | 0.1446 | 0.0768 |
| 0.0753 | 22.72 | 47000 | 0.1439 | 0.0757 |
| 0.076 | 22.96 | 47500 | 0.1402 | 0.0759 |
| 0.0619 | 23.2 | 48000 | 0.1473 | 0.0767 |
| 0.1322 | 23.44 | 48500 | 0.1431 | 0.0766 |
| 0.0691 | 23.68 | 49000 | 0.1452 | 0.0753 |
| 0.061 | 23.92 | 49500 | 0.1452 | 0.0752 |
| 0.0716 | 24.17 | 50000 | 0.1429 | 0.0756 |
| 0.074 | 24.41 | 50500 | 0.1440 | 0.0746 |
| 0.0696 | 24.65 | 51000 | 0.1459 | 0.0756 |
| 0.081 | 24.89 | 51500 | 0.1443 | 0.0751 |
| 0.0754 | 25.13 | 52000 | 0.1483 | 0.0755 |
| 0.0864 | 25.37 | 52500 | 0.1467 | 0.0757 |
| 0.0662 | 25.62 | 53000 | 0.1471 | 0.0748 |
| 0.109 | 25.86 | 53500 | 0.1472 | 0.0759 |
| 0.0682 | 26.1 | 54000 | 0.1539 | 0.0748 |
| 0.0655 | 26.34 | 54500 | 0.1469 | 0.0743 |
| 0.0651 | 26.58 | 55000 | 0.1553 | 0.0748 |
| 0.0666 | 26.82 | 55500 | 0.1520 | 0.0744 |
| 0.0724 | 27.07 | 56000 | 0.1526 | 0.0738 |
| 0.067 | 27.31 | 56500 | 0.1489 | 0.0738 |
| 0.0658 | 27.55 | 57000 | 0.1518 | 0.0738 |
| 0.0581 | 27.79 | 57500 | 0.1518 | 0.0739 |
| 0.0639 | 28.03 | 58000 | 0.1495 | 0.0736 |
| 0.0606 | 28.27 | 58500 | 0.1549 | 0.0739 |
| 0.0641 | 28.52 | 59000 | 0.1513 | 0.0735 |
| 0.0612 | 28.76 | 59500 | 0.1524 | 0.0739 |
| 0.0536 | 29.0 | 60000 | 0.1565 | 0.0741 |
| 0.0574 | 29.24 | 60500 | 0.1541 | 0.0741 |
| 0.057 | 29.48 | 61000 | 0.1555 | 0.0741 |
| 0.0624 | 29.72 | 61500 | 0.1590 | 0.0736 |
| 0.0531 | 29.97 | 62000 | 0.1590 | 0.0734 |
| 0.0661 | 30.21 | 62500 | 0.1599 | 0.0732 |
| 0.0641 | 30.45 | 63000 | 0.1576 | 0.0730 |
| 0.0562 | 30.69 | 63500 | 0.1593 | 0.0734 |
| 0.0527 | 30.93 | 64000 | 0.1604 | 0.0730 |
| 0.0579 | 31.17 | 64500 | 0.1571 | 0.0734 |
| 0.0508 | 31.42 | 65000 | 0.1603 | 0.0733 |
| 0.0524 | 31.66 | 65500 | 0.1588 | 0.0726 |
| 0.0564 | 31.9 | 66000 | 0.1571 | 0.0727 |
| 0.0551 | 32.14 | 66500 | 0.1584 | 0.0728 |
| 0.0564 | 32.38 | 67000 | 0.1565 | 0.0726 |
| 0.0628 | 32.62 | 67500 | 0.1558 | 0.0725 |
| 0.0561 | 32.87 | 68000 | 0.1582 | 0.0727 |
| 0.0553 | 33.11 | 68500 | 0.1591 | 0.0726 |
| 0.0504 | 33.35 | 69000 | 0.1590 | 0.0725 |
| 0.0539 | 33.59 | 69500 | 0.1582 | 0.0723 |
| 0.0576 | 33.83 | 70000 | 0.1579 | 0.0722 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "XLSR-1B-bokmaal-low", "results": []}]}
|
NbAiLab/XLSR-1B-bokmaal-low
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us
|
XLSR-1B-bokmaal-low
===================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1579
* Wer: 0.0722
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1.7e-05
* train\_batch\_size: 12
* eval\_batch\_size: 12
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 24
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 34.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu113
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1.7e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 34.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1.7e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 34.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLSR-300M-bokmaal
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3_BOKMAAL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1635
- Wer: 0.1005
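A minimal inference sketch with the 🤗 automatic-speech-recognition pipeline; the audio file path is a placeholder for any 16 kHz Norwegian speech recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NbAiLab/XLSR-300M-bokmaal")

# "audio.wav" is a placeholder; the pipeline returns the transcribed text.
print(asr("audio.wav"))
```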
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0307 | 0.32 | 500 | 3.0026 | 1.0 |
| 2.7865 | 0.64 | 1000 | 2.4849 | 0.9926 |
| 0.7522 | 0.95 | 1500 | 0.4567 | 0.3594 |
| 0.5703 | 1.27 | 2000 | 0.3440 | 0.2586 |
| 0.4762 | 1.59 | 2500 | 0.2925 | 0.2178 |
| 0.4585 | 1.91 | 3000 | 0.2442 | 0.1981 |
| 0.4013 | 2.23 | 3500 | 0.2495 | 0.1818 |
| 0.449 | 2.54 | 4000 | 0.2152 | 0.1808 |
| 0.355 | 2.86 | 4500 | 0.2179 | 0.1670 |
| 0.3142 | 3.18 | 5000 | 0.1953 | 0.1542 |
| 0.3242 | 3.5 | 5500 | 0.2103 | 0.1526 |
| 0.3016 | 3.82 | 6000 | 0.1911 | 0.1477 |
| 0.2713 | 4.13 | 6500 | 0.1836 | 0.1422 |
| 0.2807 | 4.45 | 7000 | 0.1924 | 0.1447 |
| 0.2929 | 4.77 | 7500 | 0.1848 | 0.1402 |
| 0.2595 | 5.09 | 8000 | 0.1783 | 0.1330 |
| 0.2289 | 5.41 | 8500 | 0.1901 | 0.1313 |
| 0.2567 | 5.72 | 9000 | 0.1784 | 0.1298 |
| 0.2401 | 6.04 | 9500 | 0.1956 | 0.1298 |
| 0.2098 | 6.36 | 10000 | 0.1748 | 0.1277 |
| 0.2246 | 6.68 | 10500 | 0.1777 | 0.1254 |
| 0.2197 | 7.0 | 11000 | 0.1703 | 0.1222 |
| 0.2122 | 7.32 | 11500 | 0.1917 | 0.1221 |
| 0.2746 | 7.63 | 12000 | 0.1769 | 0.1215 |
| 0.2148 | 7.95 | 12500 | 0.1736 | 0.1193 |
| 0.1915 | 8.27 | 13000 | 0.1814 | 0.1161 |
| 0.2462 | 8.59 | 13500 | 0.1748 | 0.1166 |
| 0.1872 | 8.91 | 14000 | 0.1769 | 0.1133 |
| 0.1886 | 9.22 | 14500 | 0.1852 | 0.1143 |
| 0.1789 | 9.54 | 15000 | 0.1696 | 0.1126 |
| 0.1692 | 9.86 | 15500 | 0.1817 | 0.1122 |
| 0.1765 | 10.18 | 16000 | 0.1769 | 0.1093 |
| 0.1699 | 10.5 | 16500 | 0.1604 | 0.1084 |
| 0.1591 | 10.81 | 17000 | 0.1777 | 0.1080 |
| 0.1499 | 11.13 | 17500 | 0.1645 | 0.1074 |
| 0.163 | 11.45 | 18000 | 0.1704 | 0.1065 |
| 0.1597 | 11.77 | 18500 | 0.1576 | 0.1064 |
| 0.1484 | 12.09 | 19000 | 0.1637 | 0.1041 |
| 0.1464 | 12.4 | 19500 | 0.1631 | 0.1047 |
| 0.156 | 12.72 | 20000 | 0.1686 | 0.1029 |
| 0.1625 | 13.04 | 20500 | 0.1648 | 0.1023 |
| 0.1395 | 13.36 | 21000 | 0.1688 | 0.1027 |
| 0.1387 | 13.68 | 21500 | 0.1670 | 0.1013 |
| 0.1434 | 13.99 | 22000 | 0.1677 | 0.1017 |
| 0.1442 | 14.31 | 22500 | 0.1688 | 0.1008 |
| 0.1439 | 14.63 | 23000 | 0.1647 | 0.1004 |
| 0.137 | 14.95 | 23500 | 0.1636 | 0.1006 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "XLSR-300M-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07699635320946434, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.0284288464829, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
|
NbAiLab/XLSR-300M-bokmaal
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nb-NO"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLSR-300M-bokmaal
=================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the NBAILAB/NPSC - 16K\_MP3\_BOKMAAL dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1635
* Wer: 0.1005
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 15.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
zero-shot-classification
|
transformers
|
**Release 1.0** (March 11, 2021)
# NB-Bert base model finetuned on Norwegian machine translated MNLI
## Description
The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible.
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport").
When the model is finetuned on the 400k-example MNLI task, it is in many cases able to solve these classification tasks. There is no MNLI set of this size in Norwegian, but we have trained it on a machine-translated version of the original MNLI set.
## Testing the model
For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb)
## Hugging Face zero-shot-classification pipeline
The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="NbAiLab/nb-bert-base-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.'
candidate_labels = ['politikk', 'helse', 'sport', 'religion']
hypothesis_template = 'Dette eksempelet er {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
# {'labels': ['helse', 'politikk', 'sport', 'religion'],
# 'scores': [0.4210019111633301, 0.0674605593085289, 0.000840459018945694, 0.0007541406666859984],
# 'sequence': 'Folkehelseinstituttets mest optimistiske anslag er at alle over 18 år er ferdigvaksinert innen midten av september.'}
```
## More information
For more information on the model, see
https://github.com/NBAiLab/notram
Here you will also find a Colab explaining in more detail how to use the zero-shot-classification pipeline.
|
{"language": false, "license": "cc-by-4.0", "tags": ["nb-bert", "zero-shot-classification", "pytorch", "tensorflow", "norwegian", "bert"], "datasets": ["mnli", "multi_nli", "xnli"], "thumbnail": "https://raw.githubusercontent.com/NBAiLab/notram/master/images/nblogo_2.png", "pipeline_tag": "zero-shot-classification", "widget": [{"example_title": "Nyhetsartikkel om FHI", "text": "Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.", "candidate_labels": "helse, politikk, sport, religion"}]}
|
NbAiLab/nb-bert-base-mnli
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"nb-bert",
"zero-shot-classification",
"tensorflow",
"norwegian",
"no",
"dataset:mnli",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:1909.00161",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"1909.00161"
] |
[
"no"
] |
TAGS
#transformers #pytorch #jax #safetensors #bert #text-classification #nb-bert #zero-shot-classification #tensorflow #norwegian #no #dataset-mnli #dataset-multi_nli #dataset-xnli #arxiv-1909.00161 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Release 1.0 (March 11, 2021)
# NB-Bert base model finetuned on Norwegian machine translated MNLI
## Description
The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible.
Yin et al. proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport").
When the model is finetuned on the 400k-example MNLI task, it is in many cases able to solve these classification tasks. There is no MNLI set of this size in Norwegian, but we have trained it on a machine-translated version of the original MNLI set.
## Testing the model
For testing the model, we recommend the NbAiLab Colab Notebook
## Hugging Face zero-shot-classification pipeline
The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one.
You can then use this pipeline to classify sequences into any of the class names you specify.
## More information
For more information on the model, see
URL
Here you will also find a Colab explaining in more detail how to use the zero-shot-classification pipeline.
|
[
"# NB-Bert base model finetuned on Norwegian machine translated MNLI",
"## Description\nThe most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible. \nYin et al. proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The methods works by reformulating the question to an MNLI hypothesis. If we want to figure out if a text is about \"sport\", we simply state that \"This text is about sport\" (\"Denne teksten handler om sport\").\n\nWhen the model is finetuned on the 400k large MNLI task, it is in many cases able to solve this classification tasks. There are no MNLI-set of this size in Norwegian but we have trained it on a machine translated version of the original MNLI-set.",
"## Testing the model\nFor testing the model, we recommend the NbAiLab Colab Notebook",
"## Hugging Face zero-shot-classification pipeline\nThe easiest way to try this out is by using the Hugging Face pipeline. Please, note that you will get better results when using Norwegian hypothesis template instead of the default English one. \n\nYou can then use this pipeline to classify sequences into any of the class names you specify.",
"## More information\n\nFor more information on the model, see\n\nURL\n\nHere you will also find a Colab explaining more in details how to use the zero-shot-classification pipeline."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #text-classification #nb-bert #zero-shot-classification #tensorflow #norwegian #no #dataset-mnli #dataset-multi_nli #dataset-xnli #arxiv-1909.00161 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# NB-Bert base model finetuned on Norwegian machine translated MNLI",
"## Description\nThe most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible. \nYin et al. proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The methods works by reformulating the question to an MNLI hypothesis. If we want to figure out if a text is about \"sport\", we simply state that \"This text is about sport\" (\"Denne teksten handler om sport\").\n\nWhen the model is finetuned on the 400k large MNLI task, it is in many cases able to solve this classification tasks. There are no MNLI-set of this size in Norwegian but we have trained it on a machine translated version of the original MNLI-set.",
"## Testing the model\nFor testing the model, we recommend the NbAiLab Colab Notebook",
"## Hugging Face zero-shot-classification pipeline\nThe easiest way to try this out is by using the Hugging Face pipeline. Please, note that you will get better results when using Norwegian hypothesis template instead of the default English one. \n\nYou can then use this pipeline to classify sequences into any of the class names you specify.",
"## More information\n\nFor more information on the model, see\n\nURL\n\nHere you will also find a Colab explaining more in details how to use the zero-shot-classification pipeline."
] |
token-classification
|
transformers
|
**Release 1.0** (November 17, 2021)
# nb-bert-base-ner
## Description
NB-Bert base model fine-tuned on the Named Entity Recognition task using the [NorNE dataset](https://huggingface.co/datasets/NbAiLab/norne).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-ner")
model = AutoModelForTokenClassification.from_pretrained("NbAiLab/nb-bert-base-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jeg heter Kjell og bor i Oslo."
ner_results = nlp(example)
print(ner_results)
```
|
{"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert", "ner"], "datasets": ["norne"], "thumbnail": "nblogo_3.png", "pipeline_tag": "token-classification", "inference": {"parameters": {"aggregation_strategy": "first"}}, "widget": [{"text": "Trond Giske har bekreftet p\u00e5 sp\u00f8rsm\u00e5l fra Adresseavisen at Hansen leide et rom i hans leilighet i Trondheim."}]}
|
NbAiLab/nb-bert-base-ner
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"norwegian",
"ner",
"no",
"dataset:norne",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"no"
] |
TAGS
#transformers #pytorch #safetensors #bert #token-classification #norwegian #ner #no #dataset-norne #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
Release 1.0 (November 17, 2021)
# nb-bert-base-ner
## Description
NB-Bert base model fine-tuned on the Named Entity Recognition task using the NorNE dataset.
## Usage
|
[
"# nb-bert-base-ner",
"## Description\nNB-Bert base model fine-tuned on the Named Entity Recognition task using the NorNE dataset.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #norwegian #ner #no #dataset-norne #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# nb-bert-base-ner",
"## Description\nNB-Bert base model fine-tuned on the Named Entity Recognition task using the NorNE dataset.",
"## Usage"
] |
text-classification
|
transformers
|
# NB-BERT-base Sámi Relevant
This is a model capable of predicting when a chunk of text could potentially be of interest to the Sámi Bibliographers at the National Library of Norway.
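A minimal sketch of querying the model with the 🤗 text-classification pipeline; the input sentence mirrors one of the widget examples above.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="NbAiLab/nb-bert-base-sami-relevant")

# Returns the predicted relevance label and its score for the given text.
print(classifier("Riddu Riđđu Festivála lea jahkásaš musihkka- ja -kulturfestivála mii lágiduvvo Gáivuonas Davvi-Romssas."))
```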
|
{"language": ["se", "no", "en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["sami relevant"], "metrics": ["matthews_correlation"], "pipeline_tag": "text-classification", "widget": [{"text": "Riddu Ri\u0111\u0111u Festiv\u00e1la lea jahk\u00e1sa\u0161 musihkka- ja -kulturfestiv\u00e1la mii l\u00e1giduvvo G\u00e1ivuonas Davvi-Romssas."}, {"text": "The S\u00e1mi languages form a branch of the Uralic language family. According to the traditional view, S\u00e1mi is within the Uralic family most closely related to the Finnic languages (Sammallahti 1998)."}, {"text": "Joseph Robinette Biden Jr. is an American politician who is the 46th and current president of the United States."}]}
|
NbAiLab/nb-bert-base-sami-relevant
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sami relevant",
"se",
"no",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"se",
"no",
"en"
] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #sami relevant #se #no #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# NB-BERT-base Sámi Relevant
This is a model capable of predicting when a chunk of text could potentially be of interest to the Sámi Bibliographers at the National Library of Norway.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #sami relevant #se #no #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
- **Release 1.1** (March 11, 2021)
- **Release 1.0** (January 13, 2021)
# NB-BERT-base
## Description
NB-BERT-base is a general BERT-base model built on the large digital collection at the National Library of Norway.
This model is based on the same structure as [BERT Cased multilingual model](https://github.com/google-research/bert/blob/master/multilingual.md), and is trained on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years.
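A minimal fill-mask sketch with the 🤗 pipeline; the example sentence mirrors one of the widget examples for this model.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="NbAiLab/nb-bert-base")

# "At the library you can [MASK] a book." -- BERT-style models use the [MASK] token.
print(unmasker("På biblioteket kan du [MASK] en bok."))
```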
## Intended use & limitations
The 1.1 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on GitHub, see
* https://github.com/NBAiLab/notram
## Training data
The model is trained on a wide variety of text. The training set is described on
* https://github.com/NBAiLab/notram
## More information
For more information on the model, see
https://github.com/NBAiLab/notram
|
{"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert"], "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du [MASK] en bok."}, {"text": "Dette er et [MASK] eksempel."}, {"text": "Av og til kan en spr\u00e5kmodell gi et [MASK] resultat."}, {"text": "Som ansat f\u00e5r du [MASK] for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling."}]}
|
NbAiLab/nb-bert-base
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"no"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #norwegian #fill-mask #no #license-cc-by-4.0 #endpoints_compatible #has_space #region-us
|
- Release 1.1 (March 11, 2021)
- Release 1.0 (January 13, 2021)
# NB-BERT-base
## Description
NB-BERT-base is a general BERT-base model built on the large digital collection at the National Library of Norway.
This model is based on the same structure as BERT Cased multilingual model, and is trained on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years.
## Intended use & limitations
The 1.1 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on GitHub, see
* URL
## Training data
The model is trained on a wide variety of text. The training set is described on
* URL
## More information
For more information on the model, see
URL
|
[
"# NB-BERT-base",
"## Description\n\nNB-BERT-base is a general BERT-base model built on the large digital collection at the National Library of Norway.\n\nThis model is based on the same structure as BERT Cased multilingual model, and is trained on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years.",
"## Intended use & limitations\n\nThe 1.1 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on GitHub, see\n\n* URL",
"## Training data\n\nThe model is trained on a wide variety of text. The training set is described on\n\n* URL",
"## More information\n\nFor more information on the model, see\n\nURL"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #norwegian #fill-mask #no #license-cc-by-4.0 #endpoints_compatible #has_space #region-us \n",
"# NB-BERT-base",
"## Description\n\nNB-BERT-base is a general BERT-base model built on the large digital collection at the National Library of Norway.\n\nThis model is based on the same structure as BERT Cased multilingual model, and is trained on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years.",
"## Intended use & limitations\n\nThe 1.1 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on GitHub, see\n\n* URL",
"## Training data\n\nThe model is trained on a wide variety of text. The training set is described on\n\n* URL",
"## More information\n\nFor more information on the model, see\n\nURL"
] |
fill-mask
|
transformers
|
- **Release 1.0beta** (April 29, 2021)
# NB-BERT-large (beta)
## Description
NB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway.
This model is trained from scratch on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years using a monolingual Norwegian vocabulary.
## Intended use & limitations
The 1.0 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on Github, see
* https://github.com/NBAiLab/notram
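Since the model is meant to be fine-tuned, here is a minimal loading sketch for a downstream classification task; the number of labels is an illustrative assumption and depends on your task.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-large")
# num_labels=2 is a placeholder; set it to the number of classes in your task.
model = AutoModelForSequenceClassification.from_pretrained("NbAiLab/nb-bert-large", num_labels=2)
```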
## Training data
The model is trained on a wide variety of text. The training set is described on
* https://github.com/NBAiLab/notram
## More information
For more information on the model, see
https://github.com/NBAiLab/notram
|
{"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert"], "thumbnail": "nblogo_3.png", "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du l\u00e5ne en [MASK]."}]}
|
NbAiLab/nb-bert-large
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"no"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #norwegian #fill-mask #no #license-cc-by-4.0 #endpoints_compatible #has_space #region-us
|
- Release 1.0beta (April 29, 2021)
# NB-BERT-large (beta)
## Description
NB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway.
This model is trained from scratch on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years using a monolingual Norwegian vocabulary.
## Intended use & limitations
The 1.0 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on Github, see
* URL
## Training data
The model is trained on a wide variety of text. The training set is described on
* URL
## More information
For more information on the model, see
URL
|
[
"# NB-BERT-large (beta)",
"## Description\n\nNB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway.\n\nThis model is trained from scratch on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years using a monolingual Norwegian vocabulary.",
"## Intended use & limitations\n\nThe 1.0 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on Github, see\n\n* URL",
"## Training data\n\nThe model is trained on a wide variety of text. The training set is described on\n\n* URL",
"## More information\n\nFor more information on the model, see\n\nURL"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #norwegian #fill-mask #no #license-cc-by-4.0 #endpoints_compatible #has_space #region-us \n",
"# NB-BERT-large (beta)",
"## Description\n\nNB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway.\n\nThis model is trained from scratch on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years using a monolingual Norwegian vocabulary.",
"## Intended use & limitations\n\nThe 1.0 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on Github, see\n\n* URL",
"## Training data\n\nThe model is trained on a wide variety of text. The training set is described on\n\n* URL",
"## More information\n\nFor more information on the model, see\n\nURL"
] |
text-generation
|
transformers
|
- **Release ✨v1✨** (January 18th, 2023) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-sharded), [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-float16), and [mesh-transformers-jax](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-mesh) weights*
<details><summary>All checkpoints</summary>
- **Release v1beta5** (December 18th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5-float16) weights*
- **Release v1beta4** (October 28th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4-float16) weights*
- **Release v1beta3** (August 8th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3-float16) weights*
- **Release v1beta2** (June 18th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2-float16) weights*
- **Release v1beta1** (April 28th, 2022) *[Half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta1-float16) weights*
</details>
# NB-GPT-J-6B
## Demo: https://ai.nb.no/demo/nb-gpt-j-6B/ (Be patient, it runs on CPU 😅)
## Model Description
NB-GPT-J-6B is a Norwegian finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters (6 billion parameters).
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
NB-GPT-J-6B was finetuned on [NCC](https://huggingface.co/datasets/NbAiLab/NCC), the Norwegian Colossal Corpus, plus other Internet sources like Wikipedia, mC4, and OSCAR.
## Training procedure
This model was finetuned for 130 billion tokens over 1,000,000 steps on a TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
NB-GPT-J-6B learns an inner representation of the Norwegian language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("NbAiLab/nb-gpt-j-6B")
```
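Generation then follows the standard causal-LM pattern. Continuing from the snippet above, a minimal sketch (the prompt and sampling settings are only illustrative):

```python
prompt = "Norge er et land som"  # illustrative Norwegian prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For GPU inference, the half-precision weights listed above can reduce memory use considerably.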
### Limitations and Biases
As with the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting NB-GPT-J-6B, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.
The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.
As with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
We still have to find proper datasets to evaluate the model, so help is welcome!
## Citation and Related Information
### BibTeX entry
To cite this model or the corpus used:
```bibtex
@inproceedings{kummervold2021operationalizing,
title={Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model},
author={Kummervold, Per E and De la Rosa, Javier and Wetjen, Freddy and Brygfjeld, Svein Arne},
booktitle={Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
pages={20--29},
year={2021},
url={https://aclanthology.org/2021.nodalida-main.3/}
}
```
If you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Specially, to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase.
|
{"language": ["no", "nb", "nn"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["NbAiLab/NCC", "mc4", "oscar"], "pipeline_tag": "text-generation", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Company": "text", "Country": "text", "Intended Use": "text"}}
|
NbAiLab/nb-gpt-j-6B
| null |
[
"transformers",
"pytorch",
"safetensors",
"gptj",
"text-generation",
"causal-lm",
"no",
"nb",
"nn",
"dataset:NbAiLab/NCC",
"dataset:mc4",
"dataset:oscar",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"2104.09864",
"2101.00027"
] |
[
"no",
"nb",
"nn"
] |
TAGS
#transformers #pytorch #safetensors #gptj #text-generation #causal-lm #no #nb #nn #dataset-NbAiLab/NCC #dataset-mc4 #dataset-oscar #arxiv-2104.09864 #arxiv-2101.00027 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
* Release v1 (January 18th, 2023) *Full-precision, sharded, half-precision, and mesh-transformers-jax weights*
All checkpoints
```
- Release v1beta5 (December 18th, 2022) *Full-precision, sharded, and half-precision weights*
- Release v1beta4 (October 28th, 2022) *Full-precision, sharded, and half-precision weights*
- Release v1beta3 (August 8th, 2022) *Full-precision, sharded, and half-precision weights*
- Release v1beta2 (June 18th, 2022) *Full-precision, sharded, and half-precision weights*
- Release v1beta1 (April 28th, 2022) *Half-precision weights*
```
NB-GPT-J-6B
===========
Demo: URL (Be patient, it runs on CPU )
---------------------------------------
Model Description
-----------------
NB-GPT-J-6B is a Norwegian finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters (6 billion parameters).
**\*** Each layer consists of one feedforward block and one self attention block.
**†** Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
Training data
-------------
NB-GPT-J-6B was finetuned on NCC, the Norwegian Colossal Corpus, plus other Internet sources like Wikipedia, mC4, and OSCAR.
Training procedure
------------------
This model was finetuned for 130 billion tokens over 1,000,000 steps on a TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
Intended Use and Limitations
----------------------------
NB-GPT-J-6B learns an inner representation of the Norwegian language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the 'AutoModelForCausalLM' functionality:
### Limitations and Biases
As the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting NB-GPT-J-6B it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.
The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.
As with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Evaluation results
------------------
We still have to find proper datasets to evaluate the model, so help is welcome!
and Related Information
### BibTeX entry
To cite this model or the corpus used:
If you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
Disclaimer
----------
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha. Specially, to Stella Biderman for her general openness, and Ben Wang for the main codebase.
|
[
"### How to use\n\n\nThis model can be easily loaded using the 'AutoModelForCausalLM' functionality:",
"### Limitations and Biases\n\n\nAs the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting NB-GPT-J-6B it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.\n\n\nThe original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.\n\n\nAs with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEvaluation results\n------------------\n\n\nWe still have to find proper datasets to evaluate the model, so help is welcome!\n\n\nand Related Information",
"### BibTeX entry\n\n\nTo cite this model or the corpus used:\n\n\nIf you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.\n\n\nDisclaimer\n----------\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha. Specially, to Stella Biderman for her general openness, and Ben Wang for the main codebase."
] |
[
"TAGS\n#transformers #pytorch #safetensors #gptj #text-generation #causal-lm #no #nb #nn #dataset-NbAiLab/NCC #dataset-mc4 #dataset-oscar #arxiv-2104.09864 #arxiv-2101.00027 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nThis model can be easily loaded using the 'AutoModelForCausalLM' functionality:",
"### Limitations and Biases\n\n\nAs the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting NB-GPT-J-6B it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.\n\n\nThe original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.\n\n\nAs with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEvaluation results\n------------------\n\n\nWe still have to find proper datasets to evaluate the model, so help is welcome!\n\n\nand Related Information",
"### BibTeX entry\n\n\nTo cite this model or the corpus used:\n\n\nIf you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.\n\n\nDisclaimer\n----------\n\n\nThe models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha. Specially, to Stella Biderman for her general openness, and Ben Wang for the main codebase."
] |
fill-mask
|
transformers
|
# This is just a Test Model. Do NOT use for anything!
Continued pretrained from the nb-roberta-base.
The domain-specific pretraining is done on the 102GB [Scandinavian corpus](https://huggingface.co/datasets/NbAiLab/scandinavian).
## Train for 180k steps with sequence length 128:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="128" \
--weight_decay="0.01" \
--per_device_train_batch_size="128" \
--per_device_eval_batch_size="128" \
--learning_rate="6e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="180000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="10000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
## Train for 20k steps with sequence length 512:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="48" \
--per_device_eval_batch_size="48" \
--learning_rate="3e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="20000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="20000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
Approximate additional training time: 1 week.
|
{"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "roberta"], "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du <mask> en bok."}, {"text": "Dette er et <mask> eksempel."}, {"text": "Av og til kan en spr\u00e5kmodell gi et <mask> resultat."}, {"text": "Som ansat f\u00e5r du <mask> for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling."}]}
|
NbAiLab/nb-roberta-base-scandinavian
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"norwegian",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"no"
] |
TAGS
#transformers #pytorch #jax #tensorboard #safetensors #roberta #fill-mask #norwegian #no #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# This is just a Test Model. Do NOT use for anything!
Continued pretrained from the nb-roberta-base.
The domain specific pretraining is done on the 102GB (Scandinavian corpus)[URL
## Train for 180k steps for 128 sequences:
## Train for 20k steps for 512 sequences:
Approximate additional training time: 1 week.
|
[
"# This is just a Test Model. Do NOT use for anything! \n\nContinued pretrained from the nb-roberta-base.\n\nThe domain specific pretraining is done on the 102GB (Scandinavian corpus)[URL",
"## Train for 180k steps for 128 sequences:",
"## Train for 20k steps for 512 sequences:\n\n\n\n\nApproximate additional training time: 1 week."
] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #safetensors #roberta #fill-mask #norwegian #no #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# This is just a Test Model. Do NOT use for anything! \n\nContinued pretrained from the nb-roberta-base.\n\nThe domain specific pretraining is done on the 102GB (Scandinavian corpus)[URL",
"## Train for 180k steps for 128 sequences:",
"## Train for 20k steps for 512 sequences:\n\n\n\n\nApproximate additional training time: 1 week."
] |
text2text-generation
|
transformers
|
# 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8.
This model is currently training. It will finish in January 2022. Please do not use yet.
|
{"language": false, "license": "cc-by-4.0", "tags": ["seq2seq"], "datasets": ["Norwegian Nynorsk/Bokm\u00e5l"]}
|
NbAiLab/nb-t5-base-v3
| null |
[
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"no"
] |
TAGS
#transformers #jax #tensorboard #t5 #text2text-generation #seq2seq #no #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8.
This model is currently training. It will finish in January 2022. Please do not use yet..
'''
|
[
"# 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴 \n\nThis is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. \n\nThis model is currently training. It will finish in January 2022. Please do not use yet..\n '''"
] |
[
"TAGS\n#transformers #jax #tensorboard #t5 #text2text-generation #seq2seq #no #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴 \n\nThis is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. \n\nThis model is currently training. It will finish in January 2022. Please do not use yet..\n '''"
] |
automatic-speech-recognition
|
transformers
|
# Norwegian Wav2Vec2 Model - 1B Bokmål
This model is finetuned on top of feature extractor [XLS-R](https://huggingface.co/facebook/wav2vec2-xls-r-1b) from Facebook/Meta. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
- **WER: 0.0633** (0.0738)
- **CER: 0.0248** (0.0263)
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER |
|:--------------|:------------|
| NbAiLab/nb-wav2vec2-1b-bokmaal (this model) | 6.33 |
| [NbAiLab/nb-wav2vec2-300m-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-bokmaal) | 7.03 |
| [NbAiLab/nb-wav2vec2-1b-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-nynorsk) | 11.32 |
| [NbAiLab/nb-wav2vec2-300m-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-nynorsk) | 12.22 |
## Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
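Whether or not a language model is attached, the finetuned checkpoint can be tried out directly through the standard ASR pipeline. A minimal sketch (the file name is a placeholder and the audio should be 16 kHz mono; if the repository bundles a decoder, the pipeline is expected to use it, otherwise plain CTC decoding applies):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NbAiLab/nb-wav2vec2-1b-bokmaal")

# Placeholder file; long recordings can be chunked to keep memory in check.
print(asr("opptak.wav", chunk_length_s=30))
```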
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="facebook/wav2vec2-xls-r-1b"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="40"
--per_device_train_batch_size="12"
--per_device_eval_batch_size="12"
--gradient_accumulation_steps="2"
--learning_rate="2e-5"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--ctc_zero_infinity=True
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="16"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
```
See https://arxiv.org/abs/2307.01672
|
{"language": ["nb", false], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", false, "nb", "nb-NO"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "nb-wav2vec2-1b-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.0633, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.0248, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
|
NbAiLab/nb-wav2vec2-1b-bokmaal
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"no",
"nb",
"nb-NO",
"dataset:NbAiLab/NPSC",
"arxiv:2307.01672",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"2307.01672"
] |
[
"nb",
"no"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #no #nb #nb-NO #dataset-NbAiLab/NPSC #arxiv-2307.01672 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
Norwegian Wav2Vec2 Model - 1B Bokmål
====================================
This model is finetuned on top of feature extractor XLS-R from Facebook/Meta. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
* WER: 0.0633 (0.0738)
* CER: 0.0248 (0.0263)
Model description
-----------------
This is one of several Wav2Vec-models our team created during the hosted Robust Speech Event. This is the complete list of our models and their final scores:
Dataset
-------
In parallel with the event, the team also converted the Norwegian Parliamentary Speech Corpus (NPSC) to the NbAiLab/NPSC in Dataset format and used that as the main source for training.
Code
----
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
Team
----
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
Training procedure
------------------
To reproduce these results, we strongly recommend that you follow the instructions from to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files and from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.
### Parameters
The final model was run using these parameters:
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
See URL
|
[
"### Language Model\n\n\nAs the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe final model was run using these parameters:\n\n\nUsing these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.\n\n\n\nSee URL"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #no #nb #nb-NO #dataset-NbAiLab/NPSC #arxiv-2307.01672 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Language Model\n\n\nAs the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe final model was run using these parameters:\n\n\nUsing these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.\n\n\n\nSee URL"
] |
automatic-speech-recognition
|
transformers
|
# Norwegian Wav2Vec2 Model - 300M - VoxRex - Bokmål
This model is finetuned on top of feature extractor [VoxRex-model](https://huggingface.co/KBLab/wav2vec2-large-voxrex) from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
- **WER: 0.0703** (0.0979)
- **CER: 0.0269** (0.0311)
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER |
|:--------------|:------------|
| [NbAiLab/nb-wav2vec2-1b-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-bokmaal) | 6.33 |
| NbAiLab/nb-wav2vec2-300m-bokmaal (this model) | 7.03 |
| [NbAiLab/nb-wav2vec2-1b-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-nynorsk) | 11.32 |
| [NbAiLab/nb-wav2vec2-300m-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-nynorsk) | 12.22 |
## Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
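For inference, the processor and CTC model can also be driven directly rather than through the pipeline. A minimal greedy-decoding sketch (no 5-gram LM applied; the file name is a placeholder and the clip must be 16 kHz mono):

```python
import soundfile as sf
import torch
from transformers import AutoModelForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("NbAiLab/nb-wav2vec2-300m-bokmaal")
model = AutoModelForCTC.from_pretrained("NbAiLab/nb-wav2vec2-300m-bokmaal")

# Placeholder path; the checkpoint expects 16 kHz mono audio.
speech, sample_rate = sf.read("opptak.wav")

inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Plain greedy CTC decoding; a bundled 5-gram LM (if any) is not used here.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.tokenizer.batch_decode(predicted_ids))
```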
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="KBLab/wav2vec2-large-voxrex"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="15"
--per_device_train_batch_size="16"
--per_device_eval_batch_size="16"
--gradient_accumulation_steps="2"
--learning_rate="1e-4"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="32"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
```
See https://arxiv.org/abs/2307.01672
|
{"language": [false, "nb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "nb-wav2vec2-300m-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.0703, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.0269, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
|
NbAiLab/nb-wav2vec2-300m-bokmaal
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"no",
"nb",
"dataset:NbAiLab/NPSC",
"arxiv:2307.01672",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"2307.01672"
] |
[
"no",
"nb"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #no #nb #dataset-NbAiLab/NPSC #arxiv-2307.01672 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Norwegian Wav2Vec2 Model - 300M - VoxRex - Bokmål
=================================================
This model is finetuned on top of feature extractor VoxRex-model from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
* WER: 0.0703 (0.0979)
* CER: 0.0269 (0.0311)
Model description
-----------------
This is one of several Wav2Vec-models our team created during the hosted Robust Speech Event. This is the complete list of our models and their final scores:
Dataset
-------
In parallel with the event, the team also converted the Norwegian Parliamentary Speech Corpus (NPSC) to the NbAiLab/NPSC in Dataset format and used that as the main source for training.
Code
----
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
Team
----
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
Training procedure
------------------
To reproduce these results, we strongly recommend that you follow the instructions from to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files and from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.
### Parameters
The final model was run using these parameters:
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
See URL
|
[
"### Language Model\n\n\nAs the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe final model was run using these parameters:\n\n\nUsing these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.\n\n\n\nSee URL"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #no #nb #dataset-NbAiLab/NPSC #arxiv-2307.01672 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Language Model\n\n\nAs the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe final model was run using these parameters:\n\n\nUsing these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.\n\n\n\nSee URL"
] |
automatic-speech-recognition
|
transformers
|
# Norwegian Wav2Vec2 Model - 300M - VoxRex - Nynorsk
This model is finetuned on top of feature extractor [VoxRex-model](https://huggingface.co/KBLab/wav2vec2-large-voxrex) from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
- **WER: 0.1222** (0.1537)
- **CER: 0.0419** (0.0468)
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER |
|:--------------|:------------|
| [NbAiLab/nb-wav2vec2-1b-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-bokmaal) | 6.33 |
| [NbAiLab/nb-wav2vec2-300m-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-bokmaal) | 7.03 |
| [NbAiLab/nb-wav2vec2-1b-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-nynorsk) | 11.32 |
| NbAiLab/nb-wav2vec2-300m-nynorsk (this model) | 12.22 |
### Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="KBLab/wav2vec2-large-voxrex"
--dataset_config_name="16K_mp3_nynorsk"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="80"
--per_device_train_batch_size="16"
--per_device_eval_batch_size="16"
--gradient_accumulation_steps="2"
--learning_rate="1e-4"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="32"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
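If you retrain with tweaked parameters, the resulting checkpoint can be scored against the WER/CER figures reported at the top of this card. A minimal sketch with the `evaluate` library (the sentence pairs below are made up; in practice they would come from the NPSC test split):

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Illustrative pairs only.
references = ["det er eit godt spørsmål", "regjeringa la fram eit nytt framlegg"]
predictions = ["det er et godt spørsmål", "regjeringa la fram eit nytt framlegg"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```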
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
```
See https://arxiv.org/abs/2307.01672
|
{"language": ["nn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "nb-wav2vec2-300m-nynorsk", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_nynorsk"}, "metrics": [{"type": "wer", "value": 0.1222, "name": "Test (Nynorsk) WER"}, {"type": "cer", "value": 0.0419, "name": "Test (Nynorsk) CER"}]}]}]}
|
NbAiLab/nb-wav2vec2-300m-nynorsk
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"nn",
"dataset:NbAiLab/NPSC",
"arxiv:2307.01672",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"2307.01672"
] |
[
"nn"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #nn #dataset-NbAiLab/NPSC #arxiv-2307.01672 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Norwegian Wav2Vec2 Model - 300M - VoxRex - Nynorsk
==================================================
This model is finetuned on top of feature extractor VoxRex-model from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
* WER: 0.1222 (0.1537)
* CER: 0.0419 (0.0468)
Model description
-----------------
This is one of several Wav2Vec-models our team created during the hosted Robust Speech Event. This is the complete list of our models and their final scores:
### Dataset
In parallel with the event, the team also converted the Norwegian Parliamentary Speech Corpus (NPSC) to the NbAiLab/NPSC in Dataset format and used that as the main source for training.
Code
----
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
Team
----
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
Training procedure
------------------
To reproduce these results, we strongly recommend that you follow the instructions from to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files and from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.
### Parameters
The final model was run using these parameters:
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
See URL
|
[
"### Dataset\n\n\nIn parallel with the event, the team also converted the Norwegian Parliamentary Speech Corpus (NPSC) to the NbAiLab/NPSC in Dataset format and used that as the main source for training.\n\n\nCode\n----\n\n\nWe have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.\n\n\nTeam\n----\n\n\nThe following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.\n\n\nTraining procedure\n------------------\n\n\nTo reproduce these results, we strongly recommend that you follow the instructions from to train a simple Swedish model.\n\n\nWhen you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files and from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!",
"### Language Model\n\n\nAs the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe final model was run using these parameters:\n\n\nUsing these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.\n\n\n\nSee URL"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #nn #dataset-NbAiLab/NPSC #arxiv-2307.01672 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Dataset\n\n\nIn parallel with the event, the team also converted the Norwegian Parliamentary Speech Corpus (NPSC) to the NbAiLab/NPSC in Dataset format and used that as the main source for training.\n\n\nCode\n----\n\n\nWe have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.\n\n\nTeam\n----\n\n\nThe following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.\n\n\nTraining procedure\n------------------\n\n\nTo reproduce these results, we strongly recommend that you follow the instructions from to train a simple Swedish model.\n\n\nWhen you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files and from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!",
"### Language Model\n\n\nAs the scores indicate, adding even a simple 5-gram language will improve the results. has provided another very nice blog explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide, and copy the 5-gram model from this repo.",
"### Parameters\n\n\nThe final model was run using these parameters:\n\n\nUsing these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.\n\n\n\nSee URL"
] |
fill-mask
|
transformers
|
## Results
|**Model** | **NoRec** | **NorNe-NB**| **NorNe-NN** | **NorDial** | **DaNe** | **Da-Angry-Tweets** |
|:-----------|------------:|------------:|------------:|------------:|------------:|------------:|
|roberta-base (English) | 51.77 | 79.01/79.53| 79.79/83.02 | 67.18| 75.44/78.07 | 55.51 |
|mBERT-cased | 63.91 | 83.72/86.12| 83.05/87.12 | 66.23| 80.00/81.43 | 57.67 |
|nb-bert-base | 75.60 |**91.98**/**92.95** |**90.93**/**94.06**|69.39| 81.95/84.83| 64.18|
|notram-bert-norwegian-cased | 72.47 | 91.77/93.12|89.79/93.70| **78.55**| **83.69**/**86.55**| **64.19** |
|notram-bert-norwegian-uncased | 73.47 | 89.28/91.61 |87.23/90.23 |74.21 | 80.29/82.31| 61.18|
|notram-bert-norwegian-cased-pod | **76.18** | 91.24/92.24| 90.88/93.21| 76.21| 81.82/84.99| 62.16 |
|nb-roberta-base | 68.77 |87.99/89.43 | 85.43/88.66| 76.34| 75.91/77.94| 61.50 |
|nb-roberta-base-scandinavian | 67.88 | 87.73/89.14| 87.39/90.92| 74.81| 76.22/78.66 | 63.37 |
|nb-roberta-base-v2-200k | 46.87 | 85.57/87.04| - | 64.99| - | - |
|test_long_w5 200k| 60.48 | 88.00/90.00 | 83.93/88.45 | 68.41 |75.22/78.50| 57.95 |
|test_long_w5_roberta_tokenizer 200k| 63.51| 86.28/87.77| 84.95/88.61 | 69.86 | 71.31/74.27 | 59.96 |
|test_long_w5_roberta_tokenizer 400k| 59.76 |87.39/89.06 | 85.16/89.01 | 71.46 | 72.39/75.65| 39.73 |
|test_long_w5_dataset 400k| 66.80 | 86.52/88.55 | 82.81/86.76 | 66.94 | 71.47/74.20| 55.25 |
|test_long_w5_dataset 600k| 67.37 | 89.98/90.95 | 84.53/88.37 | 66.84 | 75.14/76.50| 57.47 |
|roberta-jan-128_ncc - 400k - 128| 67.79 | 91.45/92.33 | 86.41/90.19 | 67.20 | 81.00/82.39| 59.65 |
|roberta-jan-128_ncc - 1000k - 128| 68.17 | 89.34/90.74 | 86.89/89.87 | 68.41 | 80.30/82.17| 61.63 |
|
{"language": false, "license": "cc-by-4.0", "tags": ["norwegian", "bert"], "pipeline_tag": "fill-mask", "widget": [{"text": "P\u00e5 biblioteket kan du [MASK] en bok."}, {"text": "Dette er et [MASK] eksempel."}, {"text": "Av og til kan en spr\u00e5kmodell gi et [MASK] resultat."}, {"text": "Som ansat f\u00e5r du [MASK] for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling."}]}
|
NbAiLab/notram-bert-norwegian-cased-080321
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"norwegian",
"fill-mask",
"no",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"no"
] |
TAGS
#transformers #pytorch #tf #safetensors #bert #norwegian #fill-mask #no #license-cc-by-4.0 #endpoints_compatible #region-us
|
Results
-------
|
[] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #bert #norwegian #fill-mask #no #license-cc-by-4.0 #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLab/roberta_NCC_des_128
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLab/roberta_NCC_des_128_decayfrom200
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
This needed to be restarted at 100k. I am getting memory errors at the end of the epoch. Not really sure why.
Step 2 is therefore on train_2__4. Static learning rate for a while. The first 100k ended at 0.59. This is decent so early. No point in running more epochs here though. Changing the corpus and continuing training.
|
{}
|
NbAiLab/roberta_des_128
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
This needed to be restarted at 100k. I am getting memory errors at the end of the epoch. Not really sure why.
Step 2 is therefore on train_2__4. Static learning rate for a while. The first 100k ended at 0.59. This is decent so early. No point in running more epochs here though. Changing the corpus and continuing training.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
Since the loss seemed to start going up, I had to restore this from 9e945cb0636bde60bec30bd7df5db30f80401cc7 (2 step 600k/200). I am then restarting with warmup decaying from 1e-4.
That failed. I checked out c94b5bb43b05fc798f9db013d940b05b3b47cd98 instead and restarted step 3 from there.
|
{}
|
NbAiLab/roberta_des_512
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
Since the loss seemed to start going up, I had to restore this from 9e945cb0636bde60bec30bd7df5db30f80401cc7 (2 step 600k/200). I am then restarting with warmup decaying from 1e-4.
That failed. I checked out c94b5bb43b05fc798f9db013d940b05b3b47cd98 instead and restarted step 3 from there.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLab/roberta_des_512_4e4
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLab/roberta_des_512_6e4
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLab/roberta_des_ada_128
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLab/roberta_des_ada_128_6e4
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_NCC_OSCAR_16w_noada
| null |
[
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_NCC_OSCAR_style
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_NCC_OSCAR_style_98w
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_NCC_small_flax
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_NCC_small_flax_stream
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_NCC_small_flax_stream_100
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_NCC_small_pytorch
| null |
[
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_OSCAR_flax
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w4
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w5
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w5_long
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w5_long_dataset
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w5_long_roberta_tokenizer
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w5_long_roberta_tokenizer_adafactor
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w6
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w7
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
Just for performing some experiments. Do not use.
|
{}
|
NbAiLabArchive/test_w8
| null |
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Just for performing some experiments. Do not use.
|
[] |
[
"TAGS\n#transformers #jax #tensorboard #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc-bokmaal
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1311
- Wer: 0.1038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.379967082059723e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
- mixed_precision_training: Native AMP
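For reference, a rough `TrainingArguments` equivalent of the values listed above might look like the sketch below; it simply mirrors the listed hyperparameters and is not the original training script (the output directory is a placeholder).

```python
# Sketch only: mirrors the hyperparameters listed above, not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-voxrex-npsc-bokmaal",  # placeholder path
    learning_rate=8.379967082059723e-06,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # total train batch size 32
    num_train_epochs=0.1,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed precision
)
```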
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2127 | 0.32 | 500 | 0.1335 | 0.1047 |
| 0.1976 | 0.64 | 1000 | 0.1309 | 0.1039 |
| 0.1887 | 0.97 | 1500 | 0.1306 | 0.1040 |
| 0.18 | 1.29 | 2000 | 0.1311 | 0.1038 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-large-voxrex-npsc-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07028972259374369, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.026870600821650645, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
|
NbAiLab/wav2vec2-large-voxrex-npsc-bokmaal
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nb-NO"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-voxrex-npsc-bokmaal
==================================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1311
* Wer: 0.1038
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 8.379967082059723e-06
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 0.1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu113
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8.379967082059723e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8.379967082059723e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc-nynorsk
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 16K_MP3_NYNORSK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4142
- Wer: 0.1576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.086 | 2.17 | 500 | 3.0773 | 1.0 |
| 2.8532 | 4.35 | 1000 | 2.8393 | 1.0 |
| 0.9738 | 6.52 | 1500 | 0.7283 | 0.4890 |
| 0.6763 | 8.7 | 2000 | 0.5340 | 0.3662 |
| 0.5303 | 10.87 | 2500 | 0.4521 | 0.3140 |
| 0.4765 | 13.04 | 3000 | 0.4181 | 0.2853 |
| 0.4219 | 15.22 | 3500 | 0.4156 | 0.2934 |
| 0.3564 | 17.39 | 4000 | 0.3925 | 0.2509 |
| 0.3282 | 19.57 | 4500 | 0.3824 | 0.2420 |
| 0.3118 | 21.74 | 5000 | 0.3636 | 0.2354 |
| 0.2919 | 23.91 | 5500 | 0.3615 | 0.2281 |
| 0.2961 | 26.09 | 6000 | 0.3548 | 0.2255 |
| 0.284 | 28.26 | 6500 | 0.3526 | 0.2209 |
| 0.2566 | 30.43 | 7000 | 0.3526 | 0.2205 |
| 0.2422 | 32.61 | 7500 | 0.3569 | 0.2173 |
| 0.2472 | 34.78 | 8000 | 0.3592 | 0.2166 |
| 0.2337 | 36.96 | 8500 | 0.3625 | 0.2172 |
| 0.2315 | 39.13 | 9000 | 0.3580 | 0.2155 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"language": ["nn-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", "no", "nn-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-large-voxrex-npsc-nynorsk", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_nynorsk"}, "metrics": [{"type": "wer", "value": 0.12220762155059132, "name": "Test (Nynorsk) WER"}, {"type": "cer", "value": 0.04195612578778549, "name": "Test (Nynorsk) CER"}]}]}]}
|
NbAiLab/wav2vec2-large-voxrex-npsc-nynorsk
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"NbAiLab/NPSC",
"robust-speech-event",
"no",
"nn-NO",
"hf-asr-leaderboard",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nn-NO"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #NbAiLab/NPSC #robust-speech-event #no #nn-NO #hf-asr-leaderboard #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-voxrex-npsc-nynorsk
==================================
This model is a fine-tuned version of KBLab/wav2vec2-large-voxrex on the NBAILAB/NPSC - 16K\_MP3\_NYNORSK dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4142
* Wer: 0.1576
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 40.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu113
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 40.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #NbAiLab/NPSC #robust-speech-event #no #nn-NO #hf-asr-leaderboard #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 40.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9728 | 0.32 | 500 | 2.9449 | 1.0 |
| 2.5099 | 0.64 | 1000 | 1.8492 | 0.9910 |
| 0.7872 | 0.97 | 1500 | 0.4467 | 0.3774 |
| 0.5993 | 1.29 | 2000 | 0.3181 | 0.2819 |
| 0.5134 | 1.61 | 2500 | 0.2638 | 0.2401 |
| 0.4544 | 1.93 | 3000 | 0.2287 | 0.2091 |
| 0.4085 | 2.26 | 3500 | 0.2153 | 0.1918 |
| 0.3921 | 2.58 | 4000 | 0.2004 | 0.1804 |
| 0.4613 | 2.9 | 4500 | 0.1905 | 0.1732 |
| 0.3402 | 3.22 | 5000 | 0.1778 | 0.1659 |
| 0.3258 | 3.55 | 5500 | 0.1732 | 0.1571 |
| 0.3044 | 3.87 | 6000 | 0.1677 | 0.1497 |
| 0.2914 | 4.19 | 6500 | 0.1597 | 0.1420 |
| 0.278 | 4.51 | 7000 | 0.1574 | 0.1386 |
| 0.2858 | 4.84 | 7500 | 0.1552 | 0.1300 |
| 0.2585 | 5.16 | 8000 | 0.1523 | 0.1276 |
| 0.2827 | 5.48 | 8500 | 0.1448 | 0.1265 |
| 0.3365 | 5.8 | 9000 | 0.1411 | 0.1232 |
| 0.2488 | 6.13 | 9500 | 0.1456 | 0.1195 |
| 0.2406 | 6.45 | 10000 | 0.1414 | 0.1194 |
| 0.2488 | 6.77 | 10500 | 0.1393 | 0.1173 |
| 0.3084 | 7.09 | 11000 | 0.1379 | 0.1164 |
| 0.2365 | 7.41 | 11500 | 0.1387 | 0.1165 |
| 0.2217 | 7.74 | 12000 | 0.1381 | 0.1132 |
| 0.2381 | 8.06 | 12500 | 0.1360 | 0.1126 |
| 0.2329 | 8.38 | 13000 | 0.1357 | 0.1124 |
| 0.2103 | 8.7 | 13500 | 0.1335 | 0.1087 |
| 0.2366 | 9.03 | 14000 | 0.1388 | 0.1105 |
| 0.2289 | 9.35 | 14500 | 0.1383 | 0.1098 |
| 0.2486 | 9.67 | 15000 | 0.1386 | 0.1087 |
| **0.2772** | **9.99** | **15500** | **0.1598** | **0.1093** |
| 0.2728 | 10.32 | 16000 | 0.1814 | 0.1110 |
| 0.3437 | 10.64 | 16500 | 0.2505 | 0.1124 |
| 0.431 | 10.96 | 17000 | 0.2828 | 0.1143 |
| 0.3929 | 11.28 | 17500 | 0.2977 | 0.1149 |
| 0.4396 | 11.61 | 18000 | 0.3198 | 0.1170 |
| 0.59 | 11.93 | 18500 | 0.4158 | 0.1315 |
| 0.7813 | 12.25 | 19000 | 0.6123 | 0.2208 |
| 0.9345 | 12.57 | 19500 | 0.6815 | 0.2885 |
| 0.998 | 12.89 | 20000 | 0.7587 | 0.1991 |
| 1.0493 | 13.22 | 20500 | 0.7583 | 0.1996 |
| 1.438 | 13.54 | 21000 | nan | 1.0 |
| 0.0 | 13.86 | 21500 | nan | 1.0 |
| 0.0 | 14.18 | 22000 | nan | 1.0 |
| 0.0 | 14.51 | 22500 | nan | 1.0 |
| 0.0 | 14.83 | 23000 | nan | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
{"license": "cc0-1.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer", "robust-speech-event"], "datasets": ["NbAiLab/NPSC"], "base_model": "KBLab/wav2vec2-large-voxrex", "model-index": [{"name": "wav2vec2-large-voxrex-npsc", "results": []}]}
|
NbAiLab/wav2vec2-large-voxrex-npsc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"robust-speech-event",
"dataset:NbAiLab/NPSC",
"base_model:KBLab/wav2vec2-large-voxrex",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #robust-speech-event #dataset-NbAiLab/NPSC #base_model-KBLab/wav2vec2-large-voxrex #license-cc0-1.0 #endpoints_compatible #region-us
|
wav2vec2-large-voxrex-npsc
==========================
This model is a fine-tuned version of KBLab/wav2vec2-large-voxrex on the NBAILAB/NPSC - 16K\_MP3 dataset.
It achieves the following results on the evaluation set:
* Loss: nan
* Wer: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 15.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu113
* Datasets 1.18.3.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #robust-speech-event #dataset-NbAiLab/NPSC #base_model-KBLab/wav2vec2-large-voxrex #license-cc0-1.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-npsc
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [NbAiLab/NPSC (16K_mp3_bokmaal)](https://huggingface.co/datasets/NbAiLab/NPSC/viewer/16K_mp3_bokmaal/train) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1598
- WER: 0.0966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.8361 | 0.32 | 500 | 0.6304 | 0.4970 |
| 0.5703 | 0.64 | 1000 | 0.3195 | 0.2775 |
| 0.5451 | 0.97 | 1500 | 0.2700 | 0.2246 |
| 0.47 | 1.29 | 2000 | 0.2564 | 0.2329 |
| 0.4063 | 1.61 | 2500 | 0.2459 | 0.2099 |
| 0.374 | 1.93 | 3000 | 0.2175 | 0.1894 |
| 0.3297 | 2.26 | 3500 | 0.2036 | 0.1755 |
| 0.3145 | 2.58 | 4000 | 0.1957 | 0.1757 |
| 0.3989 | 2.9 | 4500 | 0.1923 | 0.1723 |
| 0.271 | 3.22 | 5000 | 0.1889 | 0.1649 |
| 0.2758 | 3.55 | 5500 | 0.1768 | 0.1588 |
| 0.2683 | 3.87 | 6000 | 0.1720 | 0.1534 |
| 0.2341 | 4.19 | 6500 | 0.1689 | 0.1471 |
| 0.2316 | 4.51 | 7000 | 0.1706 | 0.1405 |
| 0.2383 | 4.84 | 7500 | 0.1637 | 0.1426 |
| 0.2148 | 5.16 | 8000 | 0.1584 | 0.1347 |
| 0.2085 | 5.48 | 8500 | 0.1601 | 0.1387 |
| 0.2944 | 5.8 | 9000 | 0.1566 | 0.1294 |
| 0.1944 | 6.13 | 9500 | 0.1494 | 0.1271 |
| 0.1853 | 6.45 | 10000 | 0.1561 | 0.1247 |
| 0.235 | 6.77 | 10500 | 0.1461 | 0.1215 |
| 0.2286 | 7.09 | 11000 | 0.1447 | 0.1167 |
| 0.1781 | 7.41 | 11500 | 0.1502 | 0.1199 |
| 0.1714 | 7.74 | 12000 | 0.1425 | 0.1179 |
| 0.1725 | 8.06 | 12500 | 0.1427 | 0.1173 |
| 0.143 | 8.38 | 13000 | 0.1448 | 0.1142 |
| 0.154 | 8.7 | 13500 | 0.1392 | 0.1104 |
| 0.1447 | 9.03 | 14000 | 0.1404 | 0.1094 |
| 0.1471 | 9.35 | 14500 | 0.1404 | 0.1088 |
| 0.1479 | 9.67 | 15000 | 0.1414 | 0.1133 |
| 0.1607 | 9.99 | 15500 | 0.1458 | 0.1171 |
| 0.166 | 10.32 | 16000 | 0.1652 | 0.1264 |
| 0.188 | 10.64 | 16500 | 0.1713 | 0.1322 |
| 0.1461 | 10.96 | 17000 | 0.1423 | 0.1111 |
| 0.1289 | 11.28 | 17500 | 0.1388 | 0.1097 |
| 0.1273 | 11.61 | 18000 | 0.1438 | 0.1074 |
| 0.1317 | 11.93 | 18500 | 0.1312 | 0.1066 |
| 0.1448 | 12.25 | 19000 | 0.1446 | 0.1042 |
| 0.1424 | 12.57 | 19500 | 0.1386 | 0.1015 |
| 0.1392 | 12.89 | 20000 | 0.1379 | 0.1005 |
| 0.1408 | 13.22 | 20500 | 0.1408 | 0.0992 |
| 0.1239 | 13.54 | 21000 | 0.1338 | 0.0968 |
| 0.1244 | 13.86 | 21500 | 0.1335 | 0.0957 |
| 0.1254 | 14.18 | 22000 | 0.1382 | 0.0950 |
| 0.1597 | 14.51 | 22500 | 0.1544 | 0.0970 |
| 0.1566 | 14.83 | 23000 | 0.1589 | 0.0963 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xls-r-1b-npsc-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07901700231893541, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.029734583252347752, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
|
NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nb-NO"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-1b-npsc
======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the NbAiLab/NPSC (16K\_mp3\_bokmaal) dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1598
* WER: 0.0966
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 15.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu113
* Datasets 1.18.3.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-npsc-bokmaal
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1663
- Wer: 0.0932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0969 | 0.32 | 500 | 0.1773 | 0.1054 |
| 0.0929 | 0.64 | 1000 | 0.1672 | 0.1061 |
| 0.1018 | 0.97 | 1500 | 0.1770 | 0.1067 |
| 0.0871 | 1.29 | 2000 | 0.1832 | 0.1087 |
| 0.0908 | 1.61 | 2500 | 0.1830 | 0.1101 |
| 0.0975 | 1.93 | 3000 | 0.1848 | 0.1100 |
| 0.0936 | 2.26 | 3500 | 0.1853 | 0.1113 |
| 0.1025 | 2.58 | 4000 | 0.1958 | 0.1149 |
| 0.0989 | 2.9 | 4500 | 0.1776 | 0.1123 |
| 0.0946 | 3.22 | 5000 | 0.1825 | 0.1097 |
| 0.0859 | 3.55 | 5500 | 0.1864 | 0.1072 |
| 0.0867 | 3.87 | 6000 | 0.1886 | 0.1081 |
| 0.0783 | 4.19 | 6500 | 0.1883 | 0.1063 |
| 0.0804 | 4.51 | 7000 | 0.1831 | 0.1063 |
| 0.0797 | 4.84 | 7500 | 0.1884 | 0.1058 |
| 0.0705 | 5.16 | 8000 | 0.1802 | 0.1057 |
| 0.0795 | 5.48 | 8500 | 0.1854 | 0.1038 |
| 0.0711 | 5.8 | 9000 | 0.1766 | 0.1032 |
| 0.0973 | 6.13 | 9500 | 0.1663 | 0.1014 |
| 0.087 | 6.45 | 10000 | 0.1664 | 0.1014 |
| 0.0962 | 6.77 | 10500 | 0.1631 | 0.1009 |
| 0.0857 | 7.09 | 11000 | 0.1659 | 0.1002 |
| 0.0882 | 7.41 | 11500 | 0.1668 | 0.1007 |
| 0.0784 | 7.74 | 12000 | 0.1688 | 0.0996 |
| 0.0838 | 8.06 | 12500 | 0.1675 | 0.0984 |
| 0.0863 | 8.38 | 13000 | 0.1639 | 0.0979 |
| 0.0763 | 8.7 | 13500 | 0.1638 | 0.0980 |
| 0.0822 | 9.03 | 14000 | 0.1709 | 0.0972 |
| 0.0769 | 9.35 | 14500 | 0.1700 | 0.0965 |
| 0.0838 | 9.67 | 15000 | 0.1703 | 0.0974 |
| 0.0799 | 9.99 | 15500 | 0.1667 | 0.0957 |
| 0.0712 | 10.32 | 16000 | 0.1754 | 0.0960 |
| 0.0737 | 10.64 | 16500 | 0.1725 | 0.0968 |
| 0.0851 | 10.96 | 17000 | 0.1733 | 0.0958 |
| 0.076 | 11.28 | 17500 | 0.1682 | 0.0954 |
| 0.0712 | 11.61 | 18000 | 0.1713 | 0.0943 |
| 0.0745 | 11.93 | 18500 | 0.1662 | 0.0951 |
| 0.0864 | 12.25 | 19000 | 0.1692 | 0.0947 |
| 0.0937 | 12.57 | 19500 | 0.1624 | 0.0943 |
| 0.0915 | 12.89 | 20000 | 0.1678 | 0.0942 |
| 0.0926 | 13.22 | 20500 | 0.1641 | 0.0945 |
| 0.0912 | 13.54 | 21000 | 0.1665 | 0.0937 |
| 0.0917 | 13.86 | 21500 | 0.1648 | 0.0936 |
| 0.094 | 14.18 | 22000 | 0.1635 | 0.0935 |
| 0.0864 | 14.51 | 22500 | 0.1678 | 0.0934 |
| 0.0899 | 14.83 | 23000 | 0.1663 | 0.0932 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nb-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xls-r-300m-npsc-bokmaal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_bokmaal"}, "metrics": [{"type": "wer", "value": 0.07556265455560153, "name": "Test (Bokm\u00e5l) WER"}, {"type": "cer", "value": 0.028191288775481386, "name": "Test (Bokm\u00e5l) CER"}]}]}]}
|
NbAiLab/wav2vec2-xls-r-300m-npsc-bokmaal
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nb-NO"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-npsc-bokmaal
================================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1663
* Wer: 0.0932
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 15.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu113
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1B-NPSC-NN
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4562
- Wer: 0.1531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.6894 | 1.08 | 500 | 1.2423 | 0.8619 |
| 0.7543 | 2.15 | 1000 | 0.5956 | 0.3817 |
| 0.5481 | 3.23 | 1500 | 0.5043 | 0.3246 |
| 0.4661 | 4.3 | 2000 | 0.4813 | 0.2793 |
| 0.3901 | 5.38 | 2500 | 0.4371 | 0.2592 |
| 0.3512 | 6.45 | 3000 | 0.4216 | 0.2458 |
| 0.3016 | 7.53 | 3500 | 0.3814 | 0.2257 |
| 0.278 | 8.6 | 4000 | 0.4151 | 0.2145 |
| 0.2435 | 9.68 | 4500 | 0.4816 | 0.2130 |
| 0.2122 | 10.75 | 5000 | 0.4489 | 0.2137 |
| 0.1949 | 11.83 | 5500 | 0.3978 | 0.2063 |
| 0.1929 | 12.9 | 6000 | 0.3823 | 0.2026 |
| 0.1757 | 13.98 | 6500 | 0.3409 | 0.1965 |
| 0.1771 | 15.05 | 7000 | 0.3844 | 0.1936 |
| 0.1452 | 16.13 | 7500 | 0.3749 | 0.1900 |
| 0.1341 | 17.2 | 8000 | 0.4407 | 0.2026 |
| 0.13 | 18.28 | 8500 | 0.4253 | 0.1883 |
| 0.1183 | 19.35 | 9000 | 0.4311 | 0.1880 |
| 0.118 | 20.43 | 9500 | 0.4431 | 0.1882 |
| 0.1123 | 21.51 | 10000 | 0.4753 | 0.1820 |
| 0.1037 | 22.58 | 10500 | 0.4087 | 0.1834 |
| 0.1066 | 23.66 | 11000 | 0.4151 | 0.1845 |
| 0.0977 | 24.73 | 11500 | 0.4367 | 0.1783 |
| 0.0968 | 25.81 | 12000 | 0.4237 | 0.1756 |
| 0.0835 | 26.88 | 12500 | 0.4729 | 0.1781 |
| 0.0919 | 27.96 | 13000 | 0.4153 | 0.1701 |
| 0.0677 | 29.03 | 13500 | 0.4317 | 0.1693 |
| 0.0726 | 30.11 | 14000 | 0.4380 | 0.1736 |
| 0.066 | 31.18 | 14500 | 0.4384 | 0.1681 |
| 0.0713 | 32.26 | 15000 | 0.4215 | 0.1629 |
| 0.0605 | 33.33 | 15500 | 0.4574 | 0.1714 |
| 0.0632 | 34.41 | 16000 | 0.4343 | 0.1642 |
| 0.0567 | 35.48 | 16500 | 0.4231 | 0.1601 |
| 0.0556 | 36.56 | 17000 | 0.4404 | 0.1667 |
| 0.0426 | 37.63 | 17500 | 0.4459 | 0.1625 |
| 0.0445 | 38.71 | 18000 | 0.4484 | 0.1629 |
| 0.0463 | 39.78 | 18500 | 0.4508 | 0.1596 |
| 0.0448 | 40.86 | 19000 | 0.4395 | 0.1605 |
| 0.0434 | 41.94 | 19500 | 0.4490 | 0.1607 |
| 0.0347 | 43.01 | 20000 | 0.4772 | 0.1582 |
| 0.0332 | 44.09 | 20500 | 0.4729 | 0.1582 |
| 0.037 | 45.16 | 21000 | 0.4559 | 0.1573 |
| 0.0328 | 46.24 | 21500 | 0.4664 | 0.1560 |
| 0.0366 | 47.31 | 22000 | 0.4543 | 0.1543 |
| 0.0377 | 48.39 | 22500 | 0.4507 | 0.1560 |
| 0.0331 | 49.46 | 23000 | 0.4567 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["nn-NO"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "NbAiLab/NPSC", "robust-speech-event", false, "nn-NO", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "wav2vec2-xlsr-1B-NPSC-NN", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC", "args": "16K_mp3_nynorsk"}, "metrics": [{"type": "wer", "value": 0.13347099680871036, "name": "Test (Nynorsk) WER"}, {"type": "cer", "value": 0.04537322093454329, "name": "Test (Nynorsk) CER"}]}]}]}
|
NbAiLab/wav2vec2-xlsr-1B-NPSC-NN
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nn-NO"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xlsr-1B-NPSC-NN
========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the NBAILAB/NPSC - 16K\_MP3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4562
* Wer: 0.1531
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 6e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# XLS-R-300M-LM - Norwegian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Norwegian [NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) dataset.
### Scores without Language Model
Without a language model, it achieves the following results on the NPSC evaluation set:
- WER: 0.2110
- CER: 0.0622
### Scores with Language Model
A 5-gram KenLM was added to boost the model's performance. The language model was built on a corpus consisting mainly of online newspapers, public reports, and Wikipedia data. With the language model, the scores improve to the following values:
- WER: 0.1540
- CER: 0.0548
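A minimal transcription sketch; the audio path is a placeholder, and the KenLM boost only applies if the published repository bundles the decoder files (otherwise the scores without the language model apply):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/wav2vec2-xlsr-300M-NPSC-LM",
)
# Placeholder path; the acoustic model expects 16 kHz Norwegian speech.
print(asr("sample.wav")["text"])
```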
## Team
The model is developed by Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen (names in alphabetical order).
## Model description
This current version is based on checkpoint 8500 of [NbAiLab/wav2vec2-xlsr-300M-NPSC-OH](https://huggingface.co/NbAiLab/wav2vec2-xlsr-300M-NPSC-OH).
## Intended uses & limitations
Demo version only. The model will be updated later this week.
## Training and evaluation data
The model is trained and evaluated on [NPSC](https://huggingface.co/datasets/NbAiLab/NPSC). Unfortunately, there is no Norwegian test data in Common Voice, so the model is currently evaluated only on the NPSC validation set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0 (But interrupted after 8500 steps, approx 6 epochs)
- mixed_precision_training: Native AMP
|
{"language": ["nb-NO"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", false, "nb-NO", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["NbAiLab/NPSC"], "model-index": [{"name": "XLS-R-300M-LM - Norwegian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "NPSC", "type": "NbAiLab/NPSC"}, "metrics": [{"type": "wer", "value": 15.4, "name": "Eval WER"}, {"type": "cer", "value": 5.48, "name": "Eval CER"}]}]}]}
|
NbAiLab/wav2vec2-xlsr-300M-NPSC-LM
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"nb-NO"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# XLS-R-300M-LM - Norwegian
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Norwegian NPSC dataset.
### Scores without Language Model
Without a language model, it achieves the following results on the NPSC evaluation set:
- WER: 0.2110
- CER: 0.0622
### Scores with Language Model
A 5-gram KenLM was added to boost the model's performance. The language model was built on a corpus consisting mainly of online newspapers, public reports, and Wikipedia data. With the language model, the scores improve to the following values:
- WER: 0.1540
- CER: 0.0548
## Team
The model is developed by Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen (names in alphabetical order).
## Model description
This current version is based on checkpoint 8500 of NbAiLab/wav2vec2-xlsr-300M-NPSC-OH.
## Intended uses & limitations
Demo version only. The model will be updated later this week.
## Training and evaluation data
The model is trained and evaluated on NPSC. Unfortunately, there is no Norwegian test data in Common Voice, so the model is currently evaluated only on the NPSC validation set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0 (But interrupted after 8500 steps, approx 6 epochs)
- mixed_precision_training: Native AMP
|
[
"# XLS-R-300M-LM - Norwegian\r\n\r\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Norwegian NPSC dataset.",
"### Scores without Language Model\r\nWithout using a language model, it achieves the following scores on the NPSC Eval set\r\nIt achieves the following results on the evaluation set without a language model:\r\n- WER: 0.2110\r\n- CER: 0.0622",
"### Scores with Language Model\r\nA 5-gram KenLM was added to boost the models performance. The language model was created on a corpus mainly consisting of online newspapers, public reports and Wikipedia data. After this we are getting these values.\r\n\r\n- WER: 0.1540\r\n- CER: 0.0548",
"## Team\r\nThe model is developed by Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen. Name in alphabetic order.",
"## Model description\r\nThis current version is based on checkpoint 8500 of NbAiLab/wav2vec2-xlsr-300M-NPSC-OH.",
"## Intended uses & limitations\r\nDemo version only. The model will be updated later this week.",
"## Training and evaluation data\r\nThe model is trained and evaluated on NPSC. Unfortunately there is no Norwegian test data in Common Voice, and currently the model is only evaluated on the validation set of NPSC..",
"## Training procedure",
"### Training hyperparameters\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 7.5e-05\r\n- train_batch_size: 8\r\n- eval_batch_size: 8\r\n- seed: 42\r\n- gradient_accumulation_steps: 4\r\n- total_train_batch_size: 32\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- lr_scheduler_warmup_steps: 2000\r\n- num_epochs: 30.0 (But interrupted after 8500 steps, approx 6 epochs)\r\n- mixed_precision_training: Native AMP"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #dataset-NbAiLab/NPSC #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# XLS-R-300M-LM - Norwegian\r\n\r\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Norwegian NPSC dataset.",
"### Scores without Language Model\r\nWithout using a language model, it achieves the following scores on the NPSC Eval set\r\nIt achieves the following results on the evaluation set without a language model:\r\n- WER: 0.2110\r\n- CER: 0.0622",
"### Scores with Language Model\r\nA 5-gram KenLM was added to boost the models performance. The language model was created on a corpus mainly consisting of online newspapers, public reports and Wikipedia data. After this we are getting these values.\r\n\r\n- WER: 0.1540\r\n- CER: 0.0548",
"## Team\r\nThe model is developed by Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen. Name in alphabetic order.",
"## Model description\r\nThis current version is based on checkpoint 8500 of NbAiLab/wav2vec2-xlsr-300M-NPSC-OH.",
"## Intended uses & limitations\r\nDemo version only. The model will be updated later this week.",
"## Training and evaluation data\r\nThe model is trained and evaluated on NPSC. Unfortunately there is no Norwegian test data in Common Voice, and currently the model is only evaluated on the validation set of NPSC..",
"## Training procedure",
"### Training hyperparameters\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 7.5e-05\r\n- train_batch_size: 8\r\n- eval_batch_size: 8\r\n- seed: 42\r\n- gradient_accumulation_steps: 4\r\n- total_train_batch_size: 32\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- lr_scheduler_warmup_steps: 2000\r\n- num_epochs: 30.0 (But interrupted after 8500 steps, approx 6 epochs)\r\n- mixed_precision_training: Native AMP"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-300M-NPSC-OH
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1692
- Wer: 0.1663
## Model description
More information needed
## Intended uses & limitations
More information needed
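A minimal inference sketch using the lower-level CTC API; the file name is a placeholder, and the 16 kHz resampling follows the 16K_MP3 training data:

```python
import torch
import librosa
from transformers import AutoModelForCTC, AutoProcessor

model_id = "NbAiLab/wav2vec2-xlsr-300M-NPSC-OH"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; resample to 16 kHz to match the training data.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```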
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1638 | 0.66 | 500 | 3.0686 | 1.0 |
| 2.9311 | 1.31 | 1000 | 2.9208 | 1.0 |
| 2.4175 | 1.97 | 1500 | 1.5009 | 0.9049 |
| 1.4442 | 2.63 | 2000 | 0.4426 | 0.3783 |
| 1.2624 | 3.28 | 2500 | 0.3193 | 0.2998 |
| 1.1889 | 3.94 | 3000 | 0.2867 | 0.2630 |
| 1.1315 | 4.6 | 3500 | 0.2566 | 0.2444 |
| 1.0864 | 5.26 | 4000 | 0.2368 | 0.2294 |
| 1.093 | 5.91 | 4500 | 0.2240 | 0.2151 |
| 1.0368 | 6.57 | 5000 | 0.2117 | 0.2056 |
| 1.0178 | 7.23 | 5500 | 0.2020 | 0.1954 |
| 1.0035 | 7.88 | 6000 | 0.2005 | 0.1924 |
| 0.9759 | 8.54 | 6500 | 0.1971 | 0.1863 |
| 0.9795 | 9.2 | 7000 | 0.1892 | 0.1812 |
| 0.9601 | 9.85 | 7500 | 0.1863 | 0.1795 |
| 0.9673 | 10.51 | 8000 | 0.1809 | 0.1761 |
| 0.9233 | 11.17 | 8500 | 0.1818 | 0.1755 |
| 0.9382 | 11.83 | 9000 | 0.1767 | 0.1741 |
| 0.9242 | 12.48 | 9500 | 0.1743 | 0.1703 |
| 0.9703 | 13.14 | 10000 | 0.1711 | 0.1711 |
| 0.9139 | 13.8 | 10500 | 0.1718 | 0.1672 |
| 0.9073 | 14.45 | 11000 | 0.1700 | 0.1665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer"], "model-index": [{"name": "wav2vec2-xlsr-300M-NPSC-OH", "results": []}]}
|
NbAiLab/wav2vec2-xlsr-300M-NPSC-OH
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xlsr-300M-NPSC-OH
==========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the NBAILAB/NPSC - 16K\_MP3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1692
* Wer: 0.1663
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 13
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 15.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-npsc-oh
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 48K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2106
- Wer: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1093 | 2.61 | 1000 | 0.2572 | 0.9293 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
{"license": "cc0-1.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer"], "datasets": ["npsc"], "model-index": [{"name": "xls-npsc-oh", "results": []}]}
|
NbAiLab/xls-npsc-oh
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"dataset:npsc",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #dataset-npsc #license-cc0-1.0 #endpoints_compatible #region-us
|
xls-npsc-oh
===========
This model is a fine-tuned version of KBLab/wav2vec2-large-voxrex on the NBAILAB/NPSC - 48K\_MP3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2106
* Wer: 0.8586
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 5.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #dataset-npsc #license-cc0-1.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-npsc
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 48K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5006
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer"], "datasets": ["npsc"], "model-index": [{"name": "xls-npsc", "results": []}]}
|
NbAiLab/xls-npsc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"dataset:npsc",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #dataset-npsc #license-apache-2.0 #endpoints_compatible #region-us
|
# xls-npsc
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the NBAILAB/NPSC - 48K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5006
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
[
"# xls-npsc\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the NBAILAB/NPSC - 48K_MP3 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.5006\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.1.dev0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #NbAiLab/NPSC #generated_from_trainer #dataset-npsc #license-apache-2.0 #endpoints_compatible #region-us \n",
"# xls-npsc\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the NBAILAB/NPSC - 48K_MP3 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.5006\n- Wer: 1.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 10.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.1.dev0\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Harry Potter
|
{"tags": ["conversational"]}
|
Necrozma/harrypotterbot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter
|
[
"# Harry potter"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry potter"
] |
text-generation
|
transformers
|
not for use...
technical data
|
{"language": ["ru"], "widget": [{"text": "\u0421\u043c\u0435\u0440\u0442\u0438 \u043d\u0435\u0442, "}]}
|
Nehc/adpatres
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
not for use...
technical data
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
Started from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on Howard Phillips Lovecraft texts (Russian).
At the moment, only 1 epoch has been trained (the perplexity is still falling).
Work in progress...
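A minimal generation sketch, using the prompt from the card's widget; the sampling settings are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Nehc/gpt2_lovecraft_ru")

# Prompt taken from the widget; generation settings are illustrative.
out = generator("Немыслимо, ", max_length=100, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```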
|
{"language": ["ru"], "metrics": [{"loss": 3.3}, {"perplexity": 25.7528}], "widget": [{"text": "\u041d\u0435\u043c\u044b\u0441\u043b\u0438\u043c\u043e, "}]}
|
Nehc/gpt2_lovecraft_ru
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Started from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on Howard Phillips Lovecraft texts (Russian).
At the moment, only 1 epoch has been trained (the perplexity is still falling).
Work in progress...
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
Started from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on the Bible & preaching texts (Russian).
At the moment, only 1 epoch has been trained, with a sequence length of 1650.
Work in progress...
|
{"language": ["ru"], "metrics": [{"loss": 3.3}, {"perplexity": 25.7528}], "widget": [{"text": "\u0411\u043e\u0433, \u044d\u0442\u043e "}]}
|
Nehc/gpt2_priest_ru
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Started from sberbank-ai/rugpt3small_based_on_gpt2 and fine-tuned on the Bible & preaching texts (Russian).
At the moment, only 1 epoch has been trained, with a sequence length of 1650.
Work in progress...
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Zhongli DialoGPT Model
|
{"tags": ["conversational"]}
|
Nekoism/Zhongli-Beta
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Zhongli DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
image-classification
|
transformers
|
# sea_mammals
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### blue whale

#### dolphin

#### orca whale

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
Neto71/sea_mammals
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# sea_mammals
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### blue whale
!blue whale
#### dolphin
!dolphin
#### orca whale
!orca whale
|
[
"# sea_mammals\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### blue whale\n\n!blue whale",
"#### dolphin\n\n!dolphin",
"#### orca whale\n\n!orca whale"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# sea_mammals\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### blue whale\n\n!blue whale",
"#### dolphin\n\n!dolphin",
"#### orca whale\n\n!orca whale"
] |
question-answering
|
transformers
|
# BERT-Small CORD-19 fine-tuned on SQuAD 2.0
[bert-small-cord19 model](https://huggingface.co/NeuML/bert-small-cord19) fine-tuned on SQuAD 2.0
## Building the model
```bash
python run_squad.py \
    --model_type bert \
    --model_name_or_path bert-small-cord19 \
    --do_train \
    --do_eval \
    --do_lower_case \
    --version_2_with_negative \
    --train_file train-v2.0.json \
    --predict_file dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --learning_rate 3e-5 \
    --num_train_epochs 3.0 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir bert-small-cord19-squad2 \
    --save_steps 0 \
    --threads 8 \
    --overwrite_cache \
    --overwrite_output_dir
```
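A usage sketch in the same style as the sibling CORD-19 QA card; the question and context below are illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="NeuML/bert-small-cord19-squad2",
    tokenizer="NeuML/bert-small-cord19-squad2",
)

result = qa({
    "question": "How long can the virus survive on surfaces?",
    "context": "The virus can survive on surfaces for up to 72 hours such as plastic and stainless steel.",
})
print(result)
```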
|
{}
|
NeuML/bert-small-cord19-squad2
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #safetensors #bert #question-answering #endpoints_compatible #has_space #region-us
|
# BERT-Small CORD-19 fine-tuned on SQuAD 2.0
bert-small-cord19 model fine-tuned on SQuAD 2.0
## Building the model
'''bash
python run_squad.py
--model_type bert
--model_name_or_path bert-small-cord19
--do_train
--do_eval
--do_lower_case
--version_2_with_negative
--train_file train-v2.0.json
--predict_file dev-v2.0.json
--per_gpu_train_batch_size 8
--learning_rate 3e-5
--num_train_epochs 3.0
--max_seq_length 384
--doc_stride 128
--output_dir bert-small-cord19-squad2
--save_steps 0
--threads 8
--overwrite_cache
--overwrite_output_dir
|
[
"# BERT-Small CORD-19 fine-tuned on SQuAD 2.0\n\nbert-small-cord19 model fine-tuned on SQuAD 2.0",
"## Building the model\n\n'''bash\npython run_squad.py\n --model_type bert\n --model_name_or_path bert-small-cord19\n --do_train\n --do_eval\n --do_lower_case\n --version_2_with_negative\n --train_file train-v2.0.json\n --predict_file dev-v2.0.json\n --per_gpu_train_batch_size 8\n --learning_rate 3e-5\n --num_train_epochs 3.0\n --max_seq_length 384\n --doc_stride 128\n --output_dir bert-small-cord19-squad2\n --save_steps 0\n --threads 8\n --overwrite_cache\n --overwrite_output_dir"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #question-answering #endpoints_compatible #has_space #region-us \n",
"# BERT-Small CORD-19 fine-tuned on SQuAD 2.0\n\nbert-small-cord19 model fine-tuned on SQuAD 2.0",
"## Building the model\n\n'''bash\npython run_squad.py\n --model_type bert\n --model_name_or_path bert-small-cord19\n --do_train\n --do_eval\n --do_lower_case\n --version_2_with_negative\n --train_file train-v2.0.json\n --predict_file dev-v2.0.json\n --per_gpu_train_batch_size 8\n --learning_rate 3e-5\n --num_train_epochs 3.0\n --max_seq_length 384\n --doc_stride 128\n --output_dir bert-small-cord19-squad2\n --save_steps 0\n --threads 8\n --overwrite_cache\n --overwrite_output_dir"
] |
fill-mask
|
transformers
|
# BERT-Small fine-tuned on CORD-19 dataset
[BERT L6_H-512_A-8 model](https://huggingface.co/google/bert_uncased_L-6_H-512_A-8) fine-tuned on the [CORD-19 dataset](https://www.semanticscholar.org/cord19).
## CORD-19 data subset
The training data for this dataset is stored as a [Kaggle dataset](https://www.kaggle.com/davidmezzetti/cord19-qa?select=cord19.txt). The training
data is a subset of the full corpus, focusing on high-quality, study-design detected articles.
## Building the model
```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-6_H-512_A-8 \
    --do_train \
    --mlm \
    --line_by_line \
    --block_size 512 \
    --train_data_file cord19.txt \
    --per_gpu_train_batch_size 4 \
    --learning_rate 3e-5 \
    --num_train_epochs 3.0 \
    --output_dir bert-small-cord19 \
    --save_steps 0 \
    --overwrite_output_dir
```
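A minimal fill-mask sketch; the example sentence is illustrative:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="NeuML/bert-small-cord19")

# [MASK] is the BERT mask token; the sentence is only an example.
for pred in fill("The incubation period of the virus is about five [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```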
|
{}
|
NeuML/bert-small-cord19
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# BERT-Small fine-tuned on CORD-19 dataset
BERT L6_H-512_A-8 model fine-tuned on the CORD-19 dataset.
## CORD-19 data subset
The training data for this dataset is stored as a Kaggle dataset. The training
data is a subset of the full corpus, focusing on high-quality, study-design detected articles.
## Building the model
'''bash
python run_language_modeling.py
--model_type bert
--model_name_or_path google/bert_uncased_L-6_H-512_A-8
--do_train
--mlm
--line_by_line
--block_size 512
--train_data_file URL
--per_gpu_train_batch_size 4
--learning_rate 3e-5
--num_train_epochs 3.0
--output_dir bert-small-cord19
--save_steps 0
--overwrite_output_dir
|
[
"# BERT-Small fine-tuned on CORD-19 dataset\n\nBERT L6_H-512_A-8 model fine-tuned on the CORD-19 dataset.",
"## CORD-19 data subset\nThe training data for this dataset is stored as a Kaggle dataset. The training\ndata is a subset of the full corpus, focusing on high-quality, study-design detected articles.",
"## Building the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-6_H-512_A-8\n --do_train\n --mlm\n --line_by_line\n --block_size 512\n --train_data_file URL\n --per_gpu_train_batch_size 4\n --learning_rate 3e-5\n --num_train_epochs 3.0\n --output_dir bert-small-cord19\n --save_steps 0\n --overwrite_output_dir"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT-Small fine-tuned on CORD-19 dataset\n\nBERT L6_H-512_A-8 model fine-tuned on the CORD-19 dataset.",
"## CORD-19 data subset\nThe training data for this dataset is stored as a Kaggle dataset. The training\ndata is a subset of the full corpus, focusing on high-quality, study-design detected articles.",
"## Building the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-6_H-512_A-8\n --do_train\n --mlm\n --line_by_line\n --block_size 512\n --train_data_file URL\n --per_gpu_train_batch_size 4\n --learning_rate 3e-5\n --num_train_epochs 3.0\n --output_dir bert-small-cord19\n --save_steps 0\n --overwrite_output_dir"
] |
question-answering
|
transformers
|
# BERT-Small fine-tuned on CORD-19 QA dataset
[bert-small-cord19-squad model](https://huggingface.co/NeuML/bert-small-cord19-squad2) fine-tuned on the [CORD-19 QA dataset](https://www.kaggle.com/davidmezzetti/cord19-qa?select=cord19-qa.json).
## CORD-19 QA dataset
The CORD-19 QA dataset is a SQuAD 2.0 formatted list of question, context, answer combinations covering the [CORD-19 dataset](https://www.semanticscholar.org/cord19).
## Building the model
```bash
python run_squad.py \
--model_type bert \
--model_name_or_path bert-small-cord19-squad \
--do_train \
--do_lower_case \
--version_2_with_negative \
--train_file cord19-qa.json \
--per_gpu_train_batch_size 8 \
--learning_rate 5e-5 \
--num_train_epochs 10.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir bert-small-cord19qa \
--save_steps 0 \
--threads 8 \
--overwrite_cache \
--overwrite_output_dir
```
## Testing the model
Example usage below:
```python
from transformers import pipeline
qa = pipeline(
"question-answering",
model="NeuML/bert-small-cord19qa",
tokenizer="NeuML/bert-small-cord19qa"
)
qa({
"question": "What is the median incubation period?",
"context": "The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day"
})
qa({
"question": "What is the incubation period range?",
"context": "The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day"
})
qa({
"question": "What type of surfaces does it persist?",
"context": "The virus can survive on surfaces for up to 72 hours such as plastic and stainless steel ."
})
```
```json
{"score": 0.5970273583242793, "start": 32, "end": 38, "answer": "5 days"}
{"score": 0.999555868193891, "start": 39, "end": 56, "answer": "(range: 4-7 days)"}
{"score": 0.9992726505196998, "start": 61, "end": 88, "answer": "plastic and stainless steel"}
```
|
{}
|
NeuML/bert-small-cord19qa
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #question-answering #endpoints_compatible #has_space #region-us
|
# BERT-Small fine-tuned on CORD-19 QA dataset
bert-small-cord19-squad model fine-tuned on the CORD-19 QA dataset.
## CORD-19 QA dataset
The CORD-19 QA dataset is a SQuAD 2.0 formatted list of question, context, answer combinations covering the CORD-19 dataset.
## Building the model
## Testing the model
Example usage below:
|
[
"# BERT-Small fine-tuned on CORD-19 QA dataset\n\nbert-small-cord19-squad model fine-tuned on the CORD-19 QA dataset.",
"## CORD-19 QA dataset\nThe CORD-19 QA dataset is a SQuAD 2.0 formatted list of question, context, answer combinations covering the CORD-19 dataset.",
"## Building the model",
"## Testing the model\n\nExample usage below:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #question-answering #endpoints_compatible #has_space #region-us \n",
"# BERT-Small fine-tuned on CORD-19 QA dataset\n\nbert-small-cord19-squad model fine-tuned on the CORD-19 QA dataset.",
"## CORD-19 QA dataset\nThe CORD-19 QA dataset is a SQuAD 2.0 formatted list of question, context, answer combinations covering the CORD-19 dataset.",
"## Building the model",
"## Testing the model\n\nExample usage below:"
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 24135330
- CO2 Emissions (in grams): 155.8470724053265
## Validation Metrics
- Loss: 1.369327425956726
- Rouge1: 52.6656
- Rouge2: 30.5879
- RougeL: 40.1268
- RougeLsum: 47.4438
- Gen Len: 75.4625
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Neuralearn/autonlp-Summarization-AutoNLP-24135330
```
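The same Inference API call sketched in Python with `requests`; the endpoint URL is copied from the cURL command above and the API key is a placeholder:

```python
import requests

# Endpoint copied from the cURL example above; replace the placeholder key with a real token.
API_URL = "https://api-inference.huggingface.co/Neuralearn/autonlp-Summarization-AutoNLP-24135330"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
```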
|
{"language": "unk", "tags": "autonlp", "datasets": ["Neuralearn/autonlp-data-Summarization-AutoNLP"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 155.8470724053265}
|
Neuralearn/autonlp-Summarization-AutoNLP-24135330
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:Neuralearn/autonlp-data-Summarization-AutoNLP",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Neuralearn/autonlp-data-Summarization-AutoNLP #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 24135330
- CO2 Emissions (in grams): 155.8470724053265
## Validation Metrics
- Loss: 1.369327425956726
- Rouge1: 52.6656
- Rouge2: 30.5879
- RougeL: 40.1268
- RougeLsum: 47.4438
- Gen Len: 75.4625
## Usage
You can use cURL to access this model:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 24135330\n- CO2 Emissions (in grams): 155.8470724053265",
"## Validation Metrics\n\n- Loss: 1.369327425956726\n- Rouge1: 52.6656\n- Rouge2: 30.5879\n- RougeL: 40.1268\n- RougeLsum: 47.4438\n- Gen Len: 75.4625",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Neuralearn/autonlp-data-Summarization-AutoNLP #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 24135330\n- CO2 Emissions (in grams): 155.8470724053265",
"## Validation Metrics\n\n- Loss: 1.369327425956726\n- Rouge1: 52.6656\n- Rouge2: 30.5879\n- RougeL: 40.1268\n- RougeLsum: 47.4438\n- Gen Len: 75.4625",
"## Usage\n\nYou can use cURL to access this model:"
] |
text2text-generation
|
transformers
|
# Test
Hf T5: -95.86687088012695
MTF T5: -67.8558578491211
|
{"tags": ["t5-new-failed"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-base-sh
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5: -95.86687088012695
MTF T5: -67.8558578491211
|
[
"# Test\nHf T5: -95.86687088012695\nMTF T5: -67.8558578491211"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: -95.86687088012695\nMTF T5: -67.8558578491211"
] |
text2text-generation
|
transformers
|
# Test
Hf T5:
MTF T5: -80.44100952148438
|
{"tags": ["t5-new-hf-not-loaded"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-base-skv
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5:
MTF T5: -80.44100952148438
|
[
"# Test\nHf T5: \nMTF T5: -80.44100952148438"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: \nMTF T5: -80.44100952148438"
] |
text2text-generation
|
transformers
|
# Test
Hf T5: -110.35000801086426
MTF T5: -57.58127975463867
|
{"tags": ["t5-new-failed"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-large-sh
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5: -110.35000801086426
MTF T5: -57.58127975463867
|
[
"# Test\nHf T5: -110.35000801086426\nMTF T5: -57.58127975463867"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: -110.35000801086426\nMTF T5: -57.58127975463867"
] |
text2text-generation
|
transformers
|
# Test
Hf T5:
MTF T5: -59.432472229003906
|
{"tags": ["t5-new-hf-not-loaded"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-large-skv
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5:
MTF T5: -59.432472229003906
|
[
"# Test\nHf T5: \nMTF T5: -59.432472229003906"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: \nMTF T5: -59.432472229003906"
] |
text2text-generation
|
transformers
|
# Test
Hf T5: -146.39734268188477
MTF T5: -72.12132263183594
|
{"tags": ["t5-new-failed"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-small-sh
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5: -146.39734268188477
MTF T5: -72.12132263183594
|
[
"# Test\nHf T5: -146.39734268188477\nMTF T5: -72.12132263183594"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: -146.39734268188477\nMTF T5: -72.12132263183594"
] |
text2text-generation
|
transformers
|
# Test
Hf T5:
MTF T5: -277.564697265625
|
{"tags": ["t5-new-hf-not-loaded"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-small-shkv
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5:
MTF T5: -277.564697265625
|
[
"# Test\nHf T5: \nMTF T5: -277.564697265625"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: \nMTF T5: -277.564697265625"
] |
text2text-generation
|
transformers
|
# Test
Hf T5: -149.6728801727295
MTF T5: -74.4166259765625
|
{"tags": ["t5-new-failed"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-tiny-sh
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5: -149.6728801727295
MTF T5: -74.4166259765625
|
[
"# Test\nHf T5: -149.6728801727295\nMTF T5: -74.4166259765625"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: -149.6728801727295\nMTF T5: -74.4166259765625"
] |
text2text-generation
|
transformers
|
# Test
Hf T5:
MTF T5: -138.18275451660156
|
{"tags": ["t5-new-hf-not-loaded"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-tiny-skv
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5:
MTF T5: -138.18275451660156
|
[
"# Test\nHf T5: \nMTF T5: -138.18275451660156"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: \nMTF T5: -138.18275451660156"
] |
text2text-generation
|
transformers
|
# Test
Hf T5: -118.6875057220459
MTF T5: -76.85459899902344
|
{"tags": ["t5-new-failed"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-xl-sh
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5: -118.6875057220459
MTF T5: -76.85459899902344
|
[
"# Test\nHf T5: -118.6875057220459\nMTF T5: -76.85459899902344"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-failed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: -118.6875057220459\nMTF T5: -76.85459899902344"
] |
text2text-generation
|
transformers
|
# Test
Hf T5:
MTF T5: -66.05513000488281
|
{"tags": ["t5-new-hf-not-loaded"]}
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-xl-skv
| null |
[
"transformers",
"t5",
"text2text-generation",
"t5-new-hf-not-loaded",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test
Hf T5:
MTF T5: -66.05513000488281
|
[
"# Test\nHf T5: \nMTF T5: -66.05513000488281"
] |
[
"TAGS\n#transformers #t5 #text2text-generation #t5-new-hf-not-loaded #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test\nHf T5: \nMTF T5: -66.05513000488281"
] |
text-classification
|
transformers
|
# xlm-r-finetuned-toxic-political-tweets-es
This model is based on the pre-trained model [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was fine-tuned on a dataset of tweets from members of the [Spanish Congress of the Deputies](https://www.congreso.es/) annotated regarding the level of political toxicity they generate.
### Inputs
The model has been trained on the text of Spanish tweets authored by politicians in 2021, so this is the input expected and its performance can degrade when applied to texts from other domains.
### Outputs
The model predicts 2 signals of political toxicity:
* Toxic: the tweet has at least some degree of toxicity.
* Very Toxic: the tweet has a strong degree of toxicity.
A value between 0 and 1 is predicted for each signal.
### Intended uses & limitations
The model was created to be used as a toxicity detector for Spanish tweets from Spanish Congress Deputies. For any other intended use, for instance toxicity detection on film reviews, the results won't be reliable, and you should look for another model built for that specific purpose.
### How to use
The model can be used directly with a text-classification pipeline:
```python
>>> from transformers import pipeline
>>> text = "Es usted un auténtico impresentable, su señoría."
>>> pipe = pipeline("text-classification", model="Newtral/xlm-r-finetuned-toxic-political-tweets-es")
>>> pipe(text, return_all_scores=True)
[[{'label': 'toxic', 'score': 0.92560875415802},
{'label': 'very toxic', 'score': 0.8310967683792114}]]
```
### Training procedure
The pre-trained model was fine-tuned for sequence classification using the following hyperparameters, which were selected from a validation set:
* Batch size = 32
* Learning rate = 2e-5
* Epochs = 5
* Max length = 64
The optimizer used was AdamW and the loss optimized was binary cross-entropy with class weights proportional to the class imbalance.
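An illustrative sketch of the weighted loss described above; the class-weight values are placeholders, since the actual imbalance ratios are not given:

```python
import torch

# Two logits per tweet: [toxic, very toxic].
# pos_weight > 1 up-weights the rarer positive class; the ratios here are placeholders.
pos_weight = torch.tensor([3.0, 8.0])
loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(32, 2)                      # a batch of model outputs (batch size 32, as above)
labels = torch.randint(0, 2, (32, 2)).float()    # multi-label targets for the two signals
print(loss_fn(logits, labels).item())
```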
|
{"language": "es", "license": "apache-2.0"}
|
Newtral/xlm-r-finetuned-toxic-political-tweets-es
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #text-classification #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# xlm-r-finetuned-toxic-political-tweets-es
This model is based on the pre-trained model xlm-roberta-base and was fine-tuned on a dataset of tweets from members of the Spanish Congress of the Deputies annotated regarding the level of political toxicity they generate.
### Inputs
The model has been trained on the text of Spanish tweets authored by politicians in 2021, so this is the input expected and its performance can degrade when applied to texts from other domains.
### Outputs
The model predicts 2 signals of political toxicity:
* Toxic: the tweet has at least some degree of toxicity.
* Very Toxic: the tweet has a strong degree of toxicity.
A value between 0 and 1 is predicted for each signal.
### Intended uses & limitations
The model was created to be used as a toxicity detector for Spanish tweets from Spanish Congress Deputies. For any other intended use, for instance toxicity detection on film reviews, the results won't be reliable, and you should look for another model built for that specific purpose.
### How to use
The model can be used directly with a text-classification pipeline:
### Training procedure
The pre-trained model was fine-tuned for sequence classification using the following hyperparameters, which were selected from a validation set:
* Batch size = 32
* Learning rate = 2e-5
* Epochs = 5
* Max length = 64
The optimizer used was AdamW and the loss optimized was binary cross-entropy with class weights proportional to the class imbalance.
|
[
"# xlm-r-finetuned-toxic-political-tweets-es\n\nThis model is based on the pre-trained model xlm-roberta-base and was fine-tuned on a dataset of tweets from members of the Spanish Congress of the Deputies annotated regarding the level of political toxicity they generate.",
"### Inputs\n\nThe model has been trained on the text of Spanish tweets authored by politicians in 2021, so this is the input expected and its performance can degrade when applied to texts from other domains.",
"### Outputs\n\nThe model predicts 2 signals of political toxicity:\n\n* Toxic: the tweet has at least some degree of toxicity.\n* Very Toxic: the tweet has a strong degree of toxicity.\n\nA value between 0 and 1 is predicted for each signal.",
"### Intended uses & limitations \n\nThe model was created to be used as a toxicity detector of spanish tweets from Spanish Congress Deputies. If the intended use is other one, for instance; toxicity detection on films reviews, the results won't be reliable and you might look for another model with this concrete purpose.",
"### How to use\n\nThe model can be used directly with a text-classification pipeline:",
"### Training procedure\nThe pre-trained model was fine-tuned for sequence classification using the following hyperparameters, which were selected from a validation set:\n\n* Batch size = 32\n* Learning rate = 2e-5\n* Epochs = 5\n* Max length = 64\n\nThe optimizer used was AdamW and the loss optimized was binary cross-entropy with class weights proportional to the class imbalance."
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# xlm-r-finetuned-toxic-political-tweets-es\n\nThis model is based on the pre-trained model xlm-roberta-base and was fine-tuned on a dataset of tweets from members of the Spanish Congress of the Deputies annotated regarding the level of political toxicity they generate.",
"### Inputs\n\nThe model has been trained on the text of Spanish tweets authored by politicians in 2021, so this is the input expected and its performance can degrade when applied to texts from other domains.",
"### Outputs\n\nThe model predicts 2 signals of political toxicity:\n\n* Toxic: the tweet has at least some degree of toxicity.\n* Very Toxic: the tweet has a strong degree of toxicity.\n\nA value between 0 and 1 is predicted for each signal.",
"### Intended uses & limitations \n\nThe model was created to be used as a toxicity detector of spanish tweets from Spanish Congress Deputies. If the intended use is other one, for instance; toxicity detection on films reviews, the results won't be reliable and you might look for another model with this concrete purpose.",
"### How to use\n\nThe model can be used directly with a text-classification pipeline:",
"### Training procedure\nThe pre-trained model was fine-tuned for sequence classification using the following hyperparameters, which were selected from a validation set:\n\n* Batch size = 32\n* Learning rate = 2e-5\n* Epochs = 5\n* Max length = 64\n\nThe optimizer used was AdamW and the loss optimized was binary cross-entropy with class weights proportional to the class imbalance."
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## labels
- 0: Object
- 1: Recycle
- 2: Non-Recycle
# vit-base-patch16-224
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1510
- Accuracy: 0.9443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
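As a rough guide, the hyperparameters listed above map onto a `TrainingArguments` configuration along these lines (a sketch only; the original training script is not included in this card, and the output directory name is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-recycle",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=60,
    per_device_eval_batch_size=60,
    seed=42,
    gradient_accumulation_steps=4,  # 60 * 4 = 240 effective train batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```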
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1438 | 1.0 | 150 | 0.1645 | 0.9353 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "vit-base-patch16-224", "results": []}]}
|
NhatPham/vit-base-patch16-224-recylce-ft
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## labels
=========
* 0: Object
* 1: Recycle
* 2: Non-Recycle
vit-base-patch16-224
====================
This model is a fine-tuned version of google/vit-base-patch16-224 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1510
* Accuracy: 0.9443
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 60
* eval\_batch\_size: 60
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 240
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"## labels\n=========\n\n\n* 0: Object\n* 1: Recycle\n* 2: Non-Recycle\n\n\nvit-base-patch16-224\n====================\n\n\nThis model is a fine-tuned version of google/vit-base-patch16-224 on the None dataset.\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.1510\n* Accuracy: 0.9443\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 60\n* eval\\_batch\\_size: 60\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 240\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## labels\n=========\n\n\n* 0: Object\n* 1: Recycle\n* 2: Non-Recycle\n\n\nvit-base-patch16-224\n====================\n\n\nThis model is a fine-tuned version of google/vit-base-patch16-224 on the None dataset.\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.1510\n* Accuracy: 0.9443\n\n\nModel description\n-----------------\n\n\nMore information needed\n\n\nIntended uses & limitations\n---------------------------\n\n\nMore information needed\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 60\n* eval\\_batch\\_size: 60\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 240\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1258
- Accuracy: 0.9793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1561 | 1.0 | 399 | 1.1127 | 0.6643 |
| 0.4803 | 2.0 | 798 | 0.3547 | 0.9687 |
| 0.2855 | 3.0 | 1197 | 0.1663 | 0.9763 |
| 0.1987 | 4.0 | 1596 | 0.1258 | 0.9793 |
| 0.2097 | 5.0 | 1995 | 0.1171 | 0.9791 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
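## Usage example
Keyword-spotting inference with this checkpoint could look like the following (a minimal sketch: the checkpoint id is taken from this repository, while the 16 kHz mono input file is an illustrative assumption; decoding a file path requires ffmpeg):
```python
from transformers import pipeline

classifier = pipeline("audio-classification",
                      model="NhatPham/wav2vec2-base-finetuned-ks")

# "sample.wav" is a placeholder for a 16 kHz mono recording.
predictions = classifier("sample.wav", top_k=3)
print(predictions)  # e.g. [{"label": "...", "score": ...}, ...]
```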
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-finetuned-ks", "results": []}]}
|
NhatPham/wav2vec2-base-finetuned-ks
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-finetuned-ks
==========================
This model is a fine-tuned version of facebook/wav2vec2-base on the superb dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1258
* Accuracy: 0.9793
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #dataset-superb #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xlsr-53-french
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on French using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fr", split="test[:20%]")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the French test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fr")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.31 %
## Training
V1 of the Common Voice `train`, `validation` datasets were used for training.
## Testing
20% of V6.1 of the Common Voice `Test` dataset was used for testing.
|
{"language": "fr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-French by Nhut DOAN NGUYEN", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fr", "type": "common_voice", "args": "fr"}, "metrics": [{"type": "wer", "value": "xx.xx", "name": "Test WER"}]}]}]}
|
Nhut/wav2vec2-large-xlsr-french
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# wav2vec2-large-xlsr-53-french
Fine-tuned facebook/wav2vec2-large-xlsr-53 on French using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the French test data of Common Voice.
Test Result: 29.31 %
## Training
V1 of the Common Voice 'train', 'validation' datasets were used for training.
## Testing
20% of V6.1 of the Common Voice 'Test' dataset was used for testing.
|
[
"# wav2vec2-large-xlsr-53-french \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in French using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the French test data of Common Voice.\n\n\n\nTest Result: 29.31 %",
"## Training\n\nV1 of the Common Voice 'train', 'validation' datasets were used for training.",
"## Testing\n\n20% of V6.1 of the Common Voice 'Test' dataset were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# wav2vec2-large-xlsr-53-french \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in French using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the French test data of Common Voice.\n\n\n\nTest Result: 29.31 %",
"## Training\n\nV1 of the Common Voice 'train', 'validation' datasets were used for training.",
"## Testing\n\n20% of V6.1 of the Common Voice 'Test' dataset were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VIVOS](https://ailab.hcmus.edu.vn/vivos).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
ENCODER = {
"ia ": "iê ",
"ìa ": "iề ",
"ía ": "iế ",
"ỉa ": "iể ",
"ĩa ": "iễ ",
"ịa ": "iệ ",
"ya ": "yê ",
"ỳa ": "yề ",
"ýa ": "yế ",
"ỷa ": "yể ",
"ỹa ": "yễ ",
"ỵa ": "yệ ",
"ua ": "uô ",
"ùa ": "uồ ",
"úa ": "uố ",
"ủa ": "uổ ",
"ũa ": "uỗ ",
"ụa ": "uộ ",
"ưa ": "ươ ",
"ừa ": "ườ ",
"ứa ": "ướ ",
"ửa ": "ưở ",
"ữa ": "ưỡ ",
"ựa ": "ượ ",
"ke": "ce",
"kè": "cè",
"ké": "cé",
"kẻ": "cẻ",
"kẽ": "cẽ",
"kẹ": "cẹ",
"kê": "cê",
"kề": "cề",
"kế": "cế",
"kể": "cể",
"kễ": "cễ",
"kệ": "cệ",
"ki": "ci",
"kì": "cì",
"kí": "cí",
"kỉ": "cỉ",
"kĩ": "cĩ",
"kị": "cị",
"ky": "cy",
"kỳ": "cỳ",
"ký": "cý",
"kỷ": "cỷ",
"kỹ": "cỹ",
"kỵ": "cỵ",
"ghe": "ge",
"ghè": "gè",
"ghé": "gé",
"ghẻ": "gẻ",
"ghẽ": "gẽ",
"ghẹ": "gẹ",
"ghê": "gê",
"ghề": "gề",
"ghế": "gế",
"ghể": "gể",
"ghễ": "gễ",
"ghệ": "gệ",
"ngh": "\x80",
"uyê": "\x96",
"uyề": "\x97",
"uyế": "\x98",
"uyể": "\x99",
"uyễ": "\x9a",
"uyệ": "\x9b",
"ng": "\x81",
"ch": "\x82",
"gh": "\x83",
"nh": "\x84",
"gi": "\x85",
"ph": "\x86",
"kh": "\x87",
"th": "\x88",
"tr": "\x89",
"uy": "\x8a",
"uỳ": "\x8b",
"uý": "\x8c",
"uỷ": "\x8d",
"uỹ": "\x8e",
"uỵ": "\x8f",
"iê": "\x90",
"iề": "\x91",
"iế": "\x92",
"iể": "\x93",
"iễ": "\x94",
"iệ": "\x95",
"uô": "\x9c",
"uồ": "\x9d",
"uố": "\x9e",
"uổ": "\x9f",
"uỗ": "\xa0",
"uộ": "\xa1",
"ươ": "\xa2",
"ườ": "\xa3",
"ướ": "\xa4",
"ưở": "\xa5",
"ưỡ": "\xa6",
"ượ": "\xa7",
}
def decode_string(x):
for k, v in list(reversed(list(ENCODER.items()))):
x = x.replace(v, k)
return x
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", [decode_string(x) for x in processor.batch_decode(predicted_ids)])
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
ENCODER = {
"ia ": "iê ",
"ìa ": "iề ",
"ía ": "iế ",
"ỉa ": "iể ",
"ĩa ": "iễ ",
"ịa ": "iệ ",
"ya ": "yê ",
"ỳa ": "yề ",
"ýa ": "yế ",
"ỷa ": "yể ",
"ỹa ": "yễ ",
"ỵa ": "yệ ",
"ua ": "uô ",
"ùa ": "uồ ",
"úa ": "uố ",
"ủa ": "uổ ",
"ũa ": "uỗ ",
"ụa ": "uộ ",
"ưa ": "ươ ",
"ừa ": "ườ ",
"ứa ": "ướ ",
"ửa ": "ưở ",
"ữa ": "ưỡ ",
"ựa ": "ượ ",
"ke": "ce",
"kè": "cè",
"ké": "cé",
"kẻ": "cẻ",
"kẽ": "cẽ",
"kẹ": "cẹ",
"kê": "cê",
"kề": "cề",
"kế": "cế",
"kể": "cể",
"kễ": "cễ",
"kệ": "cệ",
"ki": "ci",
"kì": "cì",
"kí": "cí",
"kỉ": "cỉ",
"kĩ": "cĩ",
"kị": "cị",
"ky": "cy",
"kỳ": "cỳ",
"ký": "cý",
"kỷ": "cỷ",
"kỹ": "cỹ",
"kỵ": "cỵ",
"ghe": "ge",
"ghè": "gè",
"ghé": "gé",
"ghẻ": "gẻ",
"ghẽ": "gẽ",
"ghẹ": "gẹ",
"ghê": "gê",
"ghề": "gề",
"ghế": "gế",
"ghể": "gể",
"ghễ": "gễ",
"ghệ": "gệ",
"ngh": "\x80",
"uyê": "\x96",
"uyề": "\x97",
"uyế": "\x98",
"uyể": "\x99",
"uyễ": "\x9a",
"uyệ": "\x9b",
"ng": "\x81",
"ch": "\x82",
"gh": "\x83",
"nh": "\x84",
"gi": "\x85",
"ph": "\x86",
"kh": "\x87",
"th": "\x88",
"tr": "\x89",
"uy": "\x8a",
"uỳ": "\x8b",
"uý": "\x8c",
"uỷ": "\x8d",
"uỹ": "\x8e",
"uỵ": "\x8f",
"iê": "\x90",
"iề": "\x91",
"iế": "\x92",
"iể": "\x93",
"iễ": "\x94",
"iệ": "\x95",
"uô": "\x9c",
"uồ": "\x9d",
"uố": "\x9e",
"uổ": "\x9f",
"uỗ": "\xa0",
"uộ": "\xa1",
"ươ": "\xa2",
"ườ": "\xa3",
"ướ": "\xa4",
"ưở": "\xa5",
"ưỡ": "\xa6",
"ượ": "\xa7",
}
def decode_string(x):
for k, v in list(reversed(list(ENCODER.items()))):
x = x.replace(v, k)
return x
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model.to("cuda")
chars_to_ignore_regex = '[\\\+\@\ǀ\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
# decode_string: We replace the encoded letter with the initial letters
batch["pred_strings"] = [decode_string(x) for x in batch["pred_strings"]]
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 49.59 %
## Training
The Common Voice `train` and `validation` splits, together with the FOSD and VIVOS datasets, were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/11pP4uVJj4SYZTzGjlCUtOHywlhYqs0cPx)
|
{"language": "vi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", {"FOSD": "https://data.mendeley.com/datasets/k9sxg2twv4/4"}, {"VIVOS": "https://ailab.hcmus.edu.vn/vivos"}], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Vietnamese by Nhut", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice vi", "type": "common_voice", "args": "vi"}, "metrics": [{"type": "wer", "value": 49.59, "name": "Test WER"}]}]}]}
|
Nhut/wav2vec2-large-xlsr-vietnamese
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"vi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"vi"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #vi #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Vietnamese using the Common Voice, FOSD and VIVOS.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
Test Result: 49.59 %
## Training
The Common Voice 'train', 'validation' and FOSD datasets and VIVOS datasets were used for training as well.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Vietnamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Vietnamese using the Common Voice, FOSD and VIVOS.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice.\n\nTest Result: 49.59 %",
"## Training\nThe Common Voice 'train', 'validation' and FOSD datasets and VIVOS datasets were used for training as well.\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #vi #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Vietnamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Vietnamese using the Common Voice, FOSD and VIVOS.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice.\n\nTest Result: 49.59 %",
"## Training\nThe Common Voice 'train', 'validation' and FOSD datasets and VIVOS datasets were used for training as well.\nThe script used for training can be found here"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
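A minimal chat sketch using the standard DialoGPT generation pattern (the checkpoint id is this repository's id; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NibrasShami/DialopGPT-small-HarryPotter")
model = AutoModelForCausalLM.from_pretrained("NibrasShami/DialopGPT-small-HarryPotter")

# Single-turn exchange: append the EOS token and let the model continue.
user_input = "Hello, who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200,
                           pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0],
                         skip_special_tokens=True)
print(reply)
```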
|
{"tags": ["conversational"]}
|
NibrasShami/DialopGPT-small-HarryPotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
null | null |
This project was created for use with wav2vec.
|
{}
|
Niciu/testtest1
| null |
[
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#region-us
|
This project was created for use with wav2vec.
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# My Awesome Laffy
|
{"tags": ["conversational"]}
|
NickCavarretta/DialoGPT-small-laffy
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Laffy
|
[
"# My Awesome Laffy"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Laffy"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4519
- Wer: 0.3375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4351 | 4.0 | 500 | 1.2740 | 0.8259 |
| 0.5828 | 8.0 | 1000 | 0.4276 | 0.4403 |
| 0.2274 | 12.0 | 1500 | 0.4646 | 0.3739 |
| 0.135 | 16.0 | 2000 | 0.4320 | 0.3662 |
| 0.0962 | 20.0 | 2500 | 0.4831 | 0.3607 |
| 0.0719 | 24.0 | 3000 | 0.4506 | 0.3463 |
| 0.0556 | 28.0 | 3500 | 0.4519 | 0.3375 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
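## Usage example
Transcription with this checkpoint could look like the following (a minimal sketch: the checkpoint id is taken from this repository, and a 16 kHz mono recording named `sample.wav` is an illustrative assumption):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("NicoGrageda/wav2vec2-base-timit-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("NicoGrageda/wav2vec2-base-timit-demo-colab")

# Load a 16 kHz mono clip (resample first if your audio uses another rate).
speech, sample_rate = torchaudio.load("sample.wav")
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```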
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
NicoGrageda/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4519
* Wer: 0.3375
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Squi
|
{"tags": ["conversational"]}
|
Nihwy/DialoSqui
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Squi
|
[
"# Squi"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Squi"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
NikhilKrishna/DialoGPT-medium-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-classification
|
transformers
|
# **-- EMODa --**
## BERT model for Danish multi-class classification of emotions
Classifies a Danish sentence into one of 6 different emotions:
| Danish emotion | Ekman's emotion |
| ----- | ----- |
| 😞 **Afsky** | Disgust |
| 😨 **Frygt** | Fear |
| 😄 **Glæde** | Joy |
| 😱 **Overraskelse** | Surprise |
| 😢 **Tristhed** | Sadness |
| 😠 **Vrede** | Anger |
# How to use
```python
from transformers import pipeline
model_path = "NikolajMunch/danish-emotion-classification"
classifier = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
prediction = classifier("Jeg er godt nok ked af at mine SMS'er er slettet")
print(prediction)
# [{'label': 'Tristhed', 'score': 0.9725030660629272}]
```
or
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("NikolajMunch/danish-emotion-classification")
model = AutoModelForSequenceClassification.from_pretrained("NikolajMunch/danish-emotion-classification")
```
|
{"language": ["da"], "tags": ["sentiment", "emotion", "danish"], "widget": [{"text": "Hold da op! Kan det virkelig passe?"}]}
|
NikolajMunch/danish-emotion-classification
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment",
"emotion",
"danish",
"da",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"da"
] |
TAGS
#transformers #pytorch #bert #text-classification #sentiment #emotion #danish #da #autotrain_compatible #endpoints_compatible #region-us
|
-- EMODa --
===========
BERT-model for danish multi-class classification of emotions
------------------------------------------------------------
Classifies a danish sentence into one of 6 different emotions:
How to use
==========
or
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #sentiment #emotion #danish #da #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# AOT-GAN CelebA-HQ
AOT-GAN is a model that can be used for image in-painting. The CelebA-HQ checkpoint is trained on synthetic human faces, which should make it suitable for touching up and restoring portraits.
This model was generated using [AOT-GAN-for-Inpainting](https://github.com/researchmm/AOT-GAN-for-Inpainting), cited as
```
@inproceedings{yan2021agg,
author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting},
booktitle = {Arxiv},
pages={-},
year = {2020}
}
```
## Dataset
The CelebA-HQ dataset was created with this codebase: https://github.com/tkarras/progressive_growing_of_gans, owned by NVidia and licensed under Creative Commons Attribution-NonCommercial 4.0 International.
|
{"tags": ["face-recognition", "face-generation", "face-segmentation", "generative-adversarial-network"], "datasets": ["celeba-hq"], "metrics": ["L1", "PSNR", "SSIM", "FID"]}
|
NimaBoscarino/aot-gan-celebahq
| null |
[
"transformers",
"pytorch",
"face-recognition",
"face-generation",
"face-segmentation",
"generative-adversarial-network",
"dataset:celeba-hq",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #face-recognition #face-generation #face-segmentation #generative-adversarial-network #dataset-celeba-hq #endpoints_compatible #has_space #region-us
|
# AOT-GAN CelebA-HQ
AOT-GAN is a model that can be used for image in-painting. The CelebA-HQ checkpoint is trained on synthetic human faces, which should make it suitable for touching up and restoring portraits.
This model was generated using AOT-GAN-for-Inpainting, cited as
## Dataset
The CelebA-HQ dataset was created with this codebase: URL owned by NVidia and licensed under Creative Commons Attribution-NonCommercial 4.0 International.
|
[
"# AOT-GAN CelebA-HQ\nAOT-GAN is a model that can be used for image in-painting. The CelebA-HQ checkpoint is trained on synthetic human faces, which should make it suitable for touching up and restoring portraits.\n\nThis model was generated using AOT-GAN-for-Inpainting, cited as",
"## Dataset\nThe CelebA-HQ dataset was created with this codebase: URL owned by NVidia and licensed under Creative Commons Attribution-NonCommercial 4.0 International."
] |
[
"TAGS\n#transformers #pytorch #face-recognition #face-generation #face-segmentation #generative-adversarial-network #dataset-celeba-hq #endpoints_compatible #has_space #region-us \n",
"# AOT-GAN CelebA-HQ\nAOT-GAN is a model that can be used for image in-painting. The CelebA-HQ checkpoint is trained on synthetic human faces, which should make it suitable for touching up and restoring portraits.\n\nThis model was generated using AOT-GAN-for-Inpainting, cited as",
"## Dataset\nThe CelebA-HQ dataset was created with this codebase: URL owned by NVidia and licensed under Creative Commons Attribution-NonCommercial 4.0 International."
] |
null |
transformers
|
# AOT-GAN Places2
AOT-GAN is a model that can be used for image in-painting. The Places2 checkpoint is trained on a dataset which should make it suitable for touching up and restoring images of landscapes, buildings, and other natural and developed places.
This model was generated using [AOT-GAN-for-Inpainting](https://github.com/researchmm/AOT-GAN-for-Inpainting), cited as
```
@inproceedings{yan2021agg,
author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting},
booktitle = {Arxiv},
pages={-},
year = {2020}
}
```
## Dataset
The Places2 dataset can be found here: http://places2.csail.mit.edu/download.html
|
{"tags": ["scene-recognition", "scene-generation", "generative-adversarial-network"], "datasets": ["places2"], "metrics": ["L1", "PSNR", "SSIM", "FID"]}
|
NimaBoscarino/aot-gan-places2
| null |
[
"transformers",
"pytorch",
"scene-recognition",
"scene-generation",
"generative-adversarial-network",
"dataset:places2",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #scene-recognition #scene-generation #generative-adversarial-network #dataset-places2 #endpoints_compatible #has_space #region-us
|
# AOT-GAN Places2
AOT-GAN is a model that can be used for image in-painting. The Places2 checkpoint is trained on a dataset which should make it suitable for touching up and restoring images of landscapes, buildings, and other natural and developed places.
This model was generated using AOT-GAN-for-Inpainting, cited as
## Dataset
The Places2 dataset can be found here: URL
|
[
"# AOT-GAN Places2\nAOT-GAN is a model that can be used for image in-painting. The Places2 checkpoint is trained on a dataset which should make it suitable for touching up and restoring images of landscapes, buildings, and other natural and developed places.\n\nThis model was generated using AOT-GAN-for-Inpainting, cited as",
"## Dataset\nThe Places2 dataset can be found here: URL"
] |
[
"TAGS\n#transformers #pytorch #scene-recognition #scene-generation #generative-adversarial-network #dataset-places2 #endpoints_compatible #has_space #region-us \n",
"# AOT-GAN Places2\nAOT-GAN is a model that can be used for image in-painting. The Places2 checkpoint is trained on a dataset which should make it suitable for touching up and restoring images of landscapes, buildings, and other natural and developed places.\n\nThis model was generated using AOT-GAN-for-Inpainting, cited as",
"## Dataset\nThe Places2 dataset can be found here: URL"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
Ninja5000/DialoGPT-medium-HarryPotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-generation
|
transformers
|
# DialoGPT-medium-TWEWYJoshua
Another not-so-good AI chatbot: Joshua from the game TWEWY (The World Ends With You).
* Credits to Lynn's Devlab who made the amazing tutorial.
|
{"tags": ["conversational"]}
|
Ninja5000/DialoGPT-medium-TWEWYJoshua
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# DialoGPT-medium-TWEWYJoshua
Another not-so-good AI chatbot. Joshua from the game TWEWY(The World Ends With You).
* Credits to Lynn's Devlab who made the amazing tutorial.
|
[
"# DialoGPT-medium-TWEWYJoshua\n\nAnother not-so-good AI chatbot. Joshua from the game TWEWY(The World Ends With You).\n\n* Credits to Lynn's Devlab who made the amazing tutorial."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# DialoGPT-medium-TWEWYJoshua\n\nAnother not-so-good AI chatbot. Joshua from the game TWEWY(The World Ends With You).\n\n* Credits to Lynn's Devlab who made the amazing tutorial."
] |
text-generation
|
transformers
|
# LOTR DialoGPT Model
|
{"tags": ["conversational"]}
|
Niphredil/DialoGPT-small-lotr
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# LOTR DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
### Rick DialoGPT Model
|
{"tags": ["conversational"]}
|
Nisarg2701/DialoGPT-medium-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### Rick DialoGPT Model
|
[
"### Rick DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Rick DialoGPT Model"
] |
null |
transformers
|
# ELECTRA
## Introduction
**ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
Electra-base-vn was trained on more than 148 GB of text with a maximum sequence length of 512.
You can download the TensorFlow version at [Electra base TF version](https://drive.google.com/drive/folders/1hN0LiOlMfNDDQVo2bgEYHd03I-xXDLVr?usp=sharing)
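A minimal feature-extraction sketch (this assumes the repository ships a tokenizer compatible with `AutoTokenizer`; the example sentence is illustrative):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NlpHUST/electra-base-vn")
model = AutoModel.from_pretrained("NlpHUST/electra-base-vn")

text = "Hà Nội là thủ đô của Việt Nam."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```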
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
{}
|
NlpHUST/electra-base-vn
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"1406.2661"
] |
[] |
TAGS
#transformers #pytorch #electra #pretraining #arxiv-1406.2661 #endpoints_compatible #region-us
|
# ELECTRA
## Introduction
ELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
Electra-base-vn was trained on more than 148 GB of text with a maximum sequence length of 512.
You can download the TensorFlow version at Electra base TF version
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha282@URL).
|
[
"# ELECTRA",
"## Introduction\nELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nElectra-base-vn is trained on more 148gb text with max length 512.\n\nYou can download tensorflow version at Electra base TF version",
"### Contact information\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #arxiv-1406.2661 #endpoints_compatible #region-us \n",
"# ELECTRA",
"## Introduction\nELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.\n\nElectra-base-vn is trained on more 148gb text with max length 512.\n\nYou can download tensorflow version at Electra base TF version",
"### Contact information\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
text-generation
|
transformers
|
# GPT-Neo-small for Vietnamese
First GPT for Vietnamese
## Model Description
GPT-Neo-vi-small is a transformer model designed using EleutherAI's replication of the GPT-3 architecture.
## Training data
GPT-Neo-vi-small was trained on a large-scale Vietnamese news dataset collected from news websites specifically for training this model.
### How to use
This example generates a different sequence each time it's run:
```py
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
model = GPTNeoForCausalLM.from_pretrained("NlpHUST/gpt-neo-vi-small")
tokenizer = GPT2Tokenizer.from_pretrained("NlpHUST/gpt-neo-vi-small")
prompt = "Ngay sau Tết Nguyên đán Tân Sửu, hiện tượng giá đất tăng tại nhiều địa phương. Thị trường nhộn nhịp, tạo ra những cơn sóng sốt đất khó tin khiến bộ ngành, địa phương đưa cảnh báo."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=1.0, max_length=1024)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
{"language": "vi", "tags": ["vi", "vietnamese", "text-generation", "gpt3", "lm", "nlp"], "datasets": ["vietnamese"], "widget": [{"text": "Vi\u1ec7t Nam l\u00e0 qu\u1ed1c gia c\u00f3"}], "pipeline_tag": "text-generation"}
|
NlpHUST/gpt-neo-vi-small
| null |
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"vi",
"vietnamese",
"gpt3",
"lm",
"nlp",
"dataset:vietnamese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[
"vi"
] |
TAGS
#transformers #pytorch #gpt_neo #text-generation #vi #vietnamese #gpt3 #lm #nlp #dataset-vietnamese #autotrain_compatible #endpoints_compatible #region-us
|
# GPT-Neo-small for vietnamese
First GPT for vietnamese
## Model Description
GPT-Neo-vi-small is a transformer model designed using EleutherAI's replication of the GPT-3 architecture.
## Training data
GPT-Neo-vi-small was trained on a large-scale Vietnamese news dataset collected from news websites specifically for training this model.
### How to use
This example generates a different sequence each time it's run:
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha282@URL).
|
[
"# GPT-Neo-small for vietnamese\nFirst GPT for vietnamese",
"## Model Description\nGPT-Neo-vi-small is a transformer model designed using EleutherAI's replication of the GPT-3 architecture.",
"## Training data\nGPT-Neo-vi-smal was trained on the News datasets, a large scale dataset created by from News Website for the purpose of training this model.",
"### How to use\nhis example generates a different sequence each time it's run:",
"### Contact information\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
[
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #vi #vietnamese #gpt3 #lm #nlp #dataset-vietnamese #autotrain_compatible #endpoints_compatible #region-us \n",
"# GPT-Neo-small for vietnamese\nFirst GPT for vietnamese",
"## Model Description\nGPT-Neo-vi-small is a transformer model designed using EleutherAI's replication of the GPT-3 architecture.",
"## Training data\nGPT-Neo-vi-smal was trained on the News datasets, a large scale dataset created by from News Website for the purpose of training this model.",
"### How to use\nhis example generates a different sequence each time it's run:",
"### Contact information\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
text2text-generation
|
transformers
|
# T5-EN-VI-BASE:Pretraining Text-To-Text Transfer Transformer for English Vietnamese Translation
# Dataset
The *IWSLT'15 English-Vietnamese* data is used from [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/).
For all experiments the corpus was split into training, development and test set:
| Data set | Sentences | Download
| ----------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------
| Training | 133,317 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz`
| Development | 1,553 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz`
| Test | 1,268 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz`
## Results
The results on test set.
| Model | BLEU (Beam Search)
| ----------------------------------------------------------------------------------------------------- | ------------------
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30
| Sequence-to-sequence model with attention | 26.10
| Neural Phrase-based Machine Translation [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69
| Neural Phrase-based Machine Translation + LM [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased)
| t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased)
| t5-en-vi-base (pretraining, without training data) | **29.66** (cased) / **30.37** (uncased)
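For reference, corpus-level BLEU with and without lowercasing can be computed with `sacrebleu`; the snippet below is only a sketch with placeholder data, not the authors' evaluation script:

```python
# Sketch with placeholder data -- not the authors' evaluation script.
import sacrebleu

hypotheses = ["Tôi yêu ngôn ngữ học ."]    # system translations, one string per sentence
references = [["Tôi yêu ngôn ngữ học ."]]  # one reference stream, parallel to the hypotheses

cased = sacrebleu.corpus_bleu(hypotheses, references)
uncased = sacrebleu.corpus_bleu(hypotheses, references, lowercase=True)
print(f"BLEU cased={cased.score:.2f}, uncased={uncased.score:.2f}")
```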
#### Example Using
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Select a GPU if one is available, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Load the pretrained model and tokenizer.
# Note: the original example loads the small checkpoint; substitute
# "NlpHUST/t5-en-vi-base" here to use the model described by this card.
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-small")
model.to(device)

src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)

# Translate with beam search.
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=128,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```
#### Output
```text
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
{}
|
NlpHUST/t5-en-vi-base
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"arxiv:1706.05565",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"1706.05565"
] |
[] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #arxiv-1706.05565 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
T5-EN-VI-BASE:Pretraining Text-To-Text Transfer Transformer for English Vietnamese Translation
==============================================================================================
Dataset
=======
The *IWSLT'15 English-Vietnamese* data is used from Stanford NLP group.
For all experiments the corpus was split into training, development and test set:
Data set: Training, Sentences: 133,317, Download: via GitHub or located in 'data/URL'
Data set: Development, Sentences: 1,553, Download: via GitHub or located in 'data/URL'
Data set: Test, Sentences: 1,268, Download: via GitHub or located in 'data/URL'
Results
-------
The results on test set.
#### Example Using
#### Output
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha282@URL).
|
[
"#### Example Using",
"#### Output",
"### Contact information\n\n\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #arxiv-1706.05565 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"#### Example Using",
"#### Output",
"### Contact information\n\n\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
text2text-generation
|
transformers
|
# T5-EN-VI-SMALL: Pretraining Text-To-Text Transfer Transformer for English-Vietnamese Translation
# Dataset
The *IWSLT'15 English-Vietnamese* data from the [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/) is used.
For all experiments the corpus was split into training, development and test set:
| Data set | Sentences | Download
| ----------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------
| Training | 133,317 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz`
| Development | 1,553 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz`
| Test | 1,268 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz`
## Results
The results on test set.
| Model | BLEU (Beam Search)
| ----------------------------------------------------------------------------------------------------- | ------------------
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30
| Sequence-to-sequence model with attention | 26.10
| Neural Phrase-based Machine Translation [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69
| Neural Phrase-based Machine Translation + LM [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased)
| t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased)
#### Example Using
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Select a GPU if one is available, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Load the pretrained translation model and tokenizer.
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-small")
model.to(device)

src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)

# Translate with beam search.
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=128,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```
#### Output
```text
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
{}
|
NlpHUST/t5-en-vi-small
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"arxiv:1706.05565",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[
"1706.05565"
] |
[] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #arxiv-1706.05565 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
T5-EN-VI-SMALL:Pretraining Text-To-Text Transfer Transformer for English Vietnamese Translation
===============================================================================================
Dataset
=======
The *IWSLT'15 English-Vietnamese* data is used from Stanford NLP group.
For all experiments the corpus was split into training, development and test set:
Data set: Training, Sentences: 133,317, Download: via GitHub or located in 'data/URL'
Data set: Development, Sentences: 1,553, Download: via GitHub or located in 'data/URL'
Data set: Test, Sentences: 1,268, Download: via GitHub or located in 'data/URL'
Results
-------
The results on test set.
#### Example Using
#### Output
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha282@URL).
|
[
"#### Example Using",
"#### Output",
"### Contact information\n\n\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #arxiv-1706.05565 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"#### Example Using",
"#### Output",
"### Contact information\n\n\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
text2text-generation
|
transformers
|
# T5-SMALL-SUMMARIZATION: Pretraining Text-To-Text Transfer Transformer for Vietnamese Text Summarization
#### Example Using
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Select a GPU if one is available, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Load the pretrained summarization model and tokenizer.
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-small-vi-summarization")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-small-vi-summarization")
model.to(device)

# The input article, written as one long string via implicit concatenation.
src = ("Theo BHXH Việt Nam, nhiều doanh nghiệp vẫn chỉ đóng BHXH cho người lao động theo mức lương. "
       "Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác. "
       "BHXH Việt Nam vừa có báo cáo về tình hình thực hiện chính sách BHXH thời gian qua. "
       "Theo đó, tình trạng nợ, trốn đóng BHXH, BHTN vẫn xảy ra ở hầu hết các tỉnh, thành. "
       "Thống kê tới ngày 31/12/2020, tổng số nợ BHXH, BHYT, BHTN là hơn 13.500 tỷ đồng, "
       "chiếm 3,35 % số phải thu, trong đó: Số nợ BHXH bắt buộc là hơn 8.600 tỷ đồng, "
       "nợ BHTN là 335 tỷ đồng. Liên quan tới tiền lương đóng BHXH, báo cáo của "
       "BHXH Việt Nam cho thấy: Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, "
       "bảng lương để đóng BHXH bằng mức thấp nhất. Tức là bằng mức lương tối "
       "thiểu vùng, cộng thêm 7 % đối với lao động đã qua đào tạo nghề và cộng "
       "thêm 5 % hoặc 7 % đối với lao động làm nghề hoặc công việc nặng nhọc, "
       "độc hại, nguy hiểm, đặc biệt nặng nhọc độc hại và nguy hiểm. Đối với "
       "lao động giữ chức vụ, khoảng 80 % doanh nghiệp đã xây dựng thang, "
       "bảng lương cụ thể theo chức danh. Đơn cử như với chức vụ giám đốc "
       "sản xuất, giám đốc điều hành, trưởng phòng. Còn lại các doanh nghiệp "
       "xây dựng đối với lao động giữ chức vụ theo thang lương, bảng lương "
       "chuyên môn nghiệp vụ và bảng phụ cấp chức vụ, phụ cấp trách nhiệm. "
       "Thống kê của BHXH Việt Nam cũng cho thấy, đa số doanh nghiệp đã đăng "
       "ký đóng BHXH cho người lao động theo mức lương mà không có khoản bổ "
       "sung khác. Mặc dù quy định từ ngày 1/1/2018, tiền lương tháng đóng BHXH "
       "gồm mức lương và thêm khoản bổ sung khác.")
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)

# Summarize with beam search.
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=256,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```
#### Output
```text
Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, bảng lương để đóng BHXH bằng mức thấp nhất.
Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác.
Thống kê của BHXH Việt Nam cho thấy, nhiều doanh nghiệp vẫn chỉ đóng BHXH
cho người lao động theo mức lương mà không có khoản bổ sung khác.
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
{}
|
NlpHUST/t5-small-vi-summarization
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# T5-SMALL-SUMMARIZATION :Pretraining Text-To-Text Transfer Transformer for Vietnamese Text Summarization
#### Example Using
#### Output
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha282@URL).
|
[
"# T5-SMALL-SUMMARIZATION :Pretraining Text-To-Text Transfer Transformer for Vietnamese Text Summarization",
"#### Example Using",
"#### Output",
"### Contact information\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# T5-SMALL-SUMMARIZATION :Pretraining Text-To-Text Transfer Transformer for Vietnamese Text Summarization",
"#### Example Using",
"#### Output",
"### Contact information\nFor personal communication related to this project, please contact Nha Nguyen Van (nha282@URL)."
] |
text2text-generation
|
transformers
|
---
language:
- vi
tags:
- t5
- seq2seq
---
# Machine translation for Vietnamese
## Model Description
T5-vi-en-base is a transformer model for Vietnamese-to-English machine translation designed using the T5 architecture.
## Training data
T5-vi-en-base was trained on 4M sentence pairs (English, Vietnamese).
### How to use
```py
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Select a GPU if one is available, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Load the pretrained Vietnamese-to-English model and tokenizer.
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-base")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-base")
model.to(device)

src = "Theo lãnh đạo Sở Y tế, 3 người này không có triệu chứng sốt, ho, khó thở, đã được lấy mẫu xét nghiệm và cách ly tập trung."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)

# Translate with beam search.
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=256,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Expected output:
# According to the head of the Department of Health, the three people had no symptoms of fever, cough, shortness of breath, were taken samples for testing and concentrated quarantine.
```
|
{}
|
NlpHUST/t5-vi-en-base
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
---
language:
- vi
tags:
- t5
- seq2seq
# Machine translation for vietnamese
## Model Description
T5-vi-en-base is a transformer model for vietnamese machine translation designed using T5 architecture.
## Training data
T5-vi-en-base was trained on 4M sentence pairs (english,vietnamese)
### How to use
|
[
"# Machine translation for vietnamese",
"## Model Description\nT5-vi-en-base is a transformer model for vietnamese machine translation designed using T5 architecture.",
"## Training data\nT5-vi-en-base was trained on 4M sentence pairs (english,vietnamese)",
"### How to use"
] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Machine translation for vietnamese",
"## Model Description\nT5-vi-en-base is a transformer model for vietnamese machine translation designed using T5 architecture.",
"## Training data\nT5-vi-en-base was trained on 4M sentence pairs (english,vietnamese)",
"### How to use"
] |
text2text-generation
|
transformers
|
---
language:
- vi
tags:
- t5
- seq2seq
---
# Machine translation for Vietnamese
## Model Description
T5-vi-en-small is a transformer model for Vietnamese-to-English machine translation designed using the T5 architecture.
## Training data
T5-vi-en-small was trained on 4M sentence pairs (English, Vietnamese).
### How to use
```py
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Select a GPU if one is available, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Load the pretrained Vietnamese-to-English model and tokenizer.
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-small")
model.to(device)

src = "Indonesia phỏng đoán nguyên nhân tàu ngầm chở 53 người mất tích bí ẩn"
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)

# Translate with beam search.
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=256,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Expected output:
# Indonesia anticipates the cause of the submarine transporting 53 mysterious missing persons
```
|
{}
|
NlpHUST/t5-vi-en-small
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:04+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
---
language:
- vi
tags:
- t5
- seq2seq
# Machine translation for vietnamese
## Model Description
T5-vi-en-small is a transformer model for vietnamese machine translation designed using T5 architecture.
## Training data
T5-vi-en-small was trained on 4M sentence pairs (english,vietnamese)
### How to use
|
[
"# Machine translation for vietnamese",
"## Model Description\nT5-vi-en-small is a transformer model for vietnamese machine translation designed using T5 architecture.",
"## Training data\nT5-vi-en-small was trained on 4M sentence pairs (english,vietnamese)",
"### How to use"
] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Machine translation for vietnamese",
"## Model Description\nT5-vi-en-small is a transformer model for vietnamese machine translation designed using T5 architecture.",
"## Training data\nT5-vi-en-small was trained on 4M sentence pairs (english,vietnamese)",
"### How to use"
] |