Dataset schema (column types and observed value ranges):

| Column | Type | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-23 18:27:52 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (492 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-23 18:25:26 |
| card | string (length) | 11 | 1.01M |
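The rows that follow are individual model records with this schema. As a minimal sketch (the dataset repository id below is a placeholder, not the real name), such a table could be loaded and filtered with the `datasets` library:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual dataset name before running.
ds = load_dataset("username/hub-model-metadata", split="train")

# Keep only text-classification models with at least 5 downloads.
popular = ds.filter(
    lambda row: row["pipeline_tag"] == "text-classification" and row["downloads"] >= 5
)
print(len(popular), popular[0]["modelId"])
```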
SetFit/distilbert-base-uncased__sst2__train-8-3
SetFit
2022-02-10T07:10:59Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-8-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-8-3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6914 - Accuracy: 0.5195 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6931 | 1.0 | 3 | 0.7039 | 0.25 | | 0.6615 | 2.0 | 6 | 0.7186 | 0.25 | | 0.653 | 3.0 | 9 | 0.7334 | 0.25 | | 0.601 | 4.0 | 12 | 0.7592 | 0.25 | | 0.5555 | 5.0 | 15 | 0.7922 | 0.25 | | 0.4832 | 6.0 | 18 | 0.8179 | 0.25 | | 0.4565 | 7.0 | 21 | 0.8285 | 0.25 | | 0.3996 | 8.0 | 24 | 0.8559 | 0.25 | | 0.3681 | 9.0 | 27 | 0.8586 | 0.5 | | 0.2901 | 10.0 | 30 | 0.8646 | 0.5 | | 0.241 | 11.0 | 33 | 0.8524 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
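The SetFit/distilbert-base-uncased__sst2__train-8-3 card above lists hyperparameters and evaluation results but no inference example. A minimal usage sketch, assuming the checkpoint works with the standard `transformers` text-classification pipeline (label names depend on the repo's config):

```python
from transformers import pipeline

# Hypothetical usage sketch -- not part of the original model card.
classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__sst2__train-8-3",
)
print(classifier("a gripping, beautifully shot film"))
# Output format: [{'label': ..., 'score': ...}]; label names come from the repo's config
```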
SetFit/distilbert-base-uncased__sst2__train-8-2
SetFit
2022-02-10T07:10:08Z
9
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-8-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-8-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Accuracy: 0.4931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7081 | 1.0 | 3 | 0.7031 | 0.25 | | 0.6853 | 2.0 | 6 | 0.7109 | 0.25 | | 0.6696 | 3.0 | 9 | 0.7211 | 0.25 | | 0.6174 | 4.0 | 12 | 0.7407 | 0.25 | | 0.5717 | 5.0 | 15 | 0.7625 | 0.25 | | 0.5096 | 6.0 | 18 | 0.7732 | 0.25 | | 0.488 | 7.0 | 21 | 0.7798 | 0.25 | | 0.4023 | 8.0 | 24 | 0.7981 | 0.25 | | 0.3556 | 9.0 | 27 | 0.8110 | 0.25 | | 0.2714 | 10.0 | 30 | 0.8269 | 0.25 | | 0.2295 | 11.0 | 33 | 0.8276 | 0.25 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-8-0
SetFit
2022-02-10T07:08:27Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-8-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-8-0 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6920 - Accuracy: 0.5189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6916 | 1.0 | 3 | 0.7035 | 0.25 | | 0.6852 | 2.0 | 6 | 0.7139 | 0.25 | | 0.6533 | 3.0 | 9 | 0.7192 | 0.25 | | 0.6211 | 4.0 | 12 | 0.7322 | 0.25 | | 0.5522 | 5.0 | 15 | 0.7561 | 0.25 | | 0.488 | 6.0 | 18 | 0.7883 | 0.25 | | 0.48 | 7.0 | 21 | 0.8224 | 0.25 | | 0.3948 | 8.0 | 24 | 0.8605 | 0.25 | | 0.3478 | 9.0 | 27 | 0.8726 | 0.25 | | 0.2723 | 10.0 | 30 | 0.8885 | 0.25 | | 0.2174 | 11.0 | 33 | 0.8984 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
speech-seq2seq/wav2vec2-2-roberta-large
speech-seq2seq
2022-02-10T06:14:17Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 12.2365 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 6.5774 | 0.28 | 500 | 10.5449 | 1.0 | | 6.706 | 0.56 | 1000 | 9.4411 | 1.0 | | 6.9182 | 0.84 | 1500 | 10.9554 | 1.0 | | 6.7416 | 1.12 | 2000 | 10.0801 | 1.0 | | 6.8778 | 1.4 | 2500 | 9.8569 | 1.0 | | 6.7694 | 1.68 | 3000 | 10.4234 | 1.0 | | 6.7415 | 1.96 | 3500 | 10.6545 | 1.0 | | 6.5997 | 2.24 | 4000 | 10.4268 | 1.0 | | 6.7672 | 2.52 | 4500 | 11.1929 | 1.0 | | 6.5254 | 2.8 | 5000 | 12.2365 | 1.0 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
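The speech-seq2seq card above describes a wav2vec2 encoder / RoBERTa decoder model trained with the Trainer but gives no inference snippet; its reported WER of 1.0 suggests the checkpoint is mainly a training artifact. A rough sketch, assuming the repo also ships the feature extractor and tokenizer files this relies on:

```python
import torch
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

# Hypothetical inference sketch; whether the repo contains the preprocessing files
# assumed here is not stated in the card.
name = "speech-seq2seq/wav2vec2-2-roberta-large"
model = SpeechEncoderDecoderModel.from_pretrained(name)
feature_extractor = AutoFeatureExtractor.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

speech = [0.0] * 16000  # placeholder: replace with a real 16 kHz mono waveform
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
generated_ids = model.generate(inputs.input_values, max_length=50)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```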
speech-seq2seq/wav2vec2-2-bert-large
speech-seq2seq
2022-02-10T06:06:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 6.9670 - Wer: 1.9878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.7599 | 0.28 | 500 | 6.8755 | 1.2551 | | 6.5943 | 0.56 | 1000 | 6.7702 | 1.5878 | | 6.3146 | 0.84 | 1500 | 6.6981 | 1.6627 | | 6.6112 | 1.12 | 2000 | 6.6760 | 1.9853 | | 6.6894 | 1.4 | 2500 | 6.6323 | 1.9376 | | 6.5525 | 1.68 | 3000 | 6.6185 | 1.9383 | | 6.571 | 1.96 | 3500 | 6.6126 | 1.9580 | | 6.3363 | 2.24 | 4000 | 6.7869 | 1.9818 | | 6.5832 | 2.52 | 4500 | 6.9096 | 2.0025 | | 6.3523 | 2.8 | 5000 | 6.9670 | 1.9878 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
fznmhmmd/distilbert-base-uncased-finetuned-cola
fznmhmmd
2022-02-10T04:00:35Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5543972545286807 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8273 - Matthews Correlation: 0.5544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5256 | 1.0 | 535 | 0.5419 | 0.4248 | | 0.3486 | 2.0 | 1070 | 0.5187 | 0.4999 | | 0.2406 | 3.0 | 1605 | 0.6580 | 0.5054 | | 0.1692 | 4.0 | 2140 | 0.7455 | 0.5403 | | 0.1343 | 5.0 | 2675 | 0.8273 | 0.5544 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
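The CoLA card above reports a Matthews correlation of 0.554 on the evaluation set. A minimal sketch of scoring sentences for linguistic acceptability with this checkpoint (the acceptable/unacceptable label mapping is whatever the repo's config defines):

```python
from transformers import pipeline

# Hypothetical usage sketch -- not part of the original model card.
cola = pipeline(
    "text-classification",
    model="fznmhmmd/distilbert-base-uncased-finetuned-cola",
)
print(cola("The book was written by the author."))   # likely judged acceptable
print(cola("Book the author was written by the."))   # likely judged unacceptable
```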
fznmhmmd/bert-base-cased-wikitext2
fznmhmmd
2022-02-10T00:37:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.8575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.0964 | 1.0 | 2346 | 7.0532 | | 6.9055 | 2.0 | 4692 | 6.8710 | | 6.8574 | 3.0 | 7038 | 6.8917 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
Crives/distilbert-base-uncased-finetuned-emotion
Crives
2022-02-09T22:08:11Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9215538311282218 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2175 - Accuracy: 0.9215 - F1: 0.9216 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7814 | 1.0 | 250 | 0.3105 | 0.907 | 0.9046 | | 0.2401 | 2.0 | 500 | 0.2175 | 0.9215 | 0.9216 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
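A minimal usage sketch for the emotion classifier above, assuming the checkpoint works with the standard pipeline and that the label names come from the `emotion` dataset configuration:

```python
from transformers import pipeline

# Hypothetical usage sketch -- not part of the original model card.
emotion = pipeline(
    "text-classification",
    model="Crives/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion label instead of only the top one
)
print(emotion("I can't believe how well this worked, I'm thrilled!"))
```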
philippelaban/summary_loop10
philippelaban
2022-02-09T22:02:12Z
15
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "summarization", "en", "dataset:cnn_dailymail", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - en tags: - summarization license: apache-2.0 datasets: - cnn_dailymail --- # Try out in the Hosted inference API In the right panel, you can try out the model (although it only handles a short sequence length). Enter the document you want to summarize in the panel on the right. # Model Loading The model (based on a GPT2 base architecture) can be loaded in the following way: ``` from transformers import GPT2LMHeadModel, GPT2TokenizerFast model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop10") tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop10") ``` # Example Use ``` document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?" tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda() input_shape = tokenized_document.shape outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True) candidate_sequences = outputs.sequences[:, input_shape[1]:] # Remove the encoded text, keep only the summary candidate_scores = outputs.sequences_scores.tolist() for candidate_tokens, score in zip(candidate_sequences, candidate_scores): summary = tokenizer.decode(candidate_tokens) print("[Score: %.3f] %s" % (score, summary[:summary.index("END")])) ``` # Example output ``` [Score: -0.084] Here's what you need to know about rockfalls [Score: -0.087] Here's what you need to know about these tracks [Score: -0.091] Here's what we know so far about these tracks [Score: -0.101] Here's what you need to know about rockfall ``` # GitHub repo More information, the scoring function, the training script, and an example training log are available in the GitHub repo: https://github.com/CannyLab/summary_loop
philippelaban/summary_loop24
philippelaban
2022-02-09T22:01:38Z
11
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "summarization", "en", "dataset:cnn_dailymail", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - en tags: - summarization license: apache-2.0 datasets: - cnn_dailymail --- # Try out in the Hosted inference API In the right panel, you can try out the model (although it only handles a short sequence length). Enter the document you want to summarize in the panel on the right. # Model Loading The model (based on a GPT2 base architecture) can be loaded in the following way: ``` from transformers import GPT2LMHeadModel, GPT2TokenizerFast model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop24") tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop24") ``` # Example Use ``` document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?" tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda() input_shape = tokenized_document.shape outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True) candidate_sequences = outputs.sequences[:, input_shape[1]:] # Remove the encoded text, keep only the summary candidate_scores = outputs.sequences_scores.tolist() for candidate_tokens, score in zip(candidate_sequences, candidate_scores): summary = tokenizer.decode(candidate_tokens) print("[Score: %.3f] %s" % (score, summary[:summary.index("END")])) ``` # Example output ``` [Score: -0.113] These tracks have been spotted elsewhere in the solar system, including on the red planet, and no one is there to see it, does it leave a trace? Yes, and [Score: -0.119] Now researchers have spotted thousands of tracks on the red planet created by tumbling boulders in Mars, and no one is there to see it, does it leave a trace? [Score: -0.214] Here are answers to those questions posed by scientists investigating the tracks discovered by scientists examining the tracks discovered by scientists exploring the tracks discovered by scientists exploring the tracks discovered by scientists exploring the [Score: -0.388] These are the kinds of questions swirling around whether these tracks exist on Mars, and whether they should be noticed sooner rather than later. Here are some answers: -- The tracks detected ``` # GitHub repo More information, the scoring function, the training script, and an example training log are available in the GitHub repo: https://github.com/CannyLab/summary_loop
SetFit/distilbert-base-uncased__subj__train-8-9
SetFit
2022-02-09T20:34:07Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__subj__train-8-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__subj__train-8-9 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4865 - Accuracy: 0.778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7024 | 1.0 | 3 | 0.6843 | 0.75 | | 0.67 | 2.0 | 6 | 0.6807 | 0.5 | | 0.6371 | 3.0 | 9 | 0.6677 | 0.5 | | 0.585 | 4.0 | 12 | 0.6649 | 0.5 | | 0.5122 | 5.0 | 15 | 0.6707 | 0.5 | | 0.4379 | 6.0 | 18 | 0.6660 | 0.5 | | 0.4035 | 7.0 | 21 | 0.6666 | 0.5 | | 0.323 | 8.0 | 24 | 0.6672 | 0.5 | | 0.2841 | 9.0 | 27 | 0.6534 | 0.5 | | 0.21 | 10.0 | 30 | 0.6456 | 0.5 | | 0.1735 | 11.0 | 33 | 0.6325 | 0.5 | | 0.133 | 12.0 | 36 | 0.6214 | 0.5 | | 0.0986 | 13.0 | 39 | 0.6351 | 0.5 | | 0.081 | 14.0 | 42 | 0.6495 | 0.5 | | 0.0638 | 15.0 | 45 | 0.6671 | 0.5 | | 0.0449 | 16.0 | 48 | 0.7156 | 0.5 | | 0.0399 | 17.0 | 51 | 0.7608 | 0.5 | | 0.0314 | 18.0 | 54 | 0.7796 | 0.5 | | 0.0243 | 19.0 | 57 | 0.7789 | 0.5 | | 0.0227 | 20.0 | 60 | 0.7684 | 0.5 | | 0.0221 | 21.0 | 63 | 0.7628 | 0.5 | | 0.0192 | 22.0 | 66 | 0.7728 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__subj__train-8-8
SetFit
2022-02-09T20:32:49Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__subj__train-8-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__subj__train-8-8 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3160 - Accuracy: 0.8735 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7187 | 1.0 | 3 | 0.6776 | 1.0 | | 0.684 | 2.0 | 6 | 0.6608 | 1.0 | | 0.6532 | 3.0 | 9 | 0.6364 | 1.0 | | 0.5996 | 4.0 | 12 | 0.6119 | 1.0 | | 0.5242 | 5.0 | 15 | 0.5806 | 1.0 | | 0.4612 | 6.0 | 18 | 0.5320 | 1.0 | | 0.4192 | 7.0 | 21 | 0.4714 | 1.0 | | 0.3274 | 8.0 | 24 | 0.4071 | 1.0 | | 0.2871 | 9.0 | 27 | 0.3378 | 1.0 | | 0.2082 | 10.0 | 30 | 0.2822 | 1.0 | | 0.1692 | 11.0 | 33 | 0.2271 | 1.0 | | 0.1242 | 12.0 | 36 | 0.1793 | 1.0 | | 0.0977 | 13.0 | 39 | 0.1417 | 1.0 | | 0.0776 | 14.0 | 42 | 0.1117 | 1.0 | | 0.0631 | 15.0 | 45 | 0.0894 | 1.0 | | 0.0453 | 16.0 | 48 | 0.0733 | 1.0 | | 0.0399 | 17.0 | 51 | 0.0617 | 1.0 | | 0.0333 | 18.0 | 54 | 0.0528 | 1.0 | | 0.0266 | 19.0 | 57 | 0.0454 | 1.0 | | 0.0234 | 20.0 | 60 | 0.0393 | 1.0 | | 0.0223 | 21.0 | 63 | 0.0345 | 1.0 | | 0.0195 | 22.0 | 66 | 0.0309 | 1.0 | | 0.0161 | 23.0 | 69 | 0.0281 | 1.0 | | 0.0167 | 24.0 | 72 | 0.0260 | 1.0 | | 0.0163 | 25.0 | 75 | 0.0242 | 1.0 | | 0.0134 | 26.0 | 78 | 0.0227 | 1.0 | | 0.0128 | 27.0 | 81 | 0.0214 | 1.0 | | 0.0101 | 28.0 | 84 | 0.0204 | 1.0 | | 0.0109 | 29.0 | 87 | 0.0194 | 1.0 | | 0.0112 | 30.0 | 90 | 0.0186 | 1.0 | | 0.0108 | 31.0 | 93 | 0.0179 | 1.0 | | 0.011 | 32.0 | 96 | 0.0174 | 1.0 | | 0.0099 | 33.0 | 99 | 0.0169 | 1.0 | | 0.0083 | 34.0 | 102 | 0.0164 | 1.0 | | 0.0096 | 35.0 | 105 | 0.0160 | 1.0 | | 0.01 | 36.0 | 108 | 0.0156 | 1.0 | | 0.0084 | 37.0 | 111 | 0.0152 | 1.0 | | 0.0089 | 38.0 | 114 | 0.0149 | 1.0 | | 0.0073 | 39.0 | 117 | 0.0146 | 1.0 | | 0.0082 | 40.0 | 120 | 0.0143 | 1.0 | | 0.008 | 41.0 | 123 | 0.0141 | 1.0 | | 0.0093 | 42.0 | 126 | 0.0139 | 1.0 | | 0.0078 | 43.0 | 129 | 0.0138 | 1.0 | | 0.0086 | 44.0 | 132 | 0.0136 | 1.0 | | 0.009 | 45.0 | 135 | 0.0135 | 1.0 | | 0.0072 | 46.0 | 138 | 0.0134 | 1.0 | | 0.0075 | 47.0 | 141 | 0.0133 | 1.0 | | 0.0082 | 48.0 | 144 | 0.0133 | 1.0 | | 0.0068 | 49.0 | 147 | 0.0132 | 1.0 | | 0.0074 | 50.0 | 150 | 0.0132 | 1.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__subj__train-8-5
SetFit
2022-02-09T20:26:29Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__subj__train-8-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__subj__train-8-5 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6927 - Accuracy: 0.506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7102 | 1.0 | 3 | 0.6790 | 0.75 | | 0.6693 | 2.0 | 6 | 0.6831 | 0.75 | | 0.6438 | 3.0 | 9 | 0.6876 | 0.75 | | 0.6047 | 4.0 | 12 | 0.6970 | 0.75 | | 0.547 | 5.0 | 15 | 0.7065 | 0.75 | | 0.4885 | 6.0 | 18 | 0.7114 | 0.75 | | 0.4601 | 7.0 | 21 | 0.7147 | 0.5 | | 0.4017 | 8.0 | 24 | 0.7178 | 0.5 | | 0.3474 | 9.0 | 27 | 0.7145 | 0.5 | | 0.2624 | 10.0 | 30 | 0.7153 | 0.5 | | 0.2175 | 11.0 | 33 | 0.7158 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__subj__train-8-4
SetFit
2022-02-09T20:25:34Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__subj__train-8-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__subj__train-8-4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3305 - Accuracy: 0.8565 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6991 | 1.0 | 3 | 0.6772 | 0.75 | | 0.6707 | 2.0 | 6 | 0.6704 | 0.75 | | 0.6402 | 3.0 | 9 | 0.6608 | 1.0 | | 0.5789 | 4.0 | 12 | 0.6547 | 0.75 | | 0.5211 | 5.0 | 15 | 0.6434 | 0.75 | | 0.454 | 6.0 | 18 | 0.6102 | 1.0 | | 0.4187 | 7.0 | 21 | 0.5701 | 1.0 | | 0.3401 | 8.0 | 24 | 0.5289 | 1.0 | | 0.3107 | 9.0 | 27 | 0.4737 | 1.0 | | 0.2381 | 10.0 | 30 | 0.4255 | 1.0 | | 0.1982 | 11.0 | 33 | 0.3685 | 1.0 | | 0.1631 | 12.0 | 36 | 0.3200 | 1.0 | | 0.1234 | 13.0 | 39 | 0.2798 | 1.0 | | 0.0993 | 14.0 | 42 | 0.2455 | 1.0 | | 0.0781 | 15.0 | 45 | 0.2135 | 1.0 | | 0.0586 | 16.0 | 48 | 0.1891 | 1.0 | | 0.0513 | 17.0 | 51 | 0.1671 | 1.0 | | 0.043 | 18.0 | 54 | 0.1427 | 1.0 | | 0.0307 | 19.0 | 57 | 0.1225 | 1.0 | | 0.0273 | 20.0 | 60 | 0.1060 | 1.0 | | 0.0266 | 21.0 | 63 | 0.0920 | 1.0 | | 0.0233 | 22.0 | 66 | 0.0823 | 1.0 | | 0.0185 | 23.0 | 69 | 0.0751 | 1.0 | | 0.0173 | 24.0 | 72 | 0.0698 | 1.0 | | 0.0172 | 25.0 | 75 | 0.0651 | 1.0 | | 0.0142 | 26.0 | 78 | 0.0613 | 1.0 | | 0.0151 | 27.0 | 81 | 0.0583 | 1.0 | | 0.0117 | 28.0 | 84 | 0.0563 | 1.0 | | 0.0123 | 29.0 | 87 | 0.0546 | 1.0 | | 0.0121 | 30.0 | 90 | 0.0531 | 1.0 | | 0.0123 | 31.0 | 93 | 0.0511 | 1.0 | | 0.0112 | 32.0 | 96 | 0.0496 | 1.0 | | 0.0103 | 33.0 | 99 | 0.0481 | 1.0 | | 0.0086 | 34.0 | 102 | 0.0468 | 1.0 | | 0.0096 | 35.0 | 105 | 0.0457 | 1.0 | | 0.0107 | 36.0 | 108 | 0.0447 | 1.0 | | 0.0095 | 37.0 | 111 | 0.0439 | 1.0 | | 0.0102 | 38.0 | 114 | 0.0429 | 1.0 | | 0.0077 | 39.0 | 117 | 0.0422 | 1.0 | | 0.0092 | 40.0 | 120 | 0.0415 | 1.0 | | 0.0083 | 41.0 | 123 | 0.0409 | 1.0 | | 0.0094 | 42.0 | 126 | 0.0404 | 1.0 | | 0.0084 | 43.0 | 129 | 0.0400 | 1.0 | | 0.0085 | 44.0 | 132 | 0.0396 | 1.0 | | 0.0092 | 45.0 | 135 | 0.0392 | 1.0 | | 0.0076 | 46.0 | 138 | 0.0389 | 1.0 | | 0.0073 | 47.0 | 141 | 0.0388 | 1.0 | | 0.0085 | 48.0 | 144 | 0.0387 | 1.0 | | 0.0071 | 49.0 | 147 | 0.0386 | 1.0 | | 0.0079 | 50.0 | 150 | 0.0386 | 1.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__subj__train-8-0
SetFit
2022-02-09T20:17:24Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__subj__train-8-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__subj__train-8-0 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4440 - Accuracy: 0.789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7163 | 1.0 | 3 | 0.6868 | 0.5 | | 0.6683 | 2.0 | 6 | 0.6804 | 0.75 | | 0.6375 | 3.0 | 9 | 0.6702 | 0.75 | | 0.5997 | 4.0 | 12 | 0.6686 | 0.75 | | 0.5345 | 5.0 | 15 | 0.6720 | 0.75 | | 0.4673 | 6.0 | 18 | 0.6646 | 0.75 | | 0.4214 | 7.0 | 21 | 0.6494 | 0.75 | | 0.3439 | 8.0 | 24 | 0.6313 | 0.75 | | 0.3157 | 9.0 | 27 | 0.6052 | 0.75 | | 0.2329 | 10.0 | 30 | 0.5908 | 0.75 | | 0.1989 | 11.0 | 33 | 0.5768 | 0.75 | | 0.1581 | 12.0 | 36 | 0.5727 | 0.75 | | 0.1257 | 13.0 | 39 | 0.5678 | 0.75 | | 0.1005 | 14.0 | 42 | 0.5518 | 0.75 | | 0.0836 | 15.0 | 45 | 0.5411 | 0.75 | | 0.0611 | 16.0 | 48 | 0.5320 | 0.75 | | 0.0503 | 17.0 | 51 | 0.5299 | 0.75 | | 0.0407 | 18.0 | 54 | 0.5368 | 0.75 | | 0.0332 | 19.0 | 57 | 0.5455 | 0.75 | | 0.0293 | 20.0 | 60 | 0.5525 | 0.75 | | 0.0254 | 21.0 | 63 | 0.5560 | 0.75 | | 0.0231 | 22.0 | 66 | 0.5569 | 0.75 | | 0.0201 | 23.0 | 69 | 0.5572 | 0.75 | | 0.0179 | 24.0 | 72 | 0.5575 | 0.75 | | 0.0184 | 25.0 | 75 | 0.5547 | 0.75 | | 0.0148 | 26.0 | 78 | 0.5493 | 0.75 | | 0.0149 | 27.0 | 81 | 0.5473 | 0.75 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
Maunish/ecomm-sbert
Maunish
2022-02-09T17:47:29Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: apache-2.0 ---
justin871030/bert-base-uncased-goemotions-group-finetuned
justin871030
2022-02-09T17:22:07Z
4
0
transformers
[ "transformers", "pytorch", "bert", "go-emotion", "text-classification", "en", "dataset:go_emotions", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - go-emotion - text-classification - pytorch datasets: - go_emotions metrics: - f1 widget: - text: "Thanks for giving advice to the people who need it! 👌🙏" license: mit --- ## Model Description 1. Based on the uncased BERT pretrained model with a linear output layer. 2. Added several commonly-used emoji and tokens to the special token list of the tokenizer. 3. Applied label smoothing during training. 4. Used weighted loss and focal loss to improve performance on classes that were learned poorly. ## Results Best `Macro F1` result: 70% ## Tutorial Link - [GitHub](https://github.com/justin871030/GoEmotions)
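GoEmotions is a multi-label dataset, so a sketch of inference would apply a sigmoid rather than a softmax to the logits; that this matches the author's exact training setup, and that the checkpoint loads as a standard sequence-classification head, are assumptions:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical multi-label inference sketch; the 0.5 threshold is arbitrary.
name = "justin871030/bert-base-uncased-goemotions-group-finetuned"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Thanks for giving advice to the people who need it!", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Report every label whose probability clears the threshold.
for idx, p in enumerate(probs.tolist()):
    if p > 0.5:
        print(model.config.id2label[idx], round(p, 3))
```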
justin871030/bert-base-uncased-goemotions-original-finetuned
justin871030
2022-02-09T17:17:55Z
5
0
transformers
[ "transformers", "pytorch", "bert", "go-emotion", "text-classification", "en", "dataset:go_emotions", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - go-emotion - text-classification - pytorch datasets: - go_emotions metrics: - f1 widget: - text: "Thanks for giving advice to the people who need it! 👌🙏" license: mit --- ## Model Description 1. Based on the uncased BERT pretrained model with a linear output layer. 2. Added several commonly-used emoji and tokens to the special token list of the tokenizer. 3. Applied label smoothing during training. 4. Used weighted loss and focal loss to improve performance on classes that were learned poorly. ## Results Best `Macro F1` result: 53% ## Tutorial Link - [GitHub](https://github.com/justin871030/GoEmotions)
am-shb/xlm-roberta-base-pretrained
am-shb
2022-02-09T15:53:08Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 16 - seed: 1337 - gradient_accumulation_steps: 4 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.11.2 - Pytorch 1.10.0 - Datasets 1.8.0 - Tokenizers 0.10.3
fznmhmmd/gpt2-wikitext2
fznmhmmd
2022-02-09T15:44:05Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5571 | 1.0 | 2249 | 6.4684 | | 6.1921 | 2.0 | 4498 | 6.1984 | | 6.0016 | 3.0 | 6747 | 6.1112 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
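A minimal generation sketch for the GPT-2 checkpoint above (the prompt and sampling settings are arbitrary):

```python
from transformers import pipeline

# Hypothetical usage sketch -- not part of the original model card.
generator = pipeline("text-generation", model="fznmhmmd/gpt2-wikitext2")
print(generator("The history of the city begins", max_new_tokens=40, do_sample=True, top_p=0.95))
```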
jgammack/SAE-bert-base-uncased
jgammack
2022-02-09T15:33:35Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: SAE-bert-base-uncased results: [] widget: - text: "Wind [MASK] was detected coming from the car door closure system." example_title: "Closure system" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SAE-bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [jgammack/SAE-door-abstracts](https://huggingface.co/datasets/jgammack/SAE-door-abstracts) dataset. It achieves the following results on the evaluation set: - Loss: 2.1256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 7 - eval_batch_size: 7 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5967 | 1.0 | 80 | 2.3409 | | 2.4881 | 2.0 | 160 | 2.2707 | | 2.3567 | 3.0 | 240 | 2.3134 | | 2.3413 | 4.0 | 320 | 2.2592 | | 2.3006 | 5.0 | 400 | 2.2351 | | 2.2568 | 6.0 | 480 | 2.2556 | | 2.2303 | 7.0 | 560 | 2.2546 | | 2.1892 | 8.0 | 640 | 2.1868 | | 2.1851 | 9.0 | 720 | 2.2073 | | 2.1738 | 10.0 | 800 | 2.1344 | | 2.1673 | 11.0 | 880 | 2.1927 | | 2.1518 | 12.0 | 960 | 2.1844 | | 2.1142 | 13.0 | 1040 | 2.1466 | | 2.1343 | 14.0 | 1120 | 2.2024 | | 2.1332 | 15.0 | 1200 | 2.1035 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
jgammack/SAE-distilbert-base-uncased
jgammack
2022-02-09T15:32:40Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: SAE-distilbert-base-uncased results: [] widget: - text: "Wind noise was detected coming from the car [MASK] closure system." example_title: "Closure system" --- # SAE-distilbert-base-uncased This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [jgammack/SAE-door-abstracts](https://huggingface.co/datasets/jgammack/SAE-door-abstracts) dataset. It achieves the following results on the evaluation set: - Loss: 2.2970 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 15 - eval_batch_size: 15 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5323 | 1.0 | 37 | 2.4503 | | 2.4968 | 2.0 | 74 | 2.4571 | | 2.4688 | 3.0 | 111 | 2.4099 | | 2.419 | 4.0 | 148 | 2.3343 | | 2.4229 | 5.0 | 185 | 2.3072 | | 2.4067 | 6.0 | 222 | 2.2927 | | 2.3877 | 7.0 | 259 | 2.2836 | | 2.374 | 8.0 | 296 | 2.3767 | | 2.3582 | 9.0 | 333 | 2.2493 | | 2.356 | 10.0 | 370 | 2.2847 | | 2.3294 | 11.0 | 407 | 2.3234 | | 2.3358 | 12.0 | 444 | 2.2660 | | 2.3414 | 13.0 | 481 | 2.2887 | | 2.3154 | 14.0 | 518 | 2.3737 | | 2.311 | 15.0 | 555 | 2.2686 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
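A short fill-mask sketch reusing the widget sentence from the card above, assuming the standard pipeline works with this checkpoint:

```python
from transformers import pipeline

# Hypothetical usage sketch based on the card's widget example.
fill = pipeline("fill-mask", model="jgammack/SAE-distilbert-base-uncased")
for pred in fill("Wind noise was detected coming from the car [MASK] closure system."):
    print(pred["token_str"], round(pred["score"], 3))
```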
navteca/multi-qa-mpnet-base-cos-v1
navteca
2022-02-09T14:55:14Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- language: en license: mit pipeline_tag: sentence-similarity tags: - feature-extraction - sentence-similarity - sentence-transformers --- # Multi QA MPNet base model for Semantic Search This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. This model uses [`mpnet-base`](https://huggingface.co/microsoft/mpnet-base). ## Training Data We use the concatenation of multiple datasets to fine-tune this model. In total we have about 215M (question, answer) pairs. The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using Mean-pooling, cosine-similarity as similarity function, and a scale of 20. | Dataset | Number of training tuples | |--------------------------------------------------------|:--------------------------:| | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 | | [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 | | [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 | | **Total** | **214,988,242** | ## Technical Details Some technical details on how this model should be used: | Setting | Value | | --- | :---: | | Dimensions | 768 | | Produces normalized embeddings | Yes | | Pooling-Method | Mean pooling | | Suitable score functions | dot-product, cosine-similarity, or euclidean distance | Note: This model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. Dot-product is preferred as it is faster. Euclidean distance is proportional to dot-product and can also be used. ## Usage and Performance The trained model can be used like this: ```python from sentence_transformers import SentenceTransformer, util question = "That is a happy person" contexts = [ "That is a happy dog", "That is a very happy person", "Today is a sunny day" ] # Load the model model = SentenceTransformer('navteca/multi-qa-mpnet-base-cos-v1') # Encode question and contexts question_emb = model.encode(question) contexts_emb = model.encode(contexts) # Compute dot score between question and all context embeddings result = util.dot_score(question_emb, contexts_emb)[0].cpu().tolist() print(result) #[ # 0.60806852579116820, # 0.94949364662170410, # 0.29836517572402954 #] ```
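Since the card states the embeddings are normalized to unit length, a quick check (a sketch, not from the original card) can confirm that dot-product and cosine-similarity scores agree:

```python
from sentence_transformers import SentenceTransformer, util

# Sketch: for unit-length embeddings, dot-product and cosine-similarity should coincide.
model = SentenceTransformer("navteca/multi-qa-mpnet-base-cos-v1")
emb = model.encode(
    ["That is a happy person", "That is a very happy person"],
    convert_to_tensor=True,
)
print(util.dot_score(emb[0], emb[1]).item(), util.cos_sim(emb[0], emb[1]).item())
```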
Plim/xls-r-300m-cv_8-fr
Plim
2022-02-09T13:59:08Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "fr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer model-index: - name: XLS-R-300m - French results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: fr metrics: - name: Test WER type: wer value: to recompute with STEP 24000 - name: Test CER type: cer value: to recompute with STEP 24000 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: fr metrics: - name: Test WER type: wer value: 35.29 - name: Test CER type: cer value: 13.94 --- ## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 (extended to 7.0 by resuming training from a checkpoint) - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.9114 | 0.29 | 1000 | inf | 0.9997 | | 1.2436 | 0.57 | 2000 | inf | 0.4310 | | 1.0552 | 0.86 | 3000 | inf | 0.3144 | | 1.0044 | 1.15 | 4000 | inf | 0.2814 | | 0.9718 | 1.43 | 5000 | inf | 0.2658 | | 0.9502 | 1.72 | 6000 | inf | 0.2566 | | 0.9418 | 2.01 | 7000 | inf | 0.2476 | | 0.9215 | 2.29 | 8000 | inf | 0.2420 | | 0.9236 | 2.58 | 9000 | inf | 0.2388 | | 0.9014 | 2.87 | 10000 | inf | 0.2354 | | 0.8814 | 3.15 | 11000 | inf | 0.2312 | | 0.8809 | 3.44 | 12000 | inf | 0.2285 | | 0.8717 | 3.73 | 13000 | inf | 0.2263 | | 0.8787 | 4.01 | 14000 | inf | 0.2218 | | 0.8567 | 4.3 | 15000 | inf | 0.2193 | | 0.8488 | 4.59 | 16000 | inf | 0.2187 | | 0.8359 | 4.87 | 17000 | inf | 0.2172 | Training continued from the STEP 17000 checkpoint: | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | / | 5.16 | 18000 | inf | 0.2176 | | / | 5.45 | 19000 | inf | 0.2181 | | / | 5.73 | 20000 | inf | 0.2155 | | / | 6.02 | 21000 | inf | 0.2140 | | / | 6.31 | 22000 | inf | 0.2124 | | / | 6.59 | 23000 | inf | 0.2117 | | / | 6.88 | 24000 | inf | 0.2116 | It achieves its best result on the validation set at step 24000: - Wer: 0.2116 Note: there was an issue with the validation loss calculation, which is why the validation losses above are reported as inf. ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0 ### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8` with split `test` ```bash python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
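The card above provides evaluation commands but no inference snippet. A rough transcription sketch, assuming the checkpoint loads as a standard `Wav2Vec2ForCTC` model with its processor (audio must be 16 kHz mono):

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical inference sketch -- not part of the original card.
name = "Plim/xls-r-300m-cv_8-fr"
processor = Wav2Vec2Processor.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

speech = [0.0] * 16000  # placeholder: replace with a real 16 kHz mono waveform
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```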
mrm8488/electricidad-small-finetuned-squadv1-es
mrm8488
2022-02-09T13:29:35Z
23
1
transformers
[ "transformers", "pytorch", "electra", "question-answering", "QA", "SQuAD", "es", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: es thumbnail: https://imgur.com/uxAvBfh tags: - QA - SQuAD --- # Electricidad small + Spanish SQuAD v1 ⚡❓ [Electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) fine-tuned on [Spanish SQUAD v1.1 dataset](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Dataset 📚 [SQuAD-es-v1.1](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1) | Dataset split | # Samples | | ------------- | --------- | | Train | 130 K | | Test | 11 K | ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python /content/transformers/examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path 'mrm8488/electricidad-small-discriminator' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v1.1-es.json' \ --predict_file '/content/dataset/dev-v1.1-es.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/electricidad-small-finetuned-squadv1-es' \ --overwrite_output_dir \ --save_steps 1000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **46.82** | | **F1** | **64.79** | ```json { 'exact': 46.82119205298013, 'f1': 64.79435260021918, 'total': 10570, 'HasAns_exact': 46.82119205298013, 'HasAns_f1': 64.79435260021918, 'HasAns_total': 10570, 'best_exact': 46.82119205298013, 'best_exact_thresh': 0.0, 'best_f1': 64.79435260021918, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/electricidad-small-finetuned-squadv1-es", tokenizer="mrm8488/electricidad-small-finetuned-squadv1-es" ) context = "Manuel ha creado una versión del modelo Electra small en español que alcanza una puntuación F1 de 65 en el dataset SQUAD-es y sólo pesa 50 MB" q1 = "Cuál es su marcador F1?" q2 = "¿Cuál es el tamaño del modelo?" q3 = "¿Quién lo ha creado?" q4 = "¿Que es lo que ha hecho Manuel?" questions = [q1, q2, q3, q4] for question in questions: result = qa_pipeline({ 'context': context, 'question': question}) print(result) # Output: {'score': 0.14836778166355025, 'start': 98, 'end': 100, 'answer': '65'} {'score': 0.32219420810758237, 'start': 136, 'end': 140, 'answer': '50 MB'} {'score': 0.9672326951118713, 'start': 0, 'end': 6, 'answer': 'Manuel'} {'score': 0.23552458113848118, 'start': 10, 'end': 53, 'answer': 'creado una versión del modelo Electra small'} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
victen/xlm-roberta-base-finetuned-panx-de
victen
2022-02-09T10:49:12Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8591260810195721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.257 | 1.0 | 525 | 0.1512 | 0.8302 | | 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 | | 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
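The card does not include a usage snippet; a hypothetical inference sketch with the token-classification pipeline is shown below, using the repo id `victen/xlm-roberta-base-finetuned-panx-de` from this record and an arbitrary German example sentence.

```python
# Hypothetical usage sketch for this PAN-X.de NER fine-tune
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="victen/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # group word pieces into entity spans
)
print(ner("Angela Merkel besuchte das Siemens-Werk in München."))
```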
youzanai/clip-product-title-chinese
youzanai
2022-02-09T08:59:51Z
12
9
transformers
[ "transformers", "pytorch", "clip_chinese_model", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
<!-- * @Description: * @Version: * @Author: Hardy * @Date: 2022-02-09 15:13:53 * @LastEditors: Hardy * @LastEditTime: 2022-02-09 16:59:01 --> <br /> <p align="center"> <h1 align="center">clip-product-title-chinese</h1> </p> ## A CLIP model trained on Youzan product image and product title data. ## Usage Before using the model, please run: git clone https://github.com/youzanai/trexpark.git ```python import torch from src.clip.clip import ClipProcesserChinese, ClipChineseModel import requests from PIL import Image clip_processor = ClipProcesserChinese.from_pretrained('youzanai/clip-product-title-chinese') model = ClipChineseModel.from_pretrained('youzanai/clip-product-title-chinese') url = 'http://img.yzcdn.cn/upload_files/2015/04/21/0140dac4657f874f2acff9294b28088c.jpg' img = Image.open(requests.get(url, stream=True).raw).convert('RGB') imgs = [img] texts = ['运动鞋', '红色连衣裙', '黑色连衣裙', '大衣', '文具'] f = clip_processor(texts, imgs, return_tensors='pt', truncation=True, padding=True) del f['token_type_ids'] with torch.no_grad(): out = model(**f) logits_per_image, logits_per_text = out['logits_per_image'], out['logits_per_text'] print(logits_per_image.softmax(dim=-1).cpu().detach().numpy()) # Result: [[1.1700666e-07 9.9948394e-01 5.1582896e-04 4.7687358e-11 6.9604440e-08]] ```
Duael/RRHood
Duael
2022-02-09T04:54:18Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: artistic-2.0 ---
Sense-X/uniformer_image
Sense-X
2022-02-09T04:06:53Z
0
7
null
[ "vision", "image-classification", "dataset:imagenet", "arxiv:2201.09450", "license:mit", "region:us" ]
image-classification
2022-03-02T23:29:04Z
--- license: mit tags: - vision - image-classification datasets: - imagenet --- # UniFormer (image model) UniFormer models are trained on ImageNet at resolution 224x224. It was introduced in the paper [UniFormer: Unifying Convolution and Self-attention for Visual Recognition](https://arxiv.org/abs/2201.09450) by Li et al, and first released in [this repository](https://github.com/Sense-X/UniFormer). ## Model description The UniFormer is a type of Vision Transformer, which can seamlessly integrate merits of convolution and self-attention in a concise transformer format. It adopt local MHRA in shallow layers to largely reduce computation burden and global MHRA in deep layers to learn global token relation. Without any extra training data, UniFormer achieves **86.3** top-1 accuracy on ImageNet-1K classification. With only ImageNet-1K pre-training, it can simply achieve state-of-the-art performance in a broad range of downstream tasks. UniFormer obtains **82.9/84.8** top-1 accuracy on Kinetics-400/600, and **60.9/71.2** top-1 accuracy on Something-Something V1/V2 video classification tasks. It also achieves **53.8** box AP and **46.4** mask AP on COCO object detection task, **50.8** mIoU on ADE20K semantic segmentation task, and **77.4** AP on COCO pose estimation task. ![teaser](framework.png) [Source](https://paperswithcode.com/paper/uniformer-unifying-convolution-and-self) ## Intended uses & limitations You can use the raw model for image classification. We now only upload the models trained without Token Labeling and Layer Scale. More powerful models can be found in [the model hub](https://github.com/Sense-X/UniFormer/tree/main/image_classification). ### ImageNet | Model | Pretrain | Resolution | Top-1 | #Param. | FLOPs | | --------------- | ----------- | ---------- | ----- | ------- | ----- | | UniFormer-S | ImageNet-1K | 224x224 | 82.9 | 22M | 3.6G | | UniFormer-S† | ImageNet-1K | 224x224 | 83.4 | 24M | 4.2G | | UniFormer-B | ImageNet-1K | 224x224 | 83.8 | 50M | 8.3G | ### How to use You can followed our [demo](https://huggingface.co/spaces/Sense-X/uniformer_image_demo/tree/main) to use our models. ```python from uniformer import uniformer_small from imagenet_class_index import imagenet_classnames model = uniformer_small() # load state model_path = hf_hub_download(repo_id="Sense-X/uniformer_image", filename="uniformer_small_in1k.pth") state_dict = torch.load(model_path, map_location='cpu') model.load_state_dict(state_dict) # set to eval mode model = model.to(device) model = model.eval() # process image image = img image_transform = T.Compose( [ T.Resize(224), T.CenterCrop(224), T.ToTensor(), T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ] ) image = image_transform(image) image = image.unsqueeze(0) # model predicts one of the 1000 ImageNet classes prediction = model(image) predicted_class_idx = prediction.flatten().argmax(-1).item() print("Predicted class:", imagenet_classnames[str(predicted_class_idx)][1]) ``` ### BibTeX entry and citation info ```bibtex @misc{li2022uniformer, title={UniFormer: Unifying Convolution and Self-attention for Visual Recognition}, author={Kunchang Li and Yali Wang and Junhao Zhang and Peng Gao and Guanglu Song and Yu Liu and Hongsheng Li and Yu Qiao}, year={2022}, eprint={2201.09450}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
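The usage snippet above relies on helpers from the demo Space and omits its imports (`torch`, the transform module `T`, `hf_hub_download`, `device`, and the input image `img`). A possible completion is sketched below; the use of `torchvision.transforms` and the example image path are assumptions, not part of the original card.

```python
# Assumed setup for the UniFormer usage snippet above (a sketch, not the official demo code)
import torch
import torchvision.transforms as T          # the snippet's T.Compose / T.Resize suggest torchvision transforms
from PIL import Image
from huggingface_hub import hf_hub_download

device = "cuda" if torch.cuda.is_available() else "cpu"
img = Image.open("example.jpg").convert("RGB")  # placeholder input image
```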
Sense-X/uniformer_video
Sense-X
2022-02-09T03:49:34Z
0
5
null
[ "vision", "video-classification", "dataset:kinetics-400", "dataset:kinetics-600", "dataset:something-something-v1", "dataset:something-something-v2", "arxiv:2201.04676", "license:mit", "region:us" ]
video-classification
2022-03-02T23:29:04Z
--- license: mit tags: - vision - video-classification datasets: - kinetics-400 - kinetics-600 - something-something-v1 - something-something-v2 --- # UniFormer (video model) UniFormer models are trained on [Kinetics](https://deepmind.com/research/open-source/kinetics) and [Something-Something](https://20bn.com/datasets/something-something) at resolution 224x224. It was introduced in the paper [UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning](https://arxiv.org/abs/2201.04676) by Li et al, and first released in [this repository](https://github.com/Sense-X/UniFormer). ## Model description The UniFormer is a type of Vision Transformer, which can seamlessly integrate merits of convolution and self-attention in a concise transformer format. It adopt local MHRA in shallow layers to largely reduce computation burden and global MHRA in deep layers to learn global token relation. Without any extra training data, UniFormer achieves **86.3** top-1 accuracy on ImageNet-1K classification. With only ImageNet-1K pre-training, it can simply achieve state-of-the-art performance in a broad range of downstream tasks. UniFormer obtains **82.9/84.8** top-1 accuracy on Kinetics-400/600, and **60.9/71.2** top-1 accuracy on Something-Something V1/V2 video classification tasks. It also achieves **53.8** box AP and **46.4** mask AP on COCO object detection task, **50.8** mIoU on ADE20K semantic segmentation task, and **77.4** AP on COCO pose estimation task. ![teaser](framework.png) [Source](https://paperswithcode.com/paper/uniformer-unified-transformer-for-efficient) ## Intended uses & limitations You can use the raw model for video classification. We now only upload the powerful models with **single clip**. More models can be found in [the model hub](https://github.com/Sense-X/UniFormer/tree/main/video_classification). ### Kinetics | Model | #Frame | Sampling Stride | FLOPs | K400 Top-1 | K600 Top-1 | | ----------- | ------ | --------------- | ----- | ---------- | ---------- | | UniFormer-S | 16x1x1 | 8 | 41.8G | 78.4 | 80.8 | | UniFormer-B | 16x1x1 | 8 | 96.7G | 79.3 | 81.7 | | UniFormer-B | 32x1x1 | 4 | 259G | 80.9 | 82.4 | ### Something-Something | Model | #Frame | FLOPs | SSV1 Top-1 | SSV2 Top-1 | | ----------- | ------ | ----- | ---------- | ---------- | | UniFormer-S | 16x1x1 | 41.8G | 54.4 | 65.0 | | UniFormer-B | 32x1x1 | 259G | 58.0 | 67.5 | ### How to use You can followed our [demo](https://huggingface.co/spaces/Sense-X/uniformer_video_demo/tree/main) to use our models. 
```python from uniformer import uniformer_small from kinetics_class_index import kinetics_classnames model = uniformer_small() # load state model_path = hf_hub_download(repo_id="Sense-X/uniformer_video", filename="uniformer_small_k400_16x8.pth") state_dict = torch.load(model_path, map_location='cpu') model.load_state_dict(state_dict) # set to eval mode model = model.to(device) model = model.eval() # please refer to the following url to process video of Kinetics: # https://huggingface.co/spaces/Sense-X/uniformer_video_demo/blob/main/app.py vid = load_video(video) # model predicts one of the 400 Kintics classes prediction = model(vid) predicted_class_idx = prediction.flatten().argmax(-1).item() print("Predicted class:", kinetics_classnames[str(predicted_class_idx)]) ``` ### BibTeX entry and citation info ```bibtex @misc{li2022uniformer, title={UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning}, author={Kunchang Li and Yali Wang and Peng Gao and Guanglu Song and Yu Liu and Hongsheng Li and Yu Qiao}, year={2022}, eprint={2201.04676}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
thyagosme/bert-base-cased-wikitext2
thyagosme
2022-02-09T03:44:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.8517 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.0902 | 1.0 | 2346 | 7.0492 | | 6.9027 | 2.0 | 4692 | 6.8692 | | 6.8553 | 3.0 | 7038 | 6.8882 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
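Since this is a masked-language-model fine-tune, a short fill-mask example may be useful; the sketch below uses the repo id `thyagosme/bert-base-cased-wikitext2` from this record and an arbitrary prompt.

```python
# Hypothetical fill-mask usage sketch
from transformers import pipeline

fill = pipeline("fill-mask", model="thyagosme/bert-base-cased-wikitext2")
for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```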
Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever
Vasanth
2022-02-09T00:44:30Z
10
2
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever') model = AutoModel.from_pretrained('Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Vasanth/multi-qa-MiniLM-L6-cos-v1-qa-squad2-retriever) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8144 with parameters: ``` {'batch_size': 16} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2443, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ghofrani/xls-r-1b-fa-cv8
ghofrani
2022-02-08T23:51:46Z
25
2
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "fa", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - fa tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: common8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # common8 This model is a fine-tuned version of [wghts/checkpoint-20000](https://huggingface.co/wghts/checkpoint-20000) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FA dataset. It achieves the following results on the evaluation set: - Loss: 0.3174 - Wer: 0.3022 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 6 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 250.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 3.5847 | 1.93 | 500 | 3.5104 | 1.0 | | 2.7858 | 3.86 | 1000 | 2.9601 | 1.0001 | | 1.6827 | 5.79 | 1500 | 0.7853 | 0.7030 | | 1.4656 | 7.72 | 2000 | 0.6076 | 0.6014 | | 1.3693 | 9.65 | 2500 | 0.5114 | 0.5307 | | 1.379 | 11.58 | 3000 | 0.4666 | 0.4940 | | 1.2832 | 13.51 | 3500 | 0.4257 | 0.4593 | | 1.1931 | 15.44 | 4000 | 0.4039 | 0.4427 | | 1.2911 | 17.37 | 4500 | 0.3956 | 0.4295 | | 1.1577 | 19.3 | 5000 | 0.3705 | 0.4114 | | 1.1135 | 21.24 | 5500 | 0.3740 | 0.4010 | | 1.19 | 23.17 | 6000 | 0.3611 | 0.3935 | | 1.1008 | 25.1 | 6500 | 0.3503 | 0.3880 | | 1.0805 | 27.03 | 7000 | 0.3427 | 0.3781 | | 1.1556 | 28.96 | 7500 | 0.3442 | 0.3727 | | 1.0596 | 30.89 | 8000 | 0.3398 | 0.3646 | | 1.0219 | 32.82 | 8500 | 0.3312 | 0.3660 | | 1.1042 | 34.75 | 9000 | 0.3287 | 0.3612 | | 1.0273 | 36.68 | 9500 | 0.3236 | 0.3556 | | 1.0383 | 38.61 | 10000 | 0.3217 | 0.3558 | | 1.0498 | 40.54 | 10500 | 0.3205 | 0.3520 | | 0.9969 | 42.47 | 11000 | 0.3125 | 0.3504 | | 1.0658 | 44.4 | 11500 | 0.3120 | 0.3493 | | 0.992 | 46.33 | 12000 | 0.3137 | 0.3476 | | 0.9737 | 48.26 | 12500 | 0.3085 | 0.3413 | | 1.0817 | 50.19 | 13000 | 0.3091 | 0.3418 | | 0.9414 | 52.12 | 13500 | 0.3072 | 0.3344 | | 0.9295 | 54.05 | 14000 | 0.3039 | 0.3322 | | 1.0248 | 55.98 | 14500 | 0.2991 | 0.3325 | | 0.9474 | 57.91 | 15000 | 0.3032 | 0.3348 | | 0.928 | 59.85 | 15500 | 0.2999 | 0.3285 | | 1.0321 | 61.78 | 16000 | 0.2982 | 0.3253 | | 0.9255 | 63.71 | 16500 | 0.2970 | 0.3231 | | 0.8928 | 65.64 | 17000 | 0.2993 | 0.3250 | | 1.008 | 67.57 | 17500 | 0.2985 | 0.3222 | | 0.9371 | 69.5 | 18000 | 0.2968 | 0.3216 | | 0.9077 | 71.43 | 18500 | 0.3011 | 0.3299 | | 1.0044 | 73.36 | 19000 | 0.3053 | 0.3306 | | 0.9625 | 75.29 | 19500 | 0.3159 | 0.3295 | | 0.9816 | 77.22 | 20000 | 0.3080 | 0.3304 | | 0.9587 | 119.19 | 20500 | 0.3088 | 0.3284 | | 0.9178 | 122.09 | 21000 | 0.3132 | 0.3320 | | 1.0282 | 125.0 | 21500 | 0.3099 | 0.3266 | | 0.9337 | 127.9 | 22000 | 0.3110 | 0.3317 | | 0.8822 | 130.81 | 22500 | 0.3037 | 0.3247 | | 0.9644 | 133.72 | 23000 | 0.3037 | 0.3238 | | 0.9214 | 136.62 | 23500 | 0.3040 | 0.3234 | | 0.9167 | 139.53 | 24000 | 0.3079 | 0.3203 | | 0.9047 | 142.44 | 24500 | 0.3018 | 0.3177 | | 0.8909 | 
145.35 | 25000 | 0.3053 | 0.3181 | | 0.9646 | 148.25 | 25500 | 0.3095 | 0.3229 | | 0.8802 | 151.16 | 26000 | 0.3111 | 0.3192 | | 0.8411 | 154.07 | 26500 | 0.3068 | 0.3123 | | 0.9235 | 156.97 | 27000 | 0.3090 | 0.3177 | | 0.8943 | 159.88 | 27500 | 0.3115 | 0.3179 | | 0.8854 | 162.79 | 28000 | 0.3052 | 0.3157 | | 0.8734 | 165.69 | 28500 | 0.3077 | 0.3124 | | 0.8515 | 168.6 | 29000 | 0.3117 | 0.3128 | | 0.912 | 171.51 | 29500 | 0.3039 | 0.3121 | | 0.8669 | 174.42 | 30000 | 0.3120 | 0.3123 | | 0.823 | 177.32 | 30500 | 0.3148 | 0.3118 | | 0.9129 | 180.23 | 31000 | 0.3179 | 0.3101 | | 0.8255 | 183.14 | 31500 | 0.3164 | 0.3114 | | 0.8948 | 186.05 | 32000 | 0.3128 | 0.3101 | | 0.8397 | 188.95 | 32500 | 0.3143 | 0.3068 | | 0.8341 | 191.86 | 33000 | 0.3127 | 0.3136 | | 0.873 | 194.76 | 33500 | 0.3149 | 0.3124 | | 0.8232 | 197.67 | 34000 | 0.3166 | 0.3086 | | 0.8002 | 200.58 | 34500 | 0.3149 | 0.3061 | | 0.8621 | 203.49 | 35000 | 0.3160 | 0.3093 | | 0.8123 | 206.39 | 35500 | 0.3141 | 0.3063 | | 0.7995 | 209.3 | 36000 | 0.3174 | 0.3075 | | 0.8271 | 212.21 | 36500 | 0.3173 | 0.3043 | | 0.8059 | 215.12 | 37000 | 0.3176 | 0.3079 | | 0.8835 | 218.02 | 37500 | 0.3169 | 0.3062 | | 0.8027 | 220.93 | 38000 | 0.3203 | 0.3098 | | 0.775 | 223.83 | 38500 | 0.3159 | 0.3068 | | 0.8487 | 226.74 | 39000 | 0.3161 | 0.3072 | | 0.7929 | 229.65 | 39500 | 0.3143 | 0.3037 | | 0.7653 | 232.56 | 40000 | 0.3160 | 0.3048 | | 0.8211 | 235.46 | 40500 | 0.3173 | 0.3031 | | 0.7761 | 238.37 | 41000 | 0.3176 | 0.3025 | | 0.7761 | 241.28 | 41500 | 0.3179 | 0.3027 | | 0.7903 | 244.19 | 42000 | 0.3181 | 0.3016 | | 0.7807 | 247.09 | 42500 | 0.3170 | 0.3027 | | 0.8406 | 250.0 | 43000 | 0.3174 | 0.3022 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3.dev0 - Tokenizers 0.10.3
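The card reports WER but no evaluation code. One way to reproduce such a figure from reference/hypothesis pairs is sketched below with the `jiwer` package; both the package choice and the Persian example strings are assumptions, not part of the card.

```python
# Hedged sketch: word error rate over reference / hypothesis pairs (placeholder Persian sentences)
import jiwer

references = ["سلام دنیا", "امروز هوا خوب است"]   # ground-truth transcripts
hypotheses = ["سلام دنیا", "امروز هوا خوب هست"]   # model outputs
print("WER:", jiwer.wer(references, hypotheses))
```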
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
vuiseng9
2022-02-08T22:58:30Z
3
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimization includes: 1. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers. 2. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad``` ``` eval_exact_match = 80.7001 eval_f1 = 87.9777 eval_samples = 10784 ``` # Setup ```bash # OpenVINO/NNCF git clone https://github.com/vuiseng9/nncf && cd nncf git checkout tld-poc git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2 python setup.py develop pip install -r examples/torch/requirements.txt # Huggingface nn_pruning git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning git checkout reproduce-evaluation git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446 pip install -e ".[dev]" # Huggingface Transformers git clone https://github.com/vuiseng9/transformers && cd transformers git checkout tld-poc git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5 pip install -e . head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {} # Additional dependencies pip install onnx ``` # Train ```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt BASE_MODEL=/path/to/cloned_repo_above #to-revise wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt/raw/main/nncf_bert_squad_qat.json NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise OUTROOT=/path/to/train_output_root #to-revise WORKDIR=transformers/examples/pytorch/question-answering #to-revise RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt cd $WORKDIR OUTDIR=$OUTROOT/$RUNID mkdir -p $OUTDIR export CUDA_VISIBLE_DEVICES=0 NEPOCH=5 python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \ --optimize_model_before_eval \ --optimized_checkpoint $BASE_MODEL \ --dataset_name squad \ --do_eval \ --do_train \ --evaluation_strategy steps \ --eval_steps 250 \ --learning_rate 3e-5 \ --lr_scheduler_type cosine_with_restarts \ --warmup_ratio 0.25 \ --cosine_cycles 1 \ --teacher bert-large-uncased-whole-word-masking-finetuned-squad \ --teacher_ratio 0.9 \ --num_train_epochs $NEPOCH \ --per_device_eval_batch_size 128 \ --per_device_train_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 250 \ --nncf_config $NNCF_CFG \ --logging_steps 1 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR ``` # Eval This repo must be cloned locally. 
```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt MODELROOT=/path/to/cloned_repo_above #to-revise export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt WORKDIR=transformers/examples/pytorch/question-answering #to-revise cd $WORKDIR mkdir $OUTDIR nohup python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \ --dataset_name squad \ --optimize_model_before_eval \ --qat_checkpoint $MODELROOT/checkpoint-26750 \ --nncf_config $MODELROOT/nncf_bert_squad_qat.json \ --to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt.onnx \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ``` ### tile-alignment to evaluate tile-alignment checkpoint, add ```--tile_alignment``` and point ```--qat_checkpoint``` to checkpoint with 'tilealigned' postfix. Use branch ```tld-poc``` with commit id ```c525c52cq```
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
vuiseng9
2022-02-08T22:58:08Z
1
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimization includes: 1. magnitude sparsification at 57.92% upon initialization so that sparsity over all linear layers of bert-base is at 90%. Parameters are ranked globally via thier absolute norm. Only linear layers of self-attention and ffnn are targeted. 2. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers. 3. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad``` ``` eval_exact_match = 80.4541 eval_f1 = 87.6832 eval_samples = 10784 ``` # Setup ```bash # OpenVINO/NNCF git clone https://github.com/vuiseng9/nncf && cd nncf git checkout tld-poc git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2 python setup.py develop pip install -r examples/torch/requirements.txt # Huggingface nn_pruning git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning git checkout reproduce-evaluation git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446 pip install -e ".[dev]" # Huggingface Transformers git clone https://github.com/vuiseng9/transformers && cd transformers git checkout tld-poc git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5 pip install -e . head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {} # Additional dependencies pip install onnx ``` # Train ```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt BASE_MODEL=/path/to/cloned_repo_above #to-revise wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt/raw/main/nncf_bert_squad_sparsity.json NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise OUTROOT=/path/to/train_output_root #to-revise WORKDIR=transformers/examples/pytorch/question-answering #to-revise RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt cd $WORKDIR OUTDIR=$OUTROOT/$RUNID mkdir -p $OUTDIR export CUDA_VISIBLE_DEVICES=0 NEPOCH=5 python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \ --optimize_model_before_eval \ --optimized_checkpoint $BASE_MODEL \ --dataset_name squad \ --do_eval \ --do_train \ --evaluation_strategy steps \ --eval_steps 250 \ --learning_rate 3e-5 \ --lr_scheduler_type cosine_with_restarts \ --warmup_ratio 0.25 \ --cosine_cycles 1 \ --teacher bert-large-uncased-whole-word-masking-finetuned-squad \ --teacher_ratio 0.9 \ --num_train_epochs $NEPOCH \ --per_device_eval_batch_size 128 \ --per_device_train_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 250 \ --nncf_config $NNCF_CFG \ --logging_steps 1 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR ``` # Eval This repo must be cloned locally. 
```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt MODELROOT=/path/to/cloned_repo_above #to-revise export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt WORKDIR=transformers/examples/pytorch/question-answering #to-revise cd $WORKDIR mkdir $OUTDIR nohup python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \ --dataset_name squad \ --optimize_model_before_eval \ --qat_checkpoint $MODELROOT/checkpoint-21750 \ --nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \ --to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt.onnx \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ``` ### tile-alignment to evaluate tile-alignment checkpoint, add ```--tile_alignment``` and point ```--qat_checkpoint``` to checkpoint with 'tilealigned' postfix. Use branch ```tld-poc``` with commit id ```c525c52cq```
jgammack/MTL-bert-base-uncased-ww-squad
jgammack
2022-02-08T22:16:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: MTL-bert-base-uncased-ww-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MTL-bert-base-uncased-ww-squad This model is a fine-tuned version of [jgammack/MTL-bert-base-uncased-ww](https://huggingface.co/jgammack/MTL-bert-base-uncased-ww) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
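No usage example is given; a hypothetical question-answering sketch with the repo id `jgammack/MTL-bert-base-uncased-ww-squad` from this record follows.

```python
# Hypothetical QA usage sketch
from transformers import pipeline

qa = pipeline("question-answering", model="jgammack/MTL-bert-base-uncased-ww-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="MTL-bert-base-uncased-ww-squad was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```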
birgermoell/lm-swedish
birgermoell
2022-02-08T21:37:51Z
13
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "sv", "license:cc0-1.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: sv datasets: - common_voice - NST Swedish ASR Database - P4 metrics: - wer tags: - audio - automatic-speech-recognition - speech license: cc0-1.0 model-index: - name: Wav2vec 2.0 large VoxRex Swedish results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice type: common_voice args: sv-SE metrics: - name: Test WER type: wer value: 9.914 --- # Wav2vec 2.0 large VoxRex Swedish (C) Experiment with LM model. **Disclaimer:** This is a work in progress. See [VoxRex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) for more details. **Update 2022-01-10:** Updated to VoxRex-C version. Finetuned version of KBs [VoxRex large](https://huggingface.co/KBLab/wav2vec2-large-voxrex) model using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is **2.5%**. WER for Common Voice test set is **8.49%** directly and **7.37%** with a 4-gram language model. When using this model, make sure that your speech input is sampled at 16kHz. # Performance\* ![Comparison](comparison.png "Comparison") <center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center> ## Training This model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>. ![WER during training](chart_1.svg "WER") ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish") model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ```
Mofe/speech-sprint-test
Mofe
2022-02-08T18:32:00Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 207.6065 - Wer: 1.5484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
jgammack/MTL-bert-base-uncased-ww
jgammack
2022-02-08T17:50:13Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: MTL-bert-base-uncased-ww results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MTL-bert-base-uncased-ww This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 7 - eval_batch_size: 7 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2964 | 1.0 | 99 | 2.9560 | | 3.0419 | 2.0 | 198 | 2.8336 | | 2.8979 | 3.0 | 297 | 2.8009 | | 2.8815 | 4.0 | 396 | 2.7394 | | 2.8373 | 5.0 | 495 | 2.6813 | | 2.741 | 6.0 | 594 | 2.6270 | | 2.6877 | 7.0 | 693 | 2.5216 | | 2.6823 | 8.0 | 792 | 2.5485 | | 2.6326 | 9.0 | 891 | 2.5690 | | 2.5976 | 10.0 | 990 | 2.6336 | | 2.6009 | 11.0 | 1089 | 2.5919 | | 2.5615 | 12.0 | 1188 | 2.4264 | | 2.5826 | 13.0 | 1287 | 2.5562 | | 2.5693 | 14.0 | 1386 | 2.5529 | | 2.5494 | 15.0 | 1485 | 2.5300 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
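The card reports only the evaluation cross-entropy loss (2.5261). Assuming the loss is in nats, exp(loss) gives a rough (pseudo-)perplexity of about 12.5; this interpretation is not stated in the card itself.

```python
# exp(eval loss) as a rough perplexity estimate -- an interpretation, not a number reported by the card
import math
print(math.exp(2.5261))  # ~12.5
```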
espnet/brianyan918_iwslt22_dialect_transformer_fisherlike
espnet
2022-02-08T16:43:28Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: noinfo datasets: - iwslt22_dialect license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/brianyan918_iwslt22_dialect_transformer_fisherlike` This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 77fce65312877a132bbae01917ad26b74f6e2e14 pip install -e . cd egs2/iwslt22_dialect/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_transformer_fisherlike ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Jan 31 10:15:38 EST 2022` - python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1` - Git hash: `99581e0f5af3ad68851d556645e7292771436df9` - Commit date: `Sat Jan 29 11:32:38 2022 -0500` ## asr_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe1000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|27370|53.4|41.1|5.5|9.5|56.1|88.2| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|145852|83.8|7.5|8.7|12.2|28.4|88.2| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|64424|62.9|23.9|13.3|13.4|50.5|88.2| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe1000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 60761 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 3 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 16000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe1000_sp/train/speech_shape - exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe1000_sp/valid/speech_shape - exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - /scratch/iwslt22asrdump/raw/train_sp/wav.scp - speech - kaldi_ark - - 
/scratch/iwslt22asrdump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - /scratch/iwslt22asrdump/raw/dev/wav.scp - speech - kaldi_ark - - /scratch/iwslt22asrdump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 5.0 scheduler: noamlr scheduler_conf: model_size: 256 warmup_steps: 25000 token_list: - <blank> - <unk> - ّ - ي - ا - ِ - ل - َ - و - ه - ة - م - ر - ك - ▁ما - ُ - ب - ش - د - ت - ▁في - َّ - ▁ن - ▁ي - ▁ت - ن - ▁لا - ح - ▁ه - س - وا - ▁م - ف - ▁إي - ع - ▁ب - ها - ط - ى - ق - ▁الل - ▁أ - ج - ▁والل - ▁و - ▁إيه - ▁ا - ▁يا - ز - ▁تو - ▁بش - ص - ▁أه - خ - ات - ▁إنت - ▁أنا - نا - ▁شن - ▁ق - ▁ش - ▁ك - يت - ين - ▁ف - ار - ▁قال - ▁باهي - ▁ع - ▁من - ▁ل - ▁مش - ▁كان - ▁حت - ▁ول - هم - ▁ر - ان - ▁س - ض - ني - ▁بال - ▁على - ▁متاع - ▁كي - ▁ال - ▁ح - ▁كل - ▁آنا - ▁الم - ▁خ - ▁الس - ▁وال - ون - ور - ▁أم - ▁هك - ▁آش - ▁الد - ▁عاد - ▁ج - ▁معناها - ▁مع - اش - ▁الص - ▁نهار - ▁لل - لها - ▁تي - ▁رب - ▁خاطر - ▁أكهو - غ - ▁شي - الل - ام - تها - ▁ون - ▁آك - ▁فهمت - وم - ▁موش - مشي - ▁ص - ▁اليوم - ▁مر - ست - ▁الب - ▁لاباس - تلي - ▁الكل - ▁عال - ذ - ▁فم - ▁الك - ▁حاجة - ▁شوي - اكا - ▁ياخي - ▁هاني - ▁صح - اس - ▁آه - ▁برشة - ▁الن - ▁وت - ▁الج - لك - ▁راهو - سم - ▁الح - مت - ▁الت - ▁بعد - اج - عد - ▁انشا - وش - لت - ▁وين - ث - ▁ولا - ▁باش - ▁فيها - نت - ▁إ - ▁الأ - ▁الف - ▁إم - ▁واحد - ▁ألو - ▁عندي - ▁أك - ▁خل - ▁وي - ▁تعمل - أ - ▁ريت - ▁وأ - ▁تعرف - بت - ▁الع - ▁مشيت - ▁وه - ▁حاصيلو - ▁بالل - ▁نعمل - ▁غ - ▁تجي - ▁يجي - ▁كيفاش - ▁عملت - ظ - اك - ▁هاو - ▁اش - ▁قد - ▁نق - ▁د - ▁زادا - ▁فيه - رة - ▁بر - ▁الش - ▁ز - ▁كيما - ▁الا - ند - عم - ▁نح - ▁بنتي - ▁نمشي - ▁عليك - ▁نعرفش - ▁كهو - ▁وم - ▁ط - تي - ▁خير - ▁آ - مش - ▁عليه - له - حت - ▁إيا - ▁أحنا - ▁تع - الا - عب - ▁ديما - ▁تت - ▁جو - ▁مالا - ▁أو - ▁قلتلك - ▁معنتها - لنا - ▁شكون - ▁تحب - بر - ▁الر - ▁وا - ▁الق - اء - ▁عل - ▁البارح - ▁وخ - ▁سافا - ▁هوما - ▁ولدي - ▁ - ▁نعرف - يف - رت - ▁وب - ▁روح - ▁علاش - ▁هاذاك - ▁رو - وس - ▁جا - ▁كيف - طر - ▁غادي - يكا - عمل - ▁نحب - ▁عندك - ▁وما - ▁فر - اني - ▁قلتله - ▁الط - فر - ▁دار - ▁عليها - ▁يعمل - ▁نت - ▁تح - باح - ▁ماهو - ▁وكل - ▁وع - قت - ▁فهمتك - عر - ▁وس - ▁تر - ▁سي - يلة - ▁قلت - ▁رمضان - صل - ▁آما - ▁الواحد - ▁بيه - ▁ثلاثة - ▁فهمتني - ▁ها - بط - ▁مازال - قل - ▁بالك - ▁معناتها - ▁ور - ▁قلتلها - ▁يس - رب - ▁ام - ▁وبعد - ▁الث - ▁وإنت - ▁بحذا - ▁لازم - ْ - ▁بن - قرا - سك - ▁يت - خل - ▁فه - عت - ▁هاك - ▁تق - ▁قبل - ▁وك - ▁نقول - ▁الز - حم - ▁عادش - حكي - وها - بة - نس - طل - ▁علاه - ذا - ▁سا - ▁طل - الي - ▁يق - ▁دو - حوا - حد - ▁نشوف - نة - ▁لي - ▁تك - ▁نا - ▁هاذ - ▁خويا - ▁المر - ▁وينك - ▁البر - ▁أتو - ينا - ▁حل - ولي - ▁ثم - ▁عم - ▁آي - ▁قر - از - ▁وح - كش - بعة - ▁كيفاه - ▁نع - ▁الحمدلله - ▁ياسر - ▁الخ - ▁معاك - ▁معاه - ▁تقول - دة - ▁حكاية - تش - ▁حس - ▁غدوا - ▁بالحق - روا - وز - ▁تخ - ▁العيد - رجع - ▁بالي - ▁جات - ▁وج - حة - ▁وش - ▁آخر - ▁طا - ▁مت - لقا - تك - ▁مس - ▁راني - كون - ▁صاحب - ▁هاكا - ▁قول - ▁عر - ▁عنده - ▁يلزم - ▁هاذا - ▁يخ - ▁وقتاش - ▁وقت - بع - ▁العش - ▁هاذي - هاش - ينة - ▁هاذاكا - عطي - ▁تنج - ▁باهية - نيا - فت - ▁يحب - ▁تف - ▁أهلا - وف - ▁غدوة - ▁بيك - ▁بد - عن - ▁در - ▁ننج - هار - ▁الحكاية - مون - وق - ▁نورمال - ▁عندها - خر - ▁بو - ▁حب - ▁آكا - ▁وف - ▁هاذيكا - ▁ديجا - ▁وق - ▁طي - لتل - بعث - ▁تص - رك - ▁مانيش - ▁العادة - ▁شوف - ضر - ▁يمشي - ▁نعملوا - ▁عرفت - ▁زال - ▁متع - ▁عمل - ▁بيها - ▁نحكي - اع - ▁نج - معة - ▁والكل - عناها - ▁يعي - ▁نجي - ستن - ▁هاذيك - ▁عام - ▁فلوس - قة - تين - ▁بالقدا - لهم - ▁تخدم - ▁ٱ - ▁شيء - ▁راهي - ▁جاب - ولاد - ابل - ▁ماك - عة - ▁نمشيوا - وني - شري - 
بار - انس - ▁وقتها - ▁جديد - ▁يز - ▁كر - ▁حاسيلو - ▁شق - ▁اه - ▁سايي - ▁انشالل - رج - مني - ▁بلا - ▁صحيح - ▁غير - ▁يخدم - مان - وكا - ▁عند - ▁قاعدة - ▁تس - ربة - ▁راس - ▁حط - ▁نكل - تني - ▁الو - سيون - ▁عندنا - ▁لو - ▁ست - صف - ▁ض - ▁كامل - ▁نخدم - ▁يبدا - ▁دونك - ▁أمور - رات - ▁تونس - بدا - ▁تحكي - ▁سو - ▁جاي - ▁وحدة - ▁ساعة - حنا - ▁بكري - ▁إل - ▁وبر - ▁كم - ▁تبدا - ارة - ادي - رق - لوا - ▁يمكن - ▁خاط - ▁وص - جين - ▁هاذاي - ▁هز - قد - ▁قل - ▁وكهو - ▁نص - ▁دي - لقى - ▁وأنا - سين - ▁يح - ▁ماشي - ▁شو - ▁خذيت - امات - ▁كنت - خرج - ▁لقيت - رتاح - كس - ▁حاجات - ▁مريق - ▁مل - ليفون - اوا - ▁شفت - ▁عاملة - ▁تن - ▁والا - سأل - ▁حد - ▁قاللك - ▁العباد - ▁عالاخ - ▁وآك - ▁ماني - ▁ناخذ - ▁حم - ▁الإ - ▁ماضي - ▁ث - الة - ▁أخرى - رين - ▁تشوف - ▁نخرج - ▁أربعة - ▁ألف - نيش - ▁هاي - آ - ▁فيك - رشة - ولة - فلة - ▁بابا - ▁أما - ▁روحي - ▁فيهم - ▁رج - ▁ليك - ونس - يرة - ▁وأكهو - ندي - ▁صار - شك - ▁نرو - ▁آكهو - ▁تش - ▁غاديكا - ▁معاها - ▁لب - ▁أذاكا - ▁آني - ▁يوم - عملوا - ▁نقعد - دوا - ▁عد - سمع - متني - ▁الخدمة - ▁مازلت - ▁قعدت - ايا - ▁برك - قعد - ▁خرجت - ضح - ▁قالل - ▁يقول - ▁وفي - ▁حق - ختي - ▁يعني - خدم - ▁جيت - ▁نرمال - طف - ▁عجب - ▁تقعد - ▁مشينا - اية - ▁خدمة - لدي - روف - ▁الفطر - ▁مشكل - ▁سل - ▁وآنا - الط - ▁بالس - ▁هانا - ▁أوه - ▁أذيكا - ▁وإ - ▁عليهم - ▁حالة - جت - قضي - ▁لق - ▁ونصف - سعة - عطيه - عاو - خانة - ▁مخ - ▁شبيك - بيعة - ▁أهوك - يني - ▁تعد - ▁خال - ▁قريب - ▁راك - ▁قالت - ▁لتو - ▁أكثر - اعة - ▁يظهرلي - ▁ماشية - سمعني - ▁نسيت - ▁ينج - ▁الحمدلل - هدي - ▁وشن - ▁تطي - ▁هنا - ▁نسمع - ▁إنتوما - ▁نحكيلك - ▁قاعد - ▁اسمعني - خرين - إ - ماعة - ▁بالر - ▁دا - ▁عمر - ▁نشري - ▁قهوة - ▁تبارك - ▁صب - ▁مشات - غر - ▁شريت - ▁عامل - ▁زوج - ثنين - ▁برب - ريق - ▁نكم - ▁لم - بيب - ▁مياة - ▁مالل - ▁قعد - ▁سخون - قس - ▁وحده - ▁اسمع - ▁خمسة - ▁غالي - ▁الأو - رلي - ▁العظيم - ▁ترو - تهم - كري - ▁نجيب - ▁جملة - قول - ▁قلتلي - ▁إيجا - ▁يقعد - ▁إيام - ▁يعطيك - ▁نخل - ▁دب - يمة - رهبة - ▁نهز - ▁محم - ▁بين - غار - ▁نحنا - ▁بون - ▁الغ - ▁شهر - ▁بار - رقة - ▁نطي - ئ - ترو - ▁ملا - ▁الكرهبة - ▁باه - ▁عالإخ - ▁عباد - ▁بلاصة - ▁مشى - بيع - ▁نفس - ▁عملنا - ▁واح - ▁أحلاه - ▁بحذاك - ▁لأ - ▁دخ - باب - ▁ودر - ▁غالب - ▁ناكل - ▁مثلا - ء - ▁راقد - ▁تفر - ▁الوقت - ▁تاخذ - حذا - نتر - ▁نبدا - ▁حال - ▁مريم - الم - ▁جمعة - رجول - ▁معايا - ▁تخرج - ▁باس - ▁ساعات - ▁عندهم - ▁نتفر - مسة - ▁الجمعة - بعين - ▁أكاهو - ▁ميش - مراة - ▁خذا - ▁ظ - ▁سيدي - ▁معاي - ▁شبيه - ▁حكا - ▁سف - ▁بعضنا - ▁بالض - ▁ليلة - ▁زعما - ▁الحق - مضان - ▁صعيب - ▁قالتلك - ً - ملة - ▁بق - عرف - لاطة - ▁خرج - ▁أخت - ▁تقوللي - ▁معانا - ▁صغير - ▁إسمه - ▁بعض - ▁العام - ▁علينا - ▁يتع - ▁فاش - ▁شع - ▁معاهم - ▁يسالش - ▁لهنا - ▁سمعت - ▁البار - ▁نتصو - ▁الاخ - ▁وكان - وبة - دمة - ▁كون - ▁مبعد - ▁تسمع - ▁بعيد - ▁تاكل - ▁نلقا - لامة - لاثة - ▁ذ - ▁تحس - ▁الواح - ▁لدار - ▁فاتت - ▁تاو - ▁أحوالك - ▁عاملين - ▁كبيرة - عجب - ▁بنت - ▁بيدي - ▁حكيت - ▁تحط - ▁مسكينة - ▁هاذوكم - ▁نزيد - لاث - ▁عشرة - ▁عيني - ▁تعب - ▁ياكل - ▁وزيد - ▁طول - ▁حمدلله - ▁وقتاه - ▁معناه - ▁وآش - ▁ووه - ▁وواحد - ▁نشوفوا - ▁عيد - ▁بصراحة - ▁بحذانا - ▁قاعدين - ▁راجل - ▁وحدي - ▁وعشرين - ▁لين - ▁خايب - ▁قالتله - ▁تهز - عيد - ▁كبير - ▁يعرف - ▁عارف - ▁الفلوس - ▁زايد - ▁خدمت - ▁هاذوما - ▁سلاطة - ▁فارغة - ▁ساعتين - ▁تبد - ▁راو - ▁مائة - ▁بعضهم - ▁ظاهرلي - ▁الفازة - كتب - ▁القهوة - سبوك - ▁زاد - ▁ضرب - حكيلي - ▁فوق - ▁عاود - ▁راي - ▁ومبعد - ▁حوايج - ▁دخلت - ▁يقوللك - ▁زيد - ▁زلت - لفزة - ▁وقال - ▁يهب - ▁يلزمني - ▁الحمد - ▁أذي - طبيعت - ▁دورة - ▁عالأقل - ▁آذاك - ▁وبال - ▁الجاي - عطيني - ▁ياخذ - ▁احكيلي - ▁نهبط - ▁رقدت - بلاصة - ▁عزيز - ▁صغار - ▁أقسم - ▁جيب - ▁وصلت - ▁أحوال - ▁جيست - ▁جماعة - سئل - ▁خوذ - ▁يهز - ▁الأخرى - ▁آلاف - 
▁إسمع - ▁الحقيقة - ▁ناقص - ▁حاط - ▁موجود - عباد - ▁آذيك - ▁خارج - ▁الخير - ▁البنات - بقى - ▁طرف - ▁سينون - ▁ماذاب - ▁البحر - ▁نرقد - مدلله - ▁إيجى - ▁خالتي - ▁فازة - ▁بريك - ▁شريبتك - ▁تطلع - ؤ - ▁المشكلة - ▁طري - ▁مادام - ▁طلبت - ▁يلعب - ▁نعاود - ▁وحدك - ▁ظاهر - ٱ - ژ - ٍ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.6a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
tau/tavbert-he
tau
2022-02-08T16:38:50Z
60
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "language model", "he", "dataset:oscar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: he tags: - roberta - language model datasets: - oscar --- # TavBERT base model A Hebrew BERT-style masked language model operating over characters, pre-trained by masking spans of characters, similarly to SpanBERT (Joshi et al., 2020). ### How to use ```python import numpy as np import torch from transformers import AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("tau/tavbert-he") tokenizer = AutoTokenizer.from_pretrained("tau/tavbert-he") def mask_sentence(sent, span_len=5): start_pos = np.random.randint(0, len(sent) - span_len) masked_sent = sent[:start_pos] + '[MASK]' * span_len + sent[start_pos + span_len:] print("Masked sentence:", masked_sent) output = model(**tokenizer.encode_plus(masked_sent, return_tensors='pt'))['logits'][0][1:-1] preds = [int(x) for x in torch.argmax(torch.softmax(output, axis=1), axis=1)[start_pos:start_pos + span_len]] pred_sent = sent[:start_pos] + ''.join(tokenizer.convert_ids_to_tokens(preds)) + sent[start_pos + span_len:] print("Model's prediction:", pred_sent) ``` ## Training data OSCAR (Ortiz, 2019) Hebrew section (10 GB text, 20 million sentences).
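The snippet above defines `mask_sentence` but never calls it; a hypothetical invocation with an arbitrary Hebrew sentence would look like this.

```python
# Hypothetical call of the mask_sentence helper defined above (any Hebrew sentence longer than span_len works)
mask_sentence("הילד הלך לבית הספר הבוקר", span_len=5)
```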
espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
espnet
2022-02-08T16:35:06Z
2
1
espnet
[ "espnet", "audio", "automatic-speech-recognition", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: noinfo datasets: - iwslt22_dialect license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug` This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 77fce65312877a132bbae01917ad26b74f6e2e14 pip install -e . cd egs2/iwslt22_dialect/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Feb 2 05:32:30 EST 2022` - python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1` - Git hash: `99581e0f5af3ad68851d556645e7292771436df9` - Commit date: `Sat Jan 29 11:32:38 2022 -0500` ## asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|27370|54.7|39.5|5.8|8.8|54.2|87.9| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|145852|84.1|7.1|8.8|11.5|27.4|87.9| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.acc.ave/test1|4204|64424|63.8|22.8|13.4|12.2|48.3|87.9| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 55101 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 80 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 25000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe1000_sp/train/speech_shape - exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe1000_sp/valid/speech_shape - exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - 
- /scratch/iwslt22asrdump/raw/train_sp/wav.scp - speech - kaldi_ark - - /scratch/iwslt22asrdump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - /scratch/iwslt22asrdump/raw/dev/wav.scp - speech - kaldi_ark - - /scratch/iwslt22asrdump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - ّ - ي - ا - ِ - ل - َ - و - ه - ة - م - ر - ك - ▁ما - ُ - ب - ش - د - ت - ▁في - َّ - ▁ن - ▁ي - ▁ت - ن - ▁لا - ح - ▁ه - س - وا - ▁م - ف - ▁إي - ع - ▁ب - ها - ط - ى - ق - ▁الل - ▁أ - ج - ▁والل - ▁و - ▁إيه - ▁ا - ▁يا - ز - ▁تو - ▁بش - ص - ▁أه - خ - ات - ▁إنت - ▁أنا - نا - ▁شن - ▁ق - ▁ش - ▁ك - يت - ين - ▁ف - ار - ▁قال - ▁باهي - ▁ع - ▁من - ▁ل - ▁مش - ▁كان - ▁حت - ▁ول - هم - ▁ر - ان - ▁س - ض - ني - ▁بال - ▁على - ▁متاع - ▁كي - ▁ال - ▁ح - ▁كل - ▁آنا - ▁الم - ▁خ - ▁الس - ▁وال - ون - ور - ▁أم - ▁هك - ▁آش - ▁الد - ▁عاد - ▁ج - ▁معناها - ▁مع - اش - ▁الص - ▁نهار - ▁لل - لها - ▁تي - ▁رب - ▁خاطر - ▁أكهو - غ - ▁شي - الل - ام - تها - ▁ون - ▁آك - ▁فهمت - وم - ▁موش - مشي - ▁ص - ▁اليوم - ▁مر - ست - ▁الب - ▁لاباس - تلي - ▁الكل - ▁عال - ذ - ▁فم - ▁الك - ▁حاجة - ▁شوي - اكا - ▁ياخي - ▁هاني - ▁صح - اس - ▁آه - ▁برشة - ▁الن - ▁وت - ▁الج - لك - ▁راهو - سم - ▁الح - مت - ▁الت - ▁بعد - اج - عد - ▁انشا - وش - لت - ▁وين - ث - ▁ولا - ▁باش - ▁فيها - نت - ▁إ - ▁الأ - ▁الف - ▁إم - ▁واحد - ▁ألو - ▁عندي - ▁أك - ▁خل - ▁وي - ▁تعمل - أ - ▁ريت - ▁وأ - ▁تعرف - بت - ▁الع - ▁مشيت - ▁وه - ▁حاصيلو - ▁بالل - ▁نعمل - ▁غ - ▁تجي - ▁يجي - ▁كيفاش - ▁عملت - ظ - اك - ▁هاو - ▁اش - ▁قد - ▁نق - ▁د - ▁زادا - ▁فيه - رة - ▁بر - ▁الش - ▁ز - ▁كيما - ▁الا - ند - عم - ▁نح - ▁بنتي - ▁نمشي - ▁عليك - ▁نعرفش - ▁كهو - ▁وم - ▁ط - تي - ▁خير - ▁آ - مش - ▁عليه - له - حت - ▁إيا - ▁أحنا - ▁تع - الا - عب - ▁ديما - ▁تت - ▁جو - ▁مالا - ▁أو - ▁قلتلك - ▁معنتها - لنا - ▁شكون - ▁تحب - بر - ▁الر - ▁وا - ▁الق - اء - ▁عل - ▁البارح - ▁وخ - ▁سافا - ▁هوما - ▁ولدي - ▁ - ▁نعرف - يف - رت - ▁وب - ▁روح - ▁علاش - ▁هاذاك - ▁رو - وس - ▁جا - ▁كيف - طر - ▁غادي - يكا - عمل - ▁نحب - ▁عندك - ▁وما - ▁فر - اني - ▁قلتله - ▁الط - فر - ▁دار - ▁عليها - ▁يعمل - ▁نت - ▁تح - باح - ▁ماهو - ▁وكل - ▁وع - قت - ▁فهمتك - عر - ▁وس - ▁تر - ▁سي - يلة - ▁قلت - ▁رمضان - صل - ▁آما - ▁الواحد - ▁بيه - ▁ثلاثة - ▁فهمتني - ▁ها - بط - ▁مازال - قل - ▁بالك - ▁معناتها - ▁ور - ▁قلتلها - ▁يس - رب - ▁ام - ▁وبعد - ▁الث - ▁وإنت - ▁بحذا - ▁لازم - ْ - ▁بن - قرا - سك - ▁يت - خل - ▁فه - عت - ▁هاك - ▁تق - ▁قبل - ▁وك - ▁نقول - ▁الز - حم - ▁عادش - حكي - وها - بة - نس - طل - ▁علاه - ذا - ▁سا - ▁طل - الي - ▁يق - ▁دو - حوا - حد - ▁نشوف - نة - ▁لي - ▁تك - ▁نا - ▁هاذ - ▁خويا - ▁المر - ▁وينك - ▁البر - ▁أتو - ينا - ▁حل - ولي - ▁ثم - ▁عم - ▁آي - ▁قر - از - ▁وح - كش - بعة - ▁كيفاه - ▁نع - ▁الحمدلله - ▁ياسر - ▁الخ - ▁معاك - ▁معاه - ▁تقول - دة - ▁حكاية - تش - ▁حس - ▁غدوا - ▁بالحق - روا - وز - ▁تخ - ▁العيد - رجع - ▁بالي - ▁جات - ▁وج - حة - ▁وش - ▁آخر - ▁طا - ▁مت - لقا - تك - ▁مس - ▁راني - كون - ▁صاحب - ▁هاكا - ▁قول - ▁عر - ▁عنده - ▁يلزم - ▁هاذا - ▁يخ - ▁وقتاش - ▁وقت - بع - ▁العش - ▁هاذي - هاش - ينة - ▁هاذاكا - عطي - ▁تنج - ▁باهية - نيا - فت - ▁يحب - ▁تف - ▁أهلا - وف - ▁غدوة - ▁بيك - ▁بد - عن - ▁در - ▁ننج - هار - ▁الحكاية - مون - وق - ▁نورمال - ▁عندها - خر - ▁بو - ▁حب - ▁آكا - ▁وف - ▁هاذيكا - ▁ديجا - ▁وق - ▁طي - لتل - بعث - ▁تص - رك - ▁مانيش - ▁العادة - ▁شوف - ضر - ▁يمشي - ▁نعملوا - ▁عرفت - ▁زال - ▁متع - ▁عمل - ▁بيها - ▁نحكي - اع - ▁نج - معة - ▁والكل - عناها - ▁يعي - ▁نجي - ستن - ▁هاذيك - ▁عام - ▁فلوس - قة - تين - ▁بالقدا - لهم - 
▁تخدم - ▁ٱ - ▁شيء - ▁راهي - ▁جاب - ولاد - ابل - ▁ماك - عة - ▁نمشيوا - وني - شري - بار - انس - ▁وقتها - ▁جديد - ▁يز - ▁كر - ▁حاسيلو - ▁شق - ▁اه - ▁سايي - ▁انشالل - رج - مني - ▁بلا - ▁صحيح - ▁غير - ▁يخدم - مان - وكا - ▁عند - ▁قاعدة - ▁تس - ربة - ▁راس - ▁حط - ▁نكل - تني - ▁الو - سيون - ▁عندنا - ▁لو - ▁ست - صف - ▁ض - ▁كامل - ▁نخدم - ▁يبدا - ▁دونك - ▁أمور - رات - ▁تونس - بدا - ▁تحكي - ▁سو - ▁جاي - ▁وحدة - ▁ساعة - حنا - ▁بكري - ▁إل - ▁وبر - ▁كم - ▁تبدا - ارة - ادي - رق - لوا - ▁يمكن - ▁خاط - ▁وص - جين - ▁هاذاي - ▁هز - قد - ▁قل - ▁وكهو - ▁نص - ▁دي - لقى - ▁وأنا - سين - ▁يح - ▁ماشي - ▁شو - ▁خذيت - امات - ▁كنت - خرج - ▁لقيت - رتاح - كس - ▁حاجات - ▁مريق - ▁مل - ليفون - اوا - ▁شفت - ▁عاملة - ▁تن - ▁والا - سأل - ▁حد - ▁قاللك - ▁العباد - ▁عالاخ - ▁وآك - ▁ماني - ▁ناخذ - ▁حم - ▁الإ - ▁ماضي - ▁ث - الة - ▁أخرى - رين - ▁تشوف - ▁نخرج - ▁أربعة - ▁ألف - نيش - ▁هاي - آ - ▁فيك - رشة - ولة - فلة - ▁بابا - ▁أما - ▁روحي - ▁فيهم - ▁رج - ▁ليك - ونس - يرة - ▁وأكهو - ندي - ▁صار - شك - ▁نرو - ▁آكهو - ▁تش - ▁غاديكا - ▁معاها - ▁لب - ▁أذاكا - ▁آني - ▁يوم - عملوا - ▁نقعد - دوا - ▁عد - سمع - متني - ▁الخدمة - ▁مازلت - ▁قعدت - ايا - ▁برك - قعد - ▁خرجت - ضح - ▁قالل - ▁يقول - ▁وفي - ▁حق - ختي - ▁يعني - خدم - ▁جيت - ▁نرمال - طف - ▁عجب - ▁تقعد - ▁مشينا - اية - ▁خدمة - لدي - روف - ▁الفطر - ▁مشكل - ▁سل - ▁وآنا - الط - ▁بالس - ▁هانا - ▁أوه - ▁أذيكا - ▁وإ - ▁عليهم - ▁حالة - جت - قضي - ▁لق - ▁ونصف - سعة - عطيه - عاو - خانة - ▁مخ - ▁شبيك - بيعة - ▁أهوك - يني - ▁تعد - ▁خال - ▁قريب - ▁راك - ▁قالت - ▁لتو - ▁أكثر - اعة - ▁يظهرلي - ▁ماشية - سمعني - ▁نسيت - ▁ينج - ▁الحمدلل - هدي - ▁وشن - ▁تطي - ▁هنا - ▁نسمع - ▁إنتوما - ▁نحكيلك - ▁قاعد - ▁اسمعني - خرين - إ - ماعة - ▁بالر - ▁دا - ▁عمر - ▁نشري - ▁قهوة - ▁تبارك - ▁صب - ▁مشات - غر - ▁شريت - ▁عامل - ▁زوج - ثنين - ▁برب - ريق - ▁نكم - ▁لم - بيب - ▁مياة - ▁مالل - ▁قعد - ▁سخون - قس - ▁وحده - ▁اسمع - ▁خمسة - ▁غالي - ▁الأو - رلي - ▁العظيم - ▁ترو - تهم - كري - ▁نجيب - ▁جملة - قول - ▁قلتلي - ▁إيجا - ▁يقعد - ▁إيام - ▁يعطيك - ▁نخل - ▁دب - يمة - رهبة - ▁نهز - ▁محم - ▁بين - غار - ▁نحنا - ▁بون - ▁الغ - ▁شهر - ▁بار - رقة - ▁نطي - ئ - ترو - ▁ملا - ▁الكرهبة - ▁باه - ▁عالإخ - ▁عباد - ▁بلاصة - ▁مشى - بيع - ▁نفس - ▁عملنا - ▁واح - ▁أحلاه - ▁بحذاك - ▁لأ - ▁دخ - باب - ▁ودر - ▁غالب - ▁ناكل - ▁مثلا - ء - ▁راقد - ▁تفر - ▁الوقت - ▁تاخذ - حذا - نتر - ▁نبدا - ▁حال - ▁مريم - الم - ▁جمعة - رجول - ▁معايا - ▁تخرج - ▁باس - ▁ساعات - ▁عندهم - ▁نتفر - مسة - ▁الجمعة - بعين - ▁أكاهو - ▁ميش - مراة - ▁خذا - ▁ظ - ▁سيدي - ▁معاي - ▁شبيه - ▁حكا - ▁سف - ▁بعضنا - ▁بالض - ▁ليلة - ▁زعما - ▁الحق - مضان - ▁صعيب - ▁قالتلك - ً - ملة - ▁بق - عرف - لاطة - ▁خرج - ▁أخت - ▁تقوللي - ▁معانا - ▁صغير - ▁إسمه - ▁بعض - ▁العام - ▁علينا - ▁يتع - ▁فاش - ▁شع - ▁معاهم - ▁يسالش - ▁لهنا - ▁سمعت - ▁البار - ▁نتصو - ▁الاخ - ▁وكان - وبة - دمة - ▁كون - ▁مبعد - ▁تسمع - ▁بعيد - ▁تاكل - ▁نلقا - لامة - لاثة - ▁ذ - ▁تحس - ▁الواح - ▁لدار - ▁فاتت - ▁تاو - ▁أحوالك - ▁عاملين - ▁كبيرة - عجب - ▁بنت - ▁بيدي - ▁حكيت - ▁تحط - ▁مسكينة - ▁هاذوكم - ▁نزيد - لاث - ▁عشرة - ▁عيني - ▁تعب - ▁ياكل - ▁وزيد - ▁طول - ▁حمدلله - ▁وقتاه - ▁معناه - ▁وآش - ▁ووه - ▁وواحد - ▁نشوفوا - ▁عيد - ▁بصراحة - ▁بحذانا - ▁قاعدين - ▁راجل - ▁وحدي - ▁وعشرين - ▁لين - ▁خايب - ▁قالتله - ▁تهز - عيد - ▁كبير - ▁يعرف - ▁عارف - ▁الفلوس - ▁زايد - ▁خدمت - ▁هاذوما - ▁سلاطة - ▁فارغة - ▁ساعتين - ▁تبد - ▁راو - ▁مائة - ▁بعضهم - ▁ظاهرلي - ▁الفازة - كتب - ▁القهوة - سبوك - ▁زاد - ▁ضرب - حكيلي - ▁فوق - ▁عاود - ▁راي - ▁ومبعد - ▁حوايج - ▁دخلت - ▁يقوللك - ▁زيد - ▁زلت - لفزة - ▁وقال - ▁يهب - ▁يلزمني - ▁الحمد - ▁أذي - طبيعت - ▁دورة - ▁عالأقل - ▁آذاك - ▁وبال - ▁الجاي - عطيني - ▁ياخذ - ▁احكيلي - ▁نهبط - ▁رقدت - بلاصة - ▁عزيز - ▁صغار - ▁أقسم - 
▁جيب - ▁وصلت - ▁أحوال - ▁جيست - ▁جماعة - سئل - ▁خوذ - ▁يهز - ▁الأخرى - ▁آلاف - ▁إسمع - ▁الحقيقة - ▁ناقص - ▁حاط - ▁موجود - عباد - ▁آذيك - ▁خارج - ▁الخير - ▁البنات - بقى - ▁طرف - ▁سينون - ▁ماذاب - ▁البحر - ▁نرقد - مدلله - ▁إيجى - ▁خالتي - ▁فازة - ▁بريك - ▁شريبتك - ▁تطلع - ؤ - ▁المشكلة - ▁طري - ▁مادام - ▁طلبت - ▁يلعب - ▁نعاود - ▁وحدك - ▁ظاهر - ٱ - ژ - ٍ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 hop_length: 256 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 1024 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.6a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
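### Python inference (sketch)

Beyond the shell recipe above, the model can also be tried from Python. This is a minimal, unofficial sketch: it assumes the `espnet_model_zoo` and `soundfile` packages are installed, that `Speech2Text.from_pretrained` accepts this Hugging Face tag, and that `example_16k.wav` is a placeholder audio file.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Assumed interface: download the model and build the inference wrapper from the HF tag
speech2text = Speech2Text.from_pretrained(
    "espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug"
)

# Placeholder file; audio should be 16 kHz mono to match the training frontend (fs: 16k)
speech, rate = soundfile.read("example_16k.wav")

# Each n-best hypothesis is (text, tokens, token_ids, hypothesis); keep the best text
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```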
jgammack/MTL-distilbert-base-uncased-squad
jgammack
2022-02-08T15:58:41Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: MTL-distilbert-base-uncased-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MTL-distilbert-base-uncased-squad This model is a fine-tuned version of [jgammack/MTL-distilbert-base-uncased](https://huggingface.co/jgammack/MTL-distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
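## Example usage

The card does not include an inference example; a minimal sketch with the Transformers `pipeline` API could look like this (the question and context are placeholder strings, not taken from SQuAD):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="jgammack/MTL-distilbert-base-uncased-squad")

# Placeholder question/context pair
result = qa(
    question="What does an extractive QA model return?",
    context="Extractive question answering models return a span of the given context as the answer.",
)
print(result["answer"], result["score"])
```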
jhonparra18/wav2vec2-xls-r-300m-spanish-large-noLM
jhonparra18
2022-02-08T13:27:14Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "es", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - "es" - "robust-speech-event" datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-large This model is a fine-tuned version of [tomascufaro/xls-r-es-test](https://huggingface.co/tomascufaro/xls-r-es-test) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.1431 - Wer: 0.1197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.1769 | 0.15 | 400 | 0.1795 | 0.1698 | | 0.217 | 0.3 | 800 | 0.2000 | 0.1945 | | 0.2372 | 0.45 | 1200 | 0.1985 | 0.1859 | | 0.2351 | 0.6 | 1600 | 0.1901 | 0.1772 | | 0.2269 | 0.75 | 2000 | 0.1968 | 0.1783 | | 0.2284 | 0.9 | 2400 | 0.1873 | 0.1771 | | 0.2014 | 1.06 | 2800 | 0.1840 | 0.1696 | | 0.1988 | 1.21 | 3200 | 0.1904 | 0.1730 | | 0.1919 | 1.36 | 3600 | 0.1827 | 0.1630 | | 0.1919 | 1.51 | 4000 | 0.1788 | 0.1629 | | 0.1817 | 1.66 | 4400 | 0.1755 | 0.1558 | | 0.1812 | 1.81 | 4800 | 0.1795 | 0.1638 | | 0.1808 | 1.96 | 5200 | 0.1762 | 0.1603 | | 0.1625 | 2.11 | 5600 | 0.1721 | 0.1557 | | 0.1477 | 2.26 | 6000 | 0.1735 | 0.1504 | | 0.1508 | 2.41 | 6400 | 0.1708 | 0.1478 | | 0.157 | 2.56 | 6800 | 0.1644 | 0.1466 | | 0.1491 | 2.71 | 7200 | 0.1638 | 0.1445 | | 0.1458 | 2.86 | 7600 | 0.1582 | 0.1426 | | 0.1387 | 3.02 | 8000 | 0.1607 | 0.1376 | | 0.1269 | 3.17 | 8400 | 0.1559 | 0.1364 | | 0.1172 | 3.32 | 8800 | 0.1521 | 0.1335 | | 0.1203 | 3.47 | 9200 | 0.1534 | 0.1330 | | 0.1177 | 3.62 | 9600 | 0.1485 | 0.1304 | | 0.1167 | 3.77 | 10000 | 0.1498 | 0.1302 | | 0.1194 | 3.92 | 10400 | 0.1463 | 0.1287 | | 0.1053 | 4.07 | 10800 | 0.1483 | 0.1282 | | 0.098 | 4.22 | 11200 | 0.1498 | 0.1267 | | 0.0958 | 4.37 | 11600 | 0.1461 | 0.1233 | | 0.0946 | 4.52 | 12000 | 0.1444 | 0.1218 | | 0.094 | 4.67 | 12400 | 0.1434 | 0.1206 | | 0.0932 | 4.82 | 12800 | 0.1424 | 0.1206 | | 0.0912 | 4.98 | 13200 | 0.1431 | 0.1197 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
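## Example usage

A minimal transcription sketch with the Transformers `pipeline` API (no external language model, as the model name suggests); `audio.wav` is a placeholder for a Spanish recording, and decoding audio files requires ffmpeg:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jhonparra18/wav2vec2-xls-r-300m-spanish-large-noLM",
)

# Placeholder path; the pipeline resamples the audio to the model's 16 kHz input rate
print(asr("audio.wav")["text"])
```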
MarioPenguin/roberta-model-english
MarioPenguin
2022-02-08T13:11:33Z
5
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: roberta-model-english results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-model-english This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1140 - Train Accuracy: 0.9596 - Validation Loss: 0.2166 - Validation Accuracy: 0.9301 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2922 | 0.8804 | 0.2054 | 0.9162 | 0 | | 0.1710 | 0.9352 | 0.1879 | 0.9353 | 1 | | 0.1140 | 0.9596 | 0.2166 | 0.9301 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Tokenizers 0.11.0
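## Example usage

Since the checkpoint is a TensorFlow model, a minimal inference sketch could look like the following; the input sentence is a placeholder, and the card does not document the label mapping, so predictions may come back as generic `LABEL_0`/`LABEL_1` names:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "MarioPenguin/roberta-model-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder input sentence
inputs = tokenizer("This product exceeded my expectations.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)

pred_id = int(tf.argmax(probs, axis=-1)[0])
print(model.config.id2label[pred_id], probs.numpy())
```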
tesemnikov-av/rubert-ner-toxicity
tesemnikov-av
2022-02-08T12:52:32Z
80
2
transformers
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
widget:
- text: "Ну ты и придурок!!"
---

NER toxicity model: fine-tuned from [cointegrated/rubert-tiny-toxicity](https://huggingface.co/cointegrated/rubert-tiny-toxicity) on data from [toxic_dataset_ner](https://huggingface.co/datasets/tesemnikov-av/toxic_dataset_ner).

Language: RU

```python
!pip install transformers > /dev/null

from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    pipeline
)

model = AutoModelForTokenClassification.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
tokenizer = AutoTokenizer.from_pretrained('tesemnikov-av/rubert-ner-toxicity')

pipe = pipeline(model=model, tokenizer=tokenizer, task='ner', aggregation_strategy='average')

text = "Они охриневшие там все придурки!!"
print(text)
print(pipe(text))
```
imfiba1991/gpt2-wikitext2
imfiba1991
2022-02-08T10:53:31Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.2082 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 13 | 8.1476 | | No log | 2.0 | 26 | 7.4435 | | No log | 3.0 | 39 | 7.2082 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
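## Example usage

A minimal text-generation sketch with the `pipeline` API; the prompt and sampling settings are arbitrary placeholders:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="imfiba1991/gpt2-wikitext2")

# Arbitrary prompt; sampling settings are illustrative only
outputs = generator("The history of", max_length=40, do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```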
edugp/wav2vec2-xls-r-300m-cv8-es
edugp
2022-02-08T08:57:24Z
14
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-xls-r-300m-cv8-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-cv8-es This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2115 - eval_wer: 0.1931 - eval_runtime: 859.964 - eval_samples_per_second: 17.954 - eval_steps_per_second: 2.244 - epoch: 6.97 - step: 50000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
jkang/espnet2_mini_librispeech_diar
jkang
2022-02-08T08:33:52Z
3
0
espnet
[ "espnet", "audio", "diarization", "dataset:mini_librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - espnet - audio - diarization language: noinfo datasets: - mini_librispeech license: cc-by-4.0 --- ## ESPnet2 DIAR model ### `jkang/espnet2_mini_librispeech_diar` This model was trained by jaekookang using mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout e08a89e0a43db7fc12bec835c62a000ad10bd417 pip install -e . cd egs2/mini_librispeech/diar1 ./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_mini_librispeech_diar ``` <!-- Generated by scripts/utils/show_diar_result.sh --> # RESULTS ## Environments - date: `Tue Feb 8 16:41:16 KST 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.10.1` - Git hash: `e08a89e0a43db7fc12bec835c62a000ad10bd417` - Commit date: `Sun Feb 6 18:54:20 2022 -0500` ## diar_train_diar_raw ### DER dev_clean_2_ns2_beta2_500 |threshold_median_collar|DER| |---|---| |result_th0.3_med11_collar0.0|31.39| |result_th0.3_med1_collar0.0|31.78| |result_th0.4_med11_collar0.0|29.99| |result_th0.4_med1_collar0.0|30.61| |result_th0.5_med11_collar0.0|29.28| |result_th0.5_med1_collar0.0|30.19| |result_th0.6_med11_collar0.0|29.50| |result_th0.6_med1_collar0.0|30.66| |result_th0.7_med11_collar0.0|30.90| |result_th0.7_med1_collar0.0|32.38| ## DIAR config <details><summary>expand</summary> ``` config: conf/train_diar.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/diar_train_diar_raw ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 3 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/diar_stats_8k/train/speech_shape - exp/diar_stats_8k/train/spk_labels_shape valid_shape_file: - exp/diar_stats_8k/valid/speech_shape - exp/diar_stats_8k/valid/spk_labels_shape batch_type: folded valid_batch_type: null fold_length: - 80000 - 800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 200000 chunk_shift_ratio: 0.5 num_cache_chunks: 64 train_data_path_and_name_and_type: - - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp - speech - sound - - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm - spk_labels - rttm valid_data_path_and_name_and_type: - - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp - speech - sound - - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm - spk_labels - rttm allow_variable_data_keys: false max_cache_size: 0.0 
max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.01 scheduler: noamlr scheduler_conf: warmup_steps: 1000 num_spk: 2 init: xavier_uniform input_size: null model_conf: attractor_weight: 1.0 use_preprocessor: true frontend: default frontend_conf: fs: 8k hop_length: 128 specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/diar_stats_8k/train/feats_stats.npz encoder: transformer encoder_conf: input_layer: linear num_blocks: 2 linear_units: 512 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 decoder: linear decoder_conf: {} label_aggregator: label_aggregator label_aggregator_conf: {} attractor: null attractor_conf: {} required: - output_dir version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
woohyun/sdssd
woohyun
2022-02-08T08:03:32Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
kloon99/KML_Eula_generate_v2
kloon99
2022-02-08T07:06:09Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: trained_model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_model2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.9.1 - Datasets 1.14.0 - Tokenizers 0.10.3
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final
LegolasTheElf
2022-02-08T04:27:18Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "Openslr Multilingual", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "hi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - Openslr Multilingual - mozilla-foundation/common_voice_7_0 - generated_from_trainer model-index: - name: Wav2Vec2_xls_r_300m_hi_final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wav2Vec2_xls_r_300m_hi_final This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) dataset and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3035 - Wer: 0.3137 - Cer: 0.0972 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 | | 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 | | 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 | | 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 | | 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 | | 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 | | 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 | | 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 | | 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 | | 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 | | 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 | | 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
softcatala/wav2vec2-large-100k-voxpopuli-catala
softcatala
2022-02-08T02:20:32Z
4
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "speech-to-text", "ca", "dataset:common_voice", "dataset:parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ca datasets: - common_voice - parlament_parla metrics: - wer tags: - audio - automatic-speech-recognition - speech - speech-to-text license: apache-2.0 model-index: - name: Catalan VoxPopuli Wav2Vec2 Large results: - task: name: Speech Recognition type: automatic-speech-recognition datasets: - name: Common Voice ca type: common_voice args: ca - name: ParlamentParla url: https://www.openslr.org/59/ metrics: - name: Test WER type: wer value: 5.98 - name: Google Crowsourced Corpus WER type: wer value: 12.14 - name: Audiobook “La llegenda de Sant Jordi” WER type: wer value: 12.02 --- # Wav2Vec2-Large-100k-VoxPopuli-Català Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets. **Attention:** The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla dataset and can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model. WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) which was not seen by the model during training/evaluation. You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala) When using this model, make sure that your speech input is sampled at 16kHz. ## Results Word error rate was evaluated on the following datasets unseen by the model: | Dataset | WER | | ------- | --- | | [Test split CV+ParlamentParla]((https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv)) | 5.98% | | [Google Crowsourced Corpus](https://www.openslr.org/69/) | 12.14% | | Audiobook “La llegenda de Sant Jordi” | 12.02% | ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ca", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala") model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ```
jgammack/distilbert-base-uncased-squad
jgammack
2022-02-08T01:36:38Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
ccoreilly/wav2vec2-large-100k-voxpopuli-catala
ccoreilly
2022-02-08T00:59:52Z
14
2
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "speech-to-text", "ca", "dataset:common_voice", "dataset:parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ca datasets: - common_voice - parlament_parla metrics: - wer tags: - audio - automatic-speech-recognition - speech - speech-to-text license: apache-2.0 model-index: - name: Catalan VoxPopuli Wav2Vec2 Large results: - task: name: Speech Recognition type: automatic-speech-recognition datasets: - name: Common Voice ca type: common_voice args: ca - name: ParlamentParla url: https://www.openslr.org/59/ metrics: - name: Test WER type: wer value: 5.98 - name: Google Crowsourced Corpus WER type: wer value: 12.14 - name: Audiobook “La llegenda de Sant Jordi” WER type: wer value: 12.02 --- # Wav2Vec2-Large-100k-VoxPopuli-Català **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL:** https://huggingface.co/softcatala/wav2vec2-large-100k-voxpopuli-catala Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets. **Attention:** The split train/dev/test used does not fully map with the CommonVoice 6.1 dataset. A custom split was used combining both the CommonVoice and ParlamentParla dataset and can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER as 1144 audio files of that dataset were used in training/evaluation of this model. WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) which was not seen by the model during training/evaluation. You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala) When using this model, make sure that your speech input is sampled at 16kHz. ## Results Word error rate was evaluated on the following datasets unseen by the model: | Dataset | WER | | ------- | --- | | [Test split CV+ParlamentParla]((https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv)) | 5.98% | | [Google Crowsourced Corpus](https://www.openslr.org/69/) | 12.14% | | Audiobook “La llegenda de Sant Jordi” | 12.02% | ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ca", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala") model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ```
jgammack/MTL-distilbert-base-uncased
jgammack
2022-02-07T23:23:37Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: MTL-distilbert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MTL-distilbert-base-uncased This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 7 - eval_batch_size: 7 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5593 | 1.0 | 99 | 2.3163 | | 2.4346 | 2.0 | 198 | 2.2918 | | 2.3377 | 3.0 | 297 | 2.2345 | | 2.2953 | 4.0 | 396 | 2.1463 | | 2.2296 | 5.0 | 495 | 2.1761 | | 2.2235 | 6.0 | 594 | 2.0721 | | 2.1878 | 7.0 | 693 | 2.1460 | | 2.1569 | 8.0 | 792 | 2.0856 | | 2.1455 | 9.0 | 891 | 2.1039 | | 2.1391 | 10.0 | 990 | 2.1112 | | 2.1056 | 11.0 | 1089 | 2.0694 | | 2.1076 | 12.0 | 1188 | 2.0501 | | 2.0919 | 13.0 | 1287 | 2.0484 | | 2.0669 | 14.0 | 1386 | 2.0342 | | 2.0595 | 15.0 | 1485 | 2.0802 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
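## Example usage

A minimal fill-mask sketch; the sentence is a placeholder, and DistilBERT-based models use the `[MASK]` token:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jgammack/MTL-distilbert-base-uncased")

# Placeholder sentence containing the model's [MASK] token
for pred in fill("The test was performed on a [MASK] sample."):
    print(pred["token_str"], round(pred["score"], 4))
```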
jgammack/MTL-bert-base-uncased
jgammack
2022-02-07T23:09:21Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: MTL-bert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MTL-bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9283 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 7 - eval_batch_size: 7 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4409 | 1.0 | 99 | 2.1982 | | 2.2905 | 2.0 | 198 | 2.1643 | | 2.1974 | 3.0 | 297 | 2.1168 | | 2.15 | 4.0 | 396 | 2.0023 | | 2.0823 | 5.0 | 495 | 2.0199 | | 2.0752 | 6.0 | 594 | 1.9061 | | 2.0408 | 7.0 | 693 | 1.9770 | | 1.9984 | 8.0 | 792 | 1.9322 | | 1.9933 | 9.0 | 891 | 1.9167 | | 1.9806 | 10.0 | 990 | 1.9652 | | 1.9436 | 11.0 | 1089 | 1.9308 | | 1.9491 | 12.0 | 1188 | 1.9064 | | 1.929 | 13.0 | 1287 | 1.8831 | | 1.9096 | 14.0 | 1386 | 1.8927 | | 1.9032 | 15.0 | 1485 | 1.9117 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
microsoft/cocolm-large
microsoft
2022-02-07T22:49:54Z
9
7
transformers
[ "transformers", "pytorch", "arxiv:2102.08473", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining

This model card describes the COCO-LM model (**large++** version) proposed in [this paper](https://arxiv.org/abs/2102.08473). The official GitHub repository can be found [here](https://github.com/microsoft/COCO-LM).

# Citation

If you find this model useful for your research, please cite the following paper:

```
@inproceedings{meng2021coco,
  title={{COCO-LM}: Correcting and contrasting text sequences for language model pretraining},
  author={Meng, Yu and Xiong, Chenyan and Bajaj, Payal and Tiwary, Saurabh and Bennett, Paul and Han, Jiawei and Song, Xia},
  booktitle={NeurIPS},
  year={2021}
}
```
jgammack/SAE-roberta-base
jgammack
2022-02-07T22:14:50Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: SAE-roberta-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SAE-roberta-base This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 7 - eval_batch_size: 7 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9847 | 1.0 | 79 | 1.8238 | | 1.9142 | 2.0 | 158 | 1.8299 | | 1.8613 | 3.0 | 237 | 1.7636 | | 1.8384 | 4.0 | 316 | 1.8048 | | 1.8193 | 5.0 | 395 | 1.7734 | | 1.7985 | 6.0 | 474 | 1.7271 | | 1.7758 | 7.0 | 553 | 1.8525 | | 1.7611 | 8.0 | 632 | 1.7716 | | 1.7599 | 9.0 | 711 | 1.7913 | | 1.7118 | 10.0 | 790 | 1.7578 | | 1.7003 | 11.0 | 869 | 1.7598 | | 1.7072 | 12.0 | 948 | 1.6942 | | 1.6511 | 13.0 | 1027 | 1.6955 | | 1.6802 | 14.0 | 1106 | 1.7837 | | 1.7048 | 15.0 | 1185 | 1.7377 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
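## Example usage

A minimal fill-mask sketch; RoBERTa-based models use the `<mask>` token, and the example sentence is a placeholder:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jgammack/SAE-roberta-base")

# Placeholder sentence containing RoBERTa's <mask> token
for pred in fill("The panel was attached to the <mask>."):
    print(pred["token_str"], round(pred["score"], 4))
```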
robot-test/old-clip-tokenizer
robot-test
2022-02-07T21:44:19Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
Old version of the CLIP fast tokenizer; see [this issue](https://github.com/huggingface/transformers/issues/12648) on the transformers repository for background.
nateraw/codecarbon-text-classification
nateraw
2022-02-07T20:30:43Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: codecarbon-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codecarbon-text-classification This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
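## Example usage

A minimal PyTorch inference sketch; the review text is a placeholder, and because the card reports no label mapping, the prediction may come back as a generic `LABEL_0`/`LABEL_1` name:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "nateraw/codecarbon-text-classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder movie review
inputs = tokenizer("One of the best films I have seen in years.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```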
jiobiala24/wav2vec2-base-checkpoint-11.1
jiobiala24
2022-02-07T19:33:31Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-11.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-11.1 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-10](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-10) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.0173 - Wer: 0.3350 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2788 | 1.52 | 1000 | 0.5776 | 0.3410 | | 0.2277 | 3.04 | 2000 | 0.6148 | 0.3465 | | 0.1772 | 4.56 | 3000 | 0.6497 | 0.3497 | | 0.1528 | 6.08 | 4000 | 0.6786 | 0.3430 | | 0.1285 | 7.6 | 5000 | 0.6779 | 0.3489 | | 0.1104 | 9.12 | 6000 | 0.7417 | 0.3528 | | 0.0965 | 10.64 | 7000 | 0.7956 | 0.3477 | | 0.0914 | 12.16 | 8000 | 0.7994 | 0.3570 | | 0.082 | 13.68 | 9000 | 0.8690 | 0.3510 | | 0.0788 | 15.2 | 10000 | 0.8569 | 0.3526 | | 0.0727 | 16.72 | 11000 | 0.8885 | 0.3440 | | 0.0656 | 18.24 | 12000 | 0.9586 | 0.3476 | | 0.0608 | 19.76 | 13000 | 0.9317 | 0.3495 | | 0.0588 | 21.28 | 14000 | 0.9809 | 0.3449 | | 0.0547 | 22.8 | 15000 | 0.9552 | 0.3421 | | 0.0519 | 24.32 | 16000 | 0.9782 | 0.3380 | | 0.0474 | 25.84 | 17000 | 0.9923 | 0.3386 | | 0.046 | 27.36 | 18000 | 0.9984 | 0.3347 | | 0.045 | 28.88 | 19000 | 1.0173 | 0.3350 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
elozano/tweet_emotion_eval
elozano
2022-02-07T18:04:47Z
5
4
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "dataset:tweet_eval", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit datasets: - tweet_eval language: en widget: - text: "Stop sharing which songs did you listen to during this year on Spotify, NOBODY CARES" example_title: "Anger" - text: "I love that joke HAHAHAHAHA" example_title: "Joy" - text: "Despite I've not studied a lot for this exam, I think I will pass 😜" example_title: "Optimism" - text: "My dog died this morning..." example_title: "Sadness" ---
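A minimal usage sketch with the Transformers `pipeline` API, reusing one of the widget examples above:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="elozano/tweet_emotion_eval")

# One of the widget examples from this card
print(classifier("I love that joke HAHAHAHAHA"))
```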
elozano/tweet_sentiment_eval
elozano
2022-02-07T17:50:59Z
11
4
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "dataset:tweet_eval", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit datasets: - tweet_eval language: en widget: - text: "I love summer!" example_title: "Positive" - text: "Does anyone want to play?" example_title: "Neutral" - text: "This movie is just awful! 😫" example_title: "Negative" ---
sukhendrasingh/finetuning-sentiment-model-3000-samples
sukhendrasingh
2022-02-07T17:20:03Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.879746835443038 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3323 - Accuracy: 0.8733 - F1: 0.8797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
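## Example usage

A minimal sentiment-analysis sketch; the reviews are placeholders, and the label names come from the model's own config (they may be generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="sukhendrasingh/finetuning-sentiment-model-3000-samples",
)

# Placeholder reviews
print(sentiment([
    "A surprisingly moving film with great performances.",
    "The plot was a mess and the acting was even worse.",
]))
```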
shahukareem/wav2vec2-xls-r-300m-dv
shahukareem
2022-02-07T15:55:39Z
10
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-xls-r-300m-dv results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: dv metrics: - name: Test WER type: wer value: 24.72 - name: Test CER type: cer value: 4.17 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-dv This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2206 - Wer: 0.2451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.9623 | 0.66 | 400 | 3.3010 | 1.0 | | 3.2238 | 1.33 | 800 | 2.8950 | 1.0 | | 1.1988 | 1.99 | 1200 | 0.5277 | 0.6681 | | 0.6084 | 2.65 | 1600 | 0.4113 | 0.5831 | | 0.4973 | 3.32 | 2000 | 0.3538 | 0.5333 | | 0.4476 | 3.98 | 2400 | 0.3201 | 0.5081 | | 0.3999 | 4.64 | 2800 | 0.2917 | 0.4759 | | 0.3779 | 5.31 | 3200 | 0.2788 | 0.4672 | | 0.3457 | 5.97 | 3600 | 0.2667 | 0.4557 | | 0.3222 | 6.63 | 4000 | 0.2549 | 0.4452 | | 0.3129 | 7.3 | 4400 | 0.2491 | 0.4266 | | 0.2927 | 7.96 | 4800 | 0.2488 | 0.4246 | | 0.2786 | 8.62 | 5200 | 0.2429 | 0.4145 | | 0.2756 | 9.29 | 5600 | 0.2453 | 0.4150 | | 0.258 | 9.95 | 6000 | 0.2282 | 0.4109 | | 0.251 | 10.61 | 6400 | 0.2307 | 0.4012 | | 0.2397 | 11.28 | 6800 | 0.2275 | 0.4 | | 0.2312 | 11.94 | 7200 | 0.2244 | 0.3889 | | 0.2323 | 12.6 | 7600 | 0.2247 | 0.3983 | | 0.216 | 13.27 | 8000 | 0.2301 | 0.3863 | | 0.2169 | 13.93 | 8400 | 0.2224 | 0.3782 | | 0.2089 | 14.59 | 8800 | 0.2276 | 0.3771 | | 0.2042 | 15.26 | 9200 | 0.2286 | 0.3784 | | 0.1953 | 15.92 | 9600 | 0.2235 | 0.3822 | | 0.1876 | 16.58 | 10000 | 0.2267 | 0.3674 | | 0.186 | 17.25 | 10400 | 0.2295 | 0.3676 | | 0.1847 | 17.91 | 10800 | 0.2244 | 0.3608 | | 0.178 | 18.57 | 11200 | 0.2229 | 0.3526 | | 0.1751 | 19.24 | 11600 | 0.2219 | 0.3483 | | 0.17 | 19.9 | 12000 | 0.2241 | 0.3503 | | 0.1641 | 20.56 | 12400 | 0.2187 | 0.3403 | | 0.1629 | 21.23 | 12800 | 0.2135 | 0.3433 | | 0.1568 | 21.89 | 13200 | 0.2117 | 0.3358 | | 0.1585 | 22.55 | 13600 | 0.2151 | 0.3332 | | 0.1512 | 23.22 | 14000 | 0.2097 | 0.3344 | | 0.1427 | 23.88 | 14400 | 0.2119 | 0.3255 | | 0.1458 | 24.54 | 14800 | 0.2209 | 0.3213 | | 0.1413 | 25.21 | 15200 | 0.2228 | 0.3202 | | 0.1363 | 25.87 | 15600 | 0.2071 | 0.3207 | | 0.1302 | 26.53 | 16000 | 0.2094 | 0.3138 | | 0.1283 | 27.2 | 16400 | 0.2193 | 0.3132 | | 0.1278 | 27.86 | 16800 | 0.2197 | 0.3103 | | 0.1271 | 28.52 | 17200 | 0.2133 | 0.3009 | | 0.1243 | 29.19 | 17600 | 0.2202 | 0.3026 | | 0.1182 | 29.85 | 18000 | 0.2092 | 0.3046 | | 0.1171 | 30.51 | 18400 | 0.2142 | 
0.2947 | | 0.1156 | 31.18 | 18800 | 0.2219 | 0.2926 | | 0.1129 | 31.84 | 19200 | 0.2194 | 0.2848 | | 0.1099 | 32.5 | 19600 | 0.2218 | 0.2869 | | 0.1045 | 33.17 | 20000 | 0.2183 | 0.2803 | | 0.1057 | 33.83 | 20400 | 0.2242 | 0.2896 | | 0.1056 | 34.49 | 20800 | 0.2189 | 0.2838 | | 0.1039 | 35.16 | 21200 | 0.2256 | 0.2819 | | 0.1007 | 35.82 | 21600 | 0.2196 | 0.2743 | | 0.1012 | 36.48 | 22000 | 0.2218 | 0.2752 | | 0.098 | 37.15 | 22400 | 0.2181 | 0.2721 | | 0.0963 | 37.81 | 22800 | 0.2162 | 0.2691 | | 0.0943 | 38.47 | 23200 | 0.2148 | 0.2686 | | 0.0959 | 39.14 | 23600 | 0.2194 | 0.2658 | | 0.0904 | 39.8 | 24000 | 0.2170 | 0.2641 | | 0.0898 | 40.46 | 24400 | 0.2129 | 0.2585 | | 0.0886 | 41.13 | 24800 | 0.2199 | 0.2606 | | 0.088 | 41.79 | 25200 | 0.2155 | 0.2595 | | 0.0863 | 42.45 | 25600 | 0.2169 | 0.2564 | | 0.0876 | 43.12 | 26000 | 0.2178 | 0.2529 | | 0.0827 | 43.78 | 26400 | 0.2171 | 0.2559 | | 0.087 | 44.44 | 26800 | 0.2192 | 0.2530 | | 0.0818 | 45.11 | 27200 | 0.2180 | 0.2496 | | 0.0811 | 45.77 | 27600 | 0.2207 | 0.2502 | | 0.0828 | 46.43 | 28000 | 0.2186 | 0.2502 | | 0.0796 | 47.1 | 28400 | 0.2203 | 0.2468 | | 0.0804 | 47.76 | 28800 | 0.2201 | 0.2453 | | 0.0791 | 48.42 | 29200 | 0.2204 | 0.2477 | | 0.0777 | 49.09 | 29600 | 0.2197 | 0.2466 | | 0.0775 | 49.75 | 30000 | 0.2206 | 0.2451 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
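The card above reports WER/CER but contains no inference snippet. Below is a minimal, hedged usage sketch (not part of the original card): it assumes the checkpoint is a standard Wav2Vec2 CTC model compatible with the Transformers ASR pipeline, that input audio is 16 kHz mono, and that the placeholder repo id is replaced with the actual Hub path of this model.

```python
# Hedged sketch, not from the original card. The repo id below is a placeholder;
# replace it with the actual namespace/name of this checkpoint on the Hub.
from transformers import pipeline

MODEL_ID = "<namespace>/wav2vec2-xls-r-300m-dv"  # placeholder, not confirmed by the card

# The fine-tuned XLS-R checkpoint is a CTC model, so the standard ASR pipeline applies.
asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# Expects 16 kHz mono audio; a local file path works directly.
print(asr("sample_dhivehi.wav")["text"])
```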
ahmedrachid/FinancialBERT-Sentiment-Analysis
ahmedrachid
2022-02-07T14:58:57Z
45,019
86
transformers
[ "transformers", "pytorch", "bert", "text-classification", "financial-sentiment-analysis", "sentiment-analysis", "en", "dataset:financial_phrasebank", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - financial-sentiment-analysis - sentiment-analysis datasets: - financial_phrasebank widget: - text: Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales. - text: Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000. - text: Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008. --- ### FinancialBERT for Sentiment Analysis [*FinancialBERT*](https://huggingface.co/ahmedrachid/FinancialBERT) is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from this model without needing the significant computational resources required to train it. The model was fine-tuned for the sentiment analysis task on the _Financial PhraseBank_ dataset. Experiments show that this model outperforms the general BERT and other financial domain-specific models. More details on `FinancialBERT`'s pre-training process can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining ### Training data The FinancialBERT model was fine-tuned on [Financial PhraseBank](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10), a dataset consisting of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive). ### Fine-tuning hyper-parameters - learning_rate = 2e-5 - batch_size = 32 - max_seq_length = 512 - num_train_epochs = 5 ### Evaluation metrics The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set. | sentiment | precision | recall | f1-score | support | | ------------- |:-------------:|:-------------:|:-------------:| -----:| | negative | 0.96 | 0.97 | 0.97 | 58 | | neutral | 0.98 | 0.99 | 0.98 | 279 | | positive | 0.98 | 0.97 | 0.97 | 148 | | macro avg | 0.97 | 0.98 | 0.98 | 485 | | weighted avg | 0.98 | 0.98 | 0.98 | 485 | ### How to use The model can be used with the Transformers pipeline for sentiment analysis. ```python from transformers import BertTokenizer, BertForSequenceClassification from transformers import pipeline model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis",num_labels=3) tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis") nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) sentences = ["Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.", "Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.", "Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.", ] results = nlp(sentences) print(results) [{'label': 'positive', 'score': 0.9998133778572083}, {'label': 'neutral', 'score': 0.9997822642326355}, {'label': 'negative', 'score': 0.9877365231513977}] ``` > Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
huggingtweets/r2devops_io
huggingtweets
2022-02-07T14:42:27Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/r2devops_io/1644244942715/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1467763268559253504/kLy9pmCe_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">R2Devops</div> <div style="text-align: center; font-size: 14px;">@r2devops_io</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from R2Devops. | Data | R2Devops | | --- | --- | | Tweets downloaded | 277 | | Retweets | 57 | | Short tweets | 4 | | Tweets kept | 216 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mg7zs5q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @r2devops_io's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28hfbi0v) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28hfbi0v/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/r2devops_io') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lgris/bp500-base100k_voxpopuli
lgris
2022-02-07T11:53:19Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "arxiv:2012.03411", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge - tedx - sid metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch license: apache-2.0 --- # bp500-base100k_voxpopuli: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset This is a the demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets: - [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus. - [Common Voice 7.0](https://commonvoice.mozilla.org/pt): is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages. In this project, volunteers donate and validate speech using the [oficial site](https://commonvoice.mozilla.org/pt). - [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control. - [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers. - [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech. - [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old with fields such as place of birth, age, gender, education, and occupation; - [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz. These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except Common Voice dev/test sets, that were used for validation/test respectively. We also made test sets for all the gathered datasets. | Dataset | Train | Valid | Test | |--------------------------------|-------:|------:|------:| | CETUC | 94.0h | -- | 5.4h | | Common Voice | 37.8h | 8.9h | 9.5h | | LaPS BM | 0.8h | -- | 0.1h | | MLS | 161.0h | -- | 3.7h | | Multilingual TEDx (Portuguese) | 148.9h | -- | 1.8h | | SID | 7.2h | -- | 1.0h | | VoxForge | 3.9h | -- | 0.1h | | Total | 453.6h | 8.9h | 21.6h | The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/10iESR5AQxuxF5F7w3wLbpc_9YMsYbY9H/view?usp=sharing). 
#### Summary | | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG | |----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------| | bp\_500-base100k_voxpopuli (demonstration below) | 0.142 | 0.201 | 0.052 | 0.224 | 0.102 | 0.317 | 0.048 | 0.155 | | bp\_500-base100k_voxpopuli + 4-gram (demonstration below) | 0.099 | 0.149 | 0.047 | 0.192 | 0.115 | 0.371 | 0.127 | 0.157 | #### Transcription examples | Text | Transcription | |------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| |qual o instagram dele|**qualo** **está** **gramedele**| |o capitão foi expulso do exército porque era doido|o **capitãl** foi **exposo** do exército porque era doido| |também por que não|também **porque** não| |não existe tempo como o presente|não existe tempo como *o* presente| |eu pulei para salvar rachel|eu pulei para salvar **haquel**| |augusto cezar passos marinho|augusto **cesa** **passoesmarinho**| ## Demonstration ```python MODEL_NAME = "lgris/bp500-base100k_voxpopuli" ``` ### Imports and dependencies ```python %%capture !pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html !pip install datasets !pip install jiwer !pip install transformers !pip install soundfile !pip install pyctcdecode !pip install https://github.com/kpu/kenlm/archive/master.zip ``` ```python import jiwer import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) from pyctcdecode import build_ctcdecoder import torch import re import sys ``` ### Helpers ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = 16_000 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") batch["target"] = batch["sentence"] return batch ``` ```python def calc_metrics(truths, hypos): wers = [] mers = [] wils = [] for t, h in zip(truths, hypos): try: wers.append(jiwer.wer(t, h)) mers.append(jiwer.mer(t, h)) wils.append(jiwer.wil(t, h)) except: # Empty string? 
pass wer = sum(wers)/len(wers) mer = sum(mers)/len(mers) wil = sum(wils)/len(wils) return wer, mer, wil ``` ```python def load_data(dataset): data_files = {'test': f'{dataset}/test.csv'} dataset = load_dataset('csv', data_files=data_files)["test"] return dataset.map(map_to_array) ``` ### Model ```python class STT: def __init__(self, model_name, device='cuda' if torch.cuda.is_available() else 'cpu', lm=None): self.model_name = model_name self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) self.processor = Wav2Vec2Processor.from_pretrained(model_name) self.vocab_dict = self.processor.tokenizer.get_vocab() self.sorted_dict = { k.lower(): v for k, v in sorted(self.vocab_dict.items(), key=lambda item: item[1]) } self.device = device self.lm = lm if self.lm: self.lm_decoder = build_ctcdecoder( list(self.sorted_dict.keys()), self.lm ) def batch_predict(self, batch): features = self.processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(self.device) with torch.no_grad(): logits = self.model(input_values).logits if self.lm: logits = logits.cpu().numpy() batch["predicted"] = [] for sample_logits in logits: batch["predicted"].append(self.lm_decoder.decode(sample_logits)) else: pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = self.processor.batch_decode(pred_ids) return batch ``` ### Download datasets ```python %%capture !gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI !mkdir bp_dataset !unzip bp_dataset -d bp_dataset/ ``` ```python %cd bp_dataset ``` /content/bp_dataset ### Tests ```python stt = STT(MODEL_NAME) ``` #### CETUC ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.1419179499917191 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.20079950312040154 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.052780934343434324 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.22413887199364113 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.1019041538671034 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.31711268778273327 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.04826433982683982 ### Tests with LM ```python !rm -rf ~/.cache !gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa') # !gdown --id 
1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp # stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa') ``` ### Cetuc ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.099518615112877 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.1488912889506362 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.047080176767676764 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.19220291966887196 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.11535498771650306 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.3707890073539895 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.12682088744588746
MarioPenguin/bert-model-english1
MarioPenguin
2022-02-07T11:31:41Z
6
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: bert-model-english1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-model-english1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0274 - Train Accuracy: 0.9914 - Validation Loss: 0.3493 - Validation Accuracy: 0.9303 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0366 | 0.9885 | 0.3013 | 0.9299 | 0 | | 0.0261 | 0.9912 | 0.3445 | 0.9351 | 1 | | 0.0274 | 0.9914 | 0.3493 | 0.9303 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
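Because the card does not document the dataset or the label meanings, the following is only a hedged loading sketch (not part of the original card): it uses the TensorFlow classes matching the `tf`/Keras tags and reports the prediction through whatever `id2label` mapping the checkpoint config happens to contain.

```python
# Hedged sketch: the task labels are undocumented in the card, so the prediction is
# reported via the id2label mapping stored in the checkpoint config, if any.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "MarioPenguin/bert-model-english1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to classify.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred, model.config.id2label.get(pred, str(pred)))
```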
victen/distilbert-base-uncased-finetuned-emotion
victen
2022-02-07T10:42:22Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9235 - name: F1 type: f1 value: 0.9236951195245434 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2265 - Accuracy: 0.9235 - F1: 0.9237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8243 | 1.0 | 250 | 0.3199 | 0.906 | 0.9025 | | 0.2484 | 2.0 | 500 | 0.2265 | 0.9235 | 0.9237 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
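A minimal usage sketch (not from the original card), assuming the checkpoint follows the usual emotion fine-tuning recipe; note that the returned label string depends on the `id2label` mapping saved with the checkpoint and may appear as `LABEL_0`..`LABEL_5` if the emotion names were not stored.

```python
# Hedged sketch: label names depend on the id2label mapping saved with the checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="victen/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out!"))
# e.g. [{'label': 'joy', 'score': 0.98}]  (illustrative output, not measured)
```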
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc_inference_only
deepdoctection
2022-02-07T10:33:04Z
0
0
null
[ "Tensorflow", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - Tensorflow license: apache-2.0 datasets: - Pubtabnet --- # Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained to detect rows and columns in tables. As row and column bounding boxes are not a priori part of the annotations, they are calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## This is an inference model only To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc). ## How this model was trained To recreate the training run on the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn pubtabnet = DatasetRegistry.get_dataset("pubtabnet") pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM":"row_col"}) pubtabnet.dataflow.categories.filter_categories(categories=["ROW","COLUMN"]) path_config_yaml=os.path.join(get_configs_dir_path(),"tp/rows/conf_frcnn_rows.yaml") path_weights = "" dataset_train = pubtabnet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"] build_train_config=["max_datapoints=500000","rows_and_cols=True"] dataset_val = pubtabnet build_val_config = ["max_datapoints=2000","rows_and_cols=True"] coco_metric = MetricRegistry.get_metric("coco") coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]]) train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ```
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc
deepdoctection
2022-02-07T10:24:03Z
0
0
null
[ "Tensorflow", "dataset:Pubtabnet", "arxiv:1911.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - Tensorflow license: apache-2.0 datasets: - Pubtabnet --- # Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN). Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683). The model has been trained to detect rows and columns in tables. As row and column bounding boxes are not a priori part of the annotations, they are calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML. The code has been adapted so that it can be used in a **deep**doctection pipeline. ## How this model can be used This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial. ## How this model was trained To recreate the training run on the **deep**doctection framework, run: ```python >>> import os >>> from deep_doctection.datasets import DatasetRegistry >>> from deep_doctection.eval import MetricRegistry >>> from deep_doctection.utils import get_configs_dir_path >>> from deep_doctection.train import train_faster_rcnn pubtabnet = DatasetRegistry.get_dataset("pubtabnet") pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM":"row_col"}) pubtabnet.dataflow.categories.filter_categories(categories=["ROW","COLUMN"]) path_config_yaml=os.path.join(get_configs_dir_path(),"tp/rows/conf_frcnn_rows.yaml") path_weights = "" dataset_train = pubtabnet config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"] build_train_config=["max_datapoints=500000","rows_and_cols=True"] dataset_val = pubtabnet build_val_config = ["max_datapoints=2000","rows_and_cols=True"] coco_metric = MetricRegistry.get_metric("coco") coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]]) train_faster_rcnn(path_config_yaml=path_config_yaml, dataset_train=dataset_train, path_weights=path_weights, config_overwrite=config_overwrite, log_dir="/path/to/dir", build_train_config=build_train_config, dataset_val=dataset_val, build_val_config=build_val_config, metric=coco_metric, pipeline_component_name="ImageLayoutService" ) ``` ## How to fine-tune this model To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
willemjan/spa
willemjan
2022-02-07T09:21:31Z
6
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:cc-by-nc-sa-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: cc-by-nc-sa-3.0 ---
willemjan/indo2
willemjan
2022-02-07T09:17:20Z
7
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:cc-by-nc-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: cc-by-nc-3.0 ---
Llamacha/QuBERTa
Llamacha
2022-02-07T09:14:51Z
52
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "Llamacha", "qu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - qu tags: - Llamacha --- # QuBERTa QuBERTa is a RoBERTa-based language model for Quechua. Our language model was pre-trained on 5M tokens of Southern Quechua (Collao and Chanka). The model uses a byte-level BPE tokenizer with a vocabulary of 52000 subword tokens. ## Usage Once the weights and the tokenizer have been downloaded, they need to be placed together in a single folder, in this case `QuBERTa `. ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="./QuBERTa", tokenizer="./QuBERTa" ) ``` Below is a test run, which is still being improved. ```python fill_mask("allinllachu <mask> allinlla huk wasipita.") ``` [{'score': 0.23992203176021576, 'sequence': 'allinllachu nisqaqa allinlla huk wasipita.', 'token': 334, 'token_str': ' nisqaqa'}, {'score': 0.061005301773548126, 'sequence': 'allinllachu, allinlla huk wasipita.', 'token': 16, 'token_str': ','}, {'score': 0.028720015659928322, 'sequence': "allinllachu' allinlla huk wasipita.", 'token': 11, 'token_str': "'"}, {'score': 0.012927944771945477, 'sequence': 'allinllachu kay allinlla huk wasipita.', 'token': 377, 'token_str': ' kay'}, {'score': 0.01230092253535986, 'sequence': 'allinllachu. allinlla huk wasipita.', 'token': 18, 'token_str': '.'}]
willemjan/nl2
willemjan
2022-02-07T08:52:58Z
4
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:cc-by-nc-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: cc-by-nc-3.0 ---
willemjan/nl1
willemjan
2022-02-07T08:44:23Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:cc-by-nc-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: cc-by-nc-3.0 ---
aidj/distilbert-base-uncased-finetuned-ner
aidj
2022-02-07T07:19:58Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9260322366968425 - name: Recall type: recall value: 0.9383599955252265 - name: F1 type: f1 value: 0.9321553592265377 - name: Accuracy type: accuracy value: 0.9834146186474335 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0607 - Precision: 0.9260 - Recall: 0.9384 - F1: 0.9322 - Accuracy: 0.9834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2545 | 1.0 | 878 | 0.0711 | 0.9096 | 0.9214 | 0.9154 | 0.9800 | | 0.0555 | 2.0 | 1756 | 0.0593 | 0.9185 | 0.9356 | 0.9270 | 0.9827 | | 0.0297 | 3.0 | 2634 | 0.0607 | 0.9260 | 0.9384 | 0.9322 | 0.9834 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
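A short usage sketch (not part of the original card), assuming the checkpoint keeps the standard CoNLL-2003 label set; `aggregation_strategy="simple"` merges word pieces into whole entity spans.

```python
# Hedged sketch: assumes the usual CoNLL-2003 entity labels (PER, ORG, LOC, MISC).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="aidj/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```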
bespin-global/klue-sentence-roberta-base-kornlu
bespin-global
2022-02-07T07:14:21Z
8
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "dataset:kor_nlu", "license:cc-by-nc-4.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - kor_nlu license: cc-by-nc-4.0 --- # bespin-global/klue-sentence-roberta-kornlu This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('bespin-global/klue-sentence-roberta-kornlu') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('bespin-global/klue-sentence-roberta-kornlu') model = AutoModel.from_pretrained('bespin-global/klue-sentence-roberta-kornlu') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 180 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 72, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> [Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/)
leeyujin/distilbert-base-uncased-finetuned-cola
leeyujin
2022-02-07T07:08:04Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5062132225102124 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5608 - Matthews Correlation: 0.5062 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 134 | 0.4851 | 0.4301 | | No log | 2.0 | 268 | 0.4619 | 0.4891 | | No log | 3.0 | 402 | 0.5447 | 0.4965 | | 0.3828 | 4.0 | 536 | 0.5608 | 0.5062 | | 0.3828 | 5.0 | 670 | 0.5702 | 0.5029 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.8.1+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
gagan3012/ViTGPT2_vizwiz
gagan3012
2022-02-07T05:54:26Z
31
1
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "image-to-text", "endpoints_compatible", "region:us" ]
image-to-text
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer - image-to-text model-index: - name: ViTGPT2_vizwiz results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViTGPT2_vizwiz This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.1207 | 0.07 | 1000 | 0.0906 | | 0.0916 | 0.14 | 2000 | 0.0861 | | 0.0879 | 0.2 | 3000 | 0.0840 | | 0.0856 | 0.27 | 4000 | 0.0822 | | 0.0834 | 0.34 | 5000 | 0.0806 | | 0.0817 | 0.41 | 6000 | 0.0795 | | 0.0812 | 0.48 | 7000 | 0.0785 | | 0.0808 | 0.55 | 8000 | 0.0779 | | 0.0796 | 0.61 | 9000 | 0.0771 | | 0.0786 | 0.68 | 10000 | 0.0767 | | 0.0774 | 0.75 | 11000 | 0.0762 | | 0.0772 | 0.82 | 12000 | 0.0758 | | 0.0756 | 0.89 | 13000 | 0.0754 | | 0.0759 | 0.96 | 14000 | 0.0750 | | 0.0756 | 1.02 | 15000 | 0.0748 | | 0.0726 | 1.09 | 16000 | 0.0745 | | 0.0727 | 1.16 | 17000 | 0.0745 | | 0.0715 | 1.23 | 18000 | 0.0742 | | 0.0726 | 1.3 | 19000 | 0.0741 | | 0.072 | 1.37 | 20000 | 0.0738 | | 0.0723 | 1.43 | 21000 | 0.0735 | | 0.0715 | 1.5 | 22000 | 0.0734 | | 0.0724 | 1.57 | 23000 | 0.0732 | | 0.0723 | 1.64 | 24000 | 0.0730 | | 0.0718 | 1.71 | 25000 | 0.0729 | | 0.07 | 1.78 | 26000 | 0.0728 | | 0.0702 | 1.84 | 27000 | 0.0726 | | 0.0704 | 1.91 | 28000 | 0.0725 | | 0.0703 | 1.98 | 29000 | 0.0725 | | 0.0686 | 2.05 | 30000 | 0.0726 | | 0.0687 | 2.12 | 31000 | 0.0726 | | 0.0688 | 2.19 | 32000 | 0.0724 | | 0.0677 | 2.25 | 33000 | 0.0724 | | 0.0665 | 2.32 | 34000 | 0.0725 | | 0.0684 | 2.39 | 35000 | 0.0723 | | 0.0678 | 2.46 | 36000 | 0.0722 | | 0.0686 | 2.53 | 37000 | 0.0722 | | 0.067 | 2.59 | 38000 | 0.0721 | | 0.0669 | 2.66 | 39000 | 0.0721 | | 0.0673 | 2.73 | 40000 | 0.0721 | | 0.0673 | 2.8 | 41000 | 0.0720 | | 0.0662 | 2.87 | 42000 | 0.0720 | | 0.0681 | 2.94 | 43000 | 0.0719 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
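The card identifies this as an image-to-text (vision encoder-decoder) checkpoint but gives no usage code. A hedged captioning sketch follows; it assumes the repo also stores the tokenizer and ViT feature-extractor files, which the card does not confirm, and the generation settings are arbitrary.

```python
# Hedged sketch: assumes tokenizer and feature-extractor files are stored in the repo.
import torch
from PIL import Image
from transformers import AutoTokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel

model_id = "gagan3012/ViTGPT2_vizwiz"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)

image = Image.open("photo.jpg").convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    output_ids = model.generate(pixel_values, max_length=32, num_beams=4)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```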
ghofrani/common6
ghofrani
2022-02-07T02:29:26Z
18
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "fa", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - fa tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: common6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # common6 This model is a fine-tuned version of [common6/checkpoint-3500](https://huggingface.co/common6/checkpoint-3500) on the COMMON_VOICE - FA dataset. It achieves the following results on the evaluation set: - Loss: 0.3706 - Wer: 0.3421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0344 | 10.0 | 500 | 0.4043 | 0.4511 | | 0.9651 | 20.0 | 1000 | 0.3793 | 0.4159 | | 0.9125 | 30.0 | 1500 | 0.3756 | 0.4046 | | 0.8831 | 40.0 | 2000 | 0.3650 | 0.3876 | | 0.8399 | 50.0 | 2500 | 0.3605 | 0.3772 | | 0.819 | 60.0 | 3000 | 0.3622 | 0.3714 | | 0.8029 | 70.0 | 3500 | 0.3561 | 0.3664 | | 0.8104 | 80.0 | 4000 | 0.3595 | 0.3660 | | 0.8118 | 90.0 | 4500 | 0.3460 | 0.3592 | | 0.7831 | 100.0 | 5000 | 0.3566 | 0.3593 | | 0.744 | 110.0 | 5500 | 0.3578 | 0.3535 | | 0.7388 | 120.0 | 6000 | 0.3538 | 0.3520 | | 0.714 | 130.0 | 6500 | 0.3682 | 0.3506 | | 0.7291 | 140.0 | 7000 | 0.3625 | 0.3505 | | 0.697 | 150.0 | 7500 | 0.3619 | 0.3479 | | 0.6811 | 160.0 | 8000 | 0.3631 | 0.3440 | | 0.6841 | 170.0 | 8500 | 0.3672 | 0.3460 | | 0.6616 | 180.0 | 9000 | 0.3677 | 0.3410 | | 0.6471 | 190.0 | 9500 | 0.3707 | 0.3420 | | 0.6759 | 200.0 | 10000 | 0.3706 | 0.3421 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3.dev0 - Tokenizers 0.10.3
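A greedy-decoding inference sketch (not from the original card), assuming the processor (tokenizer plus feature extractor) was saved alongside the model, as the Trainer setup above normally does, and that the input clip is resampled to 16 kHz.

```python
# Hedged sketch: greedy CTC decoding without a language model.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ghofrani/common6"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("persian_clip.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000)[0]  # first channel, 16 kHz

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```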
JIWON/bert-base-finetuned-nli
JIWON
2022-02-07T00:29:00Z
11
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:klue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - klue metrics: - accuracy model-index: - name: bert-base-finetuned-nli results: - task: name: Text Classification type: text-classification dataset: name: klue type: klue args: nli metrics: - name: Accuracy type: accuracy value: 0.085 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-nli This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.6210 - Accuracy: 0.085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 196 | 0.6210 | 0.085 | | No log | 2.0 | 392 | 0.5421 | 0.0643 | | 0.5048 | 3.0 | 588 | 0.5523 | 0.062 | | 0.5048 | 4.0 | 784 | 0.5769 | 0.0533 | | 0.5048 | 5.0 | 980 | 0.5959 | 0.052 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
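A sentence-pair classification sketch (not part of the original card): KLUE-NLI takes a Korean premise/hypothesis pair, and the predicted class is read back through whatever `id2label` mapping the checkpoint stores (the card does not list the label order). The example sentences are placeholders.

```python
# Hedged sketch: label order comes from the checkpoint config, not from the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "JIWON/bert-base-finetuned-nli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "오늘은 비가 많이 내렸다."    # "It rained a lot today." (placeholder)
hypothesis = "오늘 날씨는 맑았다."      # "The weather was clear today." (placeholder)

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = int(torch.argmax(logits, dim=-1))
print(model.config.id2label.get(pred, str(pred)))
```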
BigSalmon/Points2
BigSalmon
2022-02-07T00:27:54Z
13
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Converting Points or Headlines to Paragraphs Example Prompts: ``` ### - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership. ### - with 2,000,000 individual articles on everything - wikipedia is the #8 site on the world wide web - created by anyone with access to a computer - growing at fast rate - proof that collaborative community-based projects are the future Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future. ### - ``` ``` Essay Intro (Sega Centers Classics): unyielding in its insistence on consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. this is a task that not even the most devoted fan could have foreseen. *** Essay Intro (Blizzard Shows Video Games Are An Art): universally adored, video games have come to be revered not only as interactive diversions, but as artworks. a firm believer in this doctrine, blizzard actively works to further the craft of storytelling in their respective titles. *** Essay Intro (What Happened To Linux): chancing upon a linux user is a rare occurrence in the present day. once a mainstay, the brand has come to only be seen in the hands of the most ardent of its followers. ```
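The card documents the prompt format but no generation code. A hedged sketch follows; the newline layout of the prompt is an assumption reconstructed from the flattened examples above, and the sampling settings are arbitrary.

```python
# Hedged sketch: prompt layout reconstructed from the card's examples; sampling is stochastic.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BigSalmon/Points2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "###\n"
    "- declining viewership facing the nba.\n"
    "- does not have to be this way.\n"
    "- in fact, many solutions exist.\n"
    "Text:"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_new_tokens=80,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```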
fractalego/personal-speech-to-text-model
fractalego
2022-02-06T22:32:50Z
52
6
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
# Personal speech-to-text model Speech-to-text (s2t) models often do not understand my accent, so I fine-tuned this one from "facebook/wav2vec2-large-robust-ft-swbd-300h" using about 1000 recordings of my voice. Do not download unless you have exactly my accent.
StevenLimcorn/wav2vec2-xls-r-300m-zh-TW
StevenLimcorn
2022-02-06T21:57:14Z
26
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - zh-TW license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-TW dataset. It achieves the following results on the evaluation set: - Loss: 1.1786 - Wer: 0.8594 - Cer: 0.2964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 64.6189 | 2.51 | 500 | 63.8077 | 1.0 | 1.0 | | 8.0561 | 5.03 | 1000 | 6.8014 | 1.0 | 1.0 | | 6.0427 | 7.54 | 1500 | 6.0745 | 1.0 | 1.0 | | 5.9357 | 10.05 | 2000 | 5.8682 | 1.0 | 1.0 | | 5.0489 | 12.56 | 2500 | 4.4032 | 0.9990 | 0.7750 | | 4.6184 | 15.08 | 3000 | 3.8383 | 0.9983 | 0.6768 | | 4.365 | 17.59 | 3500 | 3.4633 | 0.9959 | 0.6299 | | 4.1026 | 20.1 | 4000 | 3.0732 | 0.9902 | 0.5814 | | 3.8655 | 22.61 | 4500 | 2.7638 | 0.9868 | 0.5465 | | 3.6991 | 25.13 | 5000 | 2.4759 | 0.9811 | 0.5088 | | 3.4894 | 27.64 | 5500 | 2.2937 | 0.9746 | 0.4852 | | 3.3983 | 30.15 | 6000 | 2.1684 | 0.9733 | 0.4674 | | 3.2736 | 32.66 | 6500 | 2.0372 | 0.9659 | 0.4458 | | 3.1884 | 35.18 | 7000 | 1.9267 | 0.9648 | 0.4329 | | 3.1248 | 37.69 | 7500 | 1.8408 | 0.9591 | 0.4217 | | 3.0381 | 40.2 | 8000 | 1.7531 | 0.9503 | 0.4074 | | 2.9515 | 42.71 | 8500 | 1.6880 | 0.9459 | 0.3967 | | 2.8704 | 45.23 | 9000 | 1.6264 | 0.9378 | 0.3884 | | 2.8128 | 47.74 | 9500 | 1.5621 | 0.9341 | 0.3782 | | 2.7386 | 50.25 | 10000 | 1.5011 | 0.9243 | 0.3664 | | 2.6646 | 52.76 | 10500 | 1.4608 | 0.9192 | 0.3575 | | 2.6072 | 55.28 | 11000 | 1.4251 | 0.9148 | 0.3501 | | 2.569 | 57.79 | 11500 | 1.3837 | 0.9060 | 0.3462 | | 2.5091 | 60.3 | 12000 | 1.3589 | 0.9070 | 0.3392 | | 2.4588 | 62.81 | 12500 | 1.3261 | 0.8966 | 0.3284 | | 2.4083 | 65.33 | 13000 | 1.3052 | 0.8982 | 0.3265 | | 2.3787 | 67.84 | 13500 | 1.2997 | 0.8908 | 0.3243 | | 2.3457 | 70.35 | 14000 | 1.2778 | 0.8898 | 0.3187 | | 2.3099 | 72.86 | 14500 | 1.2661 | 0.8830 | 0.3172 | | 2.2559 | 75.38 | 15000 | 1.2475 | 0.8851 | 0.3143 | | 2.2264 | 77.89 | 15500 | 1.2319 | 0.8739 | 0.3085 | | 2.196 | 80.4 | 16000 | 1.2218 | 0.8722 | 0.3049 | | 2.1613 | 82.91 | 16500 | 1.2093 | 0.8719 | 0.3051 | | 2.1455 | 85.43 | 17000 | 1.2055 | 0.8624 | 0.3005 | | 2.1193 | 87.94 | 17500 | 1.1975 | 0.8600 | 0.2982 | | 2.0911 | 90.45 | 18000 | 1.1960 | 0.8648 | 0.3003 | | 2.0884 | 92.96 | 18500 | 1.1871 | 0.8638 | 0.2971 | | 2.0766 | 95.48 | 19000 | 1.1814 | 0.8617 | 0.2967 | | 2.0735 | 97.99 | 19500 | 1.1801 | 0.8621 | 0.2969 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
preetham18/xls-r-hi-300m-8
preetham18
2022-02-06T20:40:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.5258 - Wer: 1.0073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.917 | 16.13 | 500 | 4.8963 | 1.0 | | 3.3585 | 32.25 | 1000 | 3.3069 | 1.0000 | | 1.5873 | 48.38 | 1500 | 0.8274 | 1.0061 | | 1.2654 | 64.51 | 2000 | 0.6250 | 1.0076 | | 1.0917 | 80.64 | 2500 | 0.5460 | 1.0056 | | 1.0001 | 96.76 | 3000 | 0.5304 | 1.0083 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
groar/gpt-neo-1.3B-finetuned-escape
groar
2022-02-06T18:14:00Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: gpt-neo-1.3B-finetuned-escape results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-1.3B-finetuned-escape This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
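## How to use (sketch)

Since the card stops at the training hyperparameters, here is a hedged generation sketch; the prompt and sampling settings are illustrative choices, not anything specified by the author.

```python
from transformers import pipeline

# Illustrative only: sample a continuation from the fine-tuned GPT-Neo 1.3B checkpoint.
generator = pipeline("text-generation", model="groar/gpt-neo-1.3B-finetuned-escape")
output = generator("The door was locked, so", max_length=60, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```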
dark-knight/wav2vec2-base-timit-demo-colab
dark-knight
2022-02-06T16:25:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
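## How to use (sketch)

As a complement to the training recipe above, the snippet below sketches greedy CTC decoding with the lower-level API. It assumes the repository ships the processor files (which the auto-generated card does not confirm) and that the input audio is resampled to 16 kHz; the file path is a placeholder.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "dark-knight/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; wav2vec2-base expects 16 kHz mono input.
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).squeeze().numpy()

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```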
Mahalakshmi/wav2vec2-xls-r-300m-demo-colab
Mahalakshmi
2022-02-06T13:51:42Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-xls-r-300m-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.9475 - eval_wer: 1.0377 - eval_runtime: 70.5646 - eval_samples_per_second: 25.239 - eval_steps_per_second: 3.16 - epoch: 21.05 - step: 2000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 300 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
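### Training arguments (reconstructed sketch)

The hyperparameter list above maps fairly directly onto a `TrainingArguments` object. The sketch below is a reconstruction from the listed values, not the author's actual script: the output directory is a placeholder, and every argument the card does not mention is left at its default.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed in this card; arguments
# the card does not mention (eval/save strategy, data collator, ...) are omitted.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-demo-colab",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=300,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```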
Jeevesh8/feather_berts
Jeevesh8
2022-02-06T04:53:08Z
0
0
null
[ "arxiv:1911.02969", "region:us" ]
null
2022-03-02T23:29:04Z
The first 50 [Feather BERTs](https://arxiv.org/abs/1911.02969), compressed in groups of 10. Clone this repository, decompress the archives, and pass the path of the Feather BERT you want to use to ``.from_pretrained()`` (a loading sketch follows below). For the next 50 Feather BERTs, see [here](https://huggingface.co/Jeevesh8/feather_berts1/).
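The snippet below is a minimal loading sketch for the workflow described above, assuming one archive has already been decompressed; the local directory name is hypothetical, and the generic `AutoModel` class is used because the card does not say which task head the checkpoints carry.

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical path to one decompressed Feather BERT checkpoint from this repository.
checkpoint_dir = "./feather_berts/feather_bert_0"

# If the checkpoint folder does not include tokenizer files, the standard
# "bert-base-uncased" tokenizer can be used instead.
tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)
model = AutoModel.from_pretrained(checkpoint_dir)
```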
Jeevesh8/feather_berts1
Jeevesh8
2022-02-06T04:52:40Z
0
0
null
[ "arxiv:1911.02969", "region:us" ]
null
2022-03-02T23:29:04Z
The second 50 [Feather BERTs](https://arxiv.org/abs/1911.02969), compressed in groups of 10. Clone this repository, decompress the archives, and pass the path of the Feather BERT you want to use to ``.from_pretrained()``. For the first 50 Feather BERTs, see [here](https://huggingface.co/Jeevesh8/feather_berts/).
am-shb/bert-base-multilingual-uncased-finetuned
am-shb
2022-02-06T00:05:59Z
5
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: '57463134' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 57463134 This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 16 - seed: 1337 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.11.2 - Pytorch 1.10.0 - Datasets 1.8.0 - Tokenizers 0.10.3
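## How to use (sketch)

A hedged usage sketch for the fill-mask head; the example sentence is invented and is only there to show the expected input format (uncased BERT checkpoints use the `[MASK]` token).

```python
from transformers import pipeline

# Illustrative only: query the fine-tuned multilingual masked-language model.
fill_mask = pipeline("fill-mask", model="am-shb/bert-base-multilingual-uncased-finetuned")
for prediction in fill_mask("Paris is the capital of [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```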
huggingtweets/bouncemanautumn
huggingtweets
2022-02-05T20:35:09Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/bouncemanautumn/1644093304436/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1466500150759763979/_SP07dAh_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">autumn wants to hold ty’s hand</div> <div style="text-align: center; font-size: 14px;">@bouncemanautumn</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from autumn wants to hold ty’s hand. | Data | autumn wants to hold ty’s hand | | --- | --- | | Tweets downloaded | 3245 | | Retweets | 195 | | Short tweets | 434 | | Tweets kept | 2616 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/16mq5may/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bouncemanautumn's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bouncemanautumn') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sunitha/Trial_3_Results
sunitha
2022-02-05T19:27:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: Trial_3_Results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Trial_3_Results This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
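## How to use (sketch)

A hedged inference sketch for the question-answering head; the question and context below are invented, and the card itself reports no evaluation metrics for this run.

```python
from transformers import pipeline

# Illustrative only: extractive question answering with the fine-tuned BERT reader.
qa = pipeline("question-answering", model="sunitha/Trial_3_Results")
result = qa(
    question="Which dataset was the reader fine-tuned on?",
    context="Trial_3_Results is a BERT-base-cased reader fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```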
keras-io/ctc_asr
keras-io
2022-02-05T17:54:45Z
8
1
tf-keras
[ "tf-keras", "speech recognition", "ctc", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - speech recognition - ctc datasets: - LJSpeech license: cc0-1.0 --- ## Automatic Speech Recognition using CTC model on the 🤗Hub! Full credits go to Mohamed Reda Bouadjenek and Ngoc Dung Huynh. This repository contains the model from [this notebook on Automatic Speech Recognition using CTC](https://keras.io/examples/audio/ctc_asr/).
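To load the model itself from the Hub, something like the following should work, assuming the repository stores the model in a format that `huggingface_hub`'s Keras helper can reload; preparing the audio features and decoding the CTC output are covered in the linked notebook.

```python
from huggingface_hub import from_pretrained_keras

# Load the CTC ASR Keras model straight from the Hub (requires network access).
model = from_pretrained_keras("keras-io/ctc_asr")
model.summary()
```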
transformersbook/xlm-roberta-base-finetuned-panx-fr
transformersbook
2022-02-05T17:07:57Z
8
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8454790823211876 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb). It achieves the following results on the evaluation set: - Loss: 0.2772 - F1: 0.8455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.562 | 1.0 | 191 | 0.3183 | 0.7828 | | 0.2697 | 2.0 | 382 | 0.2706 | 0.8324 | | 0.1735 | 3.0 | 573 | 0.2772 | 0.8455 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
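## How to use (sketch)

A hedged usage sketch for the French NER checkpoint; the example sentence is invented, and `aggregation_strategy="simple"` is just one reasonable way to group word pieces into entity spans.

```python
from transformers import pipeline

# Illustrative only: tag an invented French sentence with the PAN-X.fr model.
ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Jean-Pierre travaille chez Airbus à Toulouse."))
```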
transformersbook/xlm-roberta-base-finetuned-panx-it
transformersbook
2022-02-05T17:07:26Z
8
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8215158924205379 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb). It achieves the following results on the evaluation set: - Loss: 0.2445 - F1: 0.8215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7594 | 1.0 | 70 | 0.3402 | 0.7467 | | 0.2942 | 2.0 | 140 | 0.2555 | 0.7971 | | 0.1814 | 3.0 | 210 | 0.2445 | 0.8215 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
transformersbook/xlm-roberta-base-finetuned-panx-en
transformersbook
2022-02-05T17:07:09Z
17
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.69816564758199 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb). It achieves the following results on the evaluation set: - Loss: 0.3676 - F1: 0.6982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.026 | 1.0 | 50 | 0.5734 | 0.4901 | | 0.4913 | 2.0 | 100 | 0.3870 | 0.6696 | | 0.3734 | 3.0 | 150 | 0.3676 | 0.6982 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
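## Loading the evaluation data (sketch)

For reference, the PAN-X subset this checkpoint was trained and evaluated on can be pulled with 🤗 Datasets; the `PAN-X.en` config name mirrors the metadata above, while the field names used in the last lines are my recollection of the XTREME loader's schema rather than anything stated in this card.

```python
from datasets import load_dataset

# Load the English PAN-X subset of XTREME used to fine-tune and evaluate this model.
panx_en = load_dataset("xtreme", name="PAN-X.en")
print(panx_en)

# Each example carries whitespace-split tokens and integer NER tags.
sample = panx_en["validation"][0]
print(sample["tokens"], sample["ner_tags"])
```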