Dataset schema (column types and value ranges):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (by length) | 5 | 139 |
| author | string (by length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] (date) | 2020-02-15 11:33:14 | 2025-08-02 18:27:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (549 classes) | | |
| tags | list (by length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] (date) | 2022-03-02 23:29:04 | 2025-08-02 18:24:50 |
| card | string (by length) | 11 | 1.01M |
McGill-NLP/bart-qg-nq-checkpoint
McGill-NLP
2022-04-01T17:35:04Z
26
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "arxiv:1910.13461", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-01T16:32:49Z
--- license: cc-by-4.0 --- # BART-base fine-tuned on NaturalQuestions for **Question Generation** [BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation** by treating the long answer as input and the question as output. ## Details of BART The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract: We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance. ## Details of the downstream task (QG) - Dataset 📚 🧐 Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/) | Dataset | Split | # samples | | -------- | ----- | --------- | | NaturalQuestions | train | 97650 | | NaturalQuestions | valid | 10850 | ## Model fine-tuning 🏋️‍ The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py) ## Model in Action 🚀 ```python from transformers import AutoModelForSeq2SeqLM, BartTokenizer # Load the tokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') # Load the model model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint") ``` ## Citation If you want to cite this model you can use this: ```bibtex @inproceedings{kulshreshtha-etal-2021-back, title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval", author = "Kulshreshtha, Devang and Belfer, Robert and Serban, Iulian Vlad and Reddy, Siva", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.566", pages = "7064--7078", abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA).
While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.", } ``` > Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
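Beyond loading the checkpoint, a minimal question-generation call might look like the sketch below; the passage text and the generation settings (beam search, maximum length) are illustrative assumptions, not values taken from the original card or training script.

```python
# Hedged sketch: generate a question from a passage with the fine-tuned BART checkpoint.
from transformers import AutoModelForSeq2SeqLM, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")

# Example passage (made up for illustration); the model treats it as the "long answer".
passage = (
    "The Amazon rainforest covers much of the Amazon basin of South America "
    "and is the largest tropical rainforest in the world."
)
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, num_beams=4, max_length=64, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```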
bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection
bitsanlp
2022-04-01T17:17:55Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T16:12:00Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-distilbert-fakenews-detection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilbert-fakenews-detection This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | 0.0125 | 1.0 | 978 | 0.0000 | 1.0 | 1.0 | | 0.0 | 2.0 | 1956 | 0.0000 | 1.0 | 1.0 | | 0.0 | 3.0 | 2934 | 0.0000 | 1.0 | 1.0 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
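Since the card above lists no usage snippet, here is a hedged inference sketch using the `text-classification` pipeline; the example headline is invented and the label names returned depend on the checkpoint's config, which the card does not document.

```python
# Hedged sketch: score a made-up headline with the fine-tuned fake-news classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection",
)
print(classifier("Scientists announce a breakthrough in renewable energy storage."))
# -> [{'label': ..., 'score': ...}] with whatever labels the checkpoint defines
```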
ahmedzaky91/Fatima-Fake_news_calssifier
ahmedzaky91
2022-04-01T16:54:24Z
0
0
null
[ "region:us" ]
null
2022-04-01T00:00:39Z
## This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the Fake and Real News dataset from Kaggle ## The following hyperparameters were used during training: learning_rate: 5e-05 train_batch_size: 8 num_epochs: 2
vicl/canine-c-finetuned-mrpc
vicl
2022-04-01T16:33:28Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "canine", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T16:05:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: canine-c-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8627450980392157 - name: F1 type: f1 value: 0.9014084507042254 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-c-finetuned-mrpc This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4066 - Accuracy: 0.8627 - F1: 0.9014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.5014 | 0.7696 | 0.8479 | | No log | 2.0 | 460 | 0.4755 | 0.7892 | 0.8622 | | 0.5096 | 3.0 | 690 | 0.3645 | 0.8431 | 0.8869 | | 0.5096 | 4.0 | 920 | 0.4066 | 0.8627 | 0.9014 | | 0.2619 | 5.0 | 1150 | 0.4551 | 0.8431 | 0.8877 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
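As a usage illustration for the MRPC fine-tune above, the following hedged sketch scores a sentence pair for paraphrase equivalence; the example sentences are invented, and the label order (0 = not equivalent, 1 = equivalent) follows the GLUE MRPC convention rather than anything stated in the card.

```python
# Hedged sketch: paraphrase scoring with the CANINE-C checkpoint fine-tuned on MRPC.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "vicl/canine-c-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "The company posted strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # assumed order: [not_equivalent, equivalent]
```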
ydshieh/bert-base-uncased-yelp-polarity
ydshieh
2022-04-01T15:20:05Z
103
0
transformers
[ "transformers", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T15:17:35Z
## TextAttack Model Card This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 5e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9699473684210527, as measured by the eval set accuracy, found after 4 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
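A hedged inference sketch for this checkpoint follows; per the repository tags it ships TensorFlow weights, so the TF classes are used, and the example review plus the [negative, positive] label order are assumptions.

```python
# Hedged sketch: sentiment scoring with the TextAttack BERT yelp_polarity checkpoint.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "ydshieh/bert-base-uncased-yelp-polarity"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The food was great and the staff were friendly.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # assumed order: [negative, positive]
```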
avialfont/ner-dummy-model
avialfont
2022-04-01T14:59:22Z
5
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-01T10:59:27Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ner-dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ner-dummy-model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.6
somosnlp-hackathon-2022/es_tweets_laboral
somosnlp-hackathon-2022
2022-04-01T14:50:40Z
1
1
spacy
[ "spacy", "text-classification", "es", "region:us" ]
text-classification
2022-04-01T13:48:09Z
--- tags: - spacy - text-classification language: es widget: - text: "todos merecemos un salario justo" --- ## es_tweets_laboral ## Model created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
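For completeness, a hedged usage sketch: it assumes the spaCy pipeline has been installed as a Python package under the same name as the repo (spaCy pipelines on the Hub are typically distributed as installable wheels), which the card itself does not document.

```python
# Hedged sketch: run the Spanish tweet classifier, assuming the package is installed
# and importable under the name "es_tweets_laboral".
import spacy

nlp = spacy.load("es_tweets_laboral")
doc = nlp("todos merecemos un salario justo")
print(doc.cats)  # text-classification scores per category
```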
jfealko/wav2vec2-large-xls-r-300m-irish-colab_test
jfealko
2022-04-01T13:23:06Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-01T11:29:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-irish-colab_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-irish-colab_test This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.7839 - Wer: 0.6220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 90 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 10.0428 | 2.94 | 50 | 4.1311 | 1.0 | | 3.2917 | 5.88 | 100 | 3.1468 | 1.0 | | 3.0221 | 8.82 | 150 | 2.9848 | 1.0 | | 2.9795 | 11.76 | 200 | 2.9567 | 1.0 | | 2.9379 | 14.71 | 250 | 2.9463 | 1.0 | | 2.9068 | 17.65 | 300 | 2.8330 | 1.0 | | 2.5088 | 20.59 | 350 | 1.9807 | 0.9535 | | 1.6188 | 23.53 | 400 | 1.4254 | 0.8398 | | 1.0435 | 26.47 | 450 | 1.3668 | 0.7807 | | 0.7212 | 29.41 | 500 | 1.3914 | 0.7476 | | 0.5456 | 32.35 | 550 | 1.5495 | 0.7470 | | 0.4297 | 35.29 | 600 | 1.4751 | 0.6960 | | 0.3533 | 38.24 | 650 | 1.5157 | 0.6909 | | 0.2899 | 41.18 | 700 | 1.5394 | 0.6879 | | 0.2529 | 44.12 | 750 | 1.6186 | 0.6903 | | 0.2413 | 47.06 | 800 | 1.6386 | 0.6954 | | 0.2113 | 50.0 | 850 | 1.6906 | 0.6778 | | 0.1769 | 52.94 | 900 | 1.6918 | 0.6575 | | 0.1622 | 55.88 | 950 | 1.7313 | 0.6572 | | 0.1564 | 58.82 | 1000 | 1.7701 | 0.6510 | | 0.1637 | 61.76 | 1050 | 1.6800 | 0.6444 | | 0.148 | 64.71 | 1100 | 1.7306 | 0.6477 | | 0.1385 | 67.65 | 1150 | 1.7605 | 0.6408 | | 0.1264 | 70.59 | 1200 | 1.7534 | 0.6244 | | 0.1157 | 73.53 | 1250 | 1.7906 | 0.6381 | | 0.1027 | 76.47 | 1300 | 1.7803 | 0.6265 | | 0.1061 | 79.41 | 1350 | 1.7617 | 0.6259 | | 0.0934 | 82.35 | 1400 | 1.7649 | 0.6253 | | 0.0904 | 85.29 | 1450 | 1.7713 | 0.6187 | | 0.0911 | 88.24 | 1500 | 1.7839 | 0.6220 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
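Given the training summary above, a hedged transcription sketch follows; the audio file name is a placeholder, and the pipeline task matches the repository's `automatic-speech-recognition` tag.

```python
# Hedged sketch: transcribe an Irish speech clip with the fine-tuned XLS-R checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jfealko/wav2vec2-large-xls-r-300m-irish-colab_test",
)
print(asr("example_irish_clip.wav"))  # placeholder path; returns {'text': '...'}
```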
bmichele/poetry-generation-firstline-mbart-ws-fi-sorted
bmichele
2022-04-01T13:03:49Z
0
0
null
[ "pytorch", "region:us" ]
null
2022-04-01T12:58:00Z
TODO: This is still a demo model, the file does not match the model card!!! # poetry-generation-firstline-mbart-ws-fi-sorted * `firstline`: generates the first poem line from keywords * `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) * `ws`: trained on Wikisource data * `fi`: Finnish language * `sorted`: the order of input keywords matters when generating candidates
bharatR/up_down
bharatR
2022-04-01T12:38:05Z
0
0
null
[ "classification", "en", "dataset:cifar10-custom", "region:us" ]
null
2022-04-01T12:19:00Z
--- language: en tags: - classification datasets: - cifar10-custom metrics: - accuracy --- # Up-Down Classification This repo has the weights of a ResNet-18 model trained on custom CIFAR-10 data, where some images are flipped upside down, and the goal is to predict the orientation of each image (0/1 classification task).
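Since the card describes the architecture but gives no loading code, here is a hedged reconstruction sketch; the checkpoint file name `up_down.pth` and the two-logit head are assumptions about how the weights were saved, not facts from the card.

```python
# Hedged sketch: rebuild a ResNet-18 with a binary orientation head and load local weights.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # 0 = upright, 1 = upside down (assumed)
state_dict = torch.load("up_down.pth", map_location="cpu")  # assumed file name
model.load_state_dict(state_dict)
model.eval()
```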
birgermoell/psst-base-rep
birgermoell
2022-04-01T12:02:45Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-01T07:58:20Z
The model is a reproduction of the baseline trained with Wav2vec2-small on PSST. `pssteval` ASR metrics for the `valid` split: FER 10.4%, PER 23.1%
z5ying/distilgpt2-finetuned-wikitext2
z5ying
2022-04-01T10:47:57Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-01T07:10:02Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [z5ying/distilgpt2-finetuned-wikitext2](https://huggingface.co/z5ying/distilgpt2-finetuned-wikitext2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 118 | 3.0306 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
osanseviero/llama-alpaca-snake
osanseviero
2022-04-01T09:45:01Z
62
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "llama-leaderboard", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-04-01T09:20:01Z
--- tags: - image-classification - pytorch - huggingpics - llama-leaderboard metrics: - accuracy model-index: - name: llama-alpaca-snake results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.7910447716712952 --- # llama-alpaca-snake Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### alpaca ![alpaca](images/alpaca.jpg) #### llamas ![llamas](images/llamas.jpg) #### snake ![snake](images/snake.jpg)
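A hedged inference sketch for the HuggingPics classifier above; the image path is a placeholder.

```python
# Hedged sketch: classify a local image as llamas, alpaca, or snake.
from transformers import pipeline

classifier = pipeline("image-classification", model="osanseviero/llama-alpaca-snake")
print(classifier("my_photo.jpg"))  # placeholder path; returns top labels with scores
```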
Basedino/GPT-RO
Basedino
2022-04-01T07:47:41Z
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
2022-03-31T08:19:30Z
--- license: gpl-3.0 --- So I made this model because I had nothing to do. It's GPT-2 124M fine-tuned on a bunch of Italian recipes. I made it using aitextgen, so you can use that to play with the model easily.
joniponi/discharge-classifier
joniponi
2022-04-01T06:33:17Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T06:24:11Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: discharge-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # discharge-classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2473 - Accuracy: 0.9172 - F1: 0.9169 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5607 | 1.0 | 40 | 0.4780 | 0.7643 | 0.7654 | | 0.3673 | 2.0 | 80 | 0.2975 | 0.8854 | 0.8849 | | 0.2424 | 3.0 | 120 | 0.2473 | 0.9172 | 0.9169 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm
Yaxin
2022-04-01T05:28:33Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "generated_from_trainer", "dataset:Yaxin/amazon_reviews_multi", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-31T14:56:00Z
--- license: mit tags: - generated_from_trainer datasets: - Yaxin/amazon_reviews_multi metrics: - accuracy model-index: - name: xlm-roberta-base-amazon-en-es-fr-mlm results: - task: name: Masked Language Modeling type: fill-mask dataset: name: Yaxin/amazon_reviews_multi type: Yaxin/amazon_reviews_multi metrics: - name: Accuracy type: accuracy value: 0.6951035447140035 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-amazon-en-es-fr-mlm This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the Yaxin/amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.3936 - Accuracy: 0.6951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.6
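As a quick illustration of the fill-mask objective this checkpoint was trained on, a hedged sketch follows; the masked review sentence is invented and uses XLM-R's `<mask>` token.

```python
# Hedged sketch: fill a masked token with the Amazon-reviews MLM checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm")
print(fill_mask("This product is really <mask> and arrived on time."))
```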
z5ying/mbart-large-cc25-finetuned-source-to-target
z5ying
2022-04-01T03:43:40Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-07T18:25:31Z
--- tags: - generated_from_trainer model-index: - name: mbart-large-cc25-finetuned-source-to-target results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-cc25-finetuned-source-to-target This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
Mr-Wick/xlnet-base-cased
Mr-Wick
2022-04-01T01:31:59Z
3
0
transformers
[ "transformers", "tf", "xlnet", "question-answering", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
question-answering
2022-03-26T12:52:07Z
--- tags: - generated_from_keras_callback model-index: - name: xlnet-base-cased results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16530, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.12.0
arjundd/vortex-release
arjundd
2022-03-31T21:54:43Z
0
0
null
[ "mri", "reconstruction", "artifact correction", "en", "arxiv:2111.02549", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 tags: - mri - reconstruction - artifact correction --- # VORTEX <div align="center"> <img src="https://drive.google.com/uc?export=view&id=1q0jAm6Kg5ZhRg3h0w0ZbtIgcRF3_-Vgb" alt="Vortex Schematic" width="700px" /> </div> > **VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction**\ > Arjun Desai, Beliz Gunel, Batu Ozturkler, Harris Beg, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\ > https://arxiv.org/abs/2111.02549 This repository contains the artifacts for the VORTEX paper. To use our code and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
anisdismail/celebA-orientation-detection
anisdismail
2022-03-31T21:51:37Z
0
2
null
[ "image-classification", "pytorch", "en", "dataset:nielsr/CelebA-faces", "license:cc-by-nc-4.0", "model-index", "region:us" ]
image-classification
2022-03-31T19:48:26Z
--- language: - en license: cc-by-nc-4.0 tags: - image-classification - pytorch datasets: - nielsr/CelebA-faces model-index: - name: celebA_orientation_detection_model results: - task: type: image_classification # Required. Example: automatic-speech-recognition name: Image Classification # Optional. Example: Speech Recognition dataset: type: nielsr/CelebA-faces name: CelebA-faces metrics: - type: f1score # Required. Example: wer value: 0.97 # Required. Example: 20.90 name: Val F1 Score # Optional. Example: Test WER --- ## Detecting the Orientation of CelebA pictures using Deep Learning This model has been trained on a modified version of the CelebA-faces dataset, which was made from flipping 20,000 images upside down and keeping 20,000 images intact.<br> The model relies on Resnet-18 as a backbone and is connected to one output node to classify whether the images are flipped upside down (1) or not (0).
arjundd/noise2recon-release
arjundd
2022-03-31T21:50:44Z
0
1
null
[ "mri", "reconstruction", "denoising", "en", "arxiv:2110.00075", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 tags: - mri - reconstruction - denoising --- # Noise2Recon > **Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising**\ > Arjun Desai, Batu Ozturkler, Christopher Sandino, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\ > https://arxiv.org/abs/2110.00075 This repository contains the artifacts for the Noise2Recon paper. To use our code and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
arjundd/dosma-models
arjundd
2022-03-31T21:39:54Z
0
0
null
[ "mri", "knee", "segmentation", "en", "region:us" ]
null
2022-03-31T18:30:03Z
--- language: en tags: - mri - knee - segmentation --- # DOSMA models These models are those that are made publicly available in the [DOSMA](https://github.com/ad12/DOSMA). More information on these models can be found in the [documentation](https://dosma.readthedocs.io/en/latest/models.html). ## Citation If you use any models, please cite any reference for the model in addition to the DOSMA reference below: ``` @inproceedings{desai2019dosma, title={DOSMA: A deep-learning, open-source framework for musculoskeletal MRI analysis}, author={Desai, Arjun D and Barbieri, Marco and Mazzoli, Valentina and Rubin, Elka and Black, Marianne S and Watkins, Lauren E and Gold, Garry E and Hargreaves, Brian A and Chaudhari, Akshay S}, booktitle={Proc 27th Annual Meeting ISMRM, Montreal}, pages={1135}, year={2019} } ```
abdusah/aradia-ctc-hubert-ft
abdusah
2022-03-31T20:56:27Z
14
0
transformers
[ "transformers", "pytorch", "hubert", "automatic-speech-recognition", "abdusahmbzuai/arabic_speech_massive_300hrs", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T08:14:31Z
--- tags: - automatic-speech-recognition - abdusahmbzuai/arabic_speech_massive_300hrs - generated_from_trainer model-index: - name: aradia-ctc-hubert-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aradia-ctc-hubert-ft This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.8536 - Wer: 0.3737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.43 | 100 | 3.6934 | 1.0 | | No log | 0.87 | 200 | 3.0763 | 1.0 | | No log | 1.3 | 300 | 2.9737 | 1.0 | | No log | 1.74 | 400 | 2.5734 | 1.0 | | 5.0957 | 2.17 | 500 | 1.1900 | 0.9011 | | 5.0957 | 2.61 | 600 | 0.9726 | 0.7572 | | 5.0957 | 3.04 | 700 | 0.8960 | 0.6209 | | 5.0957 | 3.48 | 800 | 0.7851 | 0.5515 | | 5.0957 | 3.91 | 900 | 0.7271 | 0.5115 | | 1.0312 | 4.35 | 1000 | 0.7053 | 0.4955 | | 1.0312 | 4.78 | 1100 | 0.6823 | 0.4737 | | 1.0312 | 5.22 | 1200 | 0.6768 | 0.4595 | | 1.0312 | 5.65 | 1300 | 0.6635 | 0.4488 | | 1.0312 | 6.09 | 1400 | 0.6602 | 0.4390 | | 0.6815 | 6.52 | 1500 | 0.6464 | 0.4310 | | 0.6815 | 6.95 | 1600 | 0.6455 | 0.4394 | | 0.6815 | 7.39 | 1700 | 0.6630 | 0.4312 | | 0.6815 | 7.82 | 1800 | 0.6521 | 0.4126 | | 0.6815 | 8.26 | 1900 | 0.6282 | 0.4284 | | 0.544 | 8.69 | 2000 | 0.6248 | 0.4178 | | 0.544 | 9.13 | 2100 | 0.6510 | 0.4104 | | 0.544 | 9.56 | 2200 | 0.6527 | 0.4013 | | 0.544 | 10.0 | 2300 | 0.6511 | 0.4064 | | 0.544 | 10.43 | 2400 | 0.6734 | 0.4061 | | 0.4478 | 10.87 | 2500 | 0.6756 | 0.4145 | | 0.4478 | 11.3 | 2600 | 0.6727 | 0.3990 | | 0.4478 | 11.74 | 2700 | 0.6619 | 0.4007 | | 0.4478 | 12.17 | 2800 | 0.6614 | 0.4019 | | 0.4478 | 12.61 | 2900 | 0.6695 | 0.4004 | | 0.3919 | 13.04 | 3000 | 0.6778 | 0.3966 | | 0.3919 | 13.48 | 3100 | 0.6872 | 0.3971 | | 0.3919 | 13.91 | 3200 | 0.6882 | 0.3945 | | 0.3919 | 14.35 | 3300 | 0.7177 | 0.4010 | | 0.3919 | 14.78 | 3400 | 0.6888 | 0.4043 | | 0.3767 | 15.22 | 3500 | 0.7124 | 0.4202 | | 0.3767 | 15.65 | 3600 | 0.7276 | 0.4120 | | 0.3767 | 16.09 | 3700 | 0.7265 | 0.4034 | | 0.3767 | 16.52 | 3800 | 0.7392 | 0.4077 | | 0.3767 | 16.95 | 3900 | 0.7403 | 0.3965 | | 0.3603 | 17.39 | 4000 | 0.7445 | 0.4016 | | 0.3603 | 17.82 | 4100 | 0.7579 | 0.4012 | | 0.3603 | 18.26 | 4200 | 0.7225 | 0.3963 | | 0.3603 | 18.69 | 4300 | 0.7355 | 0.3951 | | 0.3603 | 19.13 | 4400 | 0.7482 | 0.3925 | | 0.3153 | 19.56 | 4500 | 0.7723 | 0.3972 | | 0.3153 | 20.0 | 4600 | 0.7469 | 0.3898 | | 0.3153 | 20.43 | 4700 | 0.7800 | 0.3944 | | 0.3153 | 20.87 | 4800 | 0.7827 | 0.3897 | | 0.3153 | 21.3 | 4900 | 0.7935 | 0.3914 | | 0.286 | 21.74 | 5000 | 
0.7984 | 0.3750 | | 0.286 | 22.17 | 5100 | 0.7945 | 0.3830 | | 0.286 | 22.61 | 5200 | 0.8011 | 0.3775 | | 0.286 | 23.04 | 5300 | 0.7978 | 0.3824 | | 0.286 | 23.48 | 5400 | 0.8161 | 0.3833 | | 0.2615 | 23.91 | 5500 | 0.7823 | 0.3858 | | 0.2615 | 24.35 | 5600 | 0.8312 | 0.3863 | | 0.2615 | 24.78 | 5700 | 0.8427 | 0.3819 | | 0.2615 | 25.22 | 5800 | 0.8432 | 0.3802 | | 0.2615 | 25.65 | 5900 | 0.8286 | 0.3794 | | 0.2408 | 26.09 | 6000 | 0.8224 | 0.3824 | | 0.2408 | 26.52 | 6100 | 0.8228 | 0.3823 | | 0.2408 | 26.95 | 6200 | 0.8324 | 0.3795 | | 0.2408 | 27.39 | 6300 | 0.8564 | 0.3744 | | 0.2408 | 27.82 | 6400 | 0.8629 | 0.3774 | | 0.2254 | 28.26 | 6500 | 0.8545 | 0.3778 | | 0.2254 | 28.69 | 6600 | 0.8492 | 0.3767 | | 0.2254 | 29.13 | 6700 | 0.8511 | 0.3751 | | 0.2254 | 29.56 | 6800 | 0.8491 | 0.3753 | | 0.2254 | 30.0 | 6900 | 0.8536 | 0.3737 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
magitz/distilbert-base-uncased-finetuned-emotion
magitz
2022-03-31T20:48:43Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T20:41:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9267965474109292 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2235 - Accuracy: 0.9265 - F1: 0.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8101 | 1.0 | 250 | 0.3177 | 0.9045 | 0.9010 | | 0.2472 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9268 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.8.1 - Datasets 1.18.3 - Tokenizers 0.11.0
ghees/FatimeFellowship
ghees
2022-03-31T20:47:24Z
0
0
null
[ "region:us" ]
null
2022-03-31T20:45:21Z
Preprocessing before feeding text to the model: ``` from sentence_transformers import SentenceTransformer model = SentenceTransformer('paraphrase-MiniLM-L6-v2', device='cuda') ... embeddings = model.encode([text]) return embeddings[0] ```
osanseviero/test_model_bertmesh
osanseviero
2022-03-31T20:35:05Z
4
0
transformers
[ "transformers", "pytorch", "bert", "custom_code", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-31T19:47:46Z
--- license: apache-2.0 --- # WellcomeBertMesh WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([Mesh](https://www.nlm.nih.gov/mesh/meshhome.html)). Even though it was developed with research grants in mind, it should be applicable to any type of biomedical text close to the domain it was trained on, which is abstracts from biomedical publications. # Model description The model is inspired by [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/), which is trained on the full text of biomedical publications and uses BioBert as its pretrained model. WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBert](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which essentially allows the model to pay attention to different tokens per label to decide whether it applies. We train the model using data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing, which gives us ~2.5M publications to train on and 220K to test on. This is out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs. The model achieves 63% micro f1 with a 0.5 threshold for all labels. The code for developing the model is open source and can be found at https://github.com/wellcometrust/grants_tagger # How to use ⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models. You can use the model straight from the Hub, but because it contains a custom forward function (due to the multilabel attention head) you have to pass `trust_remote_code=True`. You can get access to the probabilities for all labels by omitting `return_labels=True`. ``` from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Wellcome/WellcomeBertMesh" ) model = AutoModel.from_pretrained( "Wellcome/WellcomeBertMesh", trust_remote_code=True ) text = "This grant is about malaria and not about HIV." inputs = tokenizer([text], padding="max_length") labels = model(**inputs, return_labels=True) print(labels) ``` You can inspect the model code by navigating to the repository files and opening `model.py`.
arampacha/gpt-neo-therapist-small
arampacha
2022-03-31T20:34:26Z
17
1
transformers
[ "transformers", "pytorch", "tensorboard", "onnx", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-30T08:40:54Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: gpt-neo-therapist-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-therapist-small This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.6731 - Rouge1: 39.5028 - Rouge2: 6.43 - Rougel: 24.0091 - Rougelsum: 35.4481 - Gen Len: 204.1329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 24 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 9.9955 | 0.97 | 7 | 6.8195 | 18.6047 | 1.0194 | 14.8565 | 17.9774 | 212.0983 | | 6.9729 | 1.97 | 14 | 5.6783 | 26.3789 | 3.0779 | 18.5195 | 24.8592 | 203.0925 | | 5.2614 | 2.97 | 21 | 5.0506 | 34.9428 | 4.921 | 21.9741 | 32.1122 | 206.2775 | | 5.0599 | 3.97 | 28 | 4.7372 | 38.5235 | 6.2251 | 23.5923 | 34.5633 | 204.2428 | | 4.5479 | 4.97 | 35 | 4.6731 | 39.5028 | 6.43 | 24.0091 | 35.4481 | 204.1329 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
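Since the card gives no usage example, a hedged generation sketch follows; the prompt and sampling settings are illustrative assumptions.

```python
# Hedged sketch: generate a continuation with the fine-tuned GPT-Neo checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="arampacha/gpt-neo-therapist-small")
out = generator(
    "I have been feeling anxious about work lately.",
    max_new_tokens=60,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```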
novarac23/distilbert-base-uncased-finetuned-emotion
novarac23
2022-03-31T19:39:15Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T19:05:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.925 - name: F1 type: f1 value: 0.9251919899321654 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2234 - Accuracy: 0.925 - F1: 0.9252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8213 | 1.0 | 250 | 0.3210 | 0.9025 | 0.8989 | | 0.2463 | 2.0 | 500 | 0.2234 | 0.925 | 0.9252 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
huggingtweets/stillconor
huggingtweets
2022-03-31T17:49:05Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-31T16:59:05Z
--- language: en thumbnail: http://www.huggingtweets.com/stillconor/1648748939988/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1485398297984389121/DmUfFheN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">conor</div> <div style="text-align: center; font-size: 14px;">@stillconor</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from conor. | Data | conor | | --- | --- | | Tweets downloaded | 3199 | | Retweets | 102 | | Short tweets | 432 | | Tweets kept | 2665 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z83yigq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stillconor's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30hsnorw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30hsnorw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/stillconor') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Tahsin-Mayeesha/distilbert-finetuned-fakenews
Tahsin-Mayeesha
2022-03-31T17:11:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T15:58:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-finetuned-fakenews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-finetuned-fakenews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0049 - Accuracy: 0.9995 - F1: 0.9995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0392 | 1.0 | 500 | 0.0059 | 0.999 | 0.999 | | 0.002 | 2.0 | 1000 | 0.0047 | 0.9995 | 0.9995 | | 0.0001 | 3.0 | 1500 | 0.0047 | 0.9995 | 0.9995 | | 0.0001 | 4.0 | 2000 | 0.0049 | 0.9995 | 0.9995 | | 0.0 | 5.0 | 2500 | 0.0049 | 0.9995 | 0.9995 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
israfelsr/UpsideDownClassifier
israfelsr
2022-03-31T17:06:27Z
0
0
null
[ "region:us" ]
null
2022-03-31T15:41:33Z
# UpsideDownClassifier This classifier was trained using the [auto-cats-and-dogs](https://huggingface.co/datasets/nateraw/auto-cats-and-dogs) dataset. It was trained over 5 epochs using a pretrained ResNet-18. The configuration for the model was: ``` config = { "batch_size": 64, "num_epochs": 5, "lr": 0.005, "betas": (0.9, 0.999), "eps": 1e-6, "lr": 8e-3, "do_eval": True } ``` ## Training Plots The figures below show the training plots for accuracy and loss on both the training and validation sets. ### Accuracy Plot ![Accuracy](https://huggingface.co/israfelsr/UpsideDownClassifier/blob/main/accuracy.png) ### Loss Plot ![Loss](https://huggingface.co/israfelsr/UpsideDownClassifier/blob/main/loss.png) ## Some Results Evaluating on the test set, we obtain: - Accuracy = 0.9696 A batch with some misclassifications can be seen in the picture below. ![Results](https://huggingface.co/israfelsr/UpsideDownClassifier/blob/main/results.png)
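To make the configuration above concrete, here is a hedged reconstruction of the training loop it implies (pretrained ResNet-18 backbone, Adam with the listed betas and eps, 5 epochs); the dummy tensors stand in for the flipped-image DataLoader, and picking the 8e-3 learning rate (the second `lr` entry in the config) is an assumption.

```python
# Hedged sketch: the training setup implied by the config above, with placeholder data.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # upright vs. upside-down head

optimizer = torch.optim.Adam(model.parameters(), lr=8e-3, betas=(0.9, 0.999), eps=1e-6)
criterion = torch.nn.CrossEntropyLoss()

# Placeholder data; replace with the flipped cats-and-dogs dataset described above.
dummy = torch.utils.data.TensorDataset(
    torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
)
train_loader = torch.utils.data.DataLoader(dummy, batch_size=64)

for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```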
rahulacj/bertweet-base-finetuned-sentiment-analysis
rahulacj
2022-03-31T16:21:16Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T09:42:31Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bertweet-base-finetuned-sentiment-analysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertweet-base-finetuned-sentiment-analysis This model is a fine-tuned version of [cardiffnlp/bertweet-base-sentiment](https://huggingface.co/cardiffnlp/bertweet-base-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8458 - Accuracy: 0.6426 - F1: 0.6397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8904 | 1.0 | 630 | 0.8509 | 0.6381 | 0.6340 | | 0.7655 | 2.0 | 1260 | 0.8345 | 0.6579 | 0.6559 | | 0.66 | 3.0 | 1890 | 0.9199 | 0.6548 | 0.6514 | | 0.447 | 4.0 | 2520 | 1.0324 | 0.6429 | 0.6417 | | 0.3585 | 5.0 | 3150 | 1.1234 | 0.6452 | 0.6424 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
eren23/pneumonia-bielefeld-dl-course
eren23
2022-03-31T15:55:27Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-27T12:17:21Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pneumonia-bielefeld-dl-course results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8456632494926453 --- # pneumonia-bielefeld-dl-course This repository contains a model for making pneumonia predictions and was prepared as homework for the Bielefeld University Deep Learning course. The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, a ready-made pipeline for model fine-tuning with Hugging Face and PyTorch Lightning, originally built for another dataset.
Nonem100/Test-Model
Nonem100
2022-03-31T15:19:38Z
62
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-31T15:19:30Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: Test-Model results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9017857313156128 --- # Test-Model Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### cotton candy ![cotton candy](images/cotton_candy.jpg) #### hamburger ![hamburger](images/hamburger.jpg) #### hot dog ![hot dog](images/hot_dog.jpg) #### nachos ![nachos](images/nachos.jpg) #### popcorn ![popcorn](images/popcorn.jpg)
huggingtweets/timdingmanlive
huggingtweets
2022-03-31T14:30:05Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-31T14:26:57Z
--- language: en thumbnail: http://www.huggingtweets.com/timdingmanlive/1648736999131/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2844974270/7bb6450b90b65f8712d9433b8d5e1971_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tim Dingman</div> <div style="text-align: center; font-size: 14px;">@timdingmanlive</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tim Dingman. | Data | Tim Dingman | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 555 | | Short tweets | 138 | | Tweets kept | 2547 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7yvdv2z7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timdingmanlive's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/timdingmanlive') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
oferweintraub/bert-base-finance-sentiment-noisy-search
oferweintraub
2022-03-31T14:13:45Z
23
3
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "Finance-sentiment-analysis", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - Finance-sentiment-analysis - generated_from_trainer metrics: - f1 - accuracy - precision - recall model-index: - name: bert-base-finance-sentiment-noisy-search results: [] widget: - text: "Third quarter reported revenues were $10.9 billion, up 5 percent compared to prior year and up 8 percent on a currency-neutral basis" example_title: "Positive" - text: "The London-listed website for businesses reported a pretax loss of $26.6 million compared with a loss of $12.9 million the previous year" example_title: "Negative" - text: "Microsoft updates Outlook, Teams, and PowerPoint to be hybrid work ready" example_title: "Neutral" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finance-sentiment-noisy-search This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on Kaggle finance news sentiment analysis with data enhancement using noisy search. The process is explained below: 1. First, "bert-base-uncased" was fine-tuned on Kaggle's finance news sentiment analysis dataset (https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news), achieving an accuracy of about 88%. 2. We then used a logistic-regression classifier on the same data. Here we looked at the coefficients that contributed the most to the "Positive" and "Negative" classes by inspecting only bi-grams. 3. Using the top 25 bi-grams per class (i.e. "Positive" / "Negative") we invoked Bing news search with those bi-grams and retrieved up to 50 news items per bi-gram phrase. 4. We called it "noisy search" because it is assumed that positive bi-grams (e.g. "profit rose", "growth net") give rise to positive examples whereas negative bi-grams (e.g. "loss increase", "share loss") result in negative examples, but note that we didn't test the validity of this assumption (hence: noisy search). 5. For each article we kept the title + excerpt and labeled it according to these assumed class associations. 6. We then trained the same model on the noisy data and applied it to a held-out test set from the original dataset split. 7. Training with a couple of thousand noisy "positive" and "negative" examples yielded a test set accuracy of about 95%. 8. This shows that by automatically collecting noisy examples using search we can boost accuracy from about 88% to more than 95%. Accuracy results for Logistic Regression (LR) and BERT (base-cased) are shown in the attached PDF: https://drive.google.com/file/d/1MI9gRdppactVZ_XvhCwvoaOV1aRfprrd/view?usp=sharing ## Model description BERT model trained on noisy data from search results. See the PDF for more details. ## Intended uses & limitations Intended for finance news sentiment analysis with 3 options: "Positive", "Neutral" and "Negative". To get the best results, feed the classifier the title and either the first paragraph or a short news summary of up to 64 tokens. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
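A minimal usage sketch, assuming the checkpoint can be driven through the standard text-classification pipeline; the exact label strings it returns are not documented above and may need to be mapped to "Positive" / "Neutral" / "Negative":

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="oferweintraub/bert-base-finance-sentiment-noisy-search",
)

# Title + short excerpt, kept to roughly 64 tokens as recommended above.
headline = (
    "Third quarter reported revenues were $10.9 billion, up 5 percent "
    "compared to prior year and up 8 percent on a currency-neutral basis"
)
print(classifier(headline))  # e.g. [{'label': ..., 'score': ...}]
```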
huggingtweets/youtube
huggingtweets
2022-03-31T14:06:33Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-31T14:05:50Z
--- language: en thumbnail: http://www.huggingtweets.com/youtube/1648735587597/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1427292844612595720/RC1YSvuT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">YouTube</div> <div style="text-align: center; font-size: 14px;">@youtube</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from YouTube. | Data | YouTube | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 23 | | Short tweets | 104 | | Tweets kept | 3123 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dx34obn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youtube's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/youtube') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Edresson/wav2vec2-large-xlsr-coraa-portuguese
Edresson
2022-03-31T13:28:43Z
632
15
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "hf-asr-leaderboard", "PyTorch", "dataset:CORAA", "arxiv:2110.15731", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: pt datasets: - CORAA metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - hf-asr-leaderboard - speech - PyTorch license: apache-2.0 model-index: - name: Edresson Casanova XLSR Wav2Vec2 Large 53 Portuguese results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: CORAA type: CORAA args: pt metrics: - name: Test CORAA WER type: wer value: 25.26 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: pt metrics: - name: Test WER on Common Voice 7 type: wer value: 20.08 --- # Wav2vec 2.0 trained with CORAA Portuguese Dataset This is a demonstration of a Wav2vec model fine-tuned for Portuguese on the [CORAA dataset](https://github.com/nilc-nlp/CORAA). # Use this model ```python from transformers import AutoTokenizer, Wav2Vec2ForCTC tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese") model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese") ``` # Results For the results, check the [CORAA article](https://arxiv.org/abs/2110.15731). # Example test with Common Voice Dataset ```python dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) ```
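Note that the evaluation snippet above references `map_to_pred`, `wer` and `chars_to_ignore_regex` without defining them. A minimal sketch of the missing pieces, assuming the usual greedy CTC decoding recipe and that a `Wav2Vec2Processor` can be loaded for this checkpoint (otherwise combine the tokenizer above with a `Wav2Vec2FeatureExtractor`), could look like this:

```python
import re
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical glue code, not part of the original card.
model_id = "Edresson/wav2vec2-large-xlsr-coraa-portuguese"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

wer = load_metric("wer")
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"]'  # assumed punctuation filter

def map_to_pred(batch):
    # Greedy CTC decoding of a batch of resampled speech arrays.
    inputs = processor(batch["speech"], sampling_rate=16_000,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```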
Visual-Attention-Network/van-tiny
Visual-Attention-Network
2022-03-31T12:45:47Z
173
2
transformers
[ "transformers", "pytorch", "van", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2202.09741", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-16T15:05:02Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Van Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification). Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/van_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, VanForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-base") >>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
Visual-Attention-Network/van-base
Visual-Attention-Network
2022-03-31T12:45:44Z
185
1
transformers
[ "transformers", "pytorch", "van", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2202.09741", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-16T15:06:37Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Van Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification). Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/van_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, VanForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-base") >>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
mustapha/flipped-image-ViT
mustapha
2022-03-31T12:30:19Z
61
2
transformers
[ "transformers", "pytorch", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-30T21:57:42Z
Hello world, This model has been created in the context of the `Fatima Fellowship Programme`. It was trained on the CIFAR-10 dataset and reaches a final accuracy of around 98%. The model determines whether an image is flipped or not.
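A minimal usage sketch, assuming the checkpoint works with the standard image-classification pipeline; the label names it exposes are not documented above:

```python
from PIL import Image
from transformers import pipeline

# Run the ViT checkpoint as an image-classification pipeline.
flip_detector = pipeline("image-classification", model="mustapha/flipped-image-ViT")

image = Image.open("example.jpg")  # any RGB image
predictions = flip_detector(image)
print(predictions)  # top label indicates whether the image is flipped or upright
```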
scasutt/wav2vec2-base_toy_train_data_random_low_pass
scasutt
2022-03-31T10:42:02Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T08:21:35Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_low_pass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_low_pass This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3227 - Wer: 0.7288 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0795 | 2.1 | 500 | 3.2227 | 0.9982 | | 1.21 | 4.2 | 1000 | 1.3713 | 0.8879 | | 0.742 | 6.3 | 1500 | 1.2660 | 0.8296 | | 0.5877 | 8.4 | 2000 | 1.2921 | 0.7794 | | 0.4823 | 10.5 | 2500 | 1.2899 | 0.7565 | | 0.4036 | 12.6 | 3000 | 1.3486 | 0.7494 | | 0.391 | 14.7 | 3500 | 1.2701 | 0.7466 | | 0.3426 | 16.81 | 4000 | 1.3570 | 0.7279 | | 0.3015 | 18.91 | 4500 | 1.3227 | 0.7288 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
nikhil6041/wav2vec2-commonvoice-tamil
nikhil6041
2022-03-31T09:24:01Z
18
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T04:00:23Z
--- license: mit tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-commonvoice-tamil results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-commonvoice-tamil This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-tamil-tam-250](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-tamil-tam-250) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 3.3415 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 5.384 | 1.69 | 200 | 3.3400 | 1.0 | | 3.3085 | 3.39 | 400 | 3.3609 | 1.0 | | 3.3008 | 5.08 | 600 | 3.3331 | 1.0 | | 3.2852 | 6.78 | 800 | 3.3492 | 1.0 | | 3.2908 | 8.47 | 1000 | 3.3318 | 1.0 | | 3.2865 | 10.17 | 1200 | 3.3501 | 1.0 | | 3.2826 | 11.86 | 1400 | 3.3403 | 1.0 | | 3.2875 | 13.56 | 1600 | 3.3335 | 1.0 | | 3.2899 | 15.25 | 1800 | 3.3311 | 1.0 | | 3.2755 | 16.95 | 2000 | 3.3617 | 1.0 | | 3.2877 | 18.64 | 2200 | 3.3317 | 1.0 | | 3.2854 | 20.34 | 2400 | 3.3560 | 1.0 | | 3.2878 | 22.03 | 2600 | 3.3332 | 1.0 | | 3.2766 | 23.73 | 2800 | 3.3317 | 1.0 | | 3.2943 | 25.42 | 3000 | 3.3737 | 1.0 | | 3.2845 | 27.12 | 3200 | 3.3347 | 1.0 | | 3.2765 | 28.81 | 3400 | 3.3415 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
davidmasip/racism
davidmasip
2022-03-31T06:56:46Z
26
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "es", "license:cc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-16T18:23:46Z
--- license: cc language: es widget: - text: "Me cae muy bien." example_title: "Non-racist example" - text: "Unos menas agreden a una mujer." example_title: "Racist example" --- Model to predict whether a given text is racist or not: * `LABEL_0` output indicates non-racist text * `LABEL_1` output indicates racist text Usage: ```python from transformers import pipeline RACISM_MODEL = "davidmasip/racism" racism_analysis_pipe = pipeline("text-classification", model=RACISM_MODEL, tokenizer=RACISM_MODEL) results = racism_analysis_pipe("Unos menas agreden a una mujer.") def clean_labels(results): for result in results: label = "Non-racist" if result["label"] == "LABEL_0" else "Racist" result["label"] = label clean_labels(results) print(results) ```
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5
yy642
2022-03-31T02:22:21Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T20:09:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-mnli-rte-wnli-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-mnli-rte-wnli-5 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4400 - Accuracy: 0.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2253 | 1.0 | 16558 | 0.2346 | 0.9139 | | 0.1667 | 2.0 | 33116 | 0.2973 | 0.9143 | | 0.1207 | 3.0 | 49674 | 0.3361 | 0.9203 | | 0.0553 | 4.0 | 66232 | 0.4400 | 0.9209 | | 0.033 | 5.0 | 82790 | 0.5175 | 0.9203 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.0.0 - Tokenizers 0.11.6
michiyasunaga/BioLinkBERT-large
michiyasunaga
2022-03-31T00:54:57Z
4,470
33
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "exbert", "linkbert", "biolinkbert", "fill-mask", "question-answering", "text-classification", "token-classification", "en", "dataset:pubmed", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2022-03-08T06:20:38Z
--- license: apache-2.0 language: en datasets: - pubmed tags: - bert - exbert - linkbert - biolinkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification widget: - text: "Sunitinib is a tyrosine kinase inhibitor" --- ## BioLinkBERT-large BioLinkBERT-large model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-large') model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-large') inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art. 
| | BLURB score | PubMedQA | BioASQ | MedQA-USMLE | | ---------------------- | -------- | -------- | ------- | -------- | | PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 | | **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** | | **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** | | | MMLU-professional medicine | | ---------------------- | -------- | | GPT-3 (175B params) | 38.7 | | UnifiedQA (11B params) | 43.2 | | **BioLinkBERT-large (340M params)** | **50.7** | ## Citation If you find LinkBERT useful in your project, please cite the following: ```bibtex @InProceedings{yasunaga2022linkbert, author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang}, title = {LinkBERT: Pretraining Language Models with Document Links}, year = {2022}, booktitle = {Association for Computational Linguistics (ACL)}, } ```
michiyasunaga/LinkBERT-base
michiyasunaga
2022-03-31T00:38:32Z
847
7
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "exbert", "linkbert", "fill-mask", "question-answering", "text-classification", "token-classification", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2022-03-08T07:21:51Z
--- license: apache-2.0 language: en datasets: - wikipedia - bookcorpus tags: - bert - exbert - linkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification --- ## LinkBERT-base LinkBERT-base model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-base') model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-base') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):** | | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE | | ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- | | | F1 | F1 | F1 | F1 | F1 | F1 | Avg score | | BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 | | **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** | | BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 | | **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** | ## Citation If you find LinkBERT useful in your project, please cite the following: ```bibtex @InProceedings{yasunaga2022linkbert, author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang}, title = {LinkBERT: Pretraining Language Models with Document Links}, year = {2022}, booktitle = {Association for Computational Linguistics (ACL)}, } ```
UBC-NLP/MARBERTv2
UBC-NLP
2022-03-30T21:52:31Z
3,124
8
transformers
[ "transformers", "pytorch", "tf", "bert", "fill-mask", "Arabic BERT", "MSA", "Twitter", "Masked Langauge Model", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - ar tags: - Arabic BERT - MSA - Twitter - Masked Langauge Model widget: - text: "اللغة العربية هي لغة [MASK]." --- <img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/> **MARBERTv2** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://aclanthology.org/2021.acl-long.551.pdf)**. We find that results with ARBERT and MARBERT on QA are not competitive, a clear discrepancy from what we have observed thus far on other tasks. We hypothesize this is because the two models are pre-trained with a sequence length of only 128, which does not allow them to sufficiently capture both a question and its likely answer within the same sequence window during the pre-training. To rectify this, we further pre-train the stronger model, MARBERT, on the same MSA data as ARBERT in addition to the AraNews dataset, but with a bigger sequence length of 512 tokens for 40 epochs. We call this further pre-trained model **MARBERTv2**, noting it has **29B tokens**. MARBERTv2 acquires the best performance on all but one test set, where XLM-R Large marginally outperforms us (only in F1). For more information, please visit our own GitHub [repo](https://github.com/UBC-NLP/marbert). # BibTex If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): ```bibtex @inproceedings{abdul-mageed-etal-2021-arbert, title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic", author = "Abdul-Mageed, Muhammad and Elmadany, AbdelRahim and Nagoudi, El Moatez Billah", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.551", doi = "10.18653/v1/2021.acl-long.551", pages = "7088--7105", abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large (3.4x larger size). 
Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.", } ``` ## Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
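As a quick illustration (a minimal sketch, not taken from the card itself), MARBERTv2 can be queried as a masked language model through the fill-mask pipeline, mirroring the widget example above:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="UBC-NLP/MARBERTv2")

# The widget sentence: "Arabic is a [MASK] language."
for prediction in unmasker("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```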
vlsb/autotrain-security-text-classification-albert-688320769
vlsb
2022-03-30T20:59:32Z
15
2
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autotrain", "unk", "dataset:vlsb/autotrain-data-security-text-classification-albert", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T20:55:59Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - vlsb/autotrain-data-security-text-classification-albert co2_eq_emissions: 3.670416179055797 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 688320769 - CO2 Emissions (in grams): 3.670416179055797 ## Validation Metrics - Loss: 0.3046899139881134 - Accuracy: 0.8826530612244898 - Precision: 0.9181818181818182 - Recall: 0.8782608695652174 - AUC: 0.9423510466988727 - F1: 0.8977777777777778 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-text-classification-albert-688320769 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
vlsb/autotrain-security-texts-classification-distilroberta-688220764
vlsb
2022-03-30T20:56:57Z
13
2
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain", "unk", "dataset:vlsb/autotrain-data-security-texts-classification-distilroberta", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T20:54:56Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - vlsb/autotrain-data-security-texts-classification-distilroberta co2_eq_emissions: 2.0817207656772445 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 688220764 - CO2 Emissions (in grams): 2.0817207656772445 ## Validation Metrics - Loss: 0.3055502772331238 - Accuracy: 0.9030612244897959 - Precision: 0.9528301886792453 - Recall: 0.8782608695652174 - AUC: 0.9439076757917337 - F1: 0.9140271493212669 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-texts-classification-distilroberta-688220764 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-texts-classification-distilroberta-688220764", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-texts-classification-distilroberta-688220764", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
mrm8488/electricidad-base-discriminator
mrm8488
2022-03-30T20:42:47Z
74
4
transformers
[ "transformers", "pytorch", "electra", "pretraining", "Spanish", "Electra", "es", "dataset:-large_spanish_corpus", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: es thumbnail: https://i.imgur.com/uxAvBfh.png tags: - Spanish - Electra datasets: -large_spanish_corpus --- ## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh) **Electricidad-base-discriminator** (uncased) is a ```base``` Electra like model (discriminator in this case) trained on a [Large Spanish Corpus](https://github.com/josecannete/spanish-corpora) (aka BETO's corpus) As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB): **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ## Model details ⚙ |Name| # Value| |-----|--------| |Layers| 12 | |Hidden | 768 | |Params| 110M | ## Evaluation metrics (for discriminator) 🧾 |Metric | # Score | |-------|---------| |Accuracy| 0.985| |Precision| 0.726| |AUC | 0.922| ## Fast example of usage 🚀 ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-base-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-base-discriminator") sentence = "El rápido zorro marrón salta sobre el perro perezoso" fake_sentence = "El rápido zorro marrón amar sobre el perro perezoso" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % prediction, end="") for prediction in predictions.tolist()] # Output: ''' el rapido zorro marro ##n amar sobre el perro pere ##zoso 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0[None, None, None, None, None, None, None, None, None, None, None, None, None ''' ``` As you can see there are **1s** in the places where the model detected a fake token. So, it works! 
🎉 ### Some models fine-tuned on a downstream task 🛠️ [Question Answering](https://huggingface.co/mrm8488/electricidad-base-finetuned-squadv1-es) [POS](https://huggingface.co/mrm8488/electricidad-base-finetuned-pos) [NER](https://huggingface.co/mrm8488/electricidad-base-finetuned-ner) ### Spanish LM model comparison 📊 | Dataset | Metric | RoBERTa-b | RoBERTa-l | BETO | mBERT | BERTIN | Electricidad-b | |-------------|----------|-----------|-----------|--------|--------|--------|---------| | UD-POS | F1 | 0.9907 | 0.9901 | 0.9900 | 0.9886 | 0.9904 | 0.9818 | | Conll-NER | F1 | 0.8851 | 0.8772 | 0.8759 | 0.8691 | 0.8627 | 0.7954 | | Capitel-POS | F1 | 0.9846 | 0.9851 | 0.9836 | 0.9839 | 0.9826 | 0.9816 | | Capitel-NER | F1 | 0.8959 | 0.8998 | 0.8771 | 0.8810 | 0.8741 | 0.8035 | | STS | Combined | 0.8423 | 0.8420 | 0.8216 | 0.8249 | 0.7822 | 0.8065 | | MLDoc | Accuracy | 0.9595 | 0.9600 | 0.9650 | 0.9560 | 0.9673 | 0.9490 | | PAWS-X | F1 | 0.9035 | 0.9000 | 0.8915 | 0.9020 | 0.8820 | **0.9045** | | XNLI | Accuracy | 0.8016 | 0.7958 | 0.8130 | 0.7876 | 0.7864 | 0.7878 | ## Acknowledgments I thank [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (specially to [Julien Chaumond](https://twitter.com/julien_c)). ## Citation If you want to cite this model you can use this: ```bibtex @misc{mromero2020electricidad-base-discriminator, title={Spanish Electra by Manuel Romero}, author={Romero, Manuel}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/mrm8488/electricidad-base-discriminator/}}, year={2020} } ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/longformer-base-4096-spanish
mrm8488
2022-03-30T20:36:36Z
49
16
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "Long documents", "longformer", "bertin", "spanish", "es", "dataset:spanish_large_corpus", "arxiv:2004.05150", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - es license: mit widget: - text: "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos." tags: - Long documents - longformer - bertin - spanish datasets: - spanish_large_corpus --- # longformer-base-4096-spanish ## [Longformer](https://arxiv.org/abs/2004.05150) is a Transformer model for long documents. `longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint (**BERTIN** in this case) and pre-trained for *MLM* on long documents (from BETO's `all_wikis`). It supports sequences of length up to 4,096! **Longformer** uses a combination of a sliding window (*local*) attention and *global* attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations. This model was made following the research done by [Iz Beltagy and Matthew E. Peters and Arman Cohan](https://arxiv.org/abs/2004.05150). ## Citation If you want to cite this model you can use this: ```bibtex @misc{mromero2022longformer-base-4096-spanish, title={Spanish LongFormer by Manuel Romero}, author={Romero, Manuel}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/mrm8488/longformer-base-4096-spanish}}, year={2022} } ```
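A minimal usage sketch, mirroring the widget example above: the checkpoint is a RoBERTa-style masked language model, so its mask token is `<mask>` and it can be driven by the fill-mask pipeline; for downstream tasks on long documents, global attention would additionally be configured per task.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="mrm8488/longformer-base-4096-spanish")

text = ("Manuel Romero ha creado con el equipo de BERTIN un modelo "
        "que procesa documentos <mask> largos.")
for prediction in unmasker(text):
    print(prediction["token_str"], round(prediction["score"], 3))
```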
horsbug98/Part_2_XLM_Model_E1
horsbug98
2022-03-30T18:29:46Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:tydiqa", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-16T17:32:47Z
--- license: mit tags: - generated_from_trainer datasets: - tydiqa model-index: - name: debug_xlm_task2_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # debug_xlm_task2_1 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa secondary_task dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 2.0.0 - Tokenizers 0.10.3
hoangbinhmta99/wav2vec-demo
hoangbinhmta99
2022-03-30T17:18:48Z
9
2
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
Convert a fairseq .pt checkpoint to a Transformers model (reference: https://huggingface.co/tommy19970714/wav2vec2-base-960h). Bash: ```bash pip install transformers[sentencepiece] pip install fairseq -U git clone https://github.com/huggingface/transformers.git cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py . wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt mkdir dict wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt -O ./dict/dict.ltr.txt mkdir outputs python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small.pt --dict_path ./dict/dict.ltr.txt --not_finetuned ``` # Install git-lfs and upload the model ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo cd wav2vec-demo/ git config --global user.email [your email] git config --global user.name [your name] git status git add . git commit -m "First model version" git push ```
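Once the converted checkpoint is uploaded, it can be loaded back for inference. The sketch below is an assumption (it presumes the repository contains a usable processor/tokenizer, which a `--not_finetuned` conversion may not include) and is not part of the original notes:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("hoangbinhmta99/wav2vec-demo")
model = Wav2Vec2ForCTC.from_pretrained("hoangbinhmta99/wav2vec-demo")

# Load an audio file and resample it to the 16 kHz rate the model expects.
speech, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.transforms.Resample(orig_freq=sample_rate,
                                        new_freq=16_000)(speech).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```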
scasutt/wav2vec2-base_toy_train_data_random_high_pass
scasutt
2022-03-30T16:37:23Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-30T13:17:36Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_high_pass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_high_pass This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2841 - Wer: 0.7222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.061 | 2.1 | 500 | 3.0551 | 1.0 | | 1.1294 | 4.2 | 1000 | 1.3102 | 0.8777 | | 0.7051 | 6.3 | 1500 | 1.2081 | 0.8092 | | 0.5421 | 8.4 | 2000 | 1.2280 | 0.7684 | | 0.448 | 10.5 | 2500 | 1.2459 | 0.7506 | | 0.3777 | 12.6 | 3000 | 1.3533 | 0.7631 | | 0.3611 | 14.7 | 3500 | 1.2058 | 0.7291 | | 0.3177 | 16.81 | 4000 | 1.3168 | 0.7185 | | 0.279 | 18.91 | 4500 | 1.2841 | 0.7222 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
javilonso/classificationPolEsp2
javilonso
2022-03-30T15:21:42Z
3
0
transformers
[ "transformers", "tf", "gpt2", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T13:41:58Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: javilonso/classificationPolEsp2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # javilonso/classificationPolEsp2 This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-base-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1229 - Validation Loss: 0.8172 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6246 | 0.5679 | 0 | | 0.4198 | 0.6097 | 1 | | 0.1229 | 0.8172 | 2 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Datasets 2.0.0 - Tokenizers 0.11.6
javilonso/classificationEsp1_Attraction
javilonso
2022-03-30T13:25:38Z
6
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-23T15:27:21Z
--- tags: - generated_from_keras_callback model-index: - name: classificationEsp1_Attraction results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # classificationEsp1_Attraction This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Datasets 2.0.0 - Tokenizers 0.11.6
peterhsu/distilbert-base-uncased-finetuned-squad-d5716d28
peterhsu
2022-03-30T12:22:49Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2022-03-29T09:35:05Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
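For completeness, a minimal inference sketch (assumed usage, not shown in the card itself): the checkpoint is an extractive SQuAD-style model, so it can be driven by the question-answering pipeline.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="peterhsu/distilbert-base-uncased-finetuned-squad-d5716d28",
)

result = qa(
    question="Which model acts as the teacher?",
    context="A DistilBERT student is fine-tuned on SQuAD v1.1, with a BERT model "
            "also fine-tuned on SQuAD v1.1 acting as a teacher for a second step "
            "of task-specific distillation.",
)
print(result["answer"], round(result["score"], 3))
```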
yinde/dummy-model
yinde
2022-03-30T11:59:15Z
10
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T11:37:44Z
Fake news classifier. This model is a text classification model for detecting fake news articles. It fine-tunes the pretrained distilbert-base-uncased-finetuned-sst-2-english checkpoint on the Fake and Real News dataset from Kaggle (https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset).
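A minimal usage sketch, assuming the checkpoint runs through the standard text-classification pipeline; the label names it exposes are not documented above:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yinde/dummy-model")

headline = ("Breaking: officials confirm the moon landing was filmed "
            "in a basement studio")
print(classifier(headline))  # e.g. [{'label': ..., 'score': ...}]
```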
joe5campbell/Horovod_Tweet_Sentiment_1K_4eps
joe5campbell
2022-03-30T11:38:32Z
5
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-24T12:35:50Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Horovod_Tweet_Sentiment_1K_4eps results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Horovod_Tweet_Sentiment_1K_4eps This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6803332 - Train Accuracy: 0.57187504 - Validation Loss: 0.6883397 - Validation Accuracy: 0.54375 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 0.0003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.70931095 | 0.5078125 | 0.81717503 | 0.528125 | 0 | | 0.77384466 | 0.5296875 | 0.68696874 | 0.51875 | 1 | | 0.68944424 | 0.53125 | 0.6837756 | 0.53125 | 2 | | 0.6803332 | 0.57187504 | 0.6883397 | 0.54375 | 3 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Tokenizers 0.11.6
mimicheng/codeparrot-ds-sample-2ep-29mar
mimicheng
2022-03-30T09:50:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-30T03:41:46Z
--- license: mit tags: - generated_from_trainer model-index: - name: codeparrot-ds-sample-2ep-29mar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds-sample-2ep-29mar This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6283 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - distributed_type: tpu - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2585 | 1.86 | 5000 | 1.6283 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.8.2+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
Peltarion/xlm-roberta-longformer-base-4096
Peltarion
2022-03-30T09:23:58Z
75
8
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "longformer", "multilingual", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- tags: - longformer language: multilingual license: apache-2.0 datasets: - wikitext --- ## XLM-R Longformer Model XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus. The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model came as a result of a master's thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r). Since both XLM-R and Longformer are large models, it is recommended to run them with NVIDIA Apex (16-bit precision), a large GPU and several gradient accumulation steps. ## How to Use The model can be fine-tuned on a downstream task as usual, for instance QA. ```python import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer MAX_SEQUENCE_LENGTH = 4096 MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096" tokenizer = AutoTokenizer.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, padding="max_length", truncation=True, ) model = AutoModelForQuestionAnswering.from_pretrained( MODEL_NAME_OR_PATH, max_length=MAX_SEQUENCE_LENGTH, ) ``` ## Training Procedure The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information. ```sh wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip unzip wikitext-103-raw-v1.zip export DATA_DIR=./wikitext-103-raw scripts/run_long_lm.py \ --model_name_or_path xlm-roberta-base \ --model_name xlm-roberta-to-longformer \ --output_dir ./output \ --logging_dir ./logs \ --val_file_path $DATA_DIR/wiki.valid.raw \ --train_file_path $DATA_DIR/wiki.train.raw \ --seed 42 \ --max_pos 4096 \ --adam_epsilon 1e-8 \ --warmup_steps 500 \ --learning_rate 3e-5 \ --weight_decay 0.01 \ --max_steps 6000 \ --evaluate_during_training \ --logging_steps 50 \ --eval_steps 50 \ --save_steps 6000 \ --max_grad_norm 1.0 \ --per_device_eval_batch_size 2 \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 64 \ --overwrite_output_dir \ --fp16 \ --do_train \ --do_eval ```
Aureliano/electra-if
Aureliano
2022-03-30T09:07:27Z
6
0
transformers
[ "transformers", "pytorch", "tf", "electra", "feature-extraction", "en", "arxiv:1406.2661", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-11T15:40:21Z
--- language: en license: apache-2.0 --- ## ELECTRA for IF **ELECTRA** is a method for self-supervised language representation learning. They are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). For a detailed description and experimental results, please refer to the original paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains a small ELECTRA discriminator finetuned on a corpus of interactive fiction commands labelled with the WordNet synset offset of the verb in the sentence. The original dataset has been collected from the list of action in the walkthroughs for the game included in the [Jericho](https://github.com/microsoft/jericho) framework and manually annotated. For more information visit https://github.com/aporporato/electra and https://github.com/aporporato/jericho-corpora. ## How to use the discriminator in `transformers` (Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb) ```python import math import numpy as np import tensorflow as tf from datasets import load_metric, Dataset, DatasetDict from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer from transformers.keras_callbacks import KerasMetricCallback # This example shows how this model can be used: # you should finetune the model of your specific corpus if commands, bigger than this dict_train = { "idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20"], "sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book", "inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich", "drop sandwich", "x sandwich", "agin"], "label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"] } dict_val = { "idx": ["0", "1", "2", "3", "4", "5"], "sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"], "label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"] } raw_train_dataset = Dataset.from_dict(dict_train) raw_val_dataset = Dataset.from_dict(dict_val) raw_dataset = DatasetDict() raw_dataset["train"] = raw_train_dataset raw_dataset["val"] = raw_val_dataset raw_dataset = raw_dataset.class_encode_column("label") print(raw_dataset) print(raw_dataset["train"].features) print(raw_dataset["val"].features) print(raw_dataset["train"][1]) label2id = {} id2label = {} for i, l in enumerate(raw_dataset["train"].features["label"].names): label2id[l] = i id2label[i] = l discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/electra-if", label2id=label2id, id2label=id2label) tokenizer = AutoTokenizer.from_pretrained("Aureliano/electra-if") tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True) pre_tokenizer_columns = set(raw_dataset["train"].features) encoded_dataset = raw_dataset.map(tokenize_function, batched=True) tokenizer_columns = list(set(encoded_dataset["train"].features) - 
pre_tokenizer_columns) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") batch_size = len(encoded_dataset["train"]) tf_train_dataset = encoded_dataset["train"].to_tf_dataset( columns=tokenizer_columns, label_cols=["labels"], shuffle=True, batch_size=batch_size, collate_fn=data_collator ) tf_validation_dataset = encoded_dataset["val"].to_tf_dataset( columns=tokenizer_columns, label_cols=["labels"], shuffle=False, batch_size=batch_size, collate_fn=data_collator ) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) num_epochs = 25 batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size) total_train_steps = int(batches_per_epoch * num_epochs) optimizer, schedule = create_optimizer( init_lr=5e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps ) metric = load_metric("accuracy") def compute_metrics(eval_predictions): logits, labels = eval_predictions predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset) callbacks = [metric_callback] discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"]) discriminator.fit( tf_train_dataset, epochs=num_epochs, validation_data=tf_validation_dataset, callbacks=callbacks ) print("Evaluate on test data") results = discriminator.evaluate(tf_validation_dataset) print("test loss, test acc:", results) text = "i" encoded_input = tokenizer(text, return_tensors='tf') output = discriminator(encoded_input) prediction = tf.nn.softmax(output["logits"][0], -1) label = id2label[tf.math.argmax(prediction).numpy()] print("\n", text, ":", label, "\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset text = "get lamp" encoded_input = tokenizer(text, return_tensors='tf') output = discriminator(encoded_input) prediction = tf.nn.softmax(output["logits"][0], -1) label = id2label[tf.math.argmax(prediction).numpy()] print("\n", text, ":", label, "\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset text = "w" encoded_input = tokenizer(text, return_tensors='tf') output = discriminator(encoded_input) prediction = tf.nn.softmax(output["logits"][0], -1) label = id2label[tf.math.argmax(prediction).numpy()] print("\n", text, ":", label, "\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset ```
javilonso/classificationPolEsp1
javilonso
2022-03-30T09:02:50Z
3
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T07:49:20Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: javilonso/classificationPolEsp1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # javilonso/classificationPolEsp1 This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3728 - Validation Loss: 0.6217 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6282 | 0.6017 | 0 | | 0.5129 | 0.6177 | 1 | | 0.3728 | 0.6217 | 2 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Datasets 2.0.0 - Tokenizers 0.11.6
shrishail/t5_paraphrase_msrp_paws
shrishail
2022-03-30T05:47:27Z
38
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "paraphrase-generation", "text-generation", "Conditional Generation", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-29T13:13:11Z
--- language: "en" tags: - paraphrase-generation - text-generation - Conditional Generation inference: false --- # Simple model for Paraphrase Generation ​ ## Model description ​ T5-based model for generating paraphrased sentences. It is trained on the labeled [MSRP](https://www.microsoft.com/en-us/download/details.aspx?id=52398) and [Google PAWS](https://github.com/google-research-datasets/paws) dataset. ​ ## How to use ​ ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("shrishail/t5_paraphrase_msrp_paws") model = AutoModelForSeq2SeqLM.from_pretrained("shrishail/t5_paraphrase_msrp_paws") ​ sentence = "This is something which i cannot understand at all" text = "paraphrase: " + sentence + " </s>" encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, do_sample=True, top_k=120, top_p=0.95, early_stopping=True, num_return_sequences=5 ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True,clean_up_tokenization_spaces=True) print(line) ​ ```
loulou/distilbert-base-uncased-finetuned-emotion
loulou
2022-03-30T04:57:58Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-22T04:55:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9221931901873676 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2285 - Accuracy: 0.922 - F1: 0.9222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8366 | 1.0 | 250 | 0.3212 | 0.9025 | 0.8990 | | 0.2588 | 2.0 | 500 | 0.2285 | 0.922 | 0.9222 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
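A minimal usage sketch; the label names should come from the emotion dataset via the model config, but verify them on your own install.

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="loulou/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
# expected form: [{'label': 'joy', 'score': ...}]
```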
lazyturtl/roomidentifier
lazyturtl
2022-03-30T04:10:41Z
89
3
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-30T04:10:32Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: roomidentifier results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9375 --- # roomidentifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Bathroom ![Bathroom](images/Bathroom.jpg) #### Bedroom ![Bedroom](images/Bedroom.jpg) #### DinningRoom ![DinningRoom](images/DinningRoom.jpg) #### Kitchen ![Kitchen](images/Kitchen.jpg) #### LivingRoom ![LivingRoom](images/LivingRoom.jpg)
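A hedged usage sketch; the image path below is a hypothetical placeholder you would replace with your own photo.

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="lazyturtl/roomidentifier")
image = Image.open("my_room.jpg")  # hypothetical local file
print(classifier(image))  # scores over Bathroom/Bedroom/DinningRoom/Kitchen/LivingRoom
```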
samayash/finetuning-financial-news-sentiment
samayash
2022-03-30T03:36:40Z
4
3
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T03:27:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-financial-news-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-financial-news-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3345 - Accuracy: 0.8751 - F1: 0.8751 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
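A small inference sketch that prints the label probabilities explicitly (the headline is an invented example).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "samayash/finetuning-financial-news-sentiment"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

headline = "Company X raises full-year guidance after strong quarter"
inputs = tokenizer(headline, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```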
tharangahf/botcircuits_nlu
tharangahf
2022-03-30T02:32:47Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-30T02:32:47Z
--- license: apache-2.0 ---
ntt123/hifigan_ljs_22k
ntt123
2022-03-30T01:47:26Z
0
0
null
[ "tensorboard", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2022-03-29T02:20:52Z
--- license: cc-by-nc-sa-4.0 ---
aaraki/vit-base-patch16-224-in21k-finetuned-cifar10
aaraki
2022-03-30T01:41:47Z
8,239
10
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:cifar10", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-30T00:18:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cifar10 metrics: - accuracy model-index: - name: vit-base-patch16-224-in21k-finetuned-cifar10 results: - task: name: Image Classification type: image-classification dataset: name: cifar10 type: cifar10 args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9788 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-cifar10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.2564 - Accuracy: 0.9788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4291 | 1.0 | 390 | 0.2564 | 0.9788 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
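A usage sketch with the feature-extractor API; the image file is a hypothetical placeholder (any RGB image works, since the extractor resizes to 224x224).

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

repo = "aaraki/vit-base-patch16-224-in21k-finetuned-cifar10"
extractor = AutoFeatureExtractor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("cat.png").convert("RGB")  # hypothetical local file
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # one of the 10 CIFAR-10 classes
```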
cammiemw/bert-marco-hdct
cammiemw
2022-03-30T01:21:38Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T01:09:55Z
--- license: cc-by-nc-4.0 ---
DrishtiSharma/poem-gen-spanish-t5-small-v7
DrishtiSharma
2022-03-30T00:34:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T19:14:40Z
--- license: mit tags: - generated_from_trainer model-index: - name: poem-gen-spanish-t5-small-v7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-spanish-t5-small-v7 This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000333 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.1716 | 0.73 | 30000 | 3.1114 | | 2.9666 | 1.46 | 60000 | 3.0271 | | 2.8292 | 2.19 | 90000 | 2.9531 | | 2.7264 | 2.93 | 120000 | 2.9126 | | 2.6057 | 3.66 | 150000 | 2.9175 | | 2.4876 | 4.39 | 180000 | 2.9077 | | 2.3791 | 5.12 | 210000 | 2.9240 | | 2.3515 | 5.85 | 240000 | 2.9169 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
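A generation sketch; the prompt format below is only an assumption, since the card does not document the conditioning format inherited from the base poem-gen checkpoint.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "DrishtiSharma/poem-gen-spanish-t5-small-v7"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

prompt = "poema: amor bajo la luna"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_p=0.95, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```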
BigSalmon/PointsToSentence
BigSalmon
2022-03-29T23:11:32Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-29T22:58:46Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToSentence") model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToSentence") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27 The model turns keywords or bullet points into full sentences.
efederici/sentence-it5-base
efederici
2022-03-29T23:09:01Z
35
4
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "feature-extraction", "sentence-similarity", "transformers", "it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-29T19:57:59Z
--- pipeline_tag: sentence-similarity language: - it tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-IT5-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-base)) base model. It is trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)), tags/news-article pairs, headline/text pairs ([change-it](https://huggingface.co/datasets/gsarti/change_it)) and on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train). ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] model = SentenceTransformer('efederici/sentence-IT5-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-base') model = AutoModel.from_pretrained('efederici/sentence-IT5-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
espnet/bur_openslr80_hubert
espnet
2022-03-29T22:19:50Z
0
0
null
[ "region:us" ]
null
2022-03-28T22:04:54Z
<!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Mar 21 22:59:35 UTC 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.10.1` - Git hash: `7ae4efd81778436a98b822483e8123adba6aa430` - Commit date: `Tue Mar 15 20:11:18 2022 -0400` ## asr_train_asr_hubert_transformer_adam_specaug_raw_bpe150 ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|4227|39.1|50.4|10.5|6.1|67.0|99.8| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|33345|82.2|7.6|10.1|3.6|21.4|99.8| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|18237|70.7|17.7|11.6|2.5|31.8|99.8|
Chikashi/t5-small-finetuned-cnndm_3epoch
Chikashi
2022-03-29T19:28:09Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T00:14:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cnn_dailymail metrics: - rouge model-index: - name: t5-small-finetuned-cnndm_3epoch results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cnn_dailymail type: cnn_dailymail args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 24.5435 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-cnndm_3epoch This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6622 - Rouge1: 24.5435 - Rouge2: 11.7919 - Rougel: 20.2929 - Rougelsum: 23.1661 - Gen Len: 18.9996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9113 | 0.14 | 5000 | 1.7162 | 24.4374 | 11.6932 | 20.1741 | 23.0427 | 18.9997 | | 1.8772 | 0.28 | 10000 | 1.7008 | 24.3715 | 11.6699 | 20.1387 | 22.9772 | 18.9997 | | 1.8609 | 0.42 | 15000 | 1.6911 | 24.4174 | 11.6986 | 20.1756 | 23.0205 | 18.9997 | | 1.8564 | 0.56 | 20000 | 1.6871 | 24.4374 | 11.6801 | 20.1663 | 23.0366 | 18.9995 | | 1.8495 | 0.7 | 25000 | 1.6796 | 24.4019 | 11.6901 | 20.177 | 23.034 | 18.999 | | 1.8448 | 0.84 | 30000 | 1.6787 | 24.4813 | 11.7227 | 20.1985 | 23.0847 | 18.999 | | 1.8427 | 0.98 | 35000 | 1.6762 | 24.4905 | 11.7591 | 20.2548 | 23.1006 | 18.9993 | | 1.8341 | 1.11 | 40000 | 1.6747 | 24.4743 | 11.7124 | 20.1782 | 23.0726 | 18.9996 | | 1.822 | 1.25 | 45000 | 1.6753 | 24.4797 | 11.7292 | 20.2319 | 23.0816 | 18.9993 | | 1.8262 | 1.39 | 50000 | 1.6713 | 24.4865 | 11.7079 | 20.2214 | 23.0919 | 18.9986 | | 1.8281 | 1.53 | 55000 | 1.6702 | 24.5095 | 11.7364 | 20.2534 | 23.1264 | 18.9991 | | 1.8228 | 1.67 | 60000 | 1.6678 | 24.5153 | 11.7595 | 20.2544 | 23.1138 | 18.9993 | | 1.824 | 1.81 | 65000 | 1.6662 | 24.5324 | 11.7804 | 20.2671 | 23.1498 | 18.9997 | | 1.8265 | 1.95 | 70000 | 1.6648 | 24.5795 | 11.7917 | 20.2935 | 23.1855 | 18.9992 | | 1.8179 | 2.09 | 75000 | 1.6658 | 24.5426 | 11.804 | 20.2861 | 23.1586 | 18.9996 | | 1.8147 | 2.23 | 80000 | 1.6646 | 24.5429 | 11.7914 | 20.2889 | 23.1542 | 18.9993 | | 1.8026 | 2.37 | 85000 | 1.6632 | 24.5451 | 11.8045 | 20.2781 | 23.1555 | 18.9996 | | 1.8141 | 2.51 | 90000 | 1.6643 | 24.5078 | 11.7781 | 20.2631 | 23.121 | 18.9996 | | 1.8124 | 2.65 | 95000 | 1.6628 | 24.5728 | 11.7958 | 20.2875 | 23.178 | 18.9996 | | 1.8098 | 2.79 | 100000 | 1.6635 | 24.5534 | 11.7998 | 20.2979 | 23.169 | 18.9996 | | 1.8153 | 2.93 | 105000 | 1.6622 | 24.5435 | 11.7919 | 20.2929 | 23.1661 | 18.9996 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
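A summarization sketch; the article text is an invented placeholder, and the pipeline is assumed to apply the T5 `summarize:` prefix inherited from the t5-small config (worth verifying).

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm_3epoch")

article = ("The city council approved a new transit plan on Tuesday, "
           "promising more frequent bus service and two new light-rail lines by 2030.")
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```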
efederici/sentence-it5-small
efederici
2022-03-29T17:29:14Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "feature-extraction", "sentence-similarity", "transformers", "it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-27T15:19:10Z
--- pipeline_tag: sentence-similarity language: - it tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-IT5-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-small)) small model trained for asymmetric semantic search. Query is a keyword, Paragraph is a short news article. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] model = SentenceTransformer('efederici/sentence-IT5-small') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-small') model = AutoModel.from_pretrained('efederici/sentence-IT5-small') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
krinal214/augmented
krinal214
2022-03-29T16:58:16Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-29T15:02:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: augmented results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # augmented This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5104 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0609 | 1.0 | 9787 | 0.5104 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
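A question-answering sketch with an invented question/context pair.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="krinal214/augmented")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.")
print(result)  # {'answer': ..., 'score': ..., 'start': ..., 'end': ...}
```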
GleamEyeBeast/ascend
GleamEyeBeast
2022-03-29T16:49:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-29T01:37:59Z
--- tags: - generated_from_trainer model-index: - name: ascend results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ascend This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3718 - Wer: 0.6412 - Cer: 0.2428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 0.5769 | 1.0 | 688 | 1.1864 | 0.7716 | 0.3159 | | 0.5215 | 2.0 | 1376 | 1.1613 | 0.7504 | 0.2965 | | 0.4188 | 3.0 | 2064 | 1.1644 | 0.7389 | 0.2950 | | 0.3695 | 4.0 | 2752 | 1.1937 | 0.7184 | 0.2815 | | 0.3404 | 5.0 | 3440 | 1.1947 | 0.7083 | 0.2719 | | 0.2885 | 6.0 | 4128 | 1.2314 | 0.7108 | 0.2685 | | 0.2727 | 7.0 | 4816 | 1.2243 | 0.6850 | 0.2616 | | 0.2417 | 8.0 | 5504 | 1.2506 | 0.6767 | 0.2608 | | 0.2207 | 9.0 | 6192 | 1.2804 | 0.6922 | 0.2595 | | 0.2195 | 10.0 | 6880 | 1.2582 | 0.6818 | 0.2575 | | 0.1896 | 11.0 | 7568 | 1.3101 | 0.6814 | 0.2545 | | 0.1961 | 12.0 | 8256 | 1.2793 | 0.6706 | 0.2526 | | 0.1752 | 13.0 | 8944 | 1.2643 | 0.6584 | 0.2509 | | 0.1638 | 14.0 | 9632 | 1.3152 | 0.6588 | 0.2482 | | 0.1522 | 15.0 | 10320 | 1.3098 | 0.6433 | 0.2439 | | 0.1351 | 16.0 | 11008 | 1.3253 | 0.6537 | 0.2447 | | 0.1266 | 17.0 | 11696 | 1.3394 | 0.6365 | 0.2418 | | 0.1289 | 18.0 | 12384 | 1.3718 | 0.6412 | 0.2443 | | 0.1204 | 19.0 | 13072 | 1.3708 | 0.6433 | 0.2433 | | 0.1189 | 20.0 | 13760 | 1.3718 | 0.6412 | 0.2428 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
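A transcription sketch; `sample.wav` is a hypothetical placeholder for a 16 kHz mono recording (the expected sampling rate is an assumption based on the usual wav2vec2 setup).

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="GleamEyeBeast/ascend")
print(asr("sample.wav"))  # {'text': ...}
```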
gabitoo1234/autotrain-mut_all_text-680820343
gabitoo1234
2022-03-29T16:09:31Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "es", "dataset:gabitoo1234/autotrain-data-mut_all_text", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T14:22:14Z
--- tags: autotrain language: es widget: - text: "I love AutoTrain 🤗" datasets: - gabitoo1234/autotrain-data-mut_all_text co2_eq_emissions: 115.48848403681228 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 680820343 - CO2 Emissions (in grams): 115.48848403681228 ## Validation Metrics - Loss: 0.3041240870952606 - Accuracy: 0.9462770369425126 - Macro F1: 0.7836898686625933 - Micro F1: 0.9462770369425126 - Weighted F1: 0.9449148298990091 - Macro Precision: 0.8344505891491089 - Micro Precision: 0.9462770369425126 - Weighted Precision: 0.9451247372908952 - Macro Recall: 0.7568785255994025 - Micro Recall: 0.9462770369425126 - Weighted Recall: 0.9462770369425126 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gabitoo1234/autotrain-mut_all_text-680820343 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
tbosse/bert-base-german-cased-finetuned-subj_v1
tbosse
2022-03-29T15:59:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-29T14:22:30Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-german-cased-finetuned-subj_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-finetuned-subj_v1 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1594 - Precision: 0.1875 - Recall: 0.0077 - F1: 0.0147 - Accuracy: 0.9508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 136 | 0.1591 | 1.0 | 0.0051 | 0.0102 | 0.9523 | | No log | 2.0 | 272 | 0.1571 | 0.375 | 0.0077 | 0.015 | 0.9518 | | No log | 3.0 | 408 | 0.1594 | 0.1875 | 0.0077 | 0.0147 | 0.9508 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
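A tagging sketch; note the card reports very low recall, so few or no subjective spans may be returned (the German sentence is an invented example).

```python
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="tbosse/bert-base-german-cased-finetuned-subj_v1",
                  aggregation_strategy="simple")
print(tagger("Der Film war meiner Meinung nach absolut großartig."))
```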
sayef/fsner-bert-base-uncased
sayef
2022-03-29T14:20:35Z
9
6
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2008.10570", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# FSNER Implemented by [sayef](https://huggingface.co/sayef). # Overview The FSNER model was proposed in [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) by Morteza Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, Weizhu Chen. To identify entity spans in a new domain, it uses a train-free few-shot learning approach inspired by question-answering. ## Abstract > We present a novel approach to named entity recognition (NER) in the presence of scarce data that we call example-based NER. Our train-free few-shot learning approach takes inspiration from question-answering to identify entity spans in a new and unseen domain. In comparison with the current state-of-the-art, the proposed method performs significantly better, especially when using a low number of support examples. ## Model Training Details | identifier | epochs | datasets | | ---------- |:------:|:-----------------------------------------------------------------------------------------------:| | [sayef/fsner-bert-base-uncased](https://huggingface.co/sayef/fsner-bert-base-uncased) | 25 | ontonotes5, conll2003, wnut2017, mit_movie_trivia, mit_restaurant and fin (Alvarado et al.). | ## Installation and Example Usage You can use the FSNER model in 3 ways: 1. Install directly from PyPI: `pip install fsner` and import the model as shown in the code example below or 2. Install from source: `python install .` and import the model as shown in the code example below or 3. Clone [repo](https://github.com/sayef/fsner) and add absolute path of `fsner/src` directory to your PYTHONPATH and import the model as shown in the code example below ```python import json from fsner import FSNERModel, FSNERTokenizerUtils, pretty_embed query_texts = [ "Does Luke's serve lunch?", "Chang does not speak Taiwanese very well.", "I like Berlin." ] # Each list in supports are the examples of one entity type # Wrap entities around with [E] and [/E] in the examples. # Each sentence should have only one pair of [E] ... [/E] support_texts = { "Restaurant": [ "What time does [E] Subway [/E] open for breakfast?", "Is there a [E] China Garden [/E] restaurant in newark?", "Does [E] Le Cirque [/E] have valet parking?", "Is there a [E] McDonalds [/E] on main street?", "Does [E] Mike's Diner [/E] offer huge portions and outdoor dining?" ], "Language": [ "Although I understood no [E] French [/E] in those days , I was prepared to spend the whole day with Chien - chien .", "like what the hell 's that called in [E] English [/E] ? I have to register to be here like since I 'm a foreigner .", "So , I 'm also working on an [E] English [/E] degree because that 's my real interest .", "Al - Jazeera TV station , established in November 1996 in Qatar , is an [E] Arabic - language [/E] news TV station broadcasting global news and reports nonstop around the clock .", "They think it 's far better for their children to be here improving their [E] English [/E] than sitting at home in front of a TV . \"", "The only solution seemed to be to have her learn [E] French [/E] .", "I have to read sixty pages of [E] Russian [/E] today ." 
] } device = 'cpu' tokenizer = FSNERTokenizerUtils("sayef/fsner-bert-base-uncased") queries = tokenizer.tokenize(query_texts).to(device) supports = tokenizer.tokenize(list(support_texts.values())).to(device) model = FSNERModel("sayef/fsner-bert-base-uncased") model.to(device) p_starts, p_ends = model.predict(queries, supports) # One can prepare supports once and reuse multiple times with different queries # ------------------------------------------------------------------------------ # start_token_embeddings, end_token_embeddings = model.prepare_supports(supports) # p_starts, p_ends = model.predict(queries, start_token_embeddings=start_token_embeddings, # end_token_embeddings=end_token_embeddings) output = tokenizer.extract_entity_from_scores(query_texts, queries, p_starts, p_ends, entity_keys=list(support_texts.keys()), thresh=0.50) print(json.dumps(output, indent=2)) # install displacy for pretty embed pretty_embed(query_texts, output, list(support_texts.keys())) ``` <!DOCTYPE html> <html lang="en"> <head> <title>displaCy</title> </head> <body style="font-size: 16px; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol'; padding: 4rem 2rem; direction: ltr"> <figure style="margin-bottom: 6rem"> <div class="entities" style="line-height: 2.5; direction: ltr"> <div class="entities" style="line-height: 2.5; direction: ltr">Does <mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Luke's <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Restaurant</span> </mark> serve lunch?</div> <div class="entities" style="line-height: 2.5; direction: ltr">Chang does not speak <mark class="entity" style="background: #bfeeb7; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Taiwanese <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Language</span> </mark> very well.</div> <div class="entities" style="line-height: 2.5; direction: ltr">I like Berlin.</div> </div> </figure> </body> </html> ## Datasets preparation 1. We need to convert dataset into the following format. Let's say we have a dataset file train.json like following. 2. Each list in supports are the examples of one entity type 3. Wrap entities around with [E] and [/E] in the examples. 4. Each example should have only one pair of [E] ... [/E]. ```json { "CARDINAL_NUMBER": [ "Washington , cloudy , [E] 2 [/E] to 6 degrees .", "New Dehli , sunny , [E] 6 [/E] to 19 degrees .", "Well this is number [E] two [/E] .", "....." ], "LANGUAGE": [ "They do n't have the Quicken [E] Dutch [/E] version ?", "they learned a lot of [E] German [/E] .", "and then [E] Dutch [/E] it 's Mifrau", "...." ], "MONEY": [ "Per capita personal income ranged from $ [E] 11,116 [/E] in Mississippi to $ 23,059 in Connecticut ... .", "The trade surplus was [E] 582 million US dollars [/E] .", "It settled with a loss of 4.95 cents at $ [E] 1.3210 [/E] a pound .", "...." ] } ``` 2. Converted ontonotes5 dataset can be found here: 1. [train](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.train.json) 2. 
[dev](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.dev.json) 3. Then the trainer script can be used to train/evaluate your FSNER model. ```bash fsner trainer --pretrained-model bert-base-uncased --mode train --train-data train.json --val-data val.json \ --train-batch-size 6 --val-batch-size 6 --n-examples-per-entity 10 --neg-example-batch-ratio 1/3 --max-epochs 25 --device gpu \ --gpus -1 --strategy ddp ```
ArtemChistyakov-2/f
ArtemChistyakov-2
2022-03-29T12:21:18Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-29T12:21:18Z
--- license: apache-2.0 ---
gayanin/bart-med-term-conditional-masking-0
gayanin
2022-03-29T12:03:56Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-28T22:12:30Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-med-term-conditional-masking-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-med-term-conditional-masking-0 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5041 - Rouge2 Precision: 0.7497 - Rouge2 Recall: 0.5246 - Rouge2 Fmeasure: 0.5986 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.6381 | 1.0 | 13915 | 0.5595 | 0.734 | 0.5152 | 0.5873 | | 0.5429 | 2.0 | 27830 | 0.5243 | 0.7441 | 0.5225 | 0.5956 | | 0.5002 | 3.0 | 41745 | 0.5078 | 0.7482 | 0.5238 | 0.5976 | | 0.4607 | 4.0 | 55660 | 0.5041 | 0.7497 | 0.5246 | 0.5986 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms
scasutt
2022-03-29T11:29:52Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-28T18:54:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5945 - Wer: 0.4929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4049 | 1.05 | 250 | 3.3497 | 1.0 | | 3.0851 | 2.1 | 500 | 3.4440 | 1.0 | | 2.3512 | 3.15 | 750 | 1.5938 | 0.9317 | | 1.1762 | 4.2 | 1000 | 0.8481 | 0.7333 | | 0.903 | 5.25 | 1250 | 0.7180 | 0.6484 | | 0.6754 | 6.3 | 1500 | 0.6603 | 0.6044 | | 0.5961 | 7.35 | 1750 | 0.6410 | 0.5778 | | 0.5325 | 8.4 | 2000 | 0.6245 | 0.5545 | | 0.4685 | 9.45 | 2250 | 0.5925 | 0.5359 | | 0.4526 | 10.5 | 2500 | 0.5991 | 0.5345 | | 0.3975 | 11.55 | 2750 | 0.5916 | 0.5228 | | 0.3672 | 12.6 | 3000 | 0.5882 | 0.5037 | | 0.3774 | 13.65 | 3250 | 0.5693 | 0.5028 | | 0.3489 | 14.7 | 3500 | 0.5645 | 0.5018 | | 0.3593 | 15.75 | 3750 | 0.5977 | 0.5043 | | 0.3167 | 16.81 | 4000 | 0.6049 | 0.5018 | | 0.3225 | 17.86 | 4250 | 0.6172 | 0.4921 | | 0.2807 | 18.91 | 4500 | 0.5937 | 0.4923 | | 0.2889 | 19.96 | 4750 | 0.5945 | 0.4929 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
ai4bharat/MultiIndicWikiBioUnified
ai4bharat
2022-03-29T09:25:58Z
5
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "wikibio", "multilingual", "nlp", "indicnlp", "as", "bn", "hi", "kn", "ml", "or", "pa", "ta", "te", "dataset:ai4bharat/IndicWikiBio", "arxiv:2203.05437", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-16T11:35:33Z
---
tags:
- wikibio
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicWikiBio
language:
- as
- bn
- hi
- kn
- ml
- or
- pa
- ta
- te
licenses:
- cc-by-nc-4.0
widget:
- <TAG> name </TAG> नवतेज भारती <TAG> image </TAG> NavtejBharati . jpg <TAG> birth name </TAG> नवतेज <TAG> birth date </TAG> 1938 <TAG> birth place </TAG> रोडे , भारतीय पंजाब , भारत । पंजाब <TAG> occupation </TAG> लेखक , कवि <TAG> nationality </TAG> कैनेडा । कैनेडियन <TAG> ethnicity </TAG> पंजाबी लोक । पंजाबी </s> <2hi>
---

# MultiIndicWikiBioUnified

MultiIndicWikiBioUnified is a multilingual, sequence-to-sequence pre-trained model, an [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint fine-tuned on the 9 languages of the [IndicWikiBio](https://huggingface.co/datasets/ai4bharat/IndicWikiBio) dataset. For fine-tuning details, see the [paper](https://arxiv.org/abs/2203.05437). You can use MultiIndicWikiBio to build biography generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of MultiIndicWikiBio are:
<ul>
<li>Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.</li>
<li>The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding.</li>
<li>Fine-tuned on Indic-language corpora (34,653 examples).</li>
<li>All languages are represented in the Devanagari script to encourage transfer learning among the related languages.</li>
</ul>

You can read more about MultiIndicWikiBioUnified in this <a href="https://arxiv.org/abs/2203.05437">paper</a>.

## Using this model in `transformers`

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])

# For loss
model_outputs.loss  ## This is not label smoothed.

# For logits
model_outputs.logits

# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval()  # Set dropouts to zero
model_output = model.generate(inp, use_cache=True, no_repeat_ngram_size=3, encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))

# Decode to get output strings
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे।

# Disclaimer
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the [Indic NLP Library](https://github.com/AI4Bharat/indic-bart/blob/main/indic_scriptmap.py).
```

# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.

## Benchmarks

Scores on the `IndicWikiBio` test sets are as follows:

Language | RougeL
---------|----------------------------
as | 56.28
bn | 57.42
hi | 67.48
kn | 40.01
ml | 38.84
or | 67.13
pa | 52.88
ta | 51.82
te | 51.43

## Citation

If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url = "https://arxiv.org/abs/2203.05437"
}
```

# License

The model is available under the MIT License.
Davlan/m2m100_418M-yor-eng-mt
Davlan
2022-03-29T09:21:03Z
5
0
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---

# m2m100_418M-yor-eng-mt

## Model description

**m2m100_418M-yor-eng-mt** is a **machine translation** model from the Yorùbá language to the English language, based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English. Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).

#### Limitations and bias

This model is limited by its training dataset and may not generalize well to all use cases in different domains.

## Training data

This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.

## Training procedure

This model was trained on an NVIDIA V100 GPU.

## Eval results on Test set (BLEU score)

Fine-tuning m2m100_418M achieves **16.76 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 15.57.

### BibTeX entry and citation info

By David Adelani
```
```
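No inference example is given above; a minimal translation sketch with the `transformers` M2M100 classes could look like the one below. It assumes the repository ships the standard M2M100 tokenizer files, and the Yorùbá input sentence is only illustrative:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "Davlan/m2m100_418M-yor-eng-mt"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# M2M100 needs the source language set on the tokenizer before encoding
tokenizer.src_lang = "yo"
text = "Báwo ni?"  # illustrative Yorùbá input

inputs = tokenizer(text, return_tensors="pt")
# Force the first generated token to be the English language id
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```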
PereLluis13/Wav2Vec2-Large-XLSR-53-catalan
PereLluis13
2022-03-29T08:51:28Z
6,942
2
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ca", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: ca datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Catalan XLSR Wav2Vec Large 53 #TODO: replace {human_readable_name} with a name of your model as it should appear on the leaderboard. It could be something like `Elgeish XLSR Wav2Vec2 Large 53` results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ca type: common_voice args: ca #TODO: metrics: - name: Test WER type: wer value: 8.11 --- # Disclaimer This model was trained on Common Voice 6, if you need a catalan model for ASR, I recommend checking [wav2vec2-xls-r-1b-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-1b-ca-lm) which is a 1b model with a LM on top trained on CV8+ with much better performance or [wav2vec2-xls-r-300m-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) which has the same size (300m) as this model but trained on CV8+ and the same LM. # Wav2Vec2-Large-XLSR-53-ca Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ca", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the catalan test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ca", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

import jiwer

# Chunk WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es
def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)

print("WER: {:.2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
```

**Test Result**: 8.11 %

## Training

The Common Voice `train` and `validation` datasets were used for training. At the second epoch, training was halted due to a memory issue and continued with a lower batch size, but gradient accumulation steps were scaled to keep the effective batch size at 32 throughout training. The model was then trained for an additional 10 epochs in which half of the male samples were pitched up.

The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were made in order to speed up the ordering by length during training, which can be found [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6).

Another version trained for Catalan can be found [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala), which may be better than this one since it was trained with extra data and for a longer time. However, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset.
PereLluis13/wav2vec2-xls-r-1b-ca
PereLluis13
2022-03-29T08:44:49Z
17
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ca license: apache-2.0 tags: - automatic-speech-recognition - collectivat/tv3_parla - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - projecte-aina/parlament_parla - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - collectivat/tv3_parla - projecte-aina/parlament_parla model-index: - name: wav2vec2-xls-r-1b-ca results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_8_0 ca type: mozilla-foundation/common_voice_8_0 args: ca metrics: - name: Test WER type: wer value: 11.030639657300516 - name: Test CER type: cer value: 2.8405630530040634 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: projecte-aina/parlament_parla ca type: projecte-aina/parlament_parla args: clean metrics: - name: Test WER type: wer value: 6.483115660665961 - name: Test CER type: cer value: 2.0212863746191828 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: collectivat/tv3_parla ca type: collectivat/tv3_parla args: ca metrics: - name: Test WER type: wer value: 17.917773414943988 - name: Test CER type: cer value: 8.872589572206396 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Catalan Dev Data type: speech-recognition-community-v2/dev_data args: ca metrics: - name: Test WER type: wer value: 27.126683954209097 - name: Test CER type: cer value: 14.213308815078726 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ca metrics: - name: Test WER type: wer value: 18.7 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0

# Thanks

We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
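The card lists training details but no inference snippet; a minimal transcription sketch is shown below, assuming a mono recording in a hypothetical file `audio.wav` that is resampled to the 16 kHz rate the model expects:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "PereLluis13/wav2vec2-xls-r-1b-ca"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a recording and resample it to 16 kHz
speech, sr = torchaudio.load("audio.wav")  # hypothetical input file
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```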
PereLluis13/wav2vec2-xls-r-300m-ca
PereLluis13
2022-03-29T08:43:53Z
52
2
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ca license: apache-2.0 tags: - automatic-speech-recognition - collectivat/tv3_parla - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - projecte-aina/parlament_parla - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - collectivat/tv3_parla - projecte-aina/parlament_parla model-index: - name: wav2vec2-xls-r-300m-ca results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_8_0 ca type: mozilla-foundation/common_voice_8_0 args: ca metrics: - name: Test WER type: wer value: 13.170091241317552 - name: Test CER type: cer value: 3.356726205534543 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: projecte-aina/parlament_parla ca type: projecte-aina/parlament_parla args: clean metrics: - name: Test WER type: wer value: 8.048005647723261 - name: Test CER type: cer value: 2.240912911020065 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: collectivat/tv3_parla ca type: collectivat/tv3_parla args: ca metrics: - name: Test WER type: wer value: 23.320629787889285 - name: Test CER type: cer value: 10.439216202089989 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: speech-recognition-community-v2/dev_data ca type: speech-recognition-community-v2/dev_data args: ca metrics: - name: Test WER type: wer value: 31.99671115046487 - name: Test CER type: cer value: 15.820020687277325 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ca metrics: - name: Test WER type: wer value: 22.04 --- # wav2vec2-xls-r-300m-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (for the three datasets): - Loss: 0.2472 - Wer: 0.1499 ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data More information needed ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 
0.11.0

# Thanks

We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
PereLluis13/wav2vec2-xls-r-300m-ca-lm
PereLluis13
2022-03-29T08:42:55Z
20
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ca license: apache-2.0 tags: - automatic-speech-recognition - collectivat/tv3_parla - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - projecte-aina/parlament_parla - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - collectivat/tv3_parla - projecte-aina/parlament_parla model-index: - name: wav2vec2-xls-r-300m-ca-lm results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_8_0 ca type: mozilla-foundation/common_voice_8_0 args: ca metrics: - name: Test WER type: wer value: 6.771703090587865 - name: Test CER type: cer value: 2.1007777843712293 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: projecte-aina/parlament_parla ca type: projecte-aina/parlament_parla args: clean metrics: - name: Test WER type: wer value: 5.565360630662431 - name: Test CER type: cer value: 1.8594390167034354 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: collectivat/tv3_parla ca type: collectivat/tv3_parla args: ca metrics: - name: Test WER type: wer value: 13.53312545713516 - name: Test CER type: cer value: 8.684635913340556 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Catalan Dev Data type: speech-recognition-community-v2/dev_data args: ca metrics: - name: Test WER type: wer value: 26.04515843400164 - name: Test CER type: cer value: 15.056890012642224 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ca metrics: - name: Test WER type: wer value: 17.68 --- # wav2vec2-xls-r-300m-ca-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. It achieves the following results on the evaluation set (for the three datasets and without the LM): - Loss: 0.2472 - Wer: 0.1499 ## Model description Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data More information needed ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 18.0 - mixed_precision_training: Native AMP ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.2099 | 0.09 | 500 | 3.4125 | 1.0 | | 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 | | 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 | | 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 | | 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 | | 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 | | 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 | | 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 | | 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 | | 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 | | 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 | | 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 | | 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 | | 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 | | 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 | | 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 | | 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 | | 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 | | 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 | | 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 | | 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 | | 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 | | 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 | | 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 | | 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 | | 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 | | 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 | | 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 | | 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 | | 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 | | 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 | | 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 | | 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 | | 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 | | 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 | | 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 | | 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 | | 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 | | 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 | | 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 | | 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 | | 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 | | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 | | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 | | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 | | 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 | | 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 | | 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 | | 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 | | 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 | | 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 | | 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 | | 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 | | 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 | | 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 | | 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 | | 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 | | 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 | | 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 | | 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 | | 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 | | 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 
0.11.0

# Thanks

We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
PereLluis13/wav2vec2-xls-r-1b-ca-lm
PereLluis13
2022-03-29T08:41:46Z
3,126
4
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ca license: apache-2.0 tags: - automatic-speech-recognition - collectivat/tv3_parla - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - projecte-aina/parlament_parla - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - collectivat/tv3_parla - projecte-aina/parlament_parla model-index: - name: wav2vec2-xls-r-1b-ca-lm results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_8_0 ca type: mozilla-foundation/common_voice_8_0 args: ca metrics: - name: Test WER type: wer value: 6.0722669958130644 - name: Test CER type: cer value: 1.9180697705166526 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: projecte-aina/parlament_parla ca type: projecte-aina/parlament_parla args: clean metrics: - name: Test WER type: wer value: 5.139820371024042 - name: Test CER type: cer value: 2.0163620128164722 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: collectivat/tv3_parla ca type: collectivat/tv3_parla args: ca metrics: - name: Test WER type: wer value: 11.207991684952073 - name: Test CER type: cer value: 7.32119307305963 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Catalan Dev Data type: speech-recognition-community-v2/dev_data args: ca metrics: - name: Test WER type: wer value: 22.870153690468661 - name: Test CER type: cer value: 13.59039190897598 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ca metrics: - name: Test WER type: wer value: 15.41 --- # wav2vec2-xls-r-1b-ca-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0

# Thanks

We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
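As a quick way to try this LM-boosted checkpoint, the sketch below uses the ASR pipeline; it assumes `pyctcdecode` and `kenlm` are installed (needed for the bundled n-gram language model) and uses a hypothetical input file `sample.wav`:

```python
from transformers import pipeline

# With pyctcdecode and kenlm installed, decoding uses the language model shipped with the repo
asr = pipeline("automatic-speech-recognition", model="PereLluis13/wav2vec2-xls-r-1b-ca-lm")
print(asr("sample.wav"))  # hypothetical 16 kHz Catalan recording
```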
johnowhitaker/sketchy_unet_rn34
johnowhitaker
2022-03-29T08:02:43Z
0
0
null
[ "license:cc-by-4.0", "region:us" ]
null
2022-03-29T07:57:40Z
---
license: cc-by-4.0
---

This is the exported model for a small project I'm working on, to test integration with Spaces. It is a fastai model and needs some custom code to work. For now, please ignore :)
STARBORN/MMC
STARBORN
2022-03-29T07:14:35Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-03-29T07:12:26Z
--- license: mit --- Metamodel Card (MMC) builds on MC and DC schemas by adding system level abstraction to the data. MMC instantiations follow
rampasek/prot_bert_bfd_rosetta204060aa
rampasek
2022-03-29T04:35:10Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "protein language model", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T04:02:40Z
---
language: protein
tags:
- protein language model
datasets:
- BFD
- Custom Rosetta
---

# ProtBert-BFD finetuned on Rosetta 20,40,60AA dataset

This model is finetuned to predict Rosetta fold energy using a dataset of 300k protein sequences: 100k of 20AA, 100k of 40AA, and 100k of 60AA.

Current model in this repo: `prot_bert_bfd-finetuned-032822_1323`

## Performance

- 20AA sequences (1k eval set):\
  Metrics: 'mae': 0.100418, 'r2': 0.989028, 'mse': 0.016266, 'rmse': 0.127537
- 40AA sequences (10k eval set):\
  Metrics: 'mae': 0.173888, 'r2': 0.963361, 'mse': 0.048218, 'rmse': 0.219587
- 60AA sequences (10k eval set):\
  Metrics: 'mae': 0.235238, 'r2': 0.930164, 'mse': 0.088131, 'rmse': 0.2968

## `prot_bert_bfd` from ProtTrans

The starting pretrained model is from ProtTrans, trained on 2.1 billion proteins from BFD. It was trained on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans).

> Created by [Ladislav Rampasek](https://rampasek.github.io)
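The card does not show how to query the regressor; the sketch below is a minimal example that assumes the checkpoint exposes a single-output sequence-classification (regression) head and follows the usual ProtBert convention of upper-case, space-separated amino acids. The sequence is made up:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "rampasek/prot_bert_bfd_rosetta204060aa"
tokenizer = AutoTokenizer.from_pretrained(model_id, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# ProtBert expects upper-case amino acids separated by spaces
sequence = "M K T A Y I A K Q R Q I S F V K S H F S"  # made-up 20-residue example

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    prediction = model(**inputs).logits.squeeze().item()  # predicted Rosetta fold energy
print(prediction)
```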
rampasek/prot_bert_bfd_rosetta20aa
rampasek
2022-03-29T04:33:02Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "protein language model", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-28T04:13:53Z
--- language: protein tags: - protein language model datasets: - BFD - Custom Rosetta --- # ProtBert-BFD finetuned on Rosetta 20AA dataset This model is finetuned to predict Rosetta fold energy using a dataset of 100k 20AA sequences. Current model in this repo: `prot_bert_bfd-finetuned-032722_1752` ## Performance - 20AA sequences (1k eval set):\ Metrics: 'mae': 0.090115, 'r2': 0.991208, 'mse': 0.013034, 'rmse': 0.114165 - 40AA sequences (10k eval set):\ Metrics: 'mae': 0.537456, 'r2': 0.659122, 'mse': 0.448607, 'rmse': 0.669781 - 60AA sequences (10k eval set):\ Metrics: 'mae': 0.629267, 'r2': 0.506747, 'mse': 0.622476, 'rmse': 0.788972 ## `prot_bert_bfd` from ProtTrans The starting pretrained model is from ProtTrans, trained on 2.1 billion proteins from BFD. It was trained on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans). > Created by [Ladislav Rampasek](https://rampasek.github.io)
tbosse/bert-base-german-cased-finetuned-subj
tbosse
2022-03-28T22:50:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-28T20:51:21Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-german-cased-finetuned-subj results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-finetuned-subj This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1424 - Precision: 0.6514 - Recall: 0.0186 - F1: 0.0363 - Accuracy: 0.9511 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 140 | 0.1588 | 0.6 | 0.0016 | 0.0031 | 0.9507 | | No log | 2.0 | 280 | 0.1466 | 0.75 | 0.0039 | 0.0078 | 0.9508 | | No log | 3.0 | 420 | 0.1424 | 0.6514 | 0.0186 | 0.0363 | 0.9511 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
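The auto-generated card omits a usage example; a minimal sketch with the token-classification pipeline follows (the German sentence is illustrative, and the tag set depends on the unknown fine-tuning dataset):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="tbosse/bert-base-german-cased-finetuned-subj",
    aggregation_strategy="simple",
)
# Illustrative German input; label names come from the (unknown) fine-tuning dataset
print(tagger("Der Film war meiner Meinung nach absolut großartig."))
```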
frtna/ted_mt-Spanish-to-Italian
frtna
2022-03-28T22:04:21Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:new_dataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - new_dataset model-index: - name: ted_mt-Spanish-to-Italian results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ted_mt-Spanish-to-Italian This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-it](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) on the new_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | No log | 1.0 | 46 | 1.4873 | 29.6133 | 26.9081 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.6
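The card omits an inference example; since the base model is a MarianMT checkpoint, a minimal translation sketch (with an illustrative Spanish sentence) could be:

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "frtna/ted_mt-Spanish-to-Italian"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Illustrative Spanish input sentence
batch = tokenizer(["La conferencia empieza mañana por la tarde."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```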
Symbermine/rare-puppers
Symbermine
2022-03-28T19:38:23Z
57
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-28T19:38:13Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9285714030265808 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Husky siberiano ![Husky siberiano](images/Husky_siberiano.jpg) #### cocker spaniel ![cocker spaniel](images/cocker_spaniel.jpg) #### galgo ![galgo](images/galgo.jpg) #### labrador ![labrador](images/labrador.jpg) #### pastor aleman ![pastor aleman](images/pastor_aleman.jpg)
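Like other HuggingPics classifiers, the model can be queried with the image-classification pipeline; in the sketch below the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Symbermine/rare-puppers")
# "my_dog.jpg" is a placeholder path; an image URL also works
print(classifier("my_dog.jpg"))
```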