| column | dtype | values / range |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 18:27:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 549 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 18:24:50 |
| card | string | length 11 to 1.01M |
artemis13fowl/distilbert-base-uncased-finetuned-imdb
artemis13fowl
2022-01-23T14:10:31Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4897 | | 2.5756 | 2.0 | 314 | 2.4230 | | 2.5395 | 3.0 | 471 | 2.4358 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
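The card above names the checkpoint and its `fill-mask` pipeline tag but includes no usage snippet. A minimal sketch of how such a checkpoint is typically loaded with the `transformers` pipeline API follows; the example sentence is illustrative and not taken from the card.

```python
from transformers import pipeline

# Load the fine-tuned masked-language model by the Hub id given in the card above.
fill_mask = pipeline("fill-mask", model="artemis13fowl/distilbert-base-uncased-finetuned-imdb")

# Illustrative input: DistilBERT marks the position to predict with [MASK].
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```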
Madhour/gpt2-eli5
Madhour
2022-01-23T12:00:23Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "ELI5", "en", "dataset:eli5", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: en tags: - ELI5 license: gpl-3.0 datasets: - eli5 Task: Summarization widget: - text: "<|BOS|><|SEP|>Consulting,business,Fraud<|SEP|>" inference: parameters: temperature: 0.9 return_full_text: False repetition_penalty: 1 --- # Conditional ELI5 Generator Given a few keywords, it generates an ELI5 question with a corresponding answer. The model is mainly used for [SeemsPhishy](https://github.com/madhour/seemsphishy) to auto-generate newsletters for phishing/penetration-testing. # How to use ```Python from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM from torch import tensor tokenizer = AutoTokenizer.from_pretrained("Madhour/gpt2-eli5") model = AutoModelForCausalLM.from_pretrained("Madhour/gpt2-eli5") prompt = "<|BOS|>" + "I have a question." + "<|SEP|>" + "keyword1,keyword2,keyword3" + "<|SEP|>" prompt = tensor(tokenizer.encode(prompt)).unsqueeze(0) text = model.generate(prompt, do_sample=True, min_length=50, max_length=768, top_k=30, top_p=0.7, temperature=0.9, repetition_penalty=2.0, num_return_sequences=3) ```
dandelin/vilt-b32-finetuned-flickr30k
dandelin
2022-01-23T09:46:32Z
34
3
transformers
[ "transformers", "pytorch", "vilt", "arxiv:1505.04870", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 --- # Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k Vision-and-Language Transformer (ViLT) model fine-tuned on [Flickr30k](https://arxiv.org/abs/1505.04870#:~:text=The%20Flickr30k%20dataset%20has%20become,for%20sentence%2Dbased%20image%20description.&text=Such%20annotations%20are%20essential%20for,entity%20mentions%20in%20an%20image.). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the model for image and text retrieval. ### How to use Here is how to use the model in PyTorch: ``` from transformers import ViltProcessor, ViltForImageAndTextRetrieval import requests from PIL import Image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k") model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k") # prepare inputs and do a forward pass for each candidate text scores = dict() for text in texts: encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs.logits[0, :].item() ``` ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } ```
ylh1013/ja_chatbot
ylh1013
2022-01-23T02:24:03Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - finetuned_from license: mit tags: - generated_from_trainer model-index: - name: ja_chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ja_chatbot This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu102 - Tokenizers 0.10.3
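The ja_chatbot card documents training hyperparameters but no inference example. Below is a minimal sketch of standard causal-LM generation, assuming the repository ships tokenizer files compatible with its rinna/japanese-gpt2-medium base; the Japanese prompt is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hub id from the card above; assumes the repo contains tokenizer files alongside the weights.
tokenizer = AutoTokenizer.from_pretrained("ylh1013/ja_chatbot")
model = AutoModelForCausalLM.from_pretrained("ylh1013/ja_chatbot")

# Illustrative Japanese prompt ("Hello, how are you?").
inputs = tokenizer("こんにちは、元気ですか？", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```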
Pinwheel/wav2vec2-base-timit-demo-colab
Pinwheel
2022-01-22T15:04:16Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4812 - Wer: 0.3557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4668 | 4.0 | 500 | 1.3753 | 0.9895 | | 0.6126 | 8.0 | 1000 | 0.4809 | 0.4350 | | 0.2281 | 12.0 | 1500 | 0.4407 | 0.4033 | | 0.1355 | 16.0 | 2000 | 0.4590 | 0.3765 | | 0.0923 | 20.0 | 2500 | 0.4754 | 0.3707 | | 0.0654 | 24.0 | 3000 | 0.4719 | 0.3557 | | 0.0489 | 28.0 | 3500 | 0.4812 | 0.3557 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
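No usage example accompanies the wav2vec2-base-timit card. One conventional way to run such a checkpoint is through the automatic-speech-recognition pipeline, sketched below; the audio path is a placeholder, the model expects 16 kHz audio, and decoding a local file this way requires ffmpeg.

```python
from transformers import pipeline

# ASR pipeline for the fine-tuned checkpoint named in the card above.
asr = pipeline("automatic-speech-recognition", model="Pinwheel/wav2vec2-base-timit-demo-colab")

# Hypothetical path to a 16 kHz mono recording.
print(asr("/path/to/speech_16khz.wav")["text"])
```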
alistvt/bert-base-uncased-pretrain-finetuned-coqa-falttened
alistvt
2022-01-22T05:06:00Z
30
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: bert-base-uncased-pretrain-finetuned-coqa-falttened results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-pretrain-finetuned-coqa-falttened This model is a fine-tuned version of [alistvt/bert-base-uncased-pretrained-mlm-coqa-stories](https://huggingface.co/alistvt/bert-base-uncased-pretrained-mlm-coqa-stories) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.2886 | 0.29 | 2000 | 3.0142 | | 3.0801 | 0.59 | 4000 | 2.8347 | | 2.9744 | 0.88 | 6000 | 2.7643 | | 2.494 | 1.18 | 8000 | 2.7605 | | 2.4417 | 1.47 | 10000 | 2.7790 | | 2.4042 | 1.77 | 12000 | 2.7382 | | 2.1285 | 2.06 | 14000 | 2.8588 | | 2.0569 | 2.36 | 16000 | 2.8937 | | 2.0794 | 2.65 | 18000 | 2.8511 | | 2.0679 | 2.95 | 20000 | 2.8655 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
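The CoQA-flattened QA card lists only training details. A hedged sketch of extractive question answering with the `transformers` pipeline follows; the question/context pair is invented for illustration and is not from the CoQA data the card refers to.

```python
from transformers import pipeline

# Extractive QA pipeline for the checkpoint named in the card above.
qa = pipeline("question-answering", model="alistvt/bert-base-uncased-pretrain-finetuned-coqa-falttened")

# Illustrative question/context pair.
result = qa(question="Where was the festival held?",
            context="The festival was held in the old town square last summer.")
print(result["answer"], result["score"])
```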
ying-tina/temp
ying-tina
2022-01-22T03:43:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: temp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # temp This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4645 - Wer: 0.3527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4324 | 0.4 | 50 | 0.5800 | 0.4458 | | 0.4027 | 0.8 | 100 | 0.5374 | 0.4109 | | 0.3163 | 1.2 | 150 | 0.5285 | 0.3881 | | 0.3064 | 1.6 | 200 | 0.5161 | 0.3815 | | 0.3235 | 2.0 | 250 | 0.4645 | 0.3527 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
ms29315/distilbert-base-uncased-finetuned-cola
ms29315
2022-01-21T19:56:06Z
4
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ms29315/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ms29315/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3100 - Validation Loss: 0.5090 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3100 | 0.5090 | 0 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.18.0 - Tokenizers 0.10.3
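This checkpoint was trained with Keras (note the `tf` tag), and the card omits inference code. A minimal TensorFlow sketch is given below; the sentence is illustrative and the meaning of the output labels depends on the (unstated) fine-tuning data.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# TensorFlow weights, loaded by the Hub id given in the card above.
model_id = "ms29315/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative sentence for an acceptability-style classifier.
inputs = tokenizer("The book was read by the whole class.", return_tensors="tf")
logits = model(**inputs).logits
print(tf.nn.softmax(logits, axis=-1).numpy())
```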
facebook/xm_transformer_600m-en_zh-multi_domain
facebook
2022-01-21T19:02:57Z
5
2
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "dataset:covost2", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: en-zh datasets: - must_c - covost2 widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3 --- # xm_transformer_600m-en_zh-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - English-Chinese - Trained on MuST-C, CoVoST 2, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/tts_transformer-zh-cv7_css10](https://huggingface.co/facebook/tts_transformer-zh-cv7_css10) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-en_zh-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/tts_transformer-zh-cv7_css10", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } ```
facebook/xm_transformer_600m-en_es-multi_domain
facebook
2022-01-21T19:01:24Z
2
1
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "dataset:europarl_st", "dataset:voxpopuli", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: en-es datasets: - must_c - europarl_st - voxpopuli widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3 --- # xm_transformer_600m-en_es-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - English-Spanish - Trained on MuST-C, EuroParl-ST, VoxPopuli, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/tts_transformer-es-css10](https://huggingface.co/facebook/tts_transformer-es-css10) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-en_es-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/tts_transformer-es-css10", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } ```
facebook/xm_transformer_600m-ru_en-multi_domain
facebook
2022-01-21T18:56:34Z
6
2
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:mtedx", "dataset:covost2", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: ru-en datasets: - mtedx - covost2 widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-ru_en-multi_domain/resolve/main/common_voice_ru_18945535.flac --- # xm_transformer_600m-ru_en-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - Russian-English - Trained on mTEDx, CoVoST 2, OpenSTT, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/fastspeech2-en-ljspeech](https://huggingface.co/facebook/fastspeech2-en-ljspeech) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-ru_en-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
facebook/xm_transformer_600m-es_en-multi_domain
facebook
2022-01-21T18:19:44Z
14
1
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:mtedx", "dataset:covost2", "dataset:europarl_st", "dataset:voxpopuli", "arxiv:2010.05171", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation language: es-en datasets: - mtedx - covost2 - europarl_st - voxpopuli widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-es_en-multi_domain/resolve/main/common_voice_es_19966634.flac --- # xm_transformer_600m-es_en-multi_domain [W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)): - Spanish-English - Trained on mTEDx, CoVoST 2, EuroParl-ST, VoxPopuli, Multilingual LibriSpeech, Common Voice v7 and CCMatrix - Speech synthesis with [facebook/fastspeech2-en-ljspeech](https://huggingface.co/facebook/fastspeech2-en-ljspeech) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd import torchaudio models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/xm_transformer_600m-es_en-multi_domain", arg_overrides={"config_yaml": "config.yaml"}, ) model = models[0] generator = task.build_generator([model], cfg) # requires 16000Hz mono channel audio audio, _ = torchaudio.load("/path/to/an/audio/file") sample = S2THubInterface.get_model_input(task, audio) text = S2THubInterface.get_prediction(task, model, generator, sample) # speech synthesis tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub( f"facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, ) tts_model = tts_models[0] TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg) tts_generator = tts_task.build_generator([tts_model], tts_cfg) tts_sample = TTSHubInterface.get_model_input(tts_task, text) wav, sr = TTSHubInterface.get_prediction( tts_task, tts_model, tts_generator, tts_sample ) ipd.Audio(wav, rate=sr) ``` ## Citation ```bibtex @inproceedings{li-etal-2021-multilingual, title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models", author = "Li, Xian and Wang, Changhan and Tang, Yun and Tran, Chau and Tang, Yuqing and Pino, Juan and Baevski, Alexei and Conneau, Alexis and Auli, Michael", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.68", doi = "10.18653/v1/2021.acl-long.68", pages = "827--838", } @inproceedings{wang-etal-2020-fairseq, title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq", author = "Wang, Changhan and Tang, Yun and Ma, Xutai and Wu, Anne and Okhonko, Dmytro and Pino, Juan", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.aacl-demo.6", pages = "33--39", } @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
Yaia/distilbert-base-uncased-finetuned-emotion
Yaia
2022-01-21T17:28:21Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9255 - name: F1 type: f1 value: 0.9257196896784097 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2086 - Accuracy: 0.9255 - F1: 0.9257 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8249 | 1.0 | 250 | 0.3042 | 0.9085 | 0.9068 | | 0.2437 | 2.0 | 500 | 0.2086 | 0.9255 | 0.9257 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
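The emotion classifier card reports accuracy and F1 but no example call. A short sketch with the text-classification pipeline follows; the input sentence is illustrative.

```python
from transformers import pipeline

# Emotion classifier from the card above, fine-tuned on the emotion dataset.
classifier = pipeline("text-classification", model="Yaia/distilbert-base-uncased-finetuned-emotion")

# Illustrative sentence; prints the top predicted emotion label and its score.
print(classifier("I can't believe how well that went!"))
```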
joheras/xls-r-ab-spanish
joheras
2022-01-21T15:42:21Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 156.8790 - Wer: 1.3448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
deepparag/DumBot
deepparag
2022-01-21T15:40:27Z
148
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- thumbnail: https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png tags: - conversational license: mit --- # THIS AI IS OUTDATED. See [Aeona](https://huggingface.co/deepparag/Aeona) A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small). Trained on: https://www.kaggle.com/Cornell-University/movie-dialog-corpus https://www.kaggle.com/jef1056/discord-data [Live Demo](https://dumbot-331213.uc.r.appspot.com/) Example: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot") model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=4, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
Gianpe/en_textcat_emotion_xlm
Gianpe
2022-01-21T15:09:03Z
3
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_textcat_emotion_xlm results: [] ---
deepdml/output
deepdml
2022-01-21T11:50:22Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 156.8789 - Wer: 1.3456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
anuragshas/wav2vec2-large-xls-r-300m-ur
anuragshas
2022-01-21T04:32:18Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-ur results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ur This model is a fine-tuned version of [anuragshas/wav2vec2-large-xls-r-300m-ur](https://huggingface.co/anuragshas/wav2vec2-large-xls-r-300m-ur) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.0508 - Wer: 0.7328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.12 - num_epochs: 240 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.0719 | 66.67 | 400 | 1.8510 | 0.7432 | | 0.0284 | 133.33 | 800 | 2.0088 | 0.7415 | | 0.014 | 200.0 | 1200 | 2.0508 | 0.7328 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
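The Urdu XLS-R card gives training details only. Below is a hedged sketch of manual CTC decoding with `Wav2Vec2Processor` and `Wav2Vec2ForCTC`; the audio path is a placeholder and 16 kHz mono input is assumed, with librosa used here purely as a convenient loader.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "anuragshas/wav2vec2-large-xls-r-300m-ur"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Hypothetical 16 kHz mono recording; XLS-R checkpoints expect 16 kHz input.
speech, _ = librosa.load("/path/to/urdu_sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```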
espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
espnet
2022-01-21T04:15:13Z
8
2
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp` This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout b0ff60946ada6753af79423a2e6063984bec2926 pip install -e . cd egs2/librispeech/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp ``` ## ASR config <details><summary>expand</summary> ``` ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Gigworks/ASR_zh_espnet2
Gigworks
2022-01-21T02:58:59Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
<b>Speech-To-Text Chinese Model</b> <br/><br/> Reference: <br/> Model - https://huggingface.co/espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char <br/> Code - https://huggingface.co/spaces/akhaliq/espnet2_asr/blob/main/app.py
huggingtweets/anticarbons
huggingtweets
2022-01-20T22:52:20Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/anticarbons/1642719091326/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1477498953524518912/yvJkd9VL_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ANTICARBON</div> <div style="text-align: center; font-size: 14px;">@anticarbons</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ANTICARBON. | Data | ANTICARBON | | --- | --- | | Tweets downloaded | 2518 | | Retweets | 427 | | Short tweets | 352 | | Tweets kept | 1739 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/s9q99sc5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anticarbons's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/anticarbons') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
milyiyo/selectra-small-finetuned-amazon-review
milyiyo
2022-01-20T21:11:57Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 - precision - recall model-index: - name: selectra-small-finetuned-amazon-review results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.737 - name: F1 type: f1 value: 0.7437773019932409 - name: Precision type: precision value: 0.7524857881639091 - name: Recall type: recall value: 0.737 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # selectra-small-finetuned-amazon-review This model is a fine-tuned version of [Recognai/selectra_small](https://huggingface.co/Recognai/selectra_small) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.6279 - Accuracy: 0.737 - F1: 0.7438 - Precision: 0.7525 - Recall: 0.737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 0.5 | 500 | 0.7041 | 0.7178 | 0.6724 | 0.6715 | 0.7178 | | 0.7908 | 1.0 | 1000 | 0.6365 | 0.7356 | 0.7272 | 0.7211 | 0.7356 | | 0.7908 | 1.5 | 1500 | 0.6204 | 0.7376 | 0.7380 | 0.7387 | 0.7376 | | 0.6358 | 2.0 | 2000 | 0.6162 | 0.7386 | 0.7377 | 0.7380 | 0.7386 | | 0.6358 | 2.5 | 2500 | 0.6228 | 0.7274 | 0.7390 | 0.7576 | 0.7274 | | 0.5827 | 3.0 | 3000 | 0.6188 | 0.7378 | 0.7400 | 0.7425 | 0.7378 | | 0.5827 | 3.5 | 3500 | 0.6246 | 0.7374 | 0.7416 | 0.7467 | 0.7374 | | 0.5427 | 4.0 | 4000 | 0.6266 | 0.7446 | 0.7452 | 0.7465 | 0.7446 | | 0.5427 | 4.5 | 4500 | 0.6331 | 0.7392 | 0.7421 | 0.7456 | 0.7392 | | 0.5184 | 5.0 | 5000 | 0.6279 | 0.737 | 0.7438 | 0.7525 | 0.737 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
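The selectra-small review classifier card has no usage snippet. A brief sketch follows; the Spanish reviews are invented examples, and the label set reflects whatever rating scheme the amazon_reviews_multi fine-tuning used.

```python
from transformers import pipeline

# Spanish review classifier from the card above (fine-tuned on amazon_reviews_multi, config "es").
classifier = pipeline("text-classification", model="milyiyo/selectra-small-finetuned-amazon-review")

# Illustrative reviews: "Great value for money, arrived on time" / "The product arrived broken and nobody replies".
reviews = ["Muy buena relación calidad-precio, llegó a tiempo.",
           "El producto llegó roto y nadie responde."]
print(classifier(reviews))
```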
mbateman/distilbert-base-uncased-finetuned-imdb
mbateman
2022-01-20T20:43:24Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6482 | 1.0 | 625 | 2.4283 | | 2.5156 | 2.0 | 1250 | 2.3816 | | 2.475 | 3.0 | 1875 | 2.3638 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.1
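Like the earlier IMDB fill-mask card, this one lacks inference code. The sketch below takes the lower-level masked-LM route rather than the pipeline; the sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "mbateman/distilbert-base-uncased-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Illustrative masked sentence; [MASK] is DistilBERT's mask token.
inputs = tokenizer("This is a great [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the five highest-scoring replacements.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_tokens = torch.topk(logits[0, mask_index], k=5, dim=-1).indices[0]
print([tokenizer.decode(t) for t in top_tokens])
```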
espnet/akreal_swbd_da_hubert_conformer
espnet
2022-01-20T18:57:49Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:swbd_da", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - swbd_da license: cc-by-4.0 --- ## ESPnet2 ASR model ### `akreal/espnet2_swbd_da_hubert_conformer` This model was trained by Pavel Denisov using swbd_da recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 08c6efbc6299c972301236625f9abafe087c9f9c pip install -e . cd egs2/swbd_da/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/akreal_swbd_da_hubert_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Thu Jan 20 19:31:21 CET 2022` - python version: `3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.10.1+cu113` - Git hash: `08c6efbc6299c972301236625f9abafe087c9f9c` - Commit date: `Tue Jan 4 13:40:33 2022 +0100` ## asr_train_asr_raw_en_word_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|2379|66.3|33.7|0.0|0.0|33.7|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|8116|69.5|30.5|0.0|0.0|30.5|30.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|19440|76.1|17.7|6.2|8.1|32.0|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|66353|79.5|16.1|4.4|8.0|28.5|30.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_hubert_context3.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_hubert_context3_raw_en_word_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 35 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 7 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 4000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_context3_raw_en_word_sp/train/speech_shape - exp/asr_stats_context3_raw_en_word_sp/train/text_shape.word valid_shape_file: - exp/asr_stats_context3_raw_en_word_sp/valid/speech_shape - exp/asr_stats_context3_raw_en_word_sp/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - 
dump/raw/train_context3_sp/wav.scp - speech - sound - - dump/raw/train_context3_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/valid_context3/wav.scp - speech - sound - - dump/raw/valid_context3/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - statement - backchannel - opinion - abandon - agree - yn_q - apprec - 'yes' - uninterp - close - wh_q - acknowledge - 'no' - yn_decl_q - hedge - backchannel_q - sum - quote - affirm - other - directive - repeat - open_q - completion - rhet_q - hold - reject - answer - neg - ans_dispref - repeat_q - open - or - commit - maybe - decl_q - third_pty - self_talk - thank - apology - tag_q - downplay - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.0 extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: hubert_large_ll60k download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.5a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
nntadotzip
2022-01-20T18:06:05Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 382 | 0.4695 | | 0.5633 | 2.0 | 764 | 0.3361 | | 0.3533 | 3.0 | 1146 | 0.3489 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
ucberkeley-dlab/hate-measure-roberta-large
ucberkeley-dlab
2022-01-20T17:57:30Z
7
4
tf-keras
[ "tf-keras", "text-classification", "hate-speech", "counterspeech", "irt", "arxiv:2009.10277", "en", "dataset:ucberkeley-dlab/measuring-hate-speech", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en tags: - text-classification - hate-speech - counterspeech - irt - arxiv:2009.10277 datasets: - ucberkeley-dlab/measuring-hate-speech --- # Measuring hate speech: RoBERTa-Large This model predicts a continuous hate speech score as described in Kennedy et al. (2020). ## Citation ``` @article{kennedy2020constructing, title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application}, author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia}, journal={arXiv preprint arXiv:2009.10277}, year={2020} } ``` ## References Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277.
Rocketknight1/distilroberta-base-finetuned-wikitext2
Rocketknight1
2022-01-20T17:54:46Z
22
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 1.17.0 - Tokenizers 0.11.0
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts
nntadotzip
2022-01-20T17:12:19Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlnet-base-cased-IUChatbot-ontologyDts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-IUChatbot-ontologyDts This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 318 | 0.5005 | | 0.8222 | 2.0 | 636 | 0.4488 | | 0.8222 | 3.0 | 954 | 0.4965 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
radhakri119/wav2vec2-base-timit-demo-colab
radhakri119
2022-01-20T16:09:09Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4780 - Wer: 0.3403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5299 | 4.0 | 500 | 1.5195 | 0.9991 | | 0.6229 | 8.0 | 1000 | 0.4447 | 0.4282 | | 0.2136 | 12.0 | 1500 | 0.4154 | 0.3764 | | 0.1196 | 16.0 | 2000 | 0.4394 | 0.3597 | | 0.0834 | 20.0 | 2500 | 0.4891 | 0.3619 | | 0.0591 | 24.0 | 3000 | 0.4535 | 0.3439 | | 0.0448 | 28.0 | 3500 | 0.4780 | 0.3403 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
ml6team/distilbart-tos-summarizer-tosdr
ml6team
2022-01-20T15:21:41Z
22
15
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "t&c", "tos", "distilbart", "distilbart-6-6", "en", "dataset:tosdr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - en tags: - summarization - t&c - tos - distilbart - distilbart-6-6 datasets: - tosdr metrics: - rouge1 - rouge2 - rougel inference: parameters: min_length: 5 max_length: 512 do_sample: False widget: - text: "In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides." --- # T&C Summarization Model T&C Summarization Model based on [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6), This abstractive summarization model is a part of a bigger end-to-end T&C summarizer pipeline which is preceded by LSA (Latent Semantic Analysis) extractive summarization. The extractive summarization shortens the T&C to be further summarized by this model. ## Finetuning Corpus We collaborated with [TOSDR](https://tosdr.org/) to work with their data, and the model is finetuned accordingly. The article and summarization text is reduced via extractive summarization before it is finetuned to the model. ## Contact Us https://ml6.eu/ . This abstractive model finetuning is the continuation of the Christmas Project 2021 done in ML6: https://bit.ly/XmasProjects . ## Load Finetuned Model ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") ``` ## Code Sample This sample requires [sumy](https://pypi.org/project/sumy/), the LSA Extractive Summarization library, as additional package to run. 
``` import re import nltk nltk.download('punkt') from sumy.parsers.plaintext import PlaintextParser from sumy.nlp.tokenizers import Tokenizer from sumy.nlp.stemmers import Stemmer from sumy.summarizers.lsa import LsaSummarizer from transformers import AutoTokenizer, AutoModelForSeq2SeqLM LANGUAGE = "english" EXTRACTED_ARTICLE_SENTENCES_LEN = 12 stemmer = Stemmer(LANGUAGE) lsa_summarizer = LsaSummarizer(stemmer) tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") def get_extractive_summary(text, sentences_count): parser = PlaintextParser.from_string(text, Tokenizer(LANGUAGE)) summarized_info = lsa_summarizer(parser.document, sentences_count) summarized_info = [element._text for element in summarized_info] return ' '.join(summarized_info) def get_summary(dict_summarizer_model, dict_tokenizer, text_content): text_content = get_extractive_summary(text_content, EXTRACTED_ARTICLE_SENTENCES_LEN) tokenizer = dict_tokenizer['tokenizer'] model = dict_summarizer_model['model'] inputs = tokenizer(text_content, max_length=dict_tokenizer['max_length'], truncation=True, return_tensors="pt") outputs = model.generate( inputs["input_ids"], max_length=dict_summarizer_model['max_length'], min_length=dict_summarizer_model['min_length'], ) summarized_text = tokenizer.decode(outputs[0]) match = re.search(r"<s>(.*)</s>", summarized_text) if match is not None: summarized_text = match.group(1) return summarized_text.replace('<s>', '').replace('</s>', '') test_tos = """ In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. 
Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides """ model_dict = { 'model': model, 'max_length': 512, 'min_length': 4 } tokenizer_dict = { 'tokenizer': tokenizer, 'max_length': 1024 } print(get_summary(model_dict, tokenizer_dict, test_tos)) ```
milyiyo/distilbert-base-uncased-finetuned-amazon-review
milyiyo
2022-01-20T15:14:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-base-uncased-finetuned-amazon-review results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.693 - name: F1 type: f1 value: 0.7002653469272611 - name: Precision type: precision value: 0.709541681233075 - name: Recall type: recall value: 0.693 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-amazon-review This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.3494 - Accuracy: 0.693 - F1: 0.7003 - Precision: 0.7095 - Recall: 0.693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 0.5 | 500 | 0.8287 | 0.7104 | 0.7120 | 0.7152 | 0.7104 | | 0.4238 | 1.0 | 1000 | 0.8917 | 0.7094 | 0.6989 | 0.6917 | 0.7094 | | 0.4238 | 1.5 | 1500 | 0.9367 | 0.6884 | 0.6983 | 0.7151 | 0.6884 | | 0.3152 | 2.0 | 2000 | 0.9845 | 0.7116 | 0.7144 | 0.7176 | 0.7116 | | 0.3152 | 2.5 | 2500 | 1.0752 | 0.6814 | 0.6968 | 0.7232 | 0.6814 | | 0.2454 | 3.0 | 3000 | 1.1215 | 0.6918 | 0.6954 | 0.7068 | 0.6918 | | 0.2454 | 3.5 | 3500 | 1.2905 | 0.6976 | 0.7048 | 0.7138 | 0.6976 | | 0.1989 | 4.0 | 4000 | 1.2938 | 0.694 | 0.7016 | 0.7113 | 0.694 | | 0.1989 | 4.5 | 4500 | 1.3623 | 0.6972 | 0.7014 | 0.7062 | 0.6972 | | 0.1746 | 5.0 | 5000 | 1.3494 | 0.693 | 0.7003 | 0.7095 | 0.693 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
aidan-o-brien/recipe-improver
aidan-o-brien
2022-01-20T14:26:53Z
5
0
transformers
[ "transformers", "tf", "albert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: recipe-improver results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-improver This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5570 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 5539, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 2.5570 | 0 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
Aleksandra/herbert-base-cased-finetuned-squad
Aleksandra
2022-01-20T13:14:11Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: herbert-base-cased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # herbert-base-cased-finetuned-squad This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 233 | 1.2474 | | No log | 2.0 | 466 | 1.1951 | | 1.3459 | 3.0 | 699 | 1.2071 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
g30rv17ys/avhubert
g30rv17ys
2022-01-20T13:07:45Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://dl.fbaipublicfiles.com/avhubert/model/lrs3_vox/vsr/base_vox_433h.pt
dbsamu/distilbert-base-uncased-finetuned-ner
dbsamu
2022-01-20T10:30:26Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: en metrics: - name: Precision type: precision value: 0.8120642485217545 - name: Recall type: recall value: 0.830235495804385 - name: F1 type: f1 value: 0.8210493441599 - name: Accuracy type: accuracy value: 0.9203828724683252 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2781 - Precision: 0.8121 - Recall: 0.8302 - F1: 0.8210 - Accuracy: 0.9204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3504 | 1.0 | 1250 | 0.2922 | 0.7930 | 0.8075 | 0.8002 | 0.9115 | | 0.2353 | 2.0 | 2500 | 0.2711 | 0.8127 | 0.8264 | 0.8195 | 0.9196 | | 0.1745 | 3.0 | 3750 | 0.2781 | 0.8121 | 0.8302 | 0.8210 | 0.9204 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
dehio/german-qg-t5-e2e-quad
dehio
2022-01-20T09:40:47Z
5
3
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: mit widget: - text: "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge zwei seltene Kurzschnäuzige Seepferdchen entdeckt. Die Tiere seien vergangene Woche bei einer sogenannten Spülsaumkontrolle entdeckt worden, bei der die Strände eigentlich nach Müll und toten Vögeln abgesucht würden, sagte der Geschäftsführer der zuständigen Naturschutz- und Forschungsgemeinschaft Mellumrat, Mathias Heckroth. Dabei seien den Naturschützern am Nordstrand kurz hintereinander die beiden leblosen, nur wenige Zentimeter großen Tiere aufgefallen. Experten der Nationalparkverwaltung bestimmten beide Tiere als Kurzschnäuzige Seepferdchen (Hippocampus hippocampus)." inference: parameters: max_length: 128 language: - de tags: - question generation datasets: - deepset/germanquad model-index: - name: german-qg-t5-e2e-quad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german-qg-t5-e2e-quad (Work in progress) This model is an end-to-end question generation model for German. Given a text, it generates several questions about it. This model is a fine-tuned version of [valhalla/t5-base-e2e-qg](https://huggingface.co/valhalla/t5-base-e2e-qg) on the [GermanQuAD dataset from deepset](https://huggingface.co/datasets/deepset/germanquad). ## Model description More information needed ## Training and evaluation data Bleu_1: 0.196051 Bleu_2: 0.122380 Bleu_3: 0.079980 Bleu_4: 0.053672 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
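A minimal inference sketch (not part of the original card) for trying the model locally with `transformers`; it assumes no task prefix is needed, mirroring the hosted widget above, and reuses the widget's `max_length: 128` setting:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "dehio/german-qg-t5-e2e-quad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# German passage to generate questions about (shortened from the widget example)
text = (
    "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge "
    "zwei seltene Kurzschnäuzige Seepferdchen entdeckt."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=128)

# Depending on how the model was trained, several questions may be
# concatenated in the decoded output, possibly delimited by a special token.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```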
hrdipto/wav2vec2-xls-r-tf-left-right-shuru
hrdipto
2022-01-20T08:48:17Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-tf-left-right-shuru results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-shuru This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0921 - Wer: 1.2628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.5528 | 23.81 | 500 | 0.5509 | 1.9487 | | 0.2926 | 47.62 | 1000 | 0.1306 | 1.2756 | | 0.1171 | 71.43 | 1500 | 0.1189 | 1.2628 | | 0.0681 | 95.24 | 2000 | 0.0921 | 1.2628 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
huggingtweets/chickenhalf
huggingtweets
2022-01-20T07:52:22Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/chickenhalf/1642665052826/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1482989404125806596/JtLgKHTu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">chicken sandwich</div> <div style="text-align: center; font-size: 14px;">@chickenhalf</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from chicken sandwich. | Data | chicken sandwich | | --- | --- | | Tweets downloaded | 3202 | | Retweets | 126 | | Short tweets | 427 | | Tweets kept | 2649 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r0cwhle/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chickenhalf's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/chickenhalf') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
abdelkader/distilbert-base-uncased-finetuned-clinc
abdelkader
2022-01-20T04:59:36Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9174193548387096 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7713 - Accuracy: 0.9174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2831 | 0.7426 | | 3.785 | 2.0 | 636 | 1.8739 | 0.8335 | | 3.785 | 3.0 | 954 | 1.1525 | 0.8926 | | 1.6894 | 4.0 | 1272 | 0.8569 | 0.91 | | 0.897 | 5.0 | 1590 | 0.7713 | 0.9174 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
mrp/marian-finetuned-kde4-en-to-fr
mrp
2022-01-20T04:05:30Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-fr metrics: - name: Bleu type: bleu value: 50.20410659441166 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9643 - Bleu: 50.2041 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
ethzanalytics/ai-msgbot-gpt2-XL
ethzanalytics
2022-01-20T01:40:42Z
9
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "gpt", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - text-generation - gpt2 - gpt license: mit datasets: - natural questions widget: - text: "Do you like my new haircut?\nperson beta:\n\n" example_title: "haircut" - text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n" example_title: "teaching" - text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n" example_title: "favorite" - text: "how much does it cost?\nperson beta:\n\n" example_title: "money" inference: parameters: min_length: 2 max_length: 64 length_penalty: 0.6 no_repeat_ngram_size: 3 do_sample: True top_p: 0.85 top_k: 10 repetition_penalty: 2.1 --- # ai-msgbot GPT2-XL _NOTE: model card is WIP_ GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`. Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it). ## conversation data The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses. `script_speaker_name` = `person alpha` `script_responder_name` = `person beta` ## examples - the default inference API examples should work _okay_ - an ideal test would be explicitly adding `person beta` into the prompt text the model is forced to respond to instead of adding onto the entered prompt. ### Example prompt: ``` do you like to eat beans? person beta: ``` ### Resulting output ``` do you like to eat beans?person beta: yes, i like fried beans. person alpha: i wonder when the first beans were cultivated and how they were processed. person beta: nitrogenic bacteria (in ``` _Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_ ## citations ``` @inproceedings{dinan2019wizard, author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston}, title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents}, booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)}, year={2019}, } @inproceedings{li-etal-2017-dailydialog, title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset", author = "Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi", booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = nov, year = "2017", address = "Taipei, Taiwan", publisher = "Asian Federation of Natural Language Processing", url = "https://aclanthology.org/I17-1099", pages = "986--995", abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}", } ```
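For a quick local test outside the hosted widget (and without the full ai-msgbot wrapper), a rough sketch using a plain `transformers` pipeline might look like the following; the sampling parameters mirror the inference settings above, and note that this ~1.5B-parameter checkpoint needs several GB of memory:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ethzanalytics/ai-msgbot-gpt2-XL")

# End the prompt with "person beta:" so the model is forced to answer as the bot
prompt = "do you like to eat beans?\nperson beta:\n\n"

result = generator(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    no_repeat_ngram_size=3,
    repetition_penalty=2.1,
)
print(result[0]["generated_text"])
```

Anything generated after a following `person alpha:` marker belongs to the other speaker and can be filtered out when extracting the bot's reply.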
nimrah/wav2vec2-large-xls-r-300m-hindi-colab
nimrah
2022-01-19T21:21:34Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
vuiseng9/bert-base-squadv1
vuiseng9
2022-01-19T19:03:57Z
5
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is a fork of [```csarron/bert-base-uncased-squad-v1```](https://huggingface.co/csarron/bert-base-uncased-squad-v1). ``` eval_exact_match = 80.9082 eval_f1 = 88.2275 eval_samples = 10784 ``` # Eval ```bash export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-base-squadv1 WORKDIR=transformers/examples/pytorch/question-answering cd $WORKDIR nohup python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1 \ --dataset_name squad \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ```
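For single-example inference (as opposed to the batch evaluation above), a small sketch with the `transformers` question-answering pipeline; the question and context below are made-up placeholders:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-base-squadv1")

# Illustrative inputs only — replace with your own question/context pair
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This BERT-base model was fine-tuned on the SQuAD v1.1 dataset "
            "to extract answer spans from a context passage.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```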
masapasa/wav2vec2-large-xls-r-300m-turkish-colab
masapasa
2022-01-19T17:30:55Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.13.3 - Tokenizers 0.10.3
facebook/contriever
facebook
2022-01-19T17:23:28Z
303,332
60
transformers
[ "transformers", "pytorch", "bert", "arxiv:2112.09118", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model has been trained without supervision following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available at https://github.com/facebookresearch/contriever. ## Usage (HuggingFace Transformers) Using the model directly available in HuggingFace transformers requires adding a mean pooling operation to obtain a sentence embedding. ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('facebook/contriever') model = AutoModel.from_pretrained('facebook/contriever') sentences = [ "Where was Marie Curie born?", "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.", "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace." ] # Apply tokenizer inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings outputs = model(**inputs) # Mean pooling def mean_pooling(token_embeddings, mask): token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.) sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None] return sentence_embeddings embeddings = mean_pooling(outputs[0], inputs['attention_mask']) ```
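A possible follow-up, continuing from the snippet above (this scoring step is not part of the original card): since Contriever is a dense retriever, the passages can be ranked against the query by taking dot products of the pooled embeddings.

```python
# Continuing from the snippet above: `embeddings` has one row per input sentence.
# Score each passage (rows 1 and 2) against the query (row 0) by dot product.
scores = embeddings[0] @ embeddings[1:].T
print(scores)  # the higher-scoring passage is the better match for the query
```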
huggingtweets/t_zahil
huggingtweets
2022-01-19T16:50:12Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1374040164180299791/ACw4G3nZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Thomas Sanlis 🌱</div> <div style="text-align: center; font-size: 14px;">@t_zahil</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Thomas Sanlis 🌱. | Data | Thomas Sanlis 🌱 | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 597 | | Short tweets | 312 | | Tweets kept | 2333 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33umauvo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @t_zahil's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fhm3dlx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fhm3dlx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/t_zahil') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
dehio/german-qg-t5-drink600
dehio
2022-01-19T16:38:22Z
7
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: mit widget: - text: "generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert." language: - de tags: - question generation datasets: - deepset/germanquad model-index: - name: german-qg-t5-drink600 results: [] --- # german-qg-t5-drink600 This model is fine-tuned for question generation in German. The expected answer must be highlighted with &lt;hl> tokens. It is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad) and further pre-trained on drink-related questions. ## Task example #### Input generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, die sowohl &lt;hl>im Sommer wie auch zu Silvester&lt;hl> funktioniert. #### Expected Question Zu welchen Gelegenheiten passt der Monk Sour gut? ## Model description The model is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad), which was pre-trained on [GermanQUAD](https://www.deepset.ai/germanquad). We further pre-trained it on questions annotated on drink recipes from [Mixology](https://mixology.eu/) ("drink600"). We have not yet open-sourced the dataset, since we do not own copyright on the source material. ## Training and evaluation data The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg). ## Evaluation It achieves a **BLEU-4 score of 29.80** on the drink600 test set (n=120) and **11.30** on the GermanQUAD test set. Thus, fine-tuning on drink600 did not affect performance on GermanQuAD. In comparison, *german-qg-t5-quad* achieves a BLEU-4 score of **10.76** on the drink600 test set. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 100 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
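A minimal usage sketch (not from the original card) with `transformers`, using the highlighted-answer input format shown above:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "dehio/german-qg-t5-drink600"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The expected answer span is wrapped in <hl> ... <hl> tokens
text = (
    "generate question: Der Monk Sour Drink ist ein somit eine aromatische "
    "Überraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected output along the lines of: "Zu welchen Gelegenheiten passt der Monk Sour gut?"
```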
DanL/scientific-challenges-and-directions
DanL
2022-01-19T12:47:22Z
315
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:DanL/scientific-challenges-and-directions-dataset", "arxiv:2108.13751", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer - text-classification language: - en datasets: - DanL/scientific-challenges-and-directions-dataset widget: - text: "severe atypical cases of pneumonia emerged and quickly spread worldwide." example_title: "challenge" - text: "we speculate that studying IL-6 will be beneficial." example_title: "direction" - text: "in future studies, both PRRs should be tested as the cause for multiple deaths." example_title: "both" - text: "IbMADS1-transformed potatoes exhibited tuber morphogenesis in the fibrous roots." example_title: "neither" metrics: - precision - recall - f1 model-index: - name: scientific-challenges-and-directions results: [] --- # scientific-challenges-and-directions We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the _challenges_ and _directions_ are defined as follows: * **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap. * **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration. * This model here is described in our paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751) (though we've upgraded the infrastructure since the paper was released - there are slight differences in the results). * Our dataset can be found [here](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset). * Please cite our paper if you use our datasets or models in your project. See the [BibTeX](#citation). * Feel free to [email us](#contact-us). * Also, check out [our search engine](https://challenges.apps.allenai.org/), as an example application. ## Model description This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [scientific-challenges-and-directions-dataset](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset), designed for multi-label text classification. ## Training and evaluation data The scientific-challenges-and-directions model is trained based on a dataset that is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the CORD-19 corpus, labeled for classification of challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test/split of the data see section 3.1 in our [paper](https://arxiv.org/abs/2108.13751) ## Example notebook We include an example notebook that uses the model for inference in our [repo](https://github.com/Dan-La/scientific-challenges-and-directions). See `Inference_Notebook.ipynb`. A training notebook is also included. 
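For readers who prefer a snippet over the notebook, a rough inference sketch with `transformers` is shown below. Treat it as an illustration under assumptions: sigmoid scoring is assumed because the task is multi-label, and the mapping of output indices to the challenge/direction labels should be checked against the model's `id2label` config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "DanL/scientific-challenges-and-directions"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

sentence = "we speculate that studying IL-6 will be beneficial."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: one independent probability per label
probs = torch.sigmoid(logits)[0]
for label_id, label in model.config.id2label.items():
    print(label, round(probs[label_id].item(), 3))
```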
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning rate: 2e-05 - train batch size: 8 - eval batch size: 4 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr scheduler type: linear - lr scheduler warmup steps: 500 - num epochs: 30 ### Training results The model achieves the following results on the test set: - Precision Challenge: 0.768719 - Recall Challenge: 0.780405 - F1 Challenge: 0.774518 - Precision Direction: 0.758112 - Recall Direction: 0.774096 - F1 Direction: 0.766021 - Precision (micro avg. on both labels): 0.764894 - Recall (micro avg. on both labels): 0.778139 - F1 (micro avg. on both labels): 0.771459 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3 ## Citation If using our dataset and models, please cite: ``` @misc{lahav2021search, title={A Search Engine for Discovery of Scientific Challenges and Directions}, author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope}, year={2021}, eprint={2108.13751}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Contact us Please don't hesitate to reach out. **Email:** `[email protected]`,`[email protected]`.
mishig/test_vid
mishig
2022-01-19T09:56:39Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# Video demo on ModelCard Please see [this file](https://huggingface.co/mishig/test_vid/blob/main/README.md) for how to add a video to a model card. <video src="https://huggingface.co/mishig/test_vid/resolve/main/output.mp4" controls autoplay loop/>
chitra/finetuned-adversarial-paraphrase-model
chitra
2022-01-19T09:13:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: finetuned-adversarial-paraphrase-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-adversarial-paraphrase-model This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.5680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0848 | 1.0 | 2000 | 5.4633 | | 0.0495 | 2.0 | 4000 | 6.0352 | | 0.0121 | 3.0 | 6000 | 7.5680 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
mrp/distilbert-base-uncased-finetuned-imdb
mrp
2022-01-19T08:44:09Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.707 | 1.0 | 157 | 2.4883 | | 2.572 | 2.0 | 314 | 2.4240 | | 2.5377 | 3.0 | 471 | 2.4355 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/wmascen
huggingtweets
2022-01-19T04:52:23Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/wmascen/1642567908765/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1453179488569802752/LsB82o0-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wihrel</div> <div style="text-align: center; font-size: 14px;">@wmascen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wihrel. | Data | wihrel | | --- | --- | | Tweets downloaded | 2900 | | Retweets | 203 | | Short tweets | 236 | | Tweets kept | 2461 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bsbw98xm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wmascen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/wmascen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
chitra/finetune-paraphrase-model
chitra
2022-01-19T04:40:57Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: finetune-paraphrase-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune-paraphrase-model This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.1 | 200 | 3.0116 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
domdomreloaded/bert-base-uncased-finetuned-swag
domdomreloaded
2022-01-18T22:33:47Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.6045 - Accuracy: 0.7960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 | | 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
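A minimal multiple-choice inference sketch (the context and candidate endings below are illustrative only):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "domdomreloaded/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She opened the fridge and took out the milk."
candidates = ["She poured it into a glass.", "She planted it in the garden."]

# encode the context once per candidate ending, then stack into a choice dimension
encoding = tokenizer([context] * len(candidates), candidates,
                     return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (batch=1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print("Most plausible ending:", candidates[logits.argmax(-1).item()])
```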
mrm8488/bert-tiny-5-finetuned-squadv2
mrm8488
2022-01-18T20:19:49Z
154
4
transformers
[ "transformers", "pytorch", "jax", "bert", "question-answering", "QA", "en", "arxiv:1908.08962", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en thumbnail: tags: - QA --- # BERT-Tiny ([5](https://huggingface.co/google/bert_uncased_L-12_H-128_A-2)) fine-tuned on SQuAD v2 [BERT-Tiny](https://huggingface.co/google/bert_uncased_L-12_H-128_A-2) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task. **Model size** (after training): **24.33 MB** ## Details of BERT-Tiny and its 'family' (from their documentation) Released on March 11th, 2020 This model is part of 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. ## Details of the downstream task (Q&A) - Dataset [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **57.12** | | **F1** | **60.86** | | Model | EM | F1 score | SIZE (MB) | | ----------------------------------------------------------------------------------------- | --------- | --------- | --------- | | [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** | | [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | **57.12** | **60.86** | 24.34 | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/bert-tiny-5-finetuned-squadv2", tokenizer="mrm8488/bert-tiny-5-finetuned-squadv2" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
malloc/OpenNMT-py-English-German-Transformer
malloc
2022-01-18T20:18:11Z
0
2
null
[ "translation", "pytorch", "de", "en", "dataset:WMT", "license:mit", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - de - en tags: - translation - pytorch license: mit datasets: - WMT metrics: - bleu --- # OpenNMT-py-English-German-Transformer [OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework. OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for English to German translation. - Configuration: Base Transformer configuration with [standard training options](http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model-do-you-support-multi-gpu) - Data: WMT with shared SentencePiece model - BLEU: - newstest2014 = 26.89 - newstest2017 = 28.09
Supiri/t5-base-conversation
Supiri
2022-01-18T17:56:42Z
33
20
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "NLP", "ChatBot", "Game AI", "en", "dataset:cornell_movie_dialog", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - cornell_movie_dialog license: gpl-3.0 tags: - NLP - ChatBot - Game AI metrics: - rouge widget: - text: "personality: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.</s> inquiry: What's your name?" example_title: "Talk to Hinata" - text: "personality: Voldemort is a raging psychopath, devoid of the normal human responses to other people's suffering. He has no conscience, feels no remorse or empathy, and does not recognize the worth and humanity of anybody except himself.</s> inquiry: What's your name?" example_title: "Talk to Voldemort" inference: parameters: num_beams: 6 diversity_penalty: 2.5 num_beam_groups: 2 --- # FreeIsland AI With the advancement of the graphical processing power of computers and sophisticated algorithms like [Nanite](https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Nanite/), simulating lifelike scenery in real time has never been easier. About a month ago, Epic Games [showed off](https://www.youtube.com/watch?v=WU0gvPcc3jQ) the capabilities of their newest game engine by simulating an entire city, including population, traffic, weather, etc., running on a PlayStation 5. That made me think about what is missing from that simulation and how I can use my skills to improve it. One of the main missing components that separates our world from the simulated world is people. More importantly, the interactivity of people in simulated worlds. Last year, a game called Cyberpunk was released with an option to [talk to any person](https://www.youtube.com/watch?v=Z1OtYGzUoSo) in its city, but the problem was that all the responses from the non-player characters (NPCs) were hardcoded, which greatly reduces the immersion of the game. So the goal of this project is to experiment with how advances in Natural Language Processing can make NPCs in video games interactive and enhance immersion. # Usage ```py from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Supiri/t5-base-conversation") trained_model = AutoModelForSeq2SeqLM.from_pretrained("Supiri/t5-base-conversation") prompt = "What's your name?" context = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody." input_ids = tokenizer(f"personality: {context}", f"inquiry: {prompt}", return_tensors='pt').input_ids outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=2.5, num_beam_groups=2) print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True)) # Answer: My name is Hinata ``` # Evaluation ## Test 1 For this test, I sampled an input from the test dataset. For this question, the actual response is > "It works a little." But the model's response was > "I don't want to flirt with you." This reflects its bio, which was filled in by GPT-3 > "He stands primarily to gain self-esteem, which he often receives through the submission of others" In essence, Dr. 
Greenbaum tried to tease Sebastian about his seductive traits, but this model's go-to response was to shut her down, since the biography of Sebastian states he often tries to assert his dominance over others. ```py prompt = dataset['test'][66]['request'] contexts = dataset['test'][66]['bio'] input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2) print("Input to the Model") print("Bio:\t",contexts) print("\nPrompt:\t", prompt) print("\nGround truth response") print("\t", dataset['test'][66]['response']) print("\nModel's Prediction") print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ```txt Input to the Model Bio: Sebastian is a very extreme representation of the trope of the "Confidence Man", and acts it out to a degree that is sometimes comedic but mostly frightening. He stands primarily to gain self-esteem, which he often receives through the submission of others or solely through his own perceptions. An artful seducer, his incredible charisma is both his greatest weapon and most intoxicating weakness. Prompt: You think you can come in here with that cute little smirk on your face and try and flirt with me. It doesn't work, Sebastian. Ground truth response It works a little. Model's Prediction Answer: I don't want to flirt with you. ``` ## Test 2 Hinata is a kind-hearted girl from the anime series Naruto. I took her bio from [personality database](https://www.personality-database.com/profile/2790/hinata-hyga-naruto-shippden-mbti-personality-type) and asked a few questions about her. Right away, you can see that the model understands the context: when I asked the model, "What's your name?", it responded with the name given in the context. Also, notice that when prompted with the same question phrased differently (**"Who are you?"**), it still manages to answer it well. ```py prompts = ["What's your name?", "How are you feeling?", "Do you like Star Wars?", "Who are you?", "Coffee or tea?"] contexts = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody." print("Bio:\t",contexts, "\n") for prompt in prompts: input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2) print("Prompt:\t", prompt) print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True), "\n") ``` ```txt Bio: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody. Prompt: What's your name? Answer: My name is Hinata Prompt: How are you feeling? Answer: I'm fine. Prompt: Do you like Star Wars? Answer: No, I don't. Prompt: Who are you? Answer: My name is Hinata Prompt: Coffee or tea? Answer: No, I don't drink much. 
``` # Conclusion After training the `t5-base` model for 5 epochs, the model started adapting to the dataset, but there are many more improvements that could be made. 1. During dataset creation, I had to limit the data to 200 unique characters out of the 9,035 present in the dataset due to **budget constraints**. So if I manage to cover at least half of the dataset, this model will come up with far better responses. 2. Both input size and batch size were severely constrained by the lack of GPU memory. Using a batch size of 64 instead of 8 would bring massive improvements in both training time and **model generalization**. 3. Using a bigger model like `t5-large` or `t5-3b` will certainly improve the performance. 4. One of the main downsides of using this pre-trained model is that it was also trained on German, French, and Romanian, which consumed a chunk of the **vocabulary size and trainable parameters**. Retraining this model from scratch would help reduce both the required parameter count and the training loss for this specific task.
tal-yifat/injury-report-test
tal-yifat
2022-01-18T16:24:00Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: injury-report-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # injury-report-test This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.8158 | 1.0 | 6633 | 1.7368 | | 1.6984 | 2.0 | 13266 | 1.6198 | | 1.6209 | 3.0 | 19899 | 1.5800 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
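A minimal fill-mask sketch (the example sentence is illustrative only):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tal-yifat/injury-report-test")
masked = f"The worker injured his {fill_mask.tokenizer.mask_token} while operating the machine."
for pred in fill_mask(masked):
    print(pred["token_str"], round(pred["score"], 3))
```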
phueb/BabyBERTa-2
phueb
2022-01-18T14:44:44Z
60
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "BabyBERTa", "en", "dataset:CHILDES", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - BabyBERTa datasets: - CHILDES widget: - text: "Look here. What is that <mask> ?" - text: "Do you like your <mask> ?" --- ## BabyBERTA ### Overview BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input. It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed. The three provided models are randomly selected from 10 that were trained and reported in the paper. ## Loading the tokenizer BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults. For instance, to load the tokenizer for BabyBERTa-1, load it as follows: ```python tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1", add_prefix_space=True) ``` ### Hyper-Parameters See the paper for details. All provided models were trained for 400K steps with a batch size of 16. Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero. ### Performance BabyBerta was developed for learning grammatical knowledge from child-directed input. Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite. The best model achieves an overall accuracy of 80.3, comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021). Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/). There are two reasons for this: 1. Performance of RoBERTa-base is slightly larger because the authors previously lower-cased all words in Zorro before evaluation. Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased. In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change. 2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish" which can be both a noun and an adjective. this resulted in a small reduction in the performance of BabyBERTa. Overall Accuracy on Zorro: | Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) | |----------------------------------------|------------------------------|------------| | [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 | | [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 | | [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 | ### Additional Information This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org). More info can be found [here](https://github.com/phueb/BabyBERTa). [link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1 [link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2 [link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
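Putting the tokenizer note together with this checkpoint, a minimal masked-word prediction sketch might look like this (using one of the widget examples above):

```python
import torch
from transformers import AutoModelForMaskedLM, RobertaTokenizerFast

# remember: BabyBERTa tokenizers need add_prefix_space=True
tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-2", add_prefix_space=True)
model = AutoModelForMaskedLM.from_pretrained("phueb/BabyBERTa-2")

inputs = tokenizer("Do you like your <mask> ?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print([tokenizer.decode([i]).strip() for i in top_ids.tolist()])
```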
akozlo/conserv_fulltext_1_18_22
akozlo
2022-01-18T13:42:59Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: conserv_fulltext_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # conserv_fulltext_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3 unbalanced_texts gpt2
huggingtweets/dankogai-hirox246
huggingtweets
2022-01-18T09:55:05Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dankogai-hirox246/1642499700234/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1190142566831984640/o4kO2hp-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura & Dan Kogai</div> <div style="text-align: center; font-size: 14px;">@dankogai-hirox246</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ひろゆき, Hiroyuki Nishimura & Dan Kogai. | Data | ひろゆき, Hiroyuki Nishimura | Dan Kogai | | --- | --- | --- | | Tweets downloaded | 3249 | 3250 | | Retweets | 284 | 340 | | Short tweets | 1988 | 2416 | | Tweets kept | 977 | 494 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vrtv6xf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dankogai-hirox246's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yfxplpr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yfxplpr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dankogai-hirox246') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
hkunlp/T5_large_prefix_all_tasks_2upsample2
hkunlp
2022-01-18T07:15:22Z
4
2
transformers
[ "transformers", "pytorch", "t5", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This is the checkpoint of the prefix-tuning model we trained on 21 tasks using an upsampling temperature of 2. Note: the prefix module is large because we keep the re-parameterization weights and did not compress them, in order to keep the checkpoint closer to the original and extendable for researchers.
jkang/drawing-artistic-trend-classifier
jkang
2022-01-18T01:19:29Z
3
0
tf-keras
[ "tf-keras", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: mit datasets: - web crawled (coming soon) --- # Simple CNN-based Artistic Trend Classifier This repo contains a simple CNN-based Keras model which classifies images into one of 8 artistic trends. See also: `https://huggingface.co/jkang/drawing-artist-classifier` - The purpose of this model was quick prototyping - Data has been web-crawled using `https://github.com/YoongiKim/AutoCrawler` - 8 popular artistic trends were chosen: - \[TREND\]: \[ID\] - cubism: 0, - expressionism: 1, - fauvisme: 2, - graffitiar: 3, - impressionism: 4, - popart: 5, - post_impressionism: 6, - surrealism: 7 - About 100 representative paintings per trend (8 trends in total) were crawled and manually checked - Dataset will be shared later # How to use ```python import tensorflow as tf from huggingface_hub import from_pretrained_keras model = from_pretrained_keras("jkang/drawing-artistic-trend-classifier") image_file = 'monet.jpg' img = tf.io.read_file(image_file) img = tf.io.decode_jpeg(img, channels=3) last_layer_activation, predictions = model(img[tf.newaxis,...]) ``` # Intended uses & limitations You can use this model freely for predicting artists or trends of a given image. Please keep in mind that this model is not intended for production, but for research and quick prototyping. Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists. --- - 2022-01-18 first created by jaekoo kang
jkang/drawing-artist-classifier
jkang
2022-01-18T01:19:28Z
5
1
tf-keras
[ "tf-keras", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: mit datasets: - web crawled (coming soon) --- # Simple CNN-based Artist Classifier This repo contains a simple CNN-based Keras model which classifies images into one of 10 selected artists/painters. - The purpose of this model was for a quick prototyping - Data has been web-crawled using `https://github.com/YoongiKim/AutoCrawler` - 10 popular artists/painters were chosen: - \[ARTIST\]: \[ID\] - claude_monet: 0, - henri_matisse: 1, - jean_michel_basquiat: 2, - keith_haring: 3, - pablo_picasso: 4, - pierre_augste_renoir: 5, - rene_magritte: 6, - roy_richtenstein: 7, - vincent_van_gogh: 8, - wassily_kandinsky: 9 - About 100 representative paintings per artist were crawled and manually checked - Dataset will be shared later # How to use ```python import tensorflow as tf from huggingface_hub import from_pretrained_keras model = from_pretrained_keras("jkang/drawing-artist-classifier") image_file = 'monet.jpg' img = tf.io.read_file(image_file) img = tf.io.decode_jpeg(img, channels=3) last_layer_activation, predictions = model(img[tf.newaxis,...]) ``` # Intended uses & limitations You can use this model freely for predicting artists or trends of a given image. Please keep in mind that this model is not intended for production, but for research and quick prototyping. Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists. --- - 2022-01-18 first created by jaekoo kang
huggingtweets/eri_razapii-hayakawagomi-nagiko726
huggingtweets
2022-01-18T01:03:14Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/eri_razapii-hayakawagomi-nagiko726/1642467789468/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1242278691494756352/TfHYNcpA_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1463699400405164034/aRY9jlnO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1087144695568855041/p7u3lvnC_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nagisa Ichikawa 🧠 THE GUILD & えりらざぴ | SHE CEO/CCO & ハヤカワ五味</div> <div style="text-align: center; font-size: 14px;">@eri_razapii-hayakawagomi-nagiko726</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nagisa Ichikawa 🧠 THE GUILD & えりらざぴ | SHE CEO/CCO & ハヤカワ五味. | Data | Nagisa Ichikawa 🧠 THE GUILD | えりらざぴ | SHE CEO/CCO | ハヤカワ五味 | | --- | --- | --- | --- | | Tweets downloaded | 3236 | 3234 | 3250 | | Retweets | 846 | 1768 | 175 | | Short tweets | 1733 | 1185 | 2943 | | Tweets kept | 657 | 281 | 132 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wxptdvg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eri_razapii-hayakawagomi-nagiko726's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1g5vtvdk) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1g5vtvdk/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eri_razapii-hayakawagomi-nagiko726') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ayatokura-chomado-ikeay
huggingtweets
2022-01-17T23:42:42Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/ayatokura-chomado-ikeay/1642462957980/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1334136134234849280/XgE0O39a_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1480842681182220288/ywam5sXK_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1480168235417083905/Kp8uyXIy_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">池澤あやか / いけあや & ちょまど🎀💻エンジニア兼漫画家 & 職業「戸倉彩」👩‍💻とくあや</div> <div style="text-align: center; font-size: 14px;">@ayatokura-chomado-ikeay</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 池澤あやか / いけあや & ちょまど🎀💻エンジニア兼漫画家 & 職業「戸倉彩」👩‍💻とくあや. | Data | 池澤あやか / いけあや | ちょまど🎀💻エンジニア兼漫画家 | 職業「戸倉彩」👩‍💻とくあや | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3245 | 3249 | | Retweets | 224 | 717 | 1266 | | Short tweets | 2813 | 867 | 1036 | | Tweets kept | 213 | 1661 | 947 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rhguk5h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ayatokura-chomado-ikeay's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34bxjwb8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34bxjwb8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ayatokura-chomado-ikeay') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ronanki/xlmr_17-01-2022_v3
ronanki
2022-01-17T20:34:20Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ronanki/xlmr_17-01-2022_v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ronanki/xlmr_17-01-2022_v3') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_17-01-2022_v3') model = AutoModel.from_pretrained('ronanki/xlmr_17-01-2022_v3') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_17-01-2022_v3) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
abhi1nandy2/EManuals_BERT
abhi1nandy2
2022-01-17T17:12:46Z
14
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "EManuals", "customer support", "QA", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - English tags: - EManuals - customer support - QA - bert --- Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website ## Citation Please cite the work if you would like to use it. ``` @inproceedings{nandy-etal-2021-question-answering, title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework", author = "Nandy, Abhilash and Sharma, Soumya and Maddhashiya, Shubham and Sachdeva, Kapil and Goyal, Pawan and Ganguly, NIloy", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.392", doi = "10.18653/v1/2021.findings-emnlp.392", pages = "4600--4609", abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.", } ```
DSI/human-directed-sentiment
DSI
2022-01-17T14:20:52Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
# Human-Directed Sentiment Analysis in Arabic A supervised training procedure to classify human-directed sentiment in a text. We define human-directed sentiment as the polarity of one user towards a second person who is involved with them in a discussion.
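A minimal inference sketch with the standard text-classification pipeline (the example sentence is illustrative, and the label names should be read from the model config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DSI/human-directed-sentiment")
# an utterance addressed to another participant in the discussion (Arabic)
print(classifier("شكرا لك على مساعدتك"))
```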
Dumiiii/wav2vec2-xls-r-300m-romanian
Dumiiii
2022-01-17T13:34:59Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: name: wav2vec2-xls-r-300m-romanian --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## This model achieves a WER of 12.457178% on the Common Voice RO test split # wav2vec2-xls-r-300m-romanian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice RO and RSS datasets. It achieves the following results on the evaluation set: - eval_loss: 0.0836 - eval_wer: 0.0705 - eval_runtime: 160.4549 - eval_samples_per_second: 11.081 - eval_steps_per_second: 1.39 - epoch: 14.38 - step: 2703 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 15 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3 Used the following code for evaluation: ``` import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import string test_dataset = load_dataset("common_voice", "ro", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian") model = Wav2Vec2ForCTC.from_pretrained("Dumiiii/wav2vec2-xls-r-300m-romanian") model.to("cuda") chars_to_ignore_regex = '['+string.punctuation+']' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` Credits for evaluation: https://huggingface.co/anton-l
groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline
groadabike
2022-01-17T12:53:22Z
11
1
asteroid
[ "asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "license:cc-by-sa-4.0", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- tags: - asteroid - audio - ConvTasNet - audio-to-audio datasets: - DAMP-VSEP - Singing/Accompaniment Separation license: cc-by-sa-4.0 --- ## Description: This model was trained by Gerardo Roa using the dampvsep recipe in Asteroid. It was trained on the `singing/accompaniment` task of the `DAMP-VSEP` dataset. ## Training config: ```yaml data: channels: 1 emb_model: 'no' metadata_path: metadata mixture: remix root_path: /fastdata/acp13gr/DAMP/DAMP-VSEP sample_rate: 16000 train_set: english_nonenglish filterbank: kernel_size: 20 n_filters: 256 stride: 10 main_args: exp_dir: exp/train_convtasnet_remix-no-0.0-english_nonenglish-0.0005-jade help: null masknet: bn_chan: 256 conv_kernel_size: 3 hid_chan: 512 mask_act: relu n_blocks: 10 n_repeats: 4 n_src: 2 norm_type: gLN skip_chan: 256 optim: lr: 0.0005 optimizer: adam weight_decay: 0.0 positional arguments: {} training: batch_size: 7 early_stop: true epochs: 50 half_lr: true loss_alpha: 0.0 num_workers: 10 ``` ## Results: ```yaml "si_sdr": 15.111802516750586, "si_sdr_imp": 15.178209807687663, "si_sdr_s0": 12.160261214703553, "si_sdr_s0_imp": 17.434593619085675, "si_sdr_s1": 18.063343818797623, "si_sdr_s1_imp": 12.92182599628965, "sdr": 15.959722569460281, "sdr_imp": 14.927002467087567, "sdr_s0": 13.270412028426595, "sdr_s0_imp": 16.45867572657551, "sdr_s1": 18.64903311049397, "sdr_s1_imp": 13.39532920759962, "sir": 23.935932341084754, "sir_imp": 22.903212238712012, "sir_s0": 22.30777879911744, "sir_s0_imp": 25.49604249726635, "sir_s1": 25.56408588305207, "sir_s1_imp": 20.310381980157665, "sar": 17.174899162445882, "sar_imp": -134.47377304178818, "sar_s0": 14.268071153965913, "sar_s0_imp": -137.38060105026818, "sar_s1": 20.081727170925856, "sar_s1_imp": -131.56694503330817, "stoi": 0.7746496376326059, "stoi_imp": 0.19613735629114643, "stoi_s0": 0.6611376621212413, "stoi_s0_imp": 0.21162695175464794, "stoi_s1": 0.8881616131439705, "stoi_s1_imp": 0.1806477608276449 ``` ## License notice: ** This is important, please fill it, if you need help, you can ask on Asteroid's slack.** This work "ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline" is a derivative of [DAMP-VSEP corpus](https://zenodo.org/record/3553059) by [Smule, Inc](https://www.smule.com/), used under [Restricted License](https://zenodo.org/record/3553059)(Research only). "ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Gerardo Roa.
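A minimal separation sketch with Asteroid (the ordering of the two estimated sources — singing vs. accompaniment — is an assumption and should be checked against the training recipe):

```python
import torch
from asteroid.models import ConvTasNet  # assumes asteroid is installed

model = ConvTasNet.from_pretrained(
    "groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline"
)
model.eval()

# single-channel mixture at 16 kHz, shape (batch, time); one second of noise as a stand-in
mixture = torch.randn(1, 16000)
with torch.no_grad():
    est_sources = model(mixture)  # (batch, 2, time): singing and accompaniment estimates
print(est_sources.shape)
```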
addy88/t5-grammar-correction
addy88
2022-01-17T12:09:14Z
109
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
### How to use Here is how to use this model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("addy88/t5-grammar-correction") model = AutoModelForSeq2SeqLM.from_pretrained("addy88/t5-grammar-correction") input_ids = tokenizer('grammar: This sentences has has bads grammar.', return_tensors='pt').input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
addy88/T5-23-emotions-detections
addy88
2022-01-17T12:08:03Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
### How to use

Here is how to use this model in PyTorch:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("addy88/T5-23-emotions-detections")
tokenizer = T5Tokenizer.from_pretrained("addy88/T5-23-emotions-detections")

input_text = "emotion: i don't like it this is nonsense."
input_ids = tokenizer.encode(input_text, return_tensors="pt", add_special_tokens=True)
input_ids = input_ids.to(model.device)

generated_ids = model.generate(
    input_ids=input_ids,
    num_beams=2,
    max_length=512,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True,
    top_p=0.95,
    top_k=50,
    num_return_sequences=1,
)

preds = [
    tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    for g in generated_ids
]
print(preds)
```
nickmuchi/minilm-finetuned-emotion_nm
nickmuchi
2022-01-17T08:15:50Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - emotion metrics: - f1 model-index: - name: minilm-finetuned-emotion_nm results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: F1 type: f1 value: 0.9322805793931607 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # minilm-finetuned-emotion_nm This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1918 - F1: 0.9323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.3627 | 1.0 | 250 | 1.0048 | 0.5936 | | 0.8406 | 2.0 | 500 | 0.6477 | 0.8608 | | 0.5344 | 3.0 | 750 | 0.4025 | 0.9099 | | 0.3619 | 4.0 | 1000 | 0.3142 | 0.9188 | | 0.274 | 5.0 | 1250 | 0.2489 | 0.9277 | | 0.2225 | 6.0 | 1500 | 0.2320 | 0.9303 | | 0.191 | 7.0 | 1750 | 0.2083 | 0.9298 | | 0.1731 | 8.0 | 2000 | 0.1969 | 0.9334 | | 0.1606 | 9.0 | 2250 | 0.1928 | 0.9362 | | 0.1462 | 10.0 | 2500 | 0.1918 | 0.9323 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
DoyyingFace/dummy-model
DoyyingFace
2022-01-17T05:44:26Z
6
0
transformers
[ "transformers", "tf", "camembert", "fill-mask", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-model This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
sahri/indonesiasentiment
sahri
2022-01-17T04:50:03Z
19
0
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "indonesian-roberta-base-sentiment-classifier", "id", "dataset:indonlu", "arxiv:1907.11692", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: id
tags:
- indonesian-roberta-base-sentiment-classifier
license: mit
datasets:
- indonlu
widget:
- text: "tidak jelek tapi keren"
---

## Indonesian RoBERTa Base Sentiment Classifier

Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which was then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews.

After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%.

Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.

## Model

| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` |

## Evaluation Results

The model was trained for 5 epochs and the best model was loaded at the end.

| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 |
| 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 |
| 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 |
| 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 |
| 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 |

## How to Use

### As Text Classifier

```python
from transformers import pipeline

pretrained_name = "sahri/indonesiasentiment"

nlp = pipeline(
    "sentiment-analysis",
    model=pretrained_name,
    tokenizer=pretrained_name
)

nlp("tidak jelek tapi keren")
```

## Disclaimer

Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model.

## Author

Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by Sahri Ramadhan. All computation and development were done on Google Colaboratory using their free GPU access.
huggingtweets/lazar181
huggingtweets
2022-01-17T01:55:14Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/lazar181/1642384387963/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1451342601483952130/-RJ3Ewqp_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ari/Sera @ 🛌</div> <div style="text-align: center; font-size: 14px;">@lazar181</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ari/Sera @ 🛌. | Data | Ari/Sera @ 🛌 | | --- | --- | | Tweets downloaded | 3241 | | Retweets | 362 | | Short tweets | 668 | | Tweets kept | 2211 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21d2ewj0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lazar181's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ukmb9ye) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ukmb9ye/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/lazar181') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/emsorkun
huggingtweets
2022-01-16T22:19:55Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1477509052074766340/rVamRzsW_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Enver Melih Sorkun</div> <div style="text-align: center; font-size: 14px;">@emsorkun</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Enver Melih Sorkun. | Data | Enver Melih Sorkun | | --- | --- | | Tweets downloaded | 2107 | | Retweets | 618 | | Short tweets | 130 | | Tweets kept | 1359 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c12hxxur/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @emsorkun's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3prqt8oz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3prqt8oz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/emsorkun') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
husnu/electra-small-turkish-uncased-discriminator
husnu
2022-01-16T19:01:47Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: ft_electra-small-turkish-uncased-discriminator_lr-2e-1_epochs-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 5.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.2 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.951 | 1.0 | 5818 | 5.9506 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
Shushant
2022-01-16T15:54:15Z
55
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7515 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 22 | 3.9518 | | No log | 2.0 | 44 | 3.2703 | | No log | 3.0 | 66 | 2.9308 | | No log | 4.0 | 88 | 2.7806 | | No log | 5.0 | 110 | 2.6926 | | No log | 6.0 | 132 | 2.7043 | | No log | 7.0 | 154 | 2.7113 | | No log | 8.0 | 176 | 2.7236 | | No log | 9.0 | 198 | 2.7559 | | No log | 10.0 | 220 | 2.7515 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Shushant/biobert-v1.1-biomedicalQuestionAnswering
Shushant
2022-01-16T15:34:49Z
83
5
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: biobert-v1.1-biomedicalQuestionAnswering results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-v1.1-biomedicalQuestionAnswering This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 22 | 3.7409 | | No log | 2.0 | 44 | 3.1852 | | No log | 3.0 | 66 | 3.0342 | | No log | 4.0 | 88 | 2.9416 | | No log | 5.0 | 110 | 2.9009 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
jiobiala24/wav2vec2-base-checkpoint-5
jiobiala24
2022-01-16T10:56:18Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-5 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-4](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-4) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9849 - Wer: 0.3354 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3947 | 1.96 | 1000 | 0.5749 | 0.3597 | | 0.2856 | 3.93 | 2000 | 0.6212 | 0.3479 | | 0.221 | 5.89 | 3000 | 0.6280 | 0.3502 | | 0.1755 | 7.86 | 4000 | 0.6517 | 0.3526 | | 0.1452 | 9.82 | 5000 | 0.7115 | 0.3481 | | 0.1256 | 11.79 | 6000 | 0.7687 | 0.3509 | | 0.1117 | 13.75 | 7000 | 0.7785 | 0.3490 | | 0.0983 | 15.72 | 8000 | 0.8115 | 0.3442 | | 0.0877 | 17.68 | 9000 | 0.8290 | 0.3429 | | 0.0799 | 19.65 | 10000 | 0.8517 | 0.3412 | | 0.0733 | 21.61 | 11000 | 0.9370 | 0.3448 | | 0.066 | 23.58 | 12000 | 0.9157 | 0.3410 | | 0.0623 | 25.54 | 13000 | 0.9673 | 0.3377 | | 0.0583 | 27.5 | 14000 | 0.9804 | 0.3348 | | 0.0544 | 29.47 | 15000 | 0.9849 | 0.3354 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
porpaul/t5-small-finetuned-xsum
porpaul
2022-01-16T06:59:38Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xlsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xlsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xlsum type: xlsum args: chinese_traditional metrics: - name: Rouge1 type: rouge value: 0.5217 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 1.2188 - Rouge1: 0.5217 - Rouge2: 0.0464 - Rougel: 0.527 - Rougelsum: 0.5215 - Gen Len: 6.7441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.3831 | 1.0 | 7475 | 1.2188 | 0.5217 | 0.0464 | 0.527 | 0.5215 | 6.7441 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Sakil/imdbsentdistilbertmodel
Sakil
2022-01-16T06:54:14Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "text Classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
language:
- en
tags:
- text Classification
license: apache-2.0
widget:
- text: "I like you. </s></s> I love you."
---

* IMDBSentimentDistilBertModel:
  - I have used the IMDB movie review dataset to create a custom model using `DistilBertForSequenceClassification`.

```python
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel')
```
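A minimal inference sketch, assuming the fine-tuned checkpoint is available either from this repository (`Sakil/imdbsentdistilbertmodel`) or from the local path above, and that tokenizer files are hosted alongside it (if not, the stock `distilbert-base-uncased` tokenizer is the likely fallback). The label mapping (index 1 = positive) is an assumption, not documented in the card.

```python
import torch
from transformers import AutoTokenizer, DistilBertForSequenceClassification

# Assumed checkpoint location; swap in './imdbsentdistilbertmodel' for a local copy.
model_id = "Sakil/imdbsentdistilbertmodel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = DistilBertForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I like you. I love you.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Assumed label order: index 0 = negative, index 1 = positive.
print({"negative": probs[0].item(), "positive": probs[1].item()})
```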
anzorq/t5-v1_1-small-ru_kbd-cased
anzorq
2022-01-16T05:24:51Z
13
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "translation", "ru", "kbd", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
language:
- ru
- kbd
tags:
- translation
datasets:
- anzorq/kbd-ru-1.67M-temp
- 17753 Russian-Kabardian pairs of text
widget:
- text: "ru->kbd: Я иду домой."
  example_title: "Я иду домой."
- text: "ru->kbd: Дети играют во дворе."
  example_title: "Дети играют во дворе."
- text: "ru->kbd: Сколько тебе лет?"
  example_title: "Сколько тебе лет?"
---

## [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) model

### pretrained on [anzorq/kbd-ru-1.67M-temp](https://huggingface.co/datasets/anzorq/kbd-ru-1.67M-temp)

### fine-tuned on **17753** Russian-Kabardian word/sentence pairs

The kbd text uses a custom Latin script for optimization reasons.
Translation input should start with '**ru->kbd:** '.

**Tokenizer**: T5 sentencepiece, char, cased.
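A minimal translation sketch built around the 'ru->kbd: ' prefix described above; the generation settings (beam search, max length) are illustrative assumptions, not values documented by the author.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "anzorq/t5-v1_1-small-ru_kbd-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Inputs must carry the task prefix, mirroring the widget examples above.
text = "ru->kbd: Я иду домой."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# num_beams / max_length are illustrative defaults, not the author's settings.
output_ids = model.generate(input_ids, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```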
matthewburke/korean_sentiment
matthewburke
2022-01-16T02:31:37Z
4,148
16
transformers
[ "transformers", "pytorch", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="matthewburke/korean_sentiment")

custom_tweet = "영화 재밌다."  # "The movie is fun."
preds = classifier(custom_tweet, return_all_scores=True)
# preds[0] holds the scores for every label; index 1 is the positive class.
is_positive = preds[0][1]['score'] > 0.5
```
huggingtweets/nathanmarz
huggingtweets
2022-01-15T19:05:04Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/nathanmarz/1642273500624/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1068577679367127041/w7GXbl_e_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nathan Marz</div> <div style="text-align: center; font-size: 14px;">@nathanmarz</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nathan Marz. | Data | Nathan Marz | | --- | --- | | Tweets downloaded | 3188 | | Retweets | 459 | | Short tweets | 239 | | Tweets kept | 2490 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1zmjgvn2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nathanmarz's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rr35qq7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rr35qq7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nathanmarz') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Fraser/to_delete
Fraser
2022-01-15T15:08:51Z
0
0
null
[ "program-synthesis", "en", "dataset:program-synthesis", "license:mit", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: - en thumbnail: "https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png" tags: - program-synthesis license: "mit" datasets: - program-synthesis --- # Program Synthesis Data Generated program synthesis datasets used to train [dreamcoder](https://github.com/ellisk42/ec). Currently just supports text & list data. ```python _FEATURES = datasets.Features( { "description": datasets.Value("string"), "input": datasets.Value("string"), "output": datasets.Value("string"), "types": datasets.Value("string") } ) ``` ![](https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png)
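A small loading sketch, assuming the data is published as the `Fraser/program-synthesis` dataset referenced above; the `train` split name is also an assumption.

```python
from datasets import load_dataset

# "Fraser/program-synthesis" and the "train" split are assumptions based on the links above.
ds = load_dataset("Fraser/program-synthesis", split="train")

example = ds[0]
print(example["description"])
print(example["input"], "->", example["output"], "|", example["types"])
```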
Ifromspace/GRIEFSOFT-walr
Ifromspace
2022-01-15T13:07:07Z
8
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "ru", "4ulan", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
tags:
- ru
- 4ulan
---

A fun little thing for our Discord server))00)) https://discord.gg/HpeadKH

Offers: [email protected]
Ifromspace/GRIEFSOFT
Ifromspace
2022-01-15T13:06:43Z
9
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "PyTorch", "Transformers", "4ulan", "ru", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
language:
- ru
tags:
- PyTorch
- Transformers
- 4ulan
---

**Fork of https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2**

A fun little thing for our Discord server))00))

ROADMAP:
- Collect a small dataset from "popadanets" (portal-fantasy) books. <------------------------- Currently here.
- Fine-tune the model.
- Drop it into the Discord server.

https://discord.gg/HpeadKH
jiobiala24/wav2vec2-base-checkpoint-4
jiobiala24
2022-01-15T12:59:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-4 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-3](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-3) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
Huertas97/es_roberta_base_bne_leetspeak_ner
Huertas97
2022-01-15T11:55:46Z
4
1
spacy
[ "spacy", "token-classification", "es", "license:apache-2.0", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - spacy - token-classification language: - es license: apache-2.0 widget: - text: "La C0v!d es un 3ng@ño de los G0b!3rno$" example_title: "Word camouflage detection" model-index: - name: es_roberta_base_bne_leetspeak_ner results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8979055626 - name: NER Recall type: recall value: 0.9393701406 - name: NER F Score type: f_score value: 0.9181699547 --- | Feature | Description | | --- | --- | | **Name** | `es_roberta_base_bne_leetspeak_ner` | | **Version** | `0.0.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model a transformer-based masked language model for the Spanish language pre-trained with a total of 570GB of clean and deduplicated text compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) <br> [LeetSpeak-NER](https://huggingface.co/spaces/Huertas97/LeetSpeak-NER) app where this model is in production for countering information disorders| | **License** | Apache 2.0 | | **Author** | [Álvaro Huertas García](https://www.linkedin.com/in/alvaro-huertas-garcia/) at [AI+DA](http://aida.etsisi.upm.es/) | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 91.82 | | `ENTS_P` | 89.79 | | `ENTS_R` | 93.94 | | `TRANSFORMER_LOSS` | 166484.92 | | `NER_LOSS` | 318457.35 |
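A minimal usage sketch for the pipeline described above, assuming the packaged model has already been installed (for example via `pip install` of the wheel published in this repository) so that `spacy.load` can resolve it by name.

```python
import spacy

# Assumes the packaged pipeline is installed; otherwise install the wheel from this repo first.
nlp = spacy.load("es_roberta_base_bne_leetspeak_ner")

doc = nlp("La C0v!d es un 3ng@ño de los G0b!3rno$")  # widget example from the card
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels: INV_CAMO, LEETSPEAK, MIX, PUNCT_CAMO
```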
khizon/bert-unreliable-news-eng
khizon
2022-01-15T07:04:33Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Unreliable News Classifier (English)

Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there is no overlap of news sources between the three sets. This model used the pre-trained weights of `bert-base-cased` as its starting point and achieves 84% accuracy on the test set.

For more details: [Github](https://github.com/khizon/CS284_final_project)
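A minimal classification sketch for the model described above; the example headline is illustrative, and the label names are read from the model config rather than assumed, since the card does not document the output mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "khizon/bert-unreliable-news-eng"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

headline = "Scientists discover a simple trick that doctors don't want you to know."
inputs = tokenizer(headline, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# The id-to-label mapping comes from the model config; it is not described in the card.
labels = model.config.id2label
print({labels[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```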
husnu/xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6
husnu
2022-01-15T05:09:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-6 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the Turkish squad dataset. It achieves the following results on the evaluation set: - Loss: 2.8135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 350 | 3.8389 | | 4.4474 | 2.0 | 700 | 3.3748 | | 3.512 | 3.0 | 1050 | 3.0657 | | 3.512 | 4.0 | 1400 | 2.9219 | | 3.1526 | 5.0 | 1750 | 2.8517 | | 2.9972 | 6.0 | 2100 | 2.8135 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/autosport-formulaoneworld-speedcafe
huggingtweets
2022-01-15T03:24:30Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/autosport-formulaoneworld-speedcafe/1642217065882/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1192531689060200448/S9KoiehJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1294927107605356544/CVXTlp9y_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1468895545007775746/NIWzzmye_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Speedcafe.com & Formula One World & Autosport</div> <div style="text-align: center; font-size: 14px;">@autosport-formulaoneworld-speedcafe</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Speedcafe.com & Formula One World & Autosport. | Data | Speedcafe.com | Formula One World | Autosport | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3247 | 3250 | | Retweets | 0 | 2778 | 52 | | Short tweets | 3 | 178 | 15 | | Tweets kept | 3247 | 291 | 3183 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kcn72bl0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @autosport-formulaoneworld-speedcafe's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fq703qs) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fq703qs/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/autosport-formulaoneworld-speedcafe') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
NbAiLab/roberta_NCC_des_128_decayfrom200
NbAiLab
2022-01-15T00:11:52Z
4
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
Just for performing some experiments. Do not use.
huggingtweets/blueeyedgirlnft
huggingtweets
2022-01-14T22:28:35Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/blueeyedgirlnft/1642199309839/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1478488866730524675/y4KIjwym_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ᵍᵐBlueEyedGirl.ᴺᶠᵀ😎🔻🦴</div> <div style="text-align: center; font-size: 14px;">@blueeyedgirlnft</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ᵍᵐBlueEyedGirl.ᴺᶠᵀ😎🔻🦴. | Data | ᵍᵐBlueEyedGirl.ᴺᶠᵀ😎🔻🦴 | | --- | --- | | Tweets downloaded | 588 | | Retweets | 349 | | Short tweets | 154 | | Tweets kept | 85 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9tllree8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @blueeyedgirlnft's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q6w52hj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q6w52hj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/blueeyedgirlnft') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6
husnu
2022-01-14T20:57:15Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3828 | 1.0 | 1845 | 1.7946 | | 1.5827 | 2.0 | 3690 | 1.4123 | | 1.404 | 3.0 | 5535 | 1.3142 | | 1.346 | 4.0 | 7380 | 1.2819 | | 1.2871 | 5.0 | 9225 | 1.2630 | | 1.2538 | 6.0 | 11070 | 1.2578 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
anuragshas/wav2vec2-large-xlsr-as
anuragshas
2022-01-14T16:41:25Z
21
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "as", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: as
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Assamese
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice as
      type: common_voice
      args: as
    metrics:
    - name: Test WER
      type: wer
      value: 69.63
---

# Wav2Vec2-Large-XLSR-53-Assamese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "as", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Assamese test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\”\\়\\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub('’ ', ' ', batch["sentence"])
    batch["sentence"] = re.sub(' ‘', ' ', batch["sentence"])
    batch["sentence"] = re.sub('’|‘', '\'', batch["sentence"])
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 69.63 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
erwanlc/t5-coktails_recipe-small
erwanlc
2022-01-14T14:32:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-coktails_recipe-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-coktails_recipe-small This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3