| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (lengths) | 5 | 139 |
| author | string (lengths) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-03 12:30:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (466 classes) | | |
| tags | sequence (lengths) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-03 12:30:29 |
| card | string (lengths) | 11 | 1.01M |
flax-community/roberta-hindi
flax-community
2021-07-20T12:50:29Z
105
2
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- widget: - text: "मुझे उनसे बात करना <mask> अच्छा लगा" - text: "हम आपके सुखद <mask> की कामना करते हैं" - text: "सभी अच्छी चीजों का एक <mask> होता है" --- # RoBERTa base model for Hindi language Pretrained model on Hindi language using a masked language modeling (MLM) objective. [A more interactive & comparison demo is available here](https://huggingface.co/spaces/flax-community/roberta-hindi). > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-roberta-from-scratch-in-hindi/7091), organized by [Hugging Face](https://huggingface.co/) and TPU usage sponsored by Google. ## Model description RoBERTa Hindi is a transformers model pretrained on a large corpus of Hindi data (a combination of the **mc4, oscar and indic-nlp** datasets). ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi') >>> unmasker("हम आपके सुखद <mask> की कामना करते हैं") [{'score': 0.3310680091381073, 'sequence': 'हम आपके सुखद सफर की कामना करते हैं', 'token': 1349, 'token_str': ' सफर'}, {'score': 0.15317578613758087, 'sequence': 'हम आपके सुखद पल की कामना करते हैं', 'token': 848, 'token_str': ' पल'}, {'score': 0.07826550304889679, 'sequence': 'हम आपके सुखद समय की कामना करते हैं', 'token': 453, 'token_str': ' समय'}, {'score': 0.06304813921451569, 'sequence': 'हम आपके सुखद पहल की कामना करते हैं', 'token': 404, 'token_str': ' पहल'}, {'score': 0.058322224766016006, 'sequence': 'हम आपके सुखद अवसर की कामना करते हैं', 'token': 857, 'token_str': ' अवसर'}] ``` ## Training data The RoBERTa Hindi model was pretrained on a combination of the following datasets: - [OSCAR](https://huggingface.co/datasets/oscar) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. - [mC4](https://huggingface.co/datasets/mc4) is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. - [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) is a natural language understanding benchmark. - [Samanantar](https://indicnlp.ai4bharat.org/samanantar/) is a parallel corpora collection for Indic languages. - [Hindi Text Short and Large Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-and-large-summarization-corpus) is a collection of ~180k articles with their headlines and summaries collected from Hindi news websites. - [Hindi Text Short Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-summarization-corpus) is a collection of ~330k articles with their headlines collected from Hindi news websites. - [Old Newspapers Hindi](https://www.kaggle.com/crazydiv/oldnewspapershindi) is a cleaned subset of HC Corpora newspapers. ## Training procedure ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) with a vocabulary size of 50265. The inputs of the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>`. - We had to clean up the **mC4** and **oscar** datasets by removing all non-Hindi (non-Devanagari) characters.
- We filtered out examples from the WikiNER evaluation set of the [IndicGlue](https://indicnlp.ai4bharat.org/indic-glue/) benchmark whose labels were incorrect, via [manual labelling](https://github.com/amankhandelia/roberta_hindi/blob/master/wikiner_incorrect_eval_set.csv), and modified the [downstream evaluation dataset](https://github.com/amankhandelia/roberta_hindi/blob/master/utils.py) accordingly. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). ### Pretraining The model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores). A randomized shuffle of the combined **mC4, oscar** and other datasets listed above was used to train the model. Training logs are available on [wandb](https://wandb.ai/wandb/hf-flax-roberta-hindi). ## Evaluation Results RoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below. | Task | Task Type | IndicBERT | HindiBERTa | Indic Transformers Hindi BERT | RoBERTa Hindi Guj San | RoBERTa Hindi | |-------------------------|----------------------|-----------|------------|-------------------------------|-----------------------|---------------| | BBC News Classification | Genre Classification | **76.44** | 66.86 | **77.6** | 64.9 | 73.67 | | WikiNER | Token Classification | - | 90.68 | **95.09** | 89.61 | **92.76** | | IITP Product Reviews | Sentiment Analysis | **78.01** | 73.23 | **78.39** | 66.16 | 75.53 | | IITP Movie Reviews | Sentiment Analysis | 60.97 | 52.26 | **70.65** | 49.35 | **61.29** | ## Team Members - Aman K ([amankhandelia](https://huggingface.co/amankhandelia)) - Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk)) - Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv)) - Prateek Agrawal ([prateekagrawal](https://huggingface.co/prateekagrawal)) - Rahul Dev ([mlkorra](https://huggingface.co/mlkorra)) ## Credits Huge thanks to Hugging Face 🤗 & the Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to [Suraj Patil](https://huggingface.co/valhalla) & [Patrick von Platen](https://huggingface.co/patrickvonplaten) for mentoring during the whole week. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:medium>
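The dynamic 15%/80%/10%/10% masking scheme described above matches what `DataCollatorForLanguageModeling` implements, with the mask re-sampled on every batch. A minimal sketch (the example sentence is illustrative; the checkpoint name comes from this card):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-hindi")

# mlm_probability=0.15 masks 15% of tokens; of those, 80% become <mask>,
# 10% become a random token and 10% stay unchanged -- re-drawn for every batch.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("हम आपके सुखद सफर की कामना करते हैं")
batch = collator([encoding])
print(batch["input_ids"])  # some positions replaced by tokenizer.mask_token_id
print(batch["labels"])     # -100 everywhere except the masked positions
```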
huggingtweets/yigitckahyaoglu
huggingtweets
2021-07-20T03:00:16Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/yigitckahyaoglu/1626750011426/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1407084026507182089/ywRe7M0Z_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">yiğit</div> <div style="text-align: center; font-size: 14px;">@yigitckahyaoglu</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from yiğit. | Data | yiğit | | --- | --- | | Tweets downloaded | 1671 | | Retweets | 165 | | Short tweets | 64 | | Tweets kept | 1442 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2cqhj21l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yigitckahyaoglu's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2k3eal89) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2k3eal89/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/yigitckahyaoglu') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
mnaylor/base-bert-finetuned-mtsamples
mnaylor
2021-07-19T15:53:35Z
10
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# BERT Base Fine-tuned on MTSamples This model is [BERT-base](https://huggingface.co/bert-base-uncased) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp).
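For completeness, a hedged usage sketch with the standard `text-classification` pipeline; the example sentence is illustrative and the label names depend on the repo's config rather than this card:

```python
from transformers import pipeline

# BERT-base with a sequence-classification head fine-tuned on MTSamples medical transcriptions.
classifier = pipeline("text-classification", model="mnaylor/base-bert-finetuned-mtsamples")

text = "The patient presents with chest pain radiating to the left arm."  # illustrative input
print(classifier(text))  # [{'label': ..., 'score': ...}]
```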
mnaylor/bioclinical-bert-finetuned-mtsamples
mnaylor
2021-07-19T15:52:36Z
12
3
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# BioClinical BERT Fine-tuned on MTSamples This model is simply [Alsentzer's Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp).
BumBelDumBel/ZORK_AI_SCIFI
BumBelDumBel
2021-07-19T14:51:33Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer model_index: - name: ZORK_AI_SCIFI results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZORK_AI_SCIFI This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
flax-community/wav2vec2-dhivehi
flax-community
2021-07-19T09:40:30Z
10
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "wav2vec2", "pretraining", "automatic-speech-recognition", "dv", "dataset:common_voice", "arxiv:2006.11477", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: dv tags: - automatic-speech-recognition datasets: - common_voice --- # Wav2Vec2 Dhivehi Wav2Vec2 pretrained from scratch on the Common Voice Dhivehi dataset. The model was trained with Flax during the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organised by HuggingFace. ## Model description The model used for training is [Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by Facebook AI. It was introduced in the paper "wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations" by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli (https://arxiv.org/abs/2006.11477). The base model is available in the 🤗 [Model Hub](https://huggingface.co/facebook/wav2vec2-base-960h). ## Training data Dhivehi data from [Common Voice](https://commonvoice.mozilla.org/en/datasets). The dataset is also available in the 🤗 [Datasets](https://huggingface.co/datasets/common_voice) library. ## Team members - Shahu Kareem ([@shahukareem](https://huggingface.co/shahukareem)) - Eyna ([@eyna](https://huggingface.co/eyna))
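Since this is a pretraining-only checkpoint (no CTC head), a typical next step is to pull speech representations or fine-tune it for ASR. A hedged sketch of extracting hidden states; it assumes the repo ships a preprocessor config (otherwise `facebook/wav2vec2-base` can supply the feature extractor) and uses the legacy `common_voice` loader linked in the card, which newer `datasets` versions may have deprecated:

```python
import torch
from datasets import load_dataset, Audio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "flax-community/wav2vec2-dhivehi"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)  # assumption: preprocessor config present
model = Wav2Vec2Model.from_pretrained(model_id)

# One Common Voice Dhivehi clip, resampled to the 16 kHz rate wav2vec 2.0 expects.
ds = load_dataset("common_voice", "dv", split="train[:1]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
audio = ds[0]["audio"]

inputs = feature_extractor(audio["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (1, frames, hidden_size)
print(hidden_states.shape)
```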
huggingtweets/deathbattlebot
huggingtweets
2021-07-19T06:42:58Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/deathbattlebot/1626676974616/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1295317265257238529/8q3IptgS_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Death Battle Bot</div> <div style="text-align: center; font-size: 14px;">@deathbattlebot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Death Battle Bot. | Data | Death Battle Bot | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 1 | | Short tweets | 20 | | Tweets kept | 3229 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hcf8oqg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deathbattlebot's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2d0vuhj5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2d0vuhj5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/deathbattlebot') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
flax-community/Image-captioning-Indonesia
flax-community
2021-07-19T05:59:03Z
7
0
transformers
[ "transformers", "jax", "clip-vision-marian", "text2text-generation", "id", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: id --- # Image-captioning-Indonesia This is an encoder-decoder image captioning model using [CLIP](https://huggingface.co/transformers/model_doc/clip.html) as the visual encoder and [Marian](https://huggingface.co/transformers/model_doc/marian.html) as the textual decoder, trained on datasets with Indonesian captions. This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team. ## How to use At the time of writing, you will need to install [HuggingFace](https://github.com/huggingface/) from its latest master branch in order to load `FlaxMarian`. You will also need to have the [`flax_clip_vision_marian` folder](https://github.com/indonesian-nlp/Indonesia-Image-Captioning/tree/main/flax_clip_vision_marian) in your project directory to load the model using the `FlaxCLIPVisionMarianForConditionalGeneration` class. ```python from torchvision.io import ImageReadMode, read_image from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize from torchvision.transforms.functional import InterpolationMode import torch import numpy as np from transformers import MarianTokenizer from flax_clip_vision_marian.modeling_clip_vision_marian import FlaxCLIPVisionMarianForConditionalGeneration clip_marian_model_name = 'flax-community/Image-captioning-Indonesia' model = FlaxCLIPVisionMarianForConditionalGeneration.from_pretrained(clip_marian_model_name) marian_model_name = 'Helsinki-NLP/opus-mt-en-id' tokenizer = MarianTokenizer.from_pretrained(marian_model_name) config = model.config image_size = config.clip_vision_config.image_size # Image transformation transforms = torch.nn.Sequential( Resize([image_size], interpolation=InterpolationMode.BICUBIC), CenterCrop(image_size), ConvertImageDtype(torch.float), Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), ) # Hyperparameters max_length = 8 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def generate_step(pixel_values): output_ids = model.generate(pixel_values, **gen_kwargs) token_ids = np.array(output_ids.sequences)[0] caption = tokenizer.decode(token_ids) return caption image_file_path = image_file_path image = read_image(image_file_path, mode=ImageReadMode.RGB) image = transforms(image) pixel_values = torch.stack([image]).permute(0, 2, 3, 1).numpy() caption = generate_step(pixel_values) print(caption) ``` ## Training data The model was trained on translated COCO, Flickr and VizWiz datasets; each was translated using Google Translate and Marian MT. We took only 2 random captions per image for each dataset. ## Training procedure The model was trained on a TPUv3-8 VM provided by the Google Cloud team. ## Team members - Cahya Wirawan ([@cahya](https://huggingface.co/cahya)) - Galuh Sahid ([@Galuh](https://huggingface.co/Galuh)) - Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia)) - Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))
huggingtweets/heyarav
huggingtweets
2021-07-19T01:28:18Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1416877970132672512/942NnDJA_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Arav</div> <div style="text-align: center; font-size: 14px;">@heyarav</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Arav. | Data | Arav | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 411 | | Short tweets | 786 | | Tweets kept | 2049 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n441q7z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @heyarav's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s8u4vm6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s8u4vm6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/heyarav') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ellis_hughes
huggingtweets
2021-07-18T18:42:16Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/ellis_hughes/1626633732954/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1004536007012651008/ZWJUeJ2W_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ellis Hughes</div> <div style="text-align: center; font-size: 14px;">@ellis_hughes</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ellis Hughes. | Data | Ellis Hughes | | --- | --- | | Tweets downloaded | 2170 | | Retweets | 396 | | Short tweets | 91 | | Tweets kept | 1683 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rqrdlum/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ellis_hughes's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3n17xu9k) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3n17xu9k/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ellis_hughes') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
imvladikon/bert-base-uncased-jigsaw
imvladikon
2021-07-18T15:46:05Z
1
0
transformers
[ "transformers", "pytorch", "bert", "generated_from_trainer", "en", "dataset:jigsaw", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en license: tags: - generated_from_trainer datasets: - jigsaw model_index: - name: bert-base-uncased results: - {} --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased This model is a fine-tuned version of [](https://huggingface.co/) on the jigsaw dataset. It achieves the following results on the evaluation set: - Loss: 0.0393 - Precision Micro: 0.7758 - Recall Micro: 0.7858 - F1 Micro: 0.7808 - F2 Micro: 0.7838 - Precision Macro: 0.6349 - Recall Macro: 0.5972 - F1 Macro: 0.6105 - F2 Macro: 0.6015 - Overall Precision: 0.9841 - Overall Recall: 0.9841 - Overall F1: 0.9841 - Overall F2: 0.9841 - Overall Accuracy: 0.9841 - Matthews Corrcoef: 0.7725 - Aucroc Macro: 0.9897 - Aucroc Micro: 0.9920 - Accuracy Toxic: 0.9678 - F1 Toxic: 0.8295 - Accuracy Severe Toxic: 0.9899 - F1 Severe Toxic: 0.3313 - Accuracy Obscene: 0.9816 - F1 Obscene: 0.8338 - Accuracy Threat: 0.9974 - F1 Threat: 0.4545 - Accuracy Insult: 0.9763 - F1 Insult: 0.7662 - Accuracy Identity Hate: 0.9914 - F1 Identity Hate: 0.4480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision Micro | Recall Micro | F1 Micro | F2 Micro | Precision Macro | Recall Macro | F1 Macro | F2 Macro | Overall Precision | Overall Recall | Overall F1 | Overall F2 | Overall Accuracy | Matthews Corrcoef | Aucroc Macro | Aucroc Micro | Accuracy Toxic | F1 Toxic | Accuracy Severe Toxic | F1 Severe Toxic | Accuracy Obscene | F1 Obscene | Accuracy Threat | F1 Threat | Accuracy Insult | F1 Insult | Accuracy Identity Hate | F1 Identity Hate | |:-------------:|:-----:|:-----:|:---------------:|:---------------:|:------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:--------:|:-----------------:|:--------------:|:----------:|:----------:|:----------------:|:-----------------:|:------------:|:------------:|:--------------:|:--------:|:---------------------:|:---------------:|:----------------:|:----------:|:---------------:|:---------:|:---------------:|:---------:|:----------------------:|:----------------:| | 0.0433 | 1.0 | 2659 | 0.0423 | 0.7607 | 0.7798 | 0.7702 | 0.7759 | 0.6398 | 0.5561 | 0.5585 | 0.5535 | 0.9832 | 0.9832 | 0.9832 | 0.9832 | 0.9832 | 0.7615 | 0.9887 | 0.9908 | 0.9671 | 0.8211 | 0.9878 | 0.4354 | 0.9805 | 0.8265 | 0.9974 | 0.2243 | 0.9746 | 0.7602 | 0.9918 | 0.2834 | | 0.0366 | 2.0 | 5318 | 0.0393 | 0.7758 | 0.7858 | 0.7808 | 0.7838 | 0.6349 | 0.5972 | 0.6105 | 0.6015 | 0.9841 | 0.9841 | 0.9841 | 0.9841 | 0.9841 | 0.7725 | 0.9897 | 0.9920 | 0.9678 | 0.8295 | 0.9899 | 0.3313 | 0.9816 | 0.8338 | 0.9974 | 0.4545 | 0.9763 | 0.7662 | 0.9914 | 0.4480 | | 0.0305 | 3.0 | 7977 | 0.0399 | 0.7608 | 0.8186 | 0.7887 | 0.8064 | 0.6621 | 0.6856 | 0.6715 | 0.6794 | 0.9842 | 0.9842 | 0.9842 | 0.9842 | 0.9842 | 0.7810 | 0.9897 | 0.9919 | 0.9662 | 0.8272 | 0.9892 | 0.4772 | 0.9815 | 0.8347 | 0.9977 | 0.5629 | 0.9772 | 0.7740 | 0.9931 | 
0.5528 | | 0.0263 | 4.0 | 10636 | 0.0435 | 0.7333 | 0.8336 | 0.7803 | 0.8114 | 0.6395 | 0.7039 | 0.6687 | 0.6890 | 0.9830 | 0.9830 | 0.9830 | 0.9830 | 0.9830 | 0.7732 | 0.9897 | 0.9912 | 0.9608 | 0.8083 | 0.9898 | 0.4791 | 0.9812 | 0.8319 | 0.9972 | 0.5368 | 0.9756 | 0.7700 | 0.9935 | 0.5861 | | 0.0218 | 5.0 | 13295 | 0.0456 | 0.7480 | 0.8108 | 0.7781 | 0.7974 | 0.6661 | 0.6720 | 0.6662 | 0.6691 | 0.9833 | 0.9833 | 0.9833 | 0.9833 | 0.9833 | 0.7701 | 0.9890 | 0.9907 | 0.9612 | 0.8071 | 0.9894 | 0.4642 | 0.9823 | 0.8354 | 0.9977 | 0.5325 | 0.9754 | 0.7613 | 0.9936 | 0.5968 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
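The per-label metrics above are for the six Jigsaw toxicity labels, so inference is multi-label: each logit gets an independent sigmoid rather than a softmax over classes. A hedged sketch, assuming the checkpoint exposes a sequence-classification head and that the label order follows the usual Jigsaw convention (check the repo's `config.json` for the actual mapping):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "imvladikon/bert-base-uncased-jigsaw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]  # assumed order

inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # independent probability per label
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```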
sehandev/koelectra-qa
sehandev
2021-07-18T14:21:05Z
4
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model_index: - name: koelectra-qa results: - task: name: Question Answering type: question-answering --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-qa This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1 - Datasets 1.9.0 - Tokenizers 0.10.3
jacobshein/danish-bert-botxo-qa-squad
jacobshein
2021-07-18T11:19:49Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "danish", "question answering", "squad", "machine translation", "botxo", "da", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: da tags: - danish - bert - question answering - squad - machine translation - botxo license: cc-by-4.0 datasets: - common_crawl - wikipedia - dindebat.dk - hestenettet.dk - danish OpenSubtitles widget: - context: Stine sagde hej, men Jacob sagde halløj. --- # Danish BERT (version 2, uncased) by [BotXO](https://github.com/botxo/nordic_bert) fine-tuned for Question Answering (QA) on the [machine-translated SQuAD-da dataset](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/multilingual/squads-tar/da) ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("jacobshein/danish-bert-botxo-qa-squad") model = AutoModelForQuestionAnswering.from_pretrained("jacobshein/danish-bert-botxo-qa-squad") ``` #### Contact For further information on usage or fine-tuning procedure, please reach out by email through [jacobhein.com](https://jacobhein.com/#contact).
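Beyond loading the raw model and tokenizer as shown above, the checkpoint can also be driven through the `question-answering` pipeline. A hedged sketch using the card's widget context; the question is illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="jacobshein/danish-bert-botxo-qa-squad",
              tokenizer="jacobshein/danish-bert-botxo-qa-squad")

result = qa(question="Hvem sagde halløj?",  # "Who said halløj?" -- illustrative question
            context="Stine sagde hej, men Jacob sagde halløj.")
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```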
huggingtweets/trevorthalacker
huggingtweets
2021-07-18T07:39:42Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/trevorthalacker/1626593979307/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1414806244536242178/hxQldoS0_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Trevor</div> <div style="text-align: center; font-size: 14px;">@trevorthalacker</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Trevor. | Data | Trevor | | --- | --- | | Tweets downloaded | 1792 | | Retweets | 123 | | Short tweets | 227 | | Tweets kept | 1442 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kmb92alm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @trevorthalacker's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37c7xs83) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37c7xs83/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/trevorthalacker') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sehandev/koelectra-long-qa
sehandev
2021-07-18T06:01:25Z
3
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model_index: - name: koelectra-long-qa results: - task: name: Question Answering type: question-answering --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-long-qa This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1 - Datasets 1.9.0 - Tokenizers 0.10.3
johnpaulbin/gpt2-skript-80-v3
johnpaulbin
2021-07-18T04:53:22Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
GPT-2 Skript, 80k lines (v3). Training loss: `0.594200`. 1.5 GB. Inference Colab: https://colab.research.google.com/drive/1uTAPLa1tuNXFpG0qVLSseMro6iU9-xNc
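The card gives no usage snippet; a hedged sketch with the standard text-generation pipeline (the prompt is an illustrative Skript event line, not taken from the training data):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="johnpaulbin/gpt2-skript-80-v3")
# Prompt with the start of a Skript event and let the model complete the section.
print(generator("on join:", max_length=60, num_return_sequences=1)[0]["generated_text"])
```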
flax-community/roberta-base-mr
flax-community
2021-07-17T15:30:40Z
28
1
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- widget: - text: "अध्यक्ष <mask> पवार आणि उपमुख्यमंत्री अजित पवार यांची भेट घेतली." - text: "मोठी बातमी! उद्या दुपारी <mask> वाजता जाहीर होणार दहावीचा निकाल" --- # RoBERTa base model for Marathi language (मराठी भाषा) Pretrained model on Marathi language using a masked language modeling (MLM) objective. RoBERTa was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). We trained a RoBERTa model for the Marathi language during the community week hosted by Huggingface 🤗 using JAX/Flax for NLP & CV. <img src="https://user-images.githubusercontent.com/15062408/126040902-ea8808db-ec30-4a3f-bf95-5d3b10d674e9.png" alt="huggingface-marathi-roberta" width="350" height="350" style="text-align: center"> ## Model description Marathi RoBERTa is a transformers model pretrained on a large corpus of Marathi data in a self-supervised fashion. ## Intended uses & limitations❗️ You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. We used this model to fine-tune on text classification for the iNLTK and IndicNLP news text classification problem statements. Since the Marathi mc4 dataset is built by scraping Marathi newspaper text, it will contain some biases which will also affect all fine-tuned versions of this model. ### How to use❓ You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='flax-community/roberta-base-mr') >>> unmasker("मोठी बातमी! उद्या दुपारी <mask> वाजता जाहीर होणार दहावीचा निकाल") [{'score': 0.057209037244319916,'sequence': 'मोठी बातमी! उद्या दुपारी आठ वाजता जाहीर होणार दहावीचा निकाल', 'token': 2226, 'token_str': 'आठ'}, {'score': 0.02796074189245701, 'sequence': 'मोठी बातमी! उद्या दुपारी २० वाजता जाहीर होणार दहावीचा निकाल', 'token': 987, 'token_str': '२०'}, {'score': 0.017235398292541504, 'sequence': 'मोठी बातमी! उद्या दुपारी नऊ वाजता जाहीर होणार दहावीचा निकाल', 'token': 4080, 'token_str': 'नऊ'}, {'score': 0.01691395975649357, 'sequence': 'मोठी बातमी! उद्या दुपारी २१ वाजता जाहीर होणार दहावीचा निकाल', 'token': 1944, 'token_str': '२१'}, {'score': 0.016252165660262108, 'sequence': 'मोठी बातमी! उद्या दुपारी ३ वाजता जाहीर होणार दहावीचा निकाल', 'token': 549, 'token_str': ' ३'}] ``` ## Training data 🏋🏻‍♂️ The RoBERTa Marathi model was pretrained on the `mr` subset of the C4 multilingual dataset: <br> <br> [C4 (Colossal Clean Crawled Corpus)](https://yknzhu.wixsite.com/mbweb), introduced by Raffel et al. in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://paperswithcode.com/paper/exploring-the-limits-of-transfer-learning). The dataset can be downloaded in a pre-processed form from [allennlp](https://github.com/allenai/allennlp/discussions/5056) or Huggingface's datasets library - [mc4 dataset](https://huggingface.co/datasets/mc4). The Marathi (`mr`) split consists of 14 billion tokens and 7.8 million docs, weighing ~70 GB of text. ## Data Cleaning 🧹 Though the initial `mc4` Marathi corpus is ~70 GB, data exploration showed that it contains docs from other languages, especially Thai, Chinese etc. So we had to clean the dataset before training the tokenizer and model.
Surprisingly, the results after cleaning the Marathi mc4 corpus data were: #### **Train set:** Clean docs count 1581396 out of 7774331. <br> **~20.34%** of the whole Marathi train split is actually Marathi. #### **Validation set** Clean docs count 1700 out of 7928. <br> **~19.90%** of the whole Marathi validation split is actually Marathi. ## Training procedure 👨🏻‍💻 ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) with a vocabulary size of 50265. The inputs of the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>`. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). ### Pretraining The model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores), i.e. **8 v3 TPU cores**, for 42K steps with a batch size of 128 and a sequence length of 128. The optimizer used is Adam with a learning rate of 3e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-8, a weight decay of 0.01, learning rate warmup for 1,000 steps and linear decay of the learning rate after. We tracked experiments and hyperparameter tuning on the Weights and Biases platform. Here is the link to the main dashboard: <br> [Link to Weights and Biases Dashboard for Marathi RoBERTa model](https://wandb.ai/nipunsadvilkar/roberta-base-mr/runs/19qtskbg?workspace=user-nipunsadvilkar) #### **Pretraining Results 📊** The RoBERTa model reached an **eval accuracy of 85.28%** around ~35K steps **with train loss at 0.6507 and eval loss at 0.6219**. ## Fine Tuning on downstream tasks We performed fine-tuning on downstream tasks. We used the following datasets for classification: 1. [IndicNLP Marathi news classification](https://github.com/ai4bharat-indicnlp/indicnlp_corpus#publicly-available-classification-datasets) 2. [iNLTK Marathi news headline classification](https://www.kaggle.com/disisbig/marathi-news-dataset) ### **Fine tuning on downstream task results (Segregated)** #### 1. [IndicNLP Marathi news classification](https://github.com/ai4bharat-indicnlp/indicnlp_corpus#publicly-available-classification-datasets) The IndicNLP Marathi news dataset consists of 3 classes - `['lifestyle', 'entertainment', 'sports']` - with the following document distribution across classes: | train | eval | test | -- | -- | -- | 9672 | 477 | 478 💯 Our Marathi RoBERTa **`roberta-base-mr` model outperformed both classifiers** mentioned in [Arora, G. (2020). iNLTK](https://www.semanticscholar.org/paper/iNLTK%3A-Natural-Language-Toolkit-for-Indic-Languages-Arora/5039ed9e100d3a1cbbc25a02c82f6ee181609e83/figure/3) and [Kunchukuttan, Anoop et al.
AI4Bharat-IndicNLP.](https://www.semanticscholar.org/paper/AI4Bharat-IndicNLP-Corpus%3A-Monolingual-Corpora-and-Kunchukuttan-Kakwani/7997d432925aff0ba05497d2893c09918298ca55/figure/4) Dataset | FT-W | FT-WC | INLP | iNLTK | **roberta-base-mr 🏆** -- | -- | -- | -- | -- | -- iNLTK Headlines | 83.06 | 81.65 | 89.92 | 92.4 | **97.48** **🤗 Huggingface Model hub repo:**<br> `roberta-base-mr` fine-tuned on the iNLTK Headlines classification dataset: [**`flax-community/mr-indicnlp-classifier`**](https://huggingface.co/flax-community/mr-indicnlp-classifier) 🧪 Fine-tuning experiment's Weights and Biases dashboard [link](https://wandb.ai/nipunsadvilkar/huggingface/runs/1242bike?workspace=user-nipunsadvilkar ) #### 2. [iNLTK Marathi news headline classification](https://www.kaggle.com/disisbig/marathi-news-dataset) This dataset consists of 3 classes - `['state', 'entertainment', 'sports']` - with the following document distribution across classes: | train | eval | test | -- | -- | -- | 9658 | 1210 | 1210 💯 Here as well, **`roberta-base-mr` outperformed the `iNLTK` Marathi news text classifier**. Dataset | iNLTK ULMFiT | **roberta-base-mr 🏆** -- | -- | -- iNLTK news dataset (kaggle) | 92.4 | **94.21** **🤗 Huggingface Model hub repo:**<br> `roberta-base-mr` fine-tuned on the iNLTK news classification dataset: [**`flax-community/mr-inltk-classifier`**](https://huggingface.co/flax-community/mr-inltk-classifier) Fine-tuning experiment's Weights and Biases dashboard [link](https://wandb.ai/nipunsadvilkar/huggingface/runs/2u5l9hon?workspace=user-nipunsadvilkar ) ## **Want to check how the above models generalise on real-world Marathi data?** Head to 🤗 Huggingface's spaces 🪐 to play with all three models: 1. Mask Language Modelling with the pretrained Marathi RoBERTa model: <br> [**`flax-community/roberta-base-mr`**](https://huggingface.co/flax-community/roberta-base-mr) 2. Marathi Headline classifier: <br> [**`flax-community/mr-indicnlp-classifier`**](https://huggingface.co/flax-community/mr-indicnlp-classifier) 3. Marathi news classifier: <br> [**`flax-community/mr-inltk-classifier`**](https://huggingface.co/flax-community/mr-inltk-classifier) ![alt text](https://huggingface.co/docs/assets/hub/icon-space.svg) [Streamlit app of the pretrained RoBERTa Marathi model on Huggingface Spaces](https://huggingface.co/spaces/flax-community/roberta-base-mr) ![image](https://user-images.githubusercontent.com/15062408/126040832-f5723875-b70f-4e2e-93ad-213ddbe6180d.png) ## Team Members - Nipun Sadvilkar [@nipunsadvilkar](https://github.com/nipunsadvilkar) - Haswanth Aekula [@hassiahk](https://github.com/hassiahk) ## Credits Huge thanks to Huggingface 🤗 & the Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to [@patil-suraj](https://github.com/patil-suraj) & [@patrickvonplaten](https://github.com/patrickvonplaten) for mentoring during the whole week. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large>
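To try the downstream classifiers named above rather than the MLM head, the standard text-classification pipeline should be enough. A hedged sketch using the IndicNLP headline classifier repo from this card; the example headline is the unmasked sentence from the fill-mask demo above, and the label names depend on the repo's config:

```python
from transformers import pipeline

# Headline classifier fine-tuned from roberta-base-mr on the IndicNLP Marathi news dataset
# (classes: lifestyle / entertainment / sports).
classifier = pipeline("text-classification", model="flax-community/mr-indicnlp-classifier")

print(classifier("मोठी बातमी! उद्या दुपारी आठ वाजता जाहीर होणार दहावीचा निकाल"))
```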
birgermoell/t5-base-swedish
birgermoell
2021-07-17T07:52:39Z
11
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "feature-extraction", "summarization", "translation", "sv", "dataset:oscar", "arxiv:1910.10683", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - sv datasets: - oscar tags: - summarization - translation license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/oscar) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) ## Model series This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge. ## Gpt models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki # Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish
birgermoell/swedish-gpt
birgermoell
2021-07-17T07:45:52Z
30
2
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "sv", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: sv widget: - text: "Jag är en svensk språkmodell." --- ## Model series This model is part of a series of models trained on TPU with Flax/Jax during the Huggingface Flax/Jax challenge. ## Gpt models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki ## Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish # GPT-svenska-wikipedia A Swedish GPT-2 style model trained using the Flax CLM pipeline on the Swedish part of the wiki40b dataset and the Oscar dataset. https://huggingface.co/datasets/wiki40b The model was trained for around 22600 steps (42 hours) as part of the Huggingface Jax/Flax challenge with the following loss and learning rate (Loss: 3.1715331077575684, Learning Rate: 0.0024816440418362617). The model could likely be trained for longer. ## Data cleaning and preprocessing The data was cleaned and preprocessed using the following script. Make sure to install the dependencies for beam_runner to make the dataset work. ```python from datasets import load_dataset def load_and_clean_wiki(): dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner', split="train") #dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner') dataset = dataset.remove_columns(['wikidata_id', 'version_id']) filtered_dataset = dataset.map(filter_wikipedia) # filtered_dataset[:3] # print(filtered_dataset[:3]) return filtered_dataset def filter_wikipedia(batch): batch["text"] = " ".join(batch["text"].split("\ _START_SECTION_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_ARTICLE_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_ARTICLE_\ ")) batch["text"] = " ".join(batch["text"].split("\ _START_PARAGRAPH_\ ")) batch["text"] = " ".join(batch["text"].split("_NEWLINE_")) batch["text"] = " ".join(batch["text"].split("\xa0")) return batch ``` ## Training script The following training script was used to train the model. ```bash ./run_clm_flax.py --output_dir="${MODEL_DIR}" --model_type="gpt2" --config_name="${MODEL_DIR}" --tokenizer_name="${MODEL_DIR}" --dataset_name="wiki40b" --dataset_config_name="sv" --do_train --do_eval --block_size="512" --per_device_train_batch_size="64" --per_device_eval_batch_size="64" --learning_rate="5e-3" --warmup_steps="1000" --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" --overwrite_output_dir --num_train_epochs="20" --logging_steps="500" --save_steps="1000" --eval_steps="2500" --push_to_hub ```
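The card above covers data cleaning and training only; for inference, a hedged sketch with the standard text-generation pipeline, reusing the widget prompt from the card's metadata:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="birgermoell/swedish-gpt")
# Continue the Swedish prompt used in the card's widget.
outputs = generator("Jag är en svensk språkmodell.", max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```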
flax-community/koclip
flax-community
2021-07-17T05:08:38Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# KoCLIP This repository includes ## Installation Create a virtual env and install `requirements.txt`. ``` pip install -r requirements.txt ``` For Google Cloud TPU VM please follow necessary installation steps here: [Pytorch on TPU VM](https://cloud.google.com/tpu/docs/pytorch-xla-ug-tpu-vm) [JAX/Flax on TPU VM](https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm)
BumBelDumBel/TRUMP
BumBelDumBel
2021-07-16T19:14:17Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
license: mit
tags:
- generated_from_trainer
model_index:
- name: TRUMP
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# TRUMP

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
huggingtweets/vishigondi
huggingtweets
2021-07-16T19:01:40Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/vishigondi/1626462038417/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1019963234793684992/LVdF4ah2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Vishi Gondi</div> <div style="text-align: center; font-size: 14px;">@vishigondi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Vishi Gondi. | Data | Vishi Gondi | | --- | --- | | Tweets downloaded | 1154 | | Retweets | 166 | | Short tweets | 21 | | Tweets kept | 967 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2txfnkwt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vishigondi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eshvn0h5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eshvn0h5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/vishigondi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BumBelDumBel/ZORK-AI-TEST
BumBelDumBel
2021-07-16T17:12:42Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
license: mit
tags:
- generated_from_trainer
model_index:
- name: ZORK-AI-TEST
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ZORK-AI-TEST

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
keshan/sinhala-gpt2-newswire
keshan
2021-07-16T15:46:36Z
5
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "sinhala", "si", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: si
tags:
- sinhala
- gpt2
pipeline_tag: text-generation
widget:
- text: "මම"
---

This is a fine-tuned version of keshan/sinhala-gpt2 trained on newswire articles. It was fine-tuned on ~12MB of data:

- Num examples = 8395
- Batch size = 8

It achieved a perplexity of 3.15.
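The card above reports training statistics but no usage snippet. A minimal sketch, assuming the standard causal-LM generation API; the prompt is the card's widget example and the sampling settings are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "keshan/sinhala-gpt2-newswire"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt taken from the card's widget example.
inputs = tokenizer("මම", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```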
flax-community/gpt2-medium-persian
flax-community
2021-07-16T13:01:08Z
364
9
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "gpt2", "text-generation", "fa", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: fa tags: - text-generation widget: - text: "در یک اتفاق شگفت انگیز، پژوهشگران" - text: "گرفتگی بینی در کودکان و به‌خصوص نوزادان باعث می‌شود" - text: "امیدواریم نوروز امسال سالی" --- # GPT2 Medium 4 Persian > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-persian/7560), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Team Members - [Mehrdad Farahani](huggingface.co/m3hrdadfi) - [Saied Alimoradi](https://discuss.huggingface.co/u/saied) - [M. Reza Zerehpoosh](huggingface.co/ironcladgeek) - [Hooman Sedghamiz](https://discuss.huggingface.co/u/hooman650) - [Mazeyar Moeini Feizabadi](https://discuss.huggingface.co/u/mazy1998) ## Dataset We used [Oscar](https://huggingface.co/datasets/oscar) dataset, which is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus. ## How To Use You can use this model directly with a pipeline for text generation. ```python from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian') model = GPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian') generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100}) generated_text = generator('در یک اتفاق شگفت انگیز، پژوهشگران') ``` For using Tensorflow import TFGPT2LMHeadModel instead of GPT2LMHeadModel. ## Demo ... SOON ## Evaluation ... SOON
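The card above notes that `TFGPT2LMHeadModel` can be swapped in for TensorFlow use. A minimal sketch of that variant, assuming TensorFlow weights are available for this checkpoint (the repo's tags list `tf`) and that TensorFlow is installed:

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt2-medium-persian")
model = TFGPT2LMHeadModel.from_pretrained("flax-community/gpt2-medium-persian")

# The pipeline detects the TensorFlow backend from the model object.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("در یک اتفاق شگفت انگیز، پژوهشگران", max_length=50))
```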
liam168/trans-opus-mt-en-zh
liam168
2021-07-16T04:17:11Z
446
29
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - zh tags: - translation widget: - text: "I like to study Data Science and Machine Learning." --- # liam168/trans-opus-mt-en-zh ## Model description * source group: English * target group: Chinese * model: transformer * source language(s): eng * target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant ## How to use ```python >>> from transformers import AutoModelWithLMHead,AutoTokenizer,pipeline >>> mode_name = 'liam168/trans-opus-mt-en-zh' >>> model = AutoModelWithLMHead.from_pretrained(mode_name) >>> tokenizer = AutoTokenizer.from_pretrained(mode_name) >>> translation = pipeline("translation_en_to_zh", model=model, tokenizer=tokenizer) >>> translation('I like to study Data Science and Machine Learning.', max_length=400) [{'translation_text': '我喜欢学习数据科学和机器学习'}] ``` ## Contact [email protected]
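`AutoModelWithLMHead`, used in the snippet above, is deprecated in recent `transformers` releases. An equivalent sketch with the sequence-to-sequence auto class (same checkpoint, same pipeline call):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

mode_name = "liam168/trans-opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(mode_name)
model = AutoModelForSeq2SeqLM.from_pretrained(mode_name)  # non-deprecated replacement for AutoModelWithLMHead

translation = pipeline("translation_en_to_zh", model=model, tokenizer=tokenizer)
print(translation("I like to study Data Science and Machine Learning.", max_length=400))
```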
liam168/trans-opus-mt-zh-en
liam168
2021-07-16T03:34:38Z
251
21
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
language:
- en
- zh
tags:
- translation
widget:
- text: "我喜欢学习数据科学和机器学习。"
---

# liam168/trans-opus-mt-zh-en

## Model description

* source group: Chinese
* target group: English
* model: transformer
* target language(s): eng

## How to use

```python
>>> from transformers import AutoModelWithLMHead,AutoTokenizer,pipeline
>>> mode_name = 'liam168/trans-opus-mt-zh-en'
>>> model = AutoModelWithLMHead.from_pretrained(mode_name)
>>> tokenizer = AutoTokenizer.from_pretrained(mode_name)
>>> translation = pipeline("translation_zh_to_en", model=model, tokenizer=tokenizer)
>>> translation('我喜欢学习数据科学和机器学习。', max_length=400)
[{'translation_text': 'I like to study data science and machine learning.'}]
```

## Contact

[email protected]
huggingtweets/gambsvns
huggingtweets
2021-07-15T21:50:46Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/gambsvns/1626385842515/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1415310065960198148/w9Yr9mLK_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">gãmbs</div> <div style="text-align: center; font-size: 14px;">@gambsvns</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from gãmbs. | Data | gãmbs | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 86 | | Short tweets | 308 | | Tweets kept | 2852 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wahjzcj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gambsvns's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1td3tcaf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1td3tcaf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gambsvns') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/bladeecity-robber0540
huggingtweets
2021-07-15T06:48:46Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/bladeecity-robber0540/1626331680252/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1406669132527976453/Sv0lEtmk_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/822229503212666880/L4UutyTM_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aim & Combat Ballerina</div> <div style="text-align: center; font-size: 14px;">@bladeecity-robber0540</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Aim & Combat Ballerina. | Data | Aim | Combat Ballerina | | --- | --- | --- | | Tweets downloaded | 1604 | 671 | | Retweets | 314 | 66 | | Short tweets | 487 | 303 | | Tweets kept | 803 | 302 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3uvtcfjv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bladeecity-robber0540's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36qst0l8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36qst0l8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bladeecity-robber0540') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/theisaiahw
huggingtweets
2021-07-14T21:05:53Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/theisaiahw/1626296749614/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1388820869762322434/v3h5S7mu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Isaiah Williams</div> <div style="text-align: center; font-size: 14px;">@theisaiahw</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Isaiah Williams. | Data | Isaiah Williams | | --- | --- | | Tweets downloaded | 620 | | Retweets | 65 | | Short tweets | 72 | | Tweets kept | 483 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/336gn9be/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theisaiahw's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ohqpafvm) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ohqpafvm/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/theisaiahw') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
YusufSahin99/IFIS_ZORK_AI_FANTASY
YusufSahin99
2021-07-14T13:18:10Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
license: mit
tags:
- generated_from_trainer
model_index:
- name: IFIS_ZORK_AI_FANTASY
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# IFIS_ZORK_AI_FANTASY

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
ehdwns1516/klue-roberta-base-kornli
ehdwns1516
2021-07-14T08:11:08Z
19
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# klue-roberta-base-kornli

* This model was trained on a Korean dataset.
* Input a premise sentence and a hypothesis sentence.
* You can use English, but don't expect accuracy.
* If the context is longer than 1200 characters, the context may be cut in the middle and the result may not come out well.

klue-roberta-base-kornli DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)

klue-roberta-base-kornli API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/klue-roberta-base_kornli)

## Overview

Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base)

Language: Korean

Training data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)

Eval data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)

Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/klue-roberta-base_finetunning_ex)

## Usage

## In Transformers

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-kornli")

classifier = pipeline(
    "text-classification",
    model="ehdwns1516/klue-roberta-base-kornli",
    return_all_scores=True,
)

premise = "your premise"
hypothesis = "your hypothesis"

result = dict()
result[0] = classifier(premise + tokenizer.sep_token + hypothesis)[0]
```
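The snippet above returns a list of scores for every label. A small follow-on sketch for picking the top label, reusing the `classifier`, `tokenizer`, `premise`, and `hypothesis` objects defined there; note that the mapping of the raw label names to entailment/neutral/contradiction depends on the checkpoint's config and is not assumed here:

```python
# Pick the highest-scoring label from the returned list of {label, score} dicts.
scores = classifier(premise + tokenizer.sep_token + hypothesis)[0]
best = max(scores, key=lambda s: s["score"])
print(best["label"], round(best["score"], 4))
```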
cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa
cstorm125
2021-07-14T07:45:06Z
5
0
transformers
[ "transformers", "pytorch", "deberta", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- widget: - text: "สวนกุหลาบเป็นโรงเรียนอะไร" context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย" --- # wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa Finetuning `airesearch/wangchan-deberta_v1-base-wiki-20210520-news-spm` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=wangchan-deberta_v1-base-wiki-20210520-news-spm CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --revision mlm@ckp-41100 \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --model_max_length 400 \ --pad_on_right \ --fp16 \ --use_auth_token ```
airesearch/xlm-roberta-base-finetune-qa
airesearch
2021-07-14T07:13:00Z
8
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- widget: - text: "สวนกุหลาบเป็นโรงเรียนอะไร" context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย" --- # xlm-roberta-base-finetune-qa Finetuning `xlm-roberta-base` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Train with: ``` export WANDB_PROJECT=wangchanberta-qa export MODEL_NAME=xlm-roberta-base python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --pad_on_right \ --fp16 ```
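The card above gives the training command but no inference snippet. A minimal sketch, assuming the standard question-answering pipeline; the context below is a shortened version of the widget example, kept only for illustration:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="airesearch/xlm-roberta-base-finetune-qa")

question = "สวนกุหลาบเป็นโรงเรียนอะไร"
# Shortened from the widget context above; use the full passage for a faithful test.
context = "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"

print(qa(question=question, context=context))  # dict with 'answer', 'score', 'start', 'end'
```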
airesearch/bert-base-multilingual-cased-finetune-qa
airesearch
2021-07-14T05:50:52Z
41
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- widget: - text: "สวนกุหลาบเป็นโรงเรียนอะไร" context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย" --- # bert-base-multilingual-cased Finetuning `bert-base-multilingual-cased` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=bert-base-multilingual-cased python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --pad_on_right \ --fp16 ```
andi611/roberta-base-ner-conll2003
andi611
2021-07-14T00:25:37Z
4
1
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - conll2003 model_index: - name: roberta-base-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0814 - eval_precision: 0.9101 - eval_recall: 0.9336 - eval_f1: 0.9217 - eval_accuracy: 0.9799 - eval_runtime: 10.2964 - eval_samples_per_second: 315.646 - eval_steps_per_second: 39.529 - epoch: 1.14 - step: 500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
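The card above reports evaluation metrics but no usage snippet. A minimal sketch, assuming the standard token-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="andi611/roberta-base-ner-conll2003",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```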
huggingtweets/jplatzhalter
huggingtweets
2021-07-13T22:13:16Z
9
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/jplatzhalter/1626214256716/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1204103314733821954/O_QCiMdI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Julia Platz-Halter</div> <div style="text-align: center; font-size: 14px;">@jplatzhalter</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Julia Platz-Halter. | Data | Julia Platz-Halter | | --- | --- | | Tweets downloaded | 3235 | | Retweets | 270 | | Short tweets | 373 | | Tweets kept | 2592 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2z39jb5g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jplatzhalter's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1deih6g9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1deih6g9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jplatzhalter') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
YusufSahin99/Zork_AI_SciFi
YusufSahin99
2021-07-13T14:58:01Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
license: mit
tags:
- generated_from_trainer
model_index:
- name: Zork_AI_SciFi
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Zork_AI_SciFi

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2
AIDA-UPM
2021-07-13T14:12:45Z
292
12
transformers
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "multilingual", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity language: "multilingual" tags: - feature-extraction - sentence-similarity - transformers - multilingual --- # mstsb-paraphrase-multilingual-mpnet-base-v2 This is a fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` from [sentence-transformers](https://www.SBERT.net) model with [Semantic Textual Similarity Benchmark](http://ixa2.si.ehu.eus/stswiki/index.php/Main_Page) extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering, semantic search and measuring the similarity between two sentences. <!--- Describe your model here --> This model is fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. For mantaining data quality the sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available at [GitHub](https://github.com/Huertas97/Multilingual-STSB). The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below). <!-- ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer # It support several languages sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"] # The pooling technique is automatically detected (mean pooling) model = SentenceTransformer('mstsb-paraphrase-multilingual-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` --> ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # We should define the proper pooling function: Mean pooling # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2') model = AutoModel.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> Check the test results in the Semantic Textual Similarity Tasks. The 15 languages available at the [Multilingual STSB](https://github.com/Huertas97/Multilingual-STSB) have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., Ar-Ar, Es-Es), while cross-lingual tasks have two sentences, each in a different language being one of them English (e.g., en-ar, en-es). Here we compare the average multilingual semantic textual similairty capabilities between the `paraphrase-multilingual-mpnet-base-v2` based model and the `mstsb-paraphrase-multilingual-mpnet-base-v2` fine-tuned model across the 31 tasks. It is worth noting that both models are multilingual, but the second model is adjusted with multilingual data for semantic similarity. The average of correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient. | Model | Average Spearman Cosine Test | |:---------------------------------------------:|:------------------------------:| | mstsb-paraphrase-multilingual-mpnet-base-v2 | 0.835890 | | paraphrase-multilingual-mpnet-base-v2 | 0.818896 | <br> The following tables breakdown the performance of `mstsb-paraphrase-multilingual-mpnet-base-v2` according to the different tasks. For the sake of readability tasks have been splitted into monolingual and cross-lingual tasks. | Monolingual Task | Pearson Cosine test | Spearman Cosine test | |:------------------:|:---------------------:|:-----------------------:| | en;en | 0.868048310692506 | 0.8740170943535747 | | ar;ar | 0.8267139454193487 | 0.8284459741532022 | | cs;cs | 0.8466821720942157 | 0.8485417688803879 | | de;de | 0.8517285961812183 | 0.8557680051557893 | | es;es | 0.8519185309064691 | 0.8552243211580456 | | fr;fr | 0.8430951067985064 | 0.8466614534379704 | | hi;hi | 0.8178258630578092 | 0.8176462079184331 | | it;it | 0.8475909574305637 | 0.8494216064459076 | | ja;ja | 0.8435588859386477 | 0.8456031494178619 | | nl;nl | 0.8486765104527032 | 0.8520856765262531 | | pl;pl | 0.8407840177883407 | 0.8443070467300299 | | pt;pt | 0.8534880178249296 | 0.8578544068829622 | | ru;ru | 0.8390897585455678 | 0.8423041443534423 | | tr;tr | 0.8382125451820572 | 0.8421587450058385 | | zh-CN;zh-CN | 0.826233678946644 | 0.8248515460782744 | | zh-TW;zh-TW | 0.8242683809675422 | 0.8235506799952028 | <br> | Cross-lingual Task | Pearson Cosine test | Spearman Cosine test | |:--------------------:|:---------------------:|:-----------------------:| | en;ar | 0.7990830340462535 | 0.7956792016468148 | | en;cs | 0.8381274879061265 | 0.8388713450024455 | | en;de | 0.8414439600928739 | 0.8441971698649943 | | en;es | 0.8442337511356952 | 0.8445035292903559 | | en;fr | 0.8378437644605063 | 0.8387903367907733 | | en;hi | 0.7951955086055527 | 0.7905052217683244 | | en;it | 0.8415686372978766 | 0.8419480899107785 | | en;ja | 0.8094306665283388 | 0.8032512280936449 | | en;nl | 0.8389526140129767 | 0.8409310421803277 | | en;pl | 0.8261309163979578 | 0.825976253023656 | | en;pt | 0.8475546209070765 | 0.8506606391790897 | | en;ru | 0.8248514914263723 | 0.8224871183202255 | | en;tr | 0.8191803661207868 | 0.8194200775744044 | | en;zh-CN | 0.8147678083378249 | 
0.8102089470690433 | | en;zh-TW | 0.8107272160374955 | 0.8056129680510944 | ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 687 with parameters: ``` {'batch_size': 132, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 2, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 140, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
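The usage section of the card above stops at computing `sentence_embeddings`. A small follow-on sketch for the similarity step the model is intended for, reusing that tensor; normalizing first so the dot products are cosine similarities:

```python
# Continuing from the card's usage example: compare the input sentences by cosine similarity.
import torch.nn.functional as F

emb = F.normalize(sentence_embeddings, p=2, dim=1)
similarity = emb @ emb.T  # pairwise cosine similarities between the encoded sentences
print(similarity)
```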
lewtun/dummy-translation
lewtun
2021-07-13T12:43:13Z
3
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
tags:
- generated_from_trainer
model_index:
- name: dummy-translation
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dummy-translation

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
nateraw/test_model_a
nateraw
2021-07-13T04:52:00Z
128
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:image_folder", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - image_folder model_index: - name: test_model_a results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model_a This model is a fine-tuned version of [lysandre/tiny-vit-random](https://huggingface.co/lysandre/tiny-vit-random) on the image_folder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 40 ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.1.dev0 - Tokenizers 0.10.3
patrickvonplaten/gpt2-als-demo
patrickvonplaten
2021-07-12T14:20:19Z
8
0
transformers
[ "transformers", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# roberta-base-als-demo **roberta-base-als-demo** is a model trained by Patrick von Platen to demonstrate how to train a roberta-base model from scratch on the Alemannic language. This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Useful links - [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) - [Model Repository](https://huggingface.co/flax-community/roberta-base-als-demo)
jaimin/Gujarati-Model
jaimin
2021-07-12T13:23:21Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jaimin/Gujarati-Model")
model = AutoModel.from_pretrained("jaimin/Gujarati-Model")
```
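Since the checkpoint is exposed for feature extraction, a small follow-on sketch (not from the original card) of turning the encoder output into a single sentence vector via mean pooling; it reuses the `tokenizer` and `model` from the snippet above, and the Gujarati sample text is only an illustration:

```python
import torch

inputs = tokenizer("ગુજરાતી ભાષા", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state over the non-padding tokens.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(sentence_embedding.shape)
```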
Andrija/RobertaFastBPE
Andrija
2021-07-12T11:11:21Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained('Andrija/RobertaFastBPE', bos_token="<s>", eos_token="</s>")

encoded = tokenizer('Stručnjaci te bolnice, predvođeni dr Alisom Lim')
# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

tokenizer.decode(encoded['input_ids'])
# '<s>Stručnjaci te bolnice, predvođeni dr Alisom Lim</s>'
```
huggingtweets/pontifex_es
huggingtweets
2021-07-12T08:26:39Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/pontifex_es/1626078305422/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/507819548834148352/jyx1JOS-_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Papa Francisco</div> <div style="text-align: center; font-size: 14px;">@pontifex_es</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Papa Francisco. | Data | Papa Francisco | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 0 | | Short tweets | 45 | | Tweets kept | 3205 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2a8o5bwd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pontifex_es's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3183nmsb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3183nmsb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/pontifex_es') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Vivek/flax-gpt2-common-sense-reasoning
Vivek
2021-07-12T04:19:26Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
This model tests the common-sense reasoning of a GPT-2 model: the aim is to assess how well it adapts to datasets that require not only large models but also a degree of common sense.
Littlejohn/analisis_sentimientos
Littlejohn
2021-07-12T00:22:27Z
11
0
transformers
[ "transformers", "text-classification", "en", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
language:
- en
pipeline_tag: text-classification
---

# bert-base-cased-sentiment

This is a BERT model (bert-base-cased) fine-tuned for two-class sentiment analysis. The sentiment is defined only as positive or negative, depending on the sentence provided.

## Training data

The dataset used to train the model is a collection of Amazon reviews, which can be downloaded from the original author on Kaggle: [Adam Bittlingmayer](https://www.kaggle.com/bittlingmayer/amazonreviews), Amazon Reviews for Sentiment Analysis. Only 40,000 sentences were used, and only the first 100 words were kept to form each sentence.

## Accuracy

The fine-tuned model was evaluated on 3 test sets to measure its accuracy.

- The first test used a dataset of hotel reviews

| Accuracy |
| -------- |
| 95% |

- The second test used a dataset of food reviews

| Accuracy |
| -------- |
| 88% |

- The third test used a dataset of general sentiment

| Accuracy |
| -------- |
| 65% |

## Contact

Contact via GitHub: [Murdoocc7](https://github.com/murdoocc)
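The card above does not include a usage snippet. A minimal sketch, assuming the standard text-classification pipeline and that model weights are actually available in this repo (the listed tags do not name a framework, so this is an assumption); the review sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Littlejohn/analisis_sentimientos")
print(classifier("The hotel was clean and the staff were very friendly."))
```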
CarlosPR/mt5-spanish-memmories-analysis
CarlosPR
2021-07-11T15:11:55Z
8
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
# mt5-spanish-memmories-analysis

This is a work in progress. This model is still only an initial checkpoint that I will be improving over the coming months.

The goal is to provide a model able to, using a mixture of mT5 tasks, understand written memories and provide a useful interaction for people with Alzheimer's, or for people like my own grandfather, who wrote his memories but whose book now just sits on the shelf, so that this model can make those memories seem "alive".

I will soon upload (if it is not uploaded already) a **Google Colaboratory notebook with a visual app** that, using this model, provides all the needed and wanted interaction through an easy-to-use interface.

**APP LINK (it will always contain the latest version):** https://drive.google.com/drive/folders/1ewGcxxCYHHwhHhWtGlLiryZfV8wEAaBa?usp=sharing

- Download the "memorium" folder from the link and upload it to Google Drive without placing it inside any other folder (directly under "My Drive").
- You can then open the app, found inside the "memorium" folder under the name "APP-Memorium" (the name may also include a version indicator).
- If double-clicking the app file does not open it, right-click the file, select "Open with", then "Connect more apps", and choose Colaboratory (you will be asked to install it). Once the installation is complete (approximate time: 2 minutes), close the installation window and return to the folder containing the app file, which from then on can be opened by double-clicking.
- Memories can be added in the "perfiles" folder, as explained in the app's "crear perfil" (create profile) section.
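Since the checkpoint is an mT5 text2text model, a minimal loading sketch (not from the original card); the exact task prefixes or prompts the checkpoint expects are not documented, so only the generic call shape is shown and the Spanish input sentence is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "CarlosPR/mt5-spanish-memmories-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Generic text2text call; the checkpoint's expected input format is an assumption.
inputs = tokenizer("Mi abuelo escribió sus memorias en 1950.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```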
nateraw/donut-or-bagel
nateraw
2021-07-10T19:54:49Z
71
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: donut-or-bagel results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9375 --- # donut-or-bagel Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bagel ![bagel](images/bagel.jpg) #### donut ![donut](images/donut.jpg)
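A minimal inference sketch with the image-classification pipeline; the image path below is illustrative (any local image path or URL works):

```python
from transformers import pipeline

# Classify an image as "bagel" or "donut".
classifier = pipeline("image-classification", model="nateraw/donut-or-bagel")
print(classifier("images/bagel.jpg"))  # illustrative path — use your own image
```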
huggingtweets/averagesmasher
huggingtweets
2021-07-10T13:47:30Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/averagesmasher/1625924846625/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1368753714568327168/oh6BSjqX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">AverageVermontSmasher</div> <div style="text-align: center; font-size: 14px;">@averagesmasher</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from AverageVermontSmasher. | Data | AverageVermontSmasher | | --- | --- | | Tweets downloaded | 41 | | Retweets | 0 | | Short tweets | 2 | | Tweets kept | 39 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/auyr340s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @averagesmasher's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qnfjchi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qnfjchi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/averagesmasher') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sourabharsh/wav2vec2_10july
sourabharsh
2021-07-10T08:25:13Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "de", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: de datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 German by Jonatas Grosman results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice de type: common_voice args: de metrics: - name: Test WER type: wer value: 10.55 - name: Test CER type: cer value: 2.81 ---
huggingtweets/the_robisho
huggingtweets
2021-07-10T03:40:16Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/the_robisho/1625888412499/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1311895700582666243/6N1xGNV9_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nick Landback</div> <div style="text-align: center; font-size: 14px;">@the_robisho</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nick Landback. | Data | Nick Landback | | --- | --- | | Tweets downloaded | 3205 | | Retweets | 1018 | | Short tweets | 21 | | Tweets kept | 2166 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bbz9lc2n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_robisho's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gh2qryi3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gh2qryi3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/the_robisho') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/i_apx_86
huggingtweets
2021-07-10T03:25:37Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/i_apx_86/1625887532973/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1404915017703575558/05H2noyT_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">🍮CC🍮</div> <div style="text-align: center; font-size: 14px;">@i_apx_86</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 🍮CC🍮. | Data | 🍮CC🍮 | | --- | --- | | Tweets downloaded | 701 | | Retweets | 391 | | Short tweets | 22 | | Tweets kept | 288 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xwg9l0v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @i_apx_86's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/11srzptq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/11srzptq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/i_apx_86') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
shahukareem/dhivehi-roberta-base
shahukareem
2021-07-10T00:19:12Z
4
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "dv", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: dv tags: - dv - roberta widget: - text: "<mask> މާލެ އަކީ ދިވެހިރާއްޖޭގެ" --- # Dhivehi Roberta Base - Oscar ## Description RoBERTa pretrained from scratch using the JAX/Flax backend on the Dhivehi subset of the OSCAR corpus only.
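A minimal fill-mask sketch, reusing the widget example from this card:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="shahukareem/dhivehi-roberta-base")
print(unmasker("<mask> މާލެ އަކީ ދިވެހިރާއްޖޭގެ"))
```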
huggingtweets/marxhaunting
huggingtweets
2021-07-09T22:04:38Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/marxhaunting/1625868274804/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1323823559182045184/Vqrrga8t_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Karl Marx</div> <div style="text-align: center; font-size: 14px;">@marxhaunting</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Karl Marx. | Data | Karl Marx | | --- | --- | | Tweets downloaded | 1287 | | Retweets | 16 | | Short tweets | 25 | | Tweets kept | 1246 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1zcjng5j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marxhaunting's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nimlh0s) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nimlh0s/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/marxhaunting') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/freudotheism
huggingtweets
2021-07-09T21:54:33Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/freudotheism/1625867628365/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1412918415703019521/J2TQHTDo_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Evelyn🪶🇰🇵</div> <div style="text-align: center; font-size: 14px;">@freudotheism</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Evelyn🪶🇰🇵. | Data | Evelyn🪶🇰🇵 | | --- | --- | | Tweets downloaded | 3231 | | Retweets | 333 | | Short tweets | 968 | | Tweets kept | 1930 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rbzyyts/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @freudotheism's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/elt06ed5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/elt06ed5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/freudotheism') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gagan3012/k2t-test3
gagan3012
2021-07-09T19:57:47Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "keytotext", "k2t", "Keywords to Sentences", "en", "dataset:WebNLG", "dataset:Dart", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: "en" thumbnail: "Keywords to Sentences" tags: - keytotext - k2t - Keywords to Sentences license: "MIT" datasets: - WebNLG - Dart metrics: - NLG model-index: - name: k2t-test3 --- #keytotext [![pypi Version](https://img.shields.io/pypi/v/keytotext.svg?logo=pypi&logoColor=white)](https://pypi.org/project/keytotext/) [![Downloads](https://static.pepy.tech/personalized-badge/keytotext?period=total&units=none&left_color=grey&right_color=orange&left_text=Pip%20Downloads)](https://pepy.tech/project/keytotext) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/notebooks/K2T.ipynb) [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) [![API Call](https://img.shields.io/badge/-FastAPI-red?logo=fastapi&labelColor=white)](https://github.com/gagan3012/keytotext#api) [![Docker Call](https://img.shields.io/badge/-Docker%20Image-blue?logo=docker&labelColor=white)](https://hub.docker.com/r/gagan30/keytotext) [![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97-Models%20on%20Hub-yellow)](https://huggingface.co/models?filter=keytotext) [![Documentation Status](https://readthedocs.org/projects/keytotext/badge/?version=latest)](https://keytotext.readthedocs.io/en/latest/?badge=latest) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) ![keytotext](https://socialify.git.ci/gagan3012/keytotext/image?description=1&forks=1&language=1&owner=1&stargazers=1&theme=Light) Idea is to build a model which will take keywords as inputs and generate sentences as outputs. Potential use case can include: - Marketing - Search Engine Optimization - Topic generation etc. - Fine tuning of topic modeling models
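A minimal keywords-to-sentence generation sketch; joining the keywords with spaces is an assumption, since the exact input format the checkpoint expects is not documented in this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gagan3012/k2t-test3")
model = AutoModelForSeq2SeqLM.from_pretrained("gagan3012/k2t-test3")

# Space-separated keywords are an assumption about the expected input format.
keywords = ["India", "wedding", "celebration"]
inputs = tokenizer(" ".join(keywords), return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```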
flax-community/t5-covid-qa
flax-community
2021-07-09T19:03:44Z
1
0
null
[ "arxiv:2002.08910", "region:us" ]
null
2022-03-02T23:29:05Z
# Covid19 Related Question Answering (Closed book question answering) In 2020, COVID-19 which is caused by a coronavirus called SARS-CoV-2 took over the world. It touched the lives of many people and caused a lot of hardship for humanity. There are still many questions in regards to COVID-19 and it is often difficult to get the right answers. The aim of this project is to finetune models for closed book question answering. In closed-book QA, we feed the model a question *without any context or access to external knowledge* and train it to predict the answer. Since the model doesn't receive any context, the primary way it can learn to answer these questions is based on the "knowledge" it obtained during pre-training [[1]](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb#scrollTo=zSeyoqE7WMwu) [[2]](https://arxiv.org/abs/2002.08910). The main goals of this project are: 1. Train a model for question answering in regards to COVID-19 2. Release the top performing models for further research and enhancement 3. Release all of the preprocessing and postprocessing scripts and findings for future research. ## TO DO LIST: - [x] Team members met and the following was discussed: - Data preparation script is prepared that mixes CORD-19 and Pubmed. - Agreed to finalize the training scripts by 9pm PDT 7/9/2021. - Tokenizer is now trained. - [ ] Setup the pretraining script - [ ] Prepare the finetuning tasks inspired from [T5 Trivia Colab](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb) - What datasets we want to go with? - [Covid-QA](https://huggingface.co/datasets/covid_qa_deepset) (Maybe as test set?) - [Trivia](https://huggingface.co/datasets/covid_qa_deepset) - [CDC-QA](https://www.cdc.gov/coronavirus/2019-ncov/faq.html) (We can scrape quickly using beautiful soup or something) - [More Medical Datasets](https://aclanthology.org/2020.findings-emnlp.289.pdf) (See the dataset section for inspiratio) ## 1. Model We will be using T5 model. ## 2. Datasets The following datasets would be used for finetuning the model. Note that the last dataset is optional and the model is evaluated only using Covid-QA. For **Intermediate Pre-Training**: 1. [CORD-19](https://allenai.org/data/cord-19) For **Fine-Tuning** : 1. [Covid-QA](https://huggingface.co/datasets/covid_qa_deepset) 2. [CDC-QA](https://www.cdc.gov/coronavirus/2019-ncov/faq.html) 4. Optional - [Trivia-QA](https://nlp.cs.washington.edu/triviaqa/) ## 3. Training Scripts We can make use of : 1. [For preprocessing and mixing datasets](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb#:~:text=In%20this%20notebook%2C%20we&#39;ll,it%20to%20predict%20the%20answer.) 2. [For T5 training](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_flax_t5.py) ## 4. Additional Reading - [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/pdf/2002.08910.pdf)
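To make the closed-book setup concrete, here is a sketch of the input format, using `t5-small` as a stand-in checkpoint since this project has not released fine-tuned weights yet:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "t5-small" is a placeholder; answers will not be meaningful until a
# COVID-19 QA checkpoint is actually trained and released.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Closed-book setup: the model sees only the question, with no supporting context.
inputs = tokenizer("question: How does SARS-CoV-2 spread?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```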
huggingtweets/leduans1
huggingtweets
2021-07-09T18:01:11Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/leduans1/1625853639603/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1400417811407814659/XYjQArW4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Comrade Based</div> <div style="text-align: center; font-size: 14px;">@leduans1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Comrade Based. | Data | Comrade Based | | --- | --- | | Tweets downloaded | 536 | | Retweets | 9 | | Short tweets | 176 | | Tweets kept | 351 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xffg6kj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @leduans1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g7o54j4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g7o54j4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/leduans1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
flax-community/robit-roberta-base-it
flax-community
2021-07-09T16:47:58Z
8
1
transformers
[ "transformers", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RobIt **RobIt** is a RoBERTa-base model for Italian. It has been trained from scratch on the Italian portion of the OSCAR dataset using [Flax](https://github.com/google/flax), including training scripts. This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Team members - Prateek Agrawal (prateekagrawal) - Tanay Mehta (yotanay) - Shreya Gupta (Sheyz-max) - Ruchi Bhatia (ruchi798) ## Dataset : [OSCAR](https://huggingface.co/datasets/oscar) - config : **unshuffled_deduplicated_it** - Size of downloaded dataset files: **26637.62 MB** - Size of the generated dataset: **70661.48 MB** - Total amount of disk used: **97299.10 MB** ## Useful links - [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Community Week thread](https://discuss.huggingface.co/t/robit-pretrain-roberta-base-from-scratch-in-italian/7564) - [Community Week channel](https://discord.gg/NTyQNUNs) - [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) - [Model Repository](https://huggingface.co/flax-community/robit-roberta-base-it/)
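A minimal fill-mask sketch; loading with `from_flax=True` is an assumption based on the repository shipping Flax weights, and it requires the `flax` package to be installed:

```python
from transformers import AutoTokenizer, RobertaForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("flax-community/robit-roberta-base-it")
model = RobertaForMaskedLM.from_pretrained(
    "flax-community/robit-roberta-base-it", from_flax=True  # convert the Flax weights on the fly
)

unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("Roma è la <mask> d'Italia."))
```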
jephthah/dfjgidfhj
jephthah
2021-07-09T12:39:42Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://natureecoevocommunity.nature.com/users/123movies-hd-watch-hitman-s-wife-s-bodyguard-2021-full-movie-online https://natureecoevocommunity.nature.com/users/123movies-hd-watch-a-quiet-place-part-2-2021-full-movie-online-free-1d44a4a0-bbe0-4b52-a56c-c86e7ce72c1c https://natureecoevocommunity.nature.com/users/123movies-hd-watch-a-quiet-place-part-2-2021-full-movie-online-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-the-conjuring-3-2021-full-movie-online-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-luca-2021-full-movie-online-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-space-jam-2-a-new-legacy-2021-full-movie-online-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-cruella-2021-full-movie-online-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-the-forever-purge-2021-full-movie-online-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-the-boss-baby-2-family-business-2021-full-movie-online-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-fast-and-furious-9-2021-online-full-movie-free-reddit https://natureecoevocommunity.nature.com/users/123movies-hd-watch-black-widow-2021-online-full-movie-free-reddit
flax-community/roberta-base-als
flax-community
2021-07-09T12:18:18Z
4
0
transformers
[ "transformers", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
This project pretrains a [`roberta-base`](https://huggingface.co/roberta-base) on the *Alemannic* (`als`) data subset of the [OSCAR](https://oscar-corpus.com/) corpus in JAX/Flax. We will be using the masked-language modeling loss for pretraining.
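A quick way to inspect the pretraining corpus; the config name follows OSCAR's `unshuffled_deduplicated_<lang>` convention, with `als` assumed for Alemannic:

```python
from datasets import load_dataset

# Load the Alemannic subset of OSCAR (config name assumed from OSCAR's naming scheme).
als = load_dataset("oscar", "unshuffled_deduplicated_als", split="train")
print(als[0]["text"][:200])
```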
flax-community/roberta-base-als-demo
flax-community
2021-07-09T12:18:07Z
5
0
transformers
[ "transformers", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# roberta-base-als-demo **roberta-base-als-demo** is a model trained by Patrick von Platen to demonstrate how to train a roberta-base model from scratch on the Alemannic language. This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## Useful links - [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) - [Model Repository](https://huggingface.co/flax-community/roberta-base-als-demo)
Alireza1044/bert_classification_lm
Alireza1044
2021-07-09T08:50:58Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
A simple model trained on dialogues of characters in the NBC series `The Office`. The model performs binary classification between `Michael Scott` and `Dwight Schrute`'s dialogues; a usage sketch follows the label table below. <style type="text/css"> .tg {border-collapse:collapse;border-spacing:0;} .tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; overflow:hidden;padding:10px 5px;word-break:normal;} .tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;} .tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top} </style> <table class="tg"> <thead> <tr> <th class="tg-c3ow" colspan="2">Label Definitions</th> </tr> </thead> <tbody> <tr> <td class="tg-c3ow">Label 0</td> <td class="tg-c3ow">Michael</td> </tr> <tr> <td class="tg-c3ow">Label 1</td> <td class="tg-c3ow">Dwight</td> </tr> </tbody> </table>
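A minimal classification sketch; the exact label strings returned depend on the checkpoint's config (the table above maps label 0 to Michael and label 1 to Dwight):

```python
from transformers import pipeline

# Binary dialogue classification: Michael Scott vs. Dwight Schrute.
classifier = pipeline("text-classification", model="Alireza1044/bert_classification_lm")
print(classifier("Bears. Beets. Battlestar Galactica."))
```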
ThomasNLG/t5-qg_webnlg_synth-en
ThomasNLG
2021-07-09T07:45:44Z
259
2
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "qa", "question", "generation", "SQuAD", "data2text", "metric", "nlg", "t5-small", "en", "dataset:squad_v2", "arxiv:2104.07555", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en tags: - qa - question - generation - SQuAD - data2text - metric - nlg - t5-small license: mit datasets: - squad_v2 model-index: - name: t5-qg_webnlg_synth-en results: - task: name: Data Question Generation type: Text To Text Generation widget: - text: "The Eagle </s> name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]" --- # t5-qg_webnlg_synth-en ## Model description This model is a *Data Question Generation* model based on T5-small, that generates questions, given a structured table as input and the conditioned answer. It is actually a component of [QuestEval](https://github.com/ThomasScialom/QuestEval) metric but can be used independently as it is, for QG only. ## How to use ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-qg_webnlg_synth-en") model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-qg_webnlg_synth-en") ``` You can play with the model using the inference API, the text input format should follow this template (accordingly to the training stage of the model): `text_input = "{ANSWER} </s> {CONTEXT}"` where `CONTEXT is a structured table that is linearised this way: `CONTEXT = "name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"` ## Training data The model was trained on synthetic data as described in [Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation](https://arxiv.org/abs/2104.07555). ### Citation info ```bibtex @article{rebuffel2021data, title={Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation}, author={Rebuffel, Cl{\'e}ment and Scialom, Thomas and Soulier, Laure and Piwowarski, Benjamin and Lamprier, Sylvain and Staiano, Jacopo and Scoutheeten, Geoffrey and Gallinari, Patrick}, journal={arXiv preprint arXiv:2104.07555}, year={2021} } ```
ThomasNLG/t5-qa_webnlg_synth-en
ThomasNLG
2021-07-09T07:45:27Z
260
1
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "qa", "question", "answering", "SQuAD", "data2text", "metric", "nlg", "t5-small", "en", "dataset:squad_v2", "arxiv:2104.07555", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en tags: - qa - question - answering - SQuAD - data2text - metric - nlg - t5-small license: mit datasets: - squad_v2 model-index: - name: t5-qa_webnlg_synth-en results: - task: name: Data Question Answering type: extractive-qa widget: - text: "What is the food type at The Eagle? </s> name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]" --- # t5-qa_webnlg_synth-en ## Model description This model is a *Data Question Answering* model based on T5-small, that answers questions given a structured table as input. It is actually a component of [QuestEval](https://github.com/ThomasScialom/QuestEval) metric but can be used independently as it is, for QA only. ## How to use ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-qa_webnlg_synth-en") model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-qa_webnlg_synth-en") ``` You can play with the model using the inference API, the text input format should follow this template (accordingly to the training stage of the model): `text_input = "{QUESTION} </s> {CONTEXT}"` where `CONTEXT` is a structured table that is linearised this way: `CONTEXT = "name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"` ## Training data The model was trained on synthetic data as described in [Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation](https://arxiv.org/abs/2104.07555). ### Citation info ```bibtex @article{rebuffel2021data, title={Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation}, author={Rebuffel, Cl{\'e}ment and Scialom, Thomas and Soulier, Laure and Piwowarski, Benjamin and Lamprier, Sylvain and Staiano, Jacopo and Scoutheeten, Geoffrey and Gallinari, Patrick}, journal={arXiv preprint arXiv:2104.07555}, year={2021} } ```
mboth/distil-eng-quora-sentence
mboth
2021-07-09T06:00:21Z
130
1
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # mboth/distil-eng-quora-sentence This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('mboth/distil-eng-quora-sentence') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('mboth/distil-eng-quora-sentence') model = AutoModel.from_pretrained('mboth/distil-eng-quora-sentence') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mboth/distil-eng-quora-sentence) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
franklu/pubmed_bert_squadv2
franklu
2021-07-09T05:25:26Z
42
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
**[`microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext`](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_qa.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py)** Tuning script: ```bash BASE_MODEL=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext OUTPUT_DIR=~/Documents/projects/tunned_models/ms_pubmed_bert_squadv2/ python run_qa.py \ --model_name_or_path $BASE_MODEL \ --dataset_name squad_v2 \ --do_train \ --do_eval \ --version_2_with_negative \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir $OUTPUT_DIR ```
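A minimal inference sketch with the question-answering pipeline; the question and context below are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="franklu/pubmed_bert_squadv2")
result = qa(
    question="What was the model pretrained on?",
    context="PubMedBERT was pretrained from scratch on PubMed abstracts and full-text articles.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```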
kittinan/exercise-feedback-classification
kittinan
2021-07-08T17:44:43Z
6
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Reddit exercise feedback classification A model to classify Reddit comments that give exercise-form feedback. Current classes are: good, correction, bad posture, and not informative. If you want to use it locally, see the usage example below. ### Usage: ```py from transformers import pipeline classifier = pipeline("text-classification", "kittinan/exercise-feedback-classification") classifier("search for alan thrall deadlift video he will explain basic ques") #[{'label': 'correction', 'score': 0.9998193979263306}] ```
liam168/c2-roberta-base-finetuned-dianping-chinese
liam168
2021-07-08T01:50:53Z
174
23
transformers
[ "transformers", "pytorch", "bert", "text-classification", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: zh widget: - text: "我喜欢下雨。" - text: "我讨厌他。" --- # liam168/c2-roberta-base-finetuned-dianping-chinese ## Model description A model trained on a Chinese dialogue sentiment corpus for 2-class classification: positive (optimistic) and negative (pessimistic). ## Overview - **Language model**: BertForSequenceClassification - **Model size**: 410M - **Language**: Chinese ## Example ```python >>> from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline >>> model_name = "liam168/c2-roberta-base-finetuned-dianping-chinese" >>> class_num = 2 >>> ts_texts = ["我喜欢下雨。", "我讨厌他."] >>> model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=class_num) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) >>> classifier(ts_texts[0]) >>> classifier(ts_texts[1]) [{'label': 'positive', 'score': 0.9973447918891907}] [{'label': 'negative', 'score': 0.9972558617591858}] ```
tdopierre/ProtAugment-ParaphraseGenerator
tdopierre
2021-07-07T14:15:07Z
4
5
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "Paraphase Generation", "Data Augmentation", "en", "dataset:Quora", "dataset:MSR", "dataset:Google-PAWS", "arxiv:2105.12995", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: "en" tags: - Paraphase Generation - Data Augmentation datasets: - Quora - MSR - Google-PAWS --- [![acl](http://img.shields.io/badge/ACL-2021-f31f32)](https://arxiv.org/abs/2105.12995) This model is used to generate paraphrases. It has been trained on a mix of 3 different paraphrase detection datasets: MSR, Quora, Google-PAWS. We use this model in our ACL'21 paper ["PROTAUGMENT: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning"](https://arxiv.org/abs/2105.12995). Used jointly with generation constraints, this model can generate diverse paraphrases. We use those paraphrases as a data augmentation technique to further boost a classification model's generalization capability. Feel free to play with the [code](https://github.com/tdopierre/ProtAugment)! If you use this model, please consider citing our paper. ``` @article{Dopierre2021ProtAugmentUD, title={ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning}, author={Thomas Dopierre and C. Gravier and Wilfried Logerais}, journal={ArXiv}, year={2021}, volume={abs/2105.12995} } ```
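A minimal paraphrase-generation sketch; the decoding settings below are illustrative, not the constrained decoding used in the paper:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tdopierre/ProtAugment-ParaphraseGenerator")
model = AutoModelForSeq2SeqLM.from_pretrained("tdopierre/ProtAugment-ParaphraseGenerator")

inputs = tokenizer("How can I reset my password?", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_length=32)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```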
shreeshaaithal/DialoGPT-small-Michael-Scott
shreeshaaithal
2021-07-07T11:56:25Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png tags: - conversational license: mit --- # DialoGPT Trained on WhatsApp chats This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on WhatsApp chats, or you can train this model yourself on [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). Feel free to ask me questions on the [discord server](https://discord.gg/Gqhje8Z7DX). Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("harrydonni/DialoGPT-small-Michael-Scott") model = AutoModelWithLMHead.from_pretrained("harrydonni/DialoGPT-small-Michael-Scott") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print the last output tokens from the bot print("Michael: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ``` This was made by Shreesha, thank you.
ainize/klue-bert-base-re
ainize
2021-07-07T09:55:52Z
10
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# bert-base for the KLUE Relation Extraction task Fine-tuned klue/bert-base using the KLUE RE dataset. - <a href="https://klue-benchmark.com/">KLUE Benchmark Official Webpage</a> - <a href="https://github.com/KLUE-benchmark/KLUE">KLUE Official Github</a> - <a href="https://github.com/ainize-team/klue-re-workspace">KLUE RE Github</a> - Run KLUE RE on free GPU : <a href="https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ainize-team/klue-re-workspace">Ainize Workspace</a> <br> # Usage <pre><code> import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re") model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re") # Add "&lt;subj&gt;", "&lt;/subj&gt;" to both ends of the subject entity and "&lt;obj&gt;", "&lt;/obj&gt;" to both ends of the object entity. sentence = "&lt;subj&gt;손흥민&lt;/subj&gt;은 &lt;obj&gt;대한민국&lt;/obj&gt;에서 태어났다." encodings = tokenizer(sentence, max_length=128, truncation=True, padding="max_length", return_tensors="pt") outputs = model(**encodings) logits = outputs['logits'] preds = torch.argmax(logits, dim=1) </code></pre> <br> # About us - <a href="https://ainize.ai/teachable-nlp">Teachable NLP</a> - Train NLP models with your own text without writing any code - <a href="https://ainize.ai/">Ainize</a> - Deploy ML projects using free GPU
andi611/bert-base-uncased-ner-conll2003
andi611
2021-07-07T09:31:59Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model_index: - name: bert-base-uncased-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metric: name: Accuracy type: accuracy value: 0.19881805328292054 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 2.1258 - Precision: 0.0269 - Recall: 0.1379 - F1: 0.0451 - Accuracy: 0.1988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 4 | 2.1296 | 0.0270 | 0.1389 | 0.0452 | 0.1942 | | No log | 2.0 | 8 | 2.1258 | 0.0269 | 0.1379 | 0.0451 | 0.1988 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
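Since the card is auto-generated, here is a minimal inference sketch; note that the reported metrics are very low, so treat the output as an API demonstration rather than reliable NER predictions:

```python
from transformers import pipeline

# Token-classification (NER) sketch; predictions from this checkpoint are weak per the card's metrics.
ner = pipeline("token-classification", model="andi611/bert-base-uncased-ner-conll2003")
print(ner("Hugging Face is based in New York City."))
```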
vionwinnie/t5-reddit
vionwinnie
2021-07-07T08:15:48Z
6
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
This is a T5-small model fine-tuned on Reddit data. It supports two subtasks: 1. title generation 2. tag classification A usage sketch is shown below.
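A minimal generation sketch; the `summarize title:` prefix is an assumption, since the card does not document the prompts used for the two subtasks:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("vionwinnie/t5-reddit")
model = AutoModelForSeq2SeqLM.from_pretrained("vionwinnie/t5-reddit")

# The task prefix below is assumed, not documented in the card.
post = "Finally ran my first marathon after a year of training, my legs are destroyed but I am so happy."
inputs = tokenizer("summarize title: " + post, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```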
shreeshaaithal/whatsapp-medium-bot-2
shreeshaaithal
2021-07-07T06:28:15Z
7
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png tags: - conversational license: mit --- # DialoGPT Trained on WhatsApp chats This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on WhatsApp chats, or you can train this model yourself on [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). Feel free to ask me questions on the [discord server](https://discord.gg/Gqhje8Z7DX). Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("harrydonni/whatsapp-medium-bot-2") model = AutoModelWithLMHead.from_pretrained("harrydonni/whatsapp-medium-bot-2") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print the last output tokens from the bot print("Messi: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ``` This was made by Shreesha, thank you.
huggingtweets/gohere4porn-onlinepete
huggingtweets
2021-07-07T06:07:15Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/gohere4porn-onlinepete/1625638031693/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/456958582731603969/QZKpv6eI_400x400.jpeg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1324123540556316673/YQjGLFLJ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">im pete online & Grateful King</div> <div style="text-align: center; font-size: 14px;">@gohere4porn-onlinepete</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from im pete online & Grateful King. | Data | im pete online | Grateful King | | --- | --- | --- | | Tweets downloaded | 3190 | 2141 | | Retweets | 94 | 557 | | Short tweets | 1003 | 217 | | Tweets kept | 2093 | 1367 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w0274vc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gohere4porn-onlinepete's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rvkp85n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rvkp85n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gohere4porn-onlinepete') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
byeongal/bart-base
byeongal
2021-07-07T05:58:29Z
4
0
transformers
[ "transformers", "pytorch", "bart", "feature-extraction", "en", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png language: en tags: - bart --- # BART base model for Teachable NLP - This model was forked from [bart-base](https://huggingface.co/facebook/bart-base) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp). The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. The authors’ code can be found here: https://github.com/pytorch/fairseq/tree/master/examples/bart
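A minimal usage sketch (assuming the standard `transformers` BART classes and that this repository keeps the original `facebook/bart-base` tokenizer files):

```python
from transformers import BartTokenizer, BartModel

# Load the tokenizer and the headless BART encoder-decoder from this repository.
tokenizer = BartTokenizer.from_pretrained("byeongal/bart-base")
model = BartModel.from_pretrained("byeongal/bart-base")

# Encode a sentence and take the decoder's final hidden states as features.
inputs = tokenizer("BART is a denoising sequence-to-sequence model.", return_tensors="pt")
outputs = model(**inputs)
features = outputs.last_hidden_state  # (batch, sequence_length, hidden_size)
```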
huggingtweets/alice333ai-alicecweam
huggingtweets
2021-07-06T22:53:45Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/alice333ai-alicecweam/1625611976936/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410515009252302852/sah1ksNb_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1393311358293356546/tXc-X9fx_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Alice Cream 🐇🍓 Vtuber & 👁️⃤ lison</div> <div style="text-align: center; font-size: 14px;">@alice333ai-alicecweam</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Alice Cream 🐇🍓 Vtuber & 👁️⃤ lison. | Data | Alice Cream 🐇🍓 Vtuber | 👁️⃤ lison | | --- | --- | --- | | Tweets downloaded | 3244 | 3216 | | Retweets | 359 | 1062 | | Short tweets | 463 | 200 | | Tweets kept | 2422 | 1954 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4cfpc23c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alice333ai-alicecweam's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2r62epp4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2r62epp4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/alice333ai-alicecweam') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/alice333ai-jj_visuals
huggingtweets
2021-07-06T20:56:55Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/alice333ai-jj_visuals/1625605011527/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1393311358293356546/tXc-X9fx_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1412466315240030217/yDDNt3-0_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">👁️⃤ lison & JJ (comms closed)</div> <div style="text-align: center; font-size: 14px;">@alice333ai-jj_visuals</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 👁️⃤ lison & JJ (comms closed). | Data | 👁️⃤ lison | JJ (comms closed) | | --- | --- | --- | | Tweets downloaded | 3216 | 3221 | | Retweets | 1062 | 781 | | Short tweets | 200 | 229 | | Tweets kept | 1954 | 2211 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sqkkxt9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alice333ai-jj_visuals's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/327x2oet) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/327x2oet/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/alice333ai-jj_visuals') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
flax-community/gpt-2-german
flax-community
2021-07-06T17:10:39Z
0
0
null
[ "arxiv:1904.09751", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: de --- # GPT-2 GERMAN ## Model description See [OpenAI's model card](https://github.com/openai/gpt-2/blob/master/model_card.md) and [Huggingface's model card](https://huggingface.co/gpt2) for the original model. ## Intended uses & limitations #### How to use ```python def foo(bar): bar += 1 return bar ``` (See the usage sketch at the end of this card.) #### Limitations and bias On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 https://dl.acm.org/doi/10.1145/3442188.3445922 ``` @inproceedings{10.1145/3442188.3445922, author = {Bender, Emily M. and Gebru, Timnit and McMillan-Major, Angelina and Shmitchell, Shmargaret}, title = {On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜}, year = {2021}, isbn = {9781450383097}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3442188.3445922}, doi = {10.1145/3442188.3445922}, abstract = {The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.}, booktitle = {Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency}, pages = {610--623}, numpages = {14}, location = {Virtual Event, Canada}, series = {FAccT '21} } ``` ## Training data https://huggingface.co/datasets/german-nlp-group/german_common_crawl ```json {'url': 'http://my-shop.ru/shop/books/545473.html', 'date_download': '2016-10-20T19:38:58Z', 'digest': 'sha1:F62EMGYLZDIKF4UL5JZYU47KWGGUBT7T', 'length': 1155, 'nlines': 4, 'source_domain': 'my-shop.ru', 'title': 'Grammatikalische Liebeslieder. Methodische Vorschläge', 'raw_content': 'Grammatikalische Liebeslieder. 
[....]', 'cc_segment': 'crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/wet/CC-MAIN-20161020183837-00354-ip-10-171-6-4.ec2.internal.warc.wet.gz', 'original_nlines': 99, 'original_length': 2672, 'language': 'de', 'language_score': 1.0, 'perplexity': 283.0, 'bucket': 'head'}" ``` ## Training procedure TODO (See [training](training.md)) ## Eval results TODO: Self-BLEU, Diversity, and other metrics from https://arxiv.org/abs/1904.09751 ``` @inproceedings{DBLP:conf/iclr/HoltzmanBDFC20, author = {Ari Holtzman and Jan Buys and Li Du and Maxwell Forbes and Yejin Choi}, title = {The Curious Case of Neural Text Degeneration}, booktitle = {8th International Conference on Learning Representations, {ICLR} 2020, Addis Ababa, Ethiopia, April 26-30, 2020}, publisher = {OpenReview.net}, year = {2020}, url = {https://openreview.net/forum?id=rygGQyrFvH}, timestamp = {Thu, 21 Jan 2021 17:36:46 +0100}, biburl = {https://dblp.org/rec/conf/iclr/HoltzmanBDFC20.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### BibTeX entry and citation info Does the Huggingface hub generate DOIs? Otherwise maybe Kaggle or Zenodo to generate one. ```bibtex @inproceedings{..., year={2021} } ```
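Until the "How to use" placeholder above is filled in, a minimal generation sketch with the `transformers` pipeline might look like the following (the checkpoint name `flax-community/gpt-2-german` is simply this repository's id and is assumed to point at the final weights):

```python
from transformers import pipeline

# Load the German GPT-2 checkpoint (assumed repository id) and sample continuations.
generator = pipeline("text-generation", model="flax-community/gpt-2-german")
print(generator("Heute ist ein schöner Tag,", max_length=50, num_return_sequences=2))
```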
mrshu/wav2vec2-large-xlsr-slovene
mrshu
2021-07-06T13:25:51Z
5
2
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "sl", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: sl datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Slovene results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice sl type: common_voice args: sl metrics: - name: Test WER type: wer value: 36.97 --- # Wav2Vec2-Large-XLSR-53-Slovene Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Slovene using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "sl", split="test[:2%]"). processor = Wav2Vec2Processor.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") model = Wav2Vec2ForCTC.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Slovene test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "sl", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") model = Wav2Vec2ForCTC.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\«\»\)\(\„\'\–\’\—]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 36.97 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/14uahdilysnFsiYniHxY9fyKjFGuYQe7p)
mrm8488/wav2vec2-large-xlsr-53-ukrainian
mrm8488
2021-07-06T13:20:13Z
4
1
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "uk", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: uk datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Ukrainian Manuel Romero results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice uk type: common_voice args: uk metrics: - name: Test WER type: wer value: 41.82 --- # Wav2Vec2-Large-XLSR-53-ukrainian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "uk", split="test[:2%]"). processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Ukrainian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "uk", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 41.82 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found ???
mbsouksu/wav2vec2-large-xlsr-turkish-large
mbsouksu
2021-07-06T12:43:27Z
4
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: tr datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Turkish by Mehmet Berk Souksu results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice tr type: common_voice args: tr metrics: - name: Test WER type: wer value: 29.80 --- # Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large") model = Wav2Vec2ForCTC.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large") model = Wav2Vec2ForCTC.from_pretrained("mbsouksu/wav2vec2-large-xlsr-turkish-large") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\`\…\\]\\[\\&\\’\»«]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 29.80 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://github.com/mbsouksu/wav2vec2-turkish)
mbien/fma2vec2popularity
mbien
2021-07-06T12:36:26Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# Predicting music popularity using DNNs This is a model fine-tuned for music popularity classification, created as part of the DH-401: Digital Musicology class at EPFL. ## Team * Elisa ([email protected]) * Michał ([email protected]) * Noé ([email protected]) ## Milestone 3 Main notebook presenting our results is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3.ipynb) Notebook describing the details of Wav2Vec2.0 pre-training and fine-tuning for the task is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3-wav2vec2.ipynb) ## Milestone 2 Exploratory data analysis notebook is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone2.ipynb) ## Milestone 1 Refined project proposal is available [here](https://github.com/Glorf/DH-401/blob/main/milestone0.md) ## Milestone 0 Original project proposal is available in git history [here](https://github.com/Glorf/DH-401/blob/bb14813ff2bbbd9cdc6b6eecf34c9e3c160598eb/milestone0.md)
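No usage snippet is included in the card; a rough sketch for pulling frame-level representations out of the checkpoint with `transformers` is shown below. It assumes the repository id `mbien/fma2vec2popularity` exposes a plain Wav2Vec2 encoder together with its preprocessing config; the popularity-classification head itself may only live in the project notebooks.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Load the (assumed) Wav2Vec2 encoder fine-tuned on FMA audio.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("mbien/fma2vec2popularity")
model = Wav2Vec2Model.from_pretrained("mbien/fma2vec2popularity")

# Encode one second of 16 kHz audio (silence here, for illustration only).
waveform = torch.zeros(16_000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (1, frames, hidden_size)
```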
marma/wav2vec2-large-xlsr-swedish
marma
2021-07-06T12:28:48Z
14
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "sv", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: sv datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Swedish by Marma results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice sv-SE type: common_voice args: sv metrics: - name: Test WER type: wer value: 23.33 --- # Wav2Vec2-Large-XLSR-53-Swedish This model has moved [here](https://huggingface.co/KBLab/wav2vec2-large-xlsr-53-swedish)
marcel/wav2vec2-large-xlsr-german-demo
marcel
2021-07-06T12:09:00Z
24
1
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "de", "dataset:common_voice", "dataset:wer", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: de datasets: - common_voice - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice de type: common_voice args: de metrics: - name: Test WER type: wer value: 29.35 --- # Wav2Vec2-Large-XLSR-53-German Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using 3% of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "de", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo") model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the {language} test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "de", split="test[:10%]") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo") model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\�\カ\æ\無\ན\カ\臣\ѹ\…\«\»\ð\ı\„\幺\א\ב\比\ш\ע\)\ứ\в\œ\ч\+\—\ш\‚\נ\м\ń\乡\$\=\ש\ф\支\(\°\и\к\̇]' substitutions = { 'e' : '[\ə\é\ě\ę\ê\ế\ế\ë\ė\е]', 'o' : '[\ō\ô\ô\ó\ò\ø\ọ\ŏ\õ\ő\о]', 'a' : '[\á\ā\ā\ă\ã\å\â\à\ą\а]', 'c' : '[\č\ć\ç\с]', 'l' : '[\ł]', 'u' : '[\ú\ū\ứ\ů]', 'und' : '[\&]', 'r' : '[\ř]', 'y' : '[\ý]', 's' : '[\ś\š\ș\ş]', 'i' : '[\ī\ǐ\í\ï\î\ï]', 'z' : '[\ź\ž\ź\ż]', 'n' : '[\ñ\ń\ņ]', 'g' : '[\ğ]', 'ss' : '[\ß]', 't' : '[\ț\ť]', 'd' : '[\ď\đ]', "'": '[\ʿ\་\’\`\´\ʻ\`\‘]', 'p': '\р' } resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() for x in substitutions: batch["sentence"] = re.sub(substitutions[x], x, batch["sentence"]) speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 29.35 % ## Training The first 3% of the Common Voice `train`, `validation` datasets were used for training. The script used for training can be found TODO
manandey/wav2vec2-large-xlsr-breton
manandey
2021-07-06T11:29:55Z
7
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "br", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: br datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Breton by Manan Dey results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice br type: common_voice args: br metrics: - name: Test WER type: wer value: 54.04 --- # Wav2Vec2-Large-XLSR-53-Breton Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Breton using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "br", split="test[:2%]"). processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton") model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the {language} test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "br", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton") model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)\/\«\»\½\…]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 54.04% ## Training The Common Voice `train`, `validation` datasets were used for training.
m3hrdadfi/wav2vec2-large-xlsr-turkish
m3hrdadfi
2021-07-06T11:07:44Z
211
8
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: tr datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 1378 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1378.flac - label: Common Voice sample 1589 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1589.flac model-index: - name: XLSR Wav2Vec2 Turkish by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice tr type: common_voice args: tr metrics: - name: Test WER type: wer value: 27.51 --- # Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Turkish using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device) dataset = load_dataset("common_voice", "et", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, 
"chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 10).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: ülke şu anda iki federasyona üye predicted: ülke şu anda iki federasyona üye --- reference: foruma dört yüzde fazla kişi katıldı predicted: soruma dört yüzden fazla kişi katıldı --- reference: mobi altmış üç çalışanları da mutsuz predicted: mobia haltmış üç çalışanları da mutsur --- reference: kentin mali esnekliğinin düşük olduğu bildirildi predicted: kentin mali esnekleğinin düşük olduğu bildirildi --- reference: fouere iki ülkeyi sorunu abartmamaya çağırdı predicted: foor iki ülkeyi soruna abartmamaya çanayordı --- reference: o ülkeden herhangi bir tepki geldi mi predicted: o ülkeden herhayın bir tepki geldi mi --- reference: bunlara asla sırtımızı dönmeyeceğiz predicted: bunlara asla sırtımızı dönmeyeceğiz --- reference: sizi ayakta tutan nedir predicted: sizi ayakta tutan nedir --- reference: artık insanlar daha bireysel yaşıyor predicted: artık insanlar daha bir eyselli yaşıyor --- reference: her ikisi de diyaloga hazır olduğunu söylüyor predicted: her ikisi de diyaloğa hazır olduğunu söylüyor --- reference: merkez bankasının başlıca amacı düşük enflasyon predicted: merkez bankasının başlrıca anatı güşükyen flasyon --- reference: firefox predicted: fair foks --- reference: ülke halkı çok misafirsever ve dışa dönük predicted: ülke halktı çok isatirtever ve dışa dönük --- reference: ancak kamuoyu bu durumu pek de affetmiyor predicted: ancak kamuonyulgukirmu pek deafıf etmiyor --- reference: i ki madende iki bin beş yüzden fazla kişi çalışıyor predicted: i ki madende iki bin beş yüzden fazla kişi çalışıyor --- reference: sunnyside park dışarıdan oldukça iyi görünüyor predicted: sani sahip park dışarıdan oldukça iyi görünüyor --- reference: büyük ödül on beş bin avro predicted: büyük ödül on beş bin avro --- reference: köyümdeki camiler depoya dönüştürüldü predicted: küyümdeki camiler depoya dönüştürüldü --- reference: maç oldukça diplomatik bir sonuçla birbir bitti predicted: maç oldukça diplomatik bir sonuçla bir birbitti --- reference: kuşların ikisi de karantinada öldüler predicted: kuşların ikiste karantinada özdüler --- ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. 
```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import re import string chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", "\u0307": " " } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device) dataset = load_dataset("common_voice", "tr", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` ] **Test Result**: - WER: 27.51% ## Training & Report The Common Voice `train`, `validation` datasets were used for training. You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_turkish/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Turkish--Vmlldzo1Njc1MDc?accessToken=02vm5cwbi7d342vyt7h9w9859zex0enltdmjoreyjt3bd5qwv0vs0g3u93iv92q0) The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
m3hrdadfi/wav2vec2-large-xlsr-persian-shemo
m3hrdadfi
2021-07-06T10:48:23Z
46
3
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "fa", "dataset:shemo", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: fa datasets: - shemo tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: ShEMO sample 250 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo/resolve/main/sample250.flac - label: ShEMO sample 52 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo/resolve/main/sample52.flac model-index: - name: XLSR Wav2Vec2 Persian (Farsi) ShEMO by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: ShEMO fa type: shemo args: fa metrics: - name: Test WER type: wer value: 30.00 --- # Wav2Vec2-Large-XLSR-53-Persian ShEMO Fine-tuned [Wav2Vec2-Large-XLSR-53-Persian V2](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2) in Persian (Farsi) using [ShEMO](https://www.kaggle.com/mansourehk/shemo-persian-speech-emotion-detection-database). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer !pip install hazm !pip install num2fawords ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset from num2fawords import words, ordinal_words import numpy as np import hazm import re import string import IPython.display as ipd _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„', 'ā', 'š', # "ء", ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", # "ها": " ها", "ئ": "ی", "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ", "z": " زد ", "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = 
remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) _text = [] for word in text.split(): try: word = int(word) _text.append(words(word)) except: _text.append(word) text = " ".join(_text) + " " text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device) dataset = load_dataset("csv", data_files={"test": "/content/fa/dataset/test.csv"}, delimiter="\t")["test"] dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 20).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: همون شبی که قسم خوردی منو از جونت بیشتر دوست داری و تا آخر عمر کنار من می مونی همون شبی که به من وعده دادی بزرگترین جشن های ازدواج رو برام بگیری predicted: همون شبی که قسم خوردی منو از جونت بیشتر دوستاری و تا آخر عمر کنار من می مونیمو یبی که به من وعض دادین بزرگترین جشن های ازدواج و برام بگیری --- reference: خودتون دم به ساعت فحشش می دین کتکش می زنین بس نیست predicted: خودتون دم به ساعت فشش می دیم کتاکش می زنیم بس نیست --- reference: خونه predicted: خونه --- reference: شلوغش نکن predicted: شلوغش نکن --- reference: برای بقیه سوییت هایی در نظر گرفتم predicted: برای بقی سویید هایی در نظر گرفتم --- reference: برو گمشو برو گمشو برو بیرون predicted: برو گمشو برو گمشو برو بیرون --- reference: فقط یک سال بعد از خاتمه جنگ بود که حقیقت رو فهمیدی predicted: فقط یک سال بعد از خاتمه جنگ بود که حقیقت و فهمیدید --- reference: غیر از اون دو نفری که اینجا خوابیدند کسان دیگه ای از دوستانشو به تو معرفی نکرده predicted: غیر از اون دو نفری که اینجا خوابیدند کسانه دیگه ای از دوستانشو به تو معرفی نکرده --- reference: من می دونم اینجایی درو واز کن کویی کوئک predicted: من می دونم این جایی د رو واز کن کوری فکر --- reference: نویسنده باید چهار تا چشم داشته باشه چهار تا گوش predicted: نویسند باید چهار تا چشم داشته باشه و چهار تا گوش --- reference: غیر از اون دو نفری که اینجا خوابیدند کسان دیگه ای از دوستانشو به تو معرفی نکرده predicted: غیر از اون دو نفری که اینجا خوابیدند کسانه دیگه ای از دوستانشو به تو معرفی نکرده --- reference: پس همراهان من چه می کنن چه می کنن که این سرکرده کولی ها تونسته خودشو اینجا برسونه predicted: پس همرا حال من چه می کنن چه می کنن که این سرکرده کلی ها تونسته خودش رو اینجا برسونه --- reference: گوش بدید مادمازل حقیقت اینه که من دلم می خواد به شما کمک کنم 
زیبایی و جوانی شما دل منو به رحم میاره به من اعتماد کنید دلم می خواد بتونم شما رو از مرگ نجات بدم predicted: هوش بدید مادماز حقیقت اینه که من دلم می خواد به شما کمک کنم زیبای و جوانی شما دل منو به رحم می آره به من اعتماد کنید دلم می خواد بتونم شما رو از مرگ نجات بدم --- reference: قربان به نظر می رسه شما نه تنها به مرگ رونالد دریو بلکه به مرگ خانم مونرو هم مشکوکید predicted: قربان به نظر می رسه شما نه تن ها به مرگ رونال گریو بلکه به مرگ خانم مونرا مشکوکین --- reference: برای اینکه شما رو دوست دارم predicted: برای اینکه شما رو دوست دارم --- reference: مرتبه اول دنبال جسدی می گشتن که انداخته بودن کنار خیابون predicted: حر تبه اول دنبال جسدی می گشتند که انداخته بودن کنار خیابون --- reference: خونه predicted: خونه --- reference: کدبانوی جدید این طبقه هستم predicted: کدبانوی جدید این طبقه هستم --- reference: و این برات خیلی گرون تموم شد predicted: و این برات خیلی گرون تموم شد --- reference: خب چرا نمی دین به خودشون predicted: خبچرا نمی تون به خودشون ``` ## Evaluation The model can be evaluated as follows on the Persian (Farsi) test data of Common Voice. ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric from num2fawords import words, ordinal_words import numpy as np import hazm import re import string _normalizer = hazm.Normalizer() chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?", ".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„', 'ā', 'š', # "ء", ] # In case of farsi chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits) chars_to_mapping = { 'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی', 'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی", "ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع", "ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه", 'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش", 'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ", # "ها": " ها", "ئ": "ی", "a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ", "g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ", "m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ", "s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ", "y": " وای ", "z": " زد ", "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = _normalizer.normalize(text) text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) text = re.sub(" +", " ", text) _text = [] for word in text.split(): try: word = int(word) _text.append(words(word)) except: _text.append(word) text = " ".join(_text) + " " text = text.strip() + " " batch["sentence"] = text return batch def speech_file_to_array_fn(batch): 
speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device) dataset = load_dataset("csv", data_files={"test": "/content/fa/dataset/test.csv"}, delimiter="\t")["test"] dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result:** - WER: 31.00% ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ShEMO_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
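For quick experiments outside the Common Voice pipeline above, a single-file transcription helper can be sketched as follows. This is only a sketch: the audio path is a placeholder, and resampling is done with `torchaudio` instead of `librosa`.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device)

def transcribe(path):
    # load the file and resample to the 16 kHz rate the model expects
    speech, sampling_rate = torchaudio.load(path)
    if sampling_rate != 16_000:
        speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)
    inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
    return processor.batch_decode(torch.argmax(logits, dim=-1))[0]

print(transcribe("sample.wav"))  # placeholder path
```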
m3hrdadfi/wav2vec2-large-xlsr-estonian
m3hrdadfi
2021-07-06T10:28:26Z
5
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "et", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: et datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 widget: - label: Common Voice sample 1123 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample1123.flac - label: Common Voice sample 910 src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-estonian/resolve/main/sample910.flac model-index: - name: XLSR Wav2Vec2 Estonian by Mehrdad Farahani results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice et type: common_voice args: et metrics: - name: Test WER type: wer value: 33.93 --- # Wav2Vec2-Large-XLSR-53-Estonian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Estonian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": 
chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 10).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: õhulossid lagunevad ning ees ootab maapind predicted: õhulassid lagunevad ning ees ootab maapind --- reference: milliseks kiievisse pääsemise nimel võistlev muusik soome muusikamaastiku hetkeseisu hindab ning kas ta ka ennast sellel tulevikus tegutsemas näeb kuuled videost predicted: milliseks gievisse pääsemise nimel võitlev muusiks soome muusikama aastiku hetke seisu hindab ning kas ta ennast selle tulevikus tegutsemast näeb kuulad videost --- reference: näiteks kui pool seina on tehtud tekib tunne et tahaks tegelikult natuke teistsugust ja hakkame otsast peale predicted: näiteks kui pool seine on tehtud tekib tunnetahaks tegelikult matuka teistsugust jahappanna otsast peane --- reference: neuroesteetilised katsed näitavad et just nägude vaatlemine aktiveerib inimese aju esteetilist keskust predicted: neuroaisteetiliselt katsed näitaval et just nägude vaatlemine aptiveerid inimese aju est eedilist keskust --- reference: paljud inimesed kindlasti kadestavad teid kuid ei julge samamoodi vabalt võtta predicted: paljud inimesed kindlasti kadestavadteid kuid ei julge sama moodi vabalt võtta --- reference: parem on otsida pileteid inkognito veebi kaudu predicted: parem on otsida pileteid ning kognitu veebikaudu --- reference: ja vot siin ma jäin vaikseks predicted: ja vat siisma ja invaikseks --- reference: mida sa iseendale juubeli puhul soovid predicted: mida saise endale jubeli puhul soovid --- reference: kuumuse ja kõrge temperatuuri tõttu kuivas tühjadel karjamaadel rohi mis muutus kergesti süttivaks predicted: kuumuse ja kõrge temperatuuri tõttu kuivast ühjadal karjamaadel rohi mis muutus kergesti süttivaks --- reference: ilmselt on inimesi kelle jaoks on see hea lahendus predicted: ilmselt on inimesi kelle jaoks on see hea lahendus --- ``` ## Evaluation The model can be evaluated as follows on the Estonian test data of Common Voice. 
```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import re import string chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result**: - WER: 33.93% ## Training & Report The Common Voice `train`, `validation` datasets were used for training. You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_estonian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Estonian--Vmlldzo1NjA1MTI?accessToken=k2b2g3a2i12m1sdwf13q8b226pplmmyw12joxo6vk38eb4djellfzmn9fp2725fw) The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Estonian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
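If a per-sentence breakdown of the errors is useful, `jiwer` (installed in the requirements cell above) can score each reference/hypothesis pair individually. The helper below is a sketch; the Estonian strings are made up, and in practice `result["sentence"]` and `result["predicted"]` from the evaluation snippet would be passed in.

```python
import jiwer

def worst_examples(references, predictions, top_k=5):
    # score each (reference, hypothesis) pair separately and return the worst ones
    scored = sorted(
        ((jiwer.wer(ref, hyp), ref, hyp) for ref, hyp in zip(references, predictions)),
        key=lambda item: item[0],
        reverse=True,
    )
    return scored[:top_k]

# toy example only
for score, ref, hyp in worst_examples(["tere tulemast", "head aega"], ["tere tulemas", "head aega"]):
    print(f"WER {score:.2f} | ref: {ref} | hyp: {hyp}")
```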
m3hrdadfi/wav2vec2-base-100k-gtzan-music-genres
m3hrdadfi
2021-07-06T10:26:27Z
54
20
transformers
[ "transformers", "pytorch", "wav2vec2", "audio", "automatic-speech-recognition", "audio-classification", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - audio - automatic-speech-recognition - audio-classification --- # Music Genre Classification using Wav2Vec 2.0 ## How to use ### Requirements ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa ``` ### Prediction ```python import torch import torch.nn as nn import torch.nn.functional as F import torchaudio from transformers import AutoConfig, Wav2Vec2FeatureExtractor import librosa import IPython.display as ipd import numpy as np import pandas as pd ``` ```python device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_name_or_path = "m3hrdadfi/wav2vec2-base-100k-voxpopuli-gtzan-music" config = AutoConfig.from_pretrained(model_name_or_path) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path) sampling_rate = feature_extractor.sampling_rate model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device) ``` ```python def speech_file_to_array_fn(path, sampling_rate): speech_array, _sampling_rate = torchaudio.load(path) resampler = torchaudio.transforms.Resample(_sampling_rate) speech = resampler(speech_array).squeeze().numpy() return speech def predict(path, sampling_rate): speech = speech_file_to_array_fn(path, sampling_rate) inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True) inputs = {key: inputs[key].to(device) for key in inputs} with torch.no_grad(): logits = model(**inputs).logits scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0] outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)] return outputs ``` ```python path = "genres_original/disco/disco.00067.wav" outputs = predict(path, sampling_rate) ``` ```bash [ {'Label': 'blues', 'Score': '0.0%'}, {'Label': 'classical', 'Score': '0.0%'}, {'Label': 'country', 'Score': '0.0%'}, {'Label': 'disco', 'Score': '99.8%'}, {'Label': 'hiphop', 'Score': '0.0%'}, {'Label': 'jazz', 'Score': '0.0%'}, {'Label': 'metal', 'Score': '0.0%'}, {'Label': 'pop', 'Score': '0.0%'}, {'Label': 'reggae', 'Score': '0.0%'}, {'Label': 'rock', 'Score': '0.0%'} ] ``` ## Evaluation The following tables summarize the scores obtained by model overall and per each class. | label | precision | recall | f1-score | support | |:------------:|:---------:|:------:|:--------:|:-------:| | blues | 0.792 | 0.950 | 0.864 | 20 | | classical | 0.864 | 0.950 | 0.905 | 20 | | country | 0.812 | 0.650 | 0.722 | 20 | | disco | 0.778 | 0.700 | 0.737 | 20 | | hiphop | 0.933 | 0.700 | 0.800 | 20 | | jazz | 1.000 | 0.850 | 0.919 | 20 | | metal | 0.783 | 0.900 | 0.837 | 20 | | pop | 0.917 | 0.550 | 0.687 | 20 | | reggae | 0.543 | 0.950 | 0.691 | 20 | | rock | 0.611 | 0.550 | 0.579 | 20 | | accuracy | 0.775 | 0.775 | 0.775 | 0 | | macro avg | 0.803 | 0.775 | 0.774 | 200 | | weighted avg | 0.803 | 0.775 | 0.774 | 200 | ## Questions? Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
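Note that `Wav2Vec2ForSpeechClassification` in the prediction snippet is not part of `transformers`; it comes from the author's [soxan](https://github.com/m3hrdadfi/soxan) repository linked above. The sketch below shows one plausible shape for such a model (mean-pooled Wav2Vec 2.0 features followed by a linear classifier) and may well differ from the actual implementation, so prefer the class from the repository when reproducing the results.

```python
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2PreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput

class Wav2Vec2ForSpeechClassification(Wav2Vec2PreTrainedModel):
    """Rough sketch of a speech-classification head on top of Wav2Vec 2.0."""

    def __init__(self, config):
        super().__init__(config)
        self.wav2vec2 = Wav2Vec2Model(config)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_values, attention_mask=None):
        outputs = self.wav2vec2(input_values, attention_mask=attention_mask)
        pooled = outputs.last_hidden_state.mean(dim=1)  # average over the time axis
        return SequenceClassifierOutput(logits=self.classifier(pooled))
```

With a definition along these lines, `from_pretrained` can load the fine-tuned checkpoint, but the head layout must match the one used during training.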
kmfoda/wav2vec2-large-xlsr-arabic
kmfoda
2021-07-06T09:45:10Z
23
1
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ar", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ar datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Arabic by Othmane Rifki results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ar type: common_voice args: ar metrics: - name: Test WER type: wer value: 46.77 --- # Wav2Vec2-Large-XLSR-53-Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import librosa import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ar", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic") model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic") resamplers = { # all three sampling rates exist in test split 48000: torchaudio.transforms.Resample(48000, 16000), 44100: torchaudio.transforms.Resample(44100, 16000), 32000: torchaudio.transforms.Resample(32000, 16000), } def prepare_example(example): speech, sampling_rate = torchaudio.load(example["path"]) example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy() return example test_dataset = test_dataset.map(prepare_example) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Arabic test data of Common Voice. ```python import librosa import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ar", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic") model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\؟\_\؛\ـ\—]' resamplers = { # all three sampling rates exist in test split 48000: torchaudio.transforms.Resample(48000, 16000), 44100: torchaudio.transforms.Resample(44100, 16000), 32000: torchaudio.transforms.Resample(32000, 16000), } def prepare_example(example): speech, sampling_rate = torchaudio.load(example["path"]) example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy() return example test_dataset = test_dataset.map(prepare_example) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 52.53 ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://huggingface.co/kmfoda/wav2vec2-large-xlsr-arabic/tree/main)
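The `resamplers` dictionary above only covers the three sampling rates that occur in the Common Voice test split. For arbitrary local audio, a helper that builds the resampler from the file's actual rate is slightly more robust; the sketch below uses a placeholder path.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")

def load_as_16k(path):
    # resample from whatever rate the file actually has down to 16 kHz
    speech, sampling_rate = torchaudio.load(path)
    if sampling_rate != 16_000:
        speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)
    return speech.squeeze().numpy()

inputs = processor(load_as_16k("audio.wav"), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```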
joaoalvarenga/wav2vec2-large-xlsr-portuguese
joaoalvarenga
2021-07-06T09:30:27Z
7
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "apache-2.0", "portuguese-speech-corpus", "xlsr-fine-tuning-week", "PyTorch", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice metrics: - wer tags: - audio - speech - wav2vec2 - pt - apache-2.0 - portuguese-speech-corpus - automatic-speech-recognition - speech - xlsr-fine-tuning-week - PyTorch license: apache-2.0 model-index: - name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice pt type: common_voice args: pt metrics: - name: Test WER type: wer value: 13.766801% --- # Wav2Vec2-Large-XLSR-53-Portuguese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "pt", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. You need to install Enelvo, an open-source spell correction trained with Twitter user posts `pip install enelvo` ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from enelvo import normaliser import re test_dataset = load_dataset("common_voice", "pt", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) norm = normaliser.Normaliser() # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = [norm.normalise(i) for i in processor.batch_decode(pred_ids)] return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result (WER)**: 13.766801% ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
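Since Enelvo is only applied as a text post-processor on the decoded output, it can be tried in isolation as well; the input sentence below is made up.

```python
from enelvo import normaliser

norm = normaliser.Normaliser()
raw_prediction = "eu nao sei oq aconteceu"  # hypothetical noisy transcription
print(norm.normalise(raw_prediction))
```

Note that only the predictions are normalised before scoring, so the reported WER also reflects Enelvo's corrections rather than the acoustic model alone.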
joaoalvarenga/wav2vec2-large-xlsr-portuguese-a
joaoalvarenga
2021-07-06T09:23:08Z
5
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "apache-2.0", "portuguese-speech-corpus", "xlsr-fine-tuning-week", "PyTorch", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice metrics: - wer tags: - audio - speech - wav2vec2 - pt - apache-2.0 - portuguese-speech-corpus - automatic-speech-recognition - speech - xlsr-fine-tuning-week - PyTorch license: apache-2.0 model-index: - name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice pt type: common_voice args: pt metrics: - name: Test WER type: wer value: 15.037146% --- # Wav2Vec2-Large-XLSR-53-Portuguese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "pt", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "pt", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result (WER)**: 15.037146% ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
joaoalvarenga/wav2vec2-large-xlsr-53-spanish
joaoalvarenga
2021-07-06T09:14:19Z
0
0
null
[ "audio", "speech", "wav2vec2", "es", "apache-2.0", "spanish-speech-corpus", "automatic-speech-recognition", "xlsr-fine-tuning-week", "PyTorch", "dataset:common_voice", "license:apache-2.0", "model-index", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: es datasets: - common_voice metrics: - wer tags: - audio - speech - wav2vec2 - es - apache-2.0 - spanish-speech-corpus - automatic-speech-recognition - speech - xlsr-fine-tuning-week - PyTorch license: apache-2.0 model-index: - name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Spanish results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ES type: common_voice args: es metrics: - name: Test WER type: wer value: Training --- # Wav2Vec2-Large-XLSR-53-Spanish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "es", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): \tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \tbatch["speech"] = resampler(speech_array).squeeze().numpy() \treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): \tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "es", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' # TODO: adapt this list to include all special characters you removed from the data resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): \tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() \tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \tbatch["speech"] = resampler(speech_array).squeeze().numpy() \treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result (WER)**: not yet reported (model still training) ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-spanish/blob/main/fine-tuning.py
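As the `TODO` comment in the evaluation snippet suggests, the ignore list may not yet cover every special character in the Spanish references. A quick way to see what the regex actually removes (illustrative sentence only):

```python
import re

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
sentence = "¡Hola! ¿Cómo estás? Bien, gracias."
print(re.sub(chars_to_ignore_regex, '', sentence).lower())
# -> "¡hola ¿cómo estás bien gracias" (note that ¡ and ¿ survive and may need to be added)
```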
joaoalvarenga/wav2vec2-large-100k-voxpopuli-pt
joaoalvarenga
2021-07-06T09:11:37Z
5
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "apache-2.0", "portuguese-speech-corpus", "PyTorch", "voxpopuli", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice metrics: - wer tags: - audio - speech - wav2vec2 - pt - apache-2.0 - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch - voxpopuli license: apache-2.0 model-index: - name: JoaoAlvarenga Wav2Vec2 Large 100k VoxPopuli Portuguese results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice pt type: common_voice args: pt metrics: - name: Test WER type: wer value: 19.735723% --- # Wav2Vec2-Large-100k-VoxPopuli-Portuguese Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "pt", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. You need to install Enelvo, an open-source spell correction trained with Twitter user posts `pip install enelvo` ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from enelvo import normaliser import re test_dataset = load_dataset("common_voice", "pt", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) norm = normaliser.Normaliser() # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = [norm.normalise(i) for i in processor.batch_decode(pred_ids)] return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result (WER)**: 19.735723% ## Training The Common Voice `train` and `validation` datasets were used for training.
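The `wer` metric loaded with `load_metric` can be sanity-checked on toy strings before running the full test split; the examples below are made up.

```python
from datasets import load_metric

wer = load_metric("wer")
# one substitution over two reference words -> WER of 0.5
print(wer.compute(predictions=["olá mundo"], references=["olá mundos"]))
```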
joaoalvarenga/wav2vec2-cv-coral-30ep
joaoalvarenga
2021-07-06T09:07:11Z
4
1
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "apache-2.0", "portuguese-speech-corpus", "xlsr-fine-tuning-week", "PyTorch", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice metrics: - wer tags: - audio - speech - wav2vec2 - pt - apache-2.0 - portuguese-speech-corpus - automatic-speech-recognition - speech - xlsr-fine-tuning-week - PyTorch license: apache-2.0 model-index: - name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice pt type: common_voice args: pt metrics: - name: Test WER type: wer value: 15.037146% --- # Wav2Vec2-Large-XLSR-53-Portuguese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "pt", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "pt", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result (WER)**: 15.037146% ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
joaoalvarenga/model-sid-voxforge-cv-cetuc-0
joaoalvarenga
2021-07-06T08:50:10Z
10
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "apache-2.0", "portuguese-speech-corpus", "xlsr-fine-tuning-week", "PyTorch", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice metrics: - wer tags: - audio - speech - wav2vec2 - pt - apache-2.0 - portuguese-speech-corpus - automatic-speech-recognition - speech - xlsr-fine-tuning-week - PyTorch license: apache-2.0 model-index: - name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice pt type: common_voice args: pt metrics: - name: Test WER type: wer value: 15.037146% --- # Wav2Vec2-Large-XLSR-53-Portuguese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "pt", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "pt", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result (WER)**: 15.037146% ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil
infinitejoy
2021-07-06T06:33:19Z
5
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ta", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ta datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Tamil results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ta type: common_voice args: ta metrics: - name: Test WER type: wer value: 71.29 --- # Wav2Vec2-Large-XLSR-53-Tamil Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ta", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ta", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Tamil") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub('’ ',' ',batch["sentence"]) batch["sentence"] = re.sub(' ‘',' ',batch["sentence"]) batch["sentence"] = re.sub('’|‘','\'',batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 71.29% ## Training The Common Voice `train` and `validation` datasets were used for training.
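The three `re.sub` calls in the evaluation snippet normalise curly quotes before the main cleaning step; the illustration below uses a made-up string.

```python
import re

sentence = "‘hello’ world ‘test’"
sentence = re.sub('’ ', ' ', sentence)   # drop a closing quote that is followed by a space
sentence = re.sub(' ‘', ' ', sentence)   # drop an opening quote that is preceded by a space
sentence = re.sub('’|‘', "'", sentence)  # any remaining curly quote becomes a plain apostrophe
print(sentence)  # -> 'hello world test'
```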