modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
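The columns above follow an Arrow-style schema: UTC microsecond timestamps, a list-valued `tags` field, and the raw model-card markdown in `card`. Below is a minimal loading sketch, assuming the records are available as a Parquet file; the file name and the example filters are illustrative, not part of this dump.

```python
# Minimal sketch (assumed setup): load the records into pandas for inspection.
# "model_cards.parquet" is a hypothetical file name, not part of this dump.
import pandas as pd

df = pd.read_parquet("model_cards.parquet")

# Timestamp columns arrive timezone-aware (UTC); `tags` is a list per row;
# `card` holds the raw README markdown for each model.
asr = df[df["pipeline_tag"] == "automatic-speech-recognition"]
print(asr.sort_values("downloads", ascending=False)[["modelId", "downloads", "likes"]].head())
```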
huggingtweets/clortown
huggingtweets
2022-04-02T04:51:29Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-02T02:36:56Z
--- language: en thumbnail: http://www.huggingtweets.com/clortown/1648875085007/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1488574779351187458/RlIQNUFG_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">yeosang elf agenda</div> <div style="text-align: center; font-size: 14px;">@clortown</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from yeosang elf agenda. | Data | yeosang elf agenda | | --- | --- | | Tweets downloaded | 3140 | | Retweets | 538 | | Short tweets | 463 | | Tweets kept | 2139 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cupnlna/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clortown's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uii743r9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uii743r9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/clortown') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
nikhil6041/wav2vec2-commonvoice-hindi
nikhil6041
2022-04-02T04:48:26Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T04:27:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-commonvoice-hindi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-commonvoice-hindi This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9825 - Wer: 0.6763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 20.0 | 100 | 0.8801 | 0.6754 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
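As a supplement to the card above (not part of it), here is a minimal inference sketch showing how the checkpoint could be used for transcription; the audio file name is a placeholder and 16 kHz mono input is assumed.

```python
# Hedged usage sketch: transcribe a local recording with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nikhil6041/wav2vec2-commonvoice-hindi",
)

# "sample_hindi.wav" is a placeholder; the model expects 16 kHz mono audio.
print(asr("sample_hindi.wav")["text"])
```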
JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en
JustAdvanceTechonology
2022-04-02T00:07:29Z
4
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-31T10:16:30Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6429 - Validation Loss: 0.8071 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6423 | 0.8071 | 0 | | 0.6424 | 0.8071 | 1 | | 0.6429 | 0.8071 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.5.0 - Datasets 2.0.0 - Tokenizers 0.10.1
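A hedged usage sketch for the card above, which gives no example of its own. Note that the repository name suggests French-to-English translation while the listed base checkpoint is English-to-French, so the direction is worth verifying on a known sentence first.

```python
# Hedged usage sketch: run the checkpoint through the generic translation pipeline.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en",
)

# The example sentence is illustrative; confirm the translation direction before relying on it.
print(translator("Le patient présente une fièvre persistante.")[0]["translation_text"])
```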
DrishtiSharma/poem-gen-spanish-t5-small-d2
DrishtiSharma
2022-04-01T22:38:26Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-01T17:08:12Z
--- license: mit tags: - generated_from_trainer model-index: - name: poem-gen-spanish-t5-small-d2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-spanish-t5-small-d2 This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9027 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.223 | 0.73 | 30000 | 3.1479 | | 3.0109 | 1.46 | 60000 | 3.0544 | | 2.8649 | 2.19 | 90000 | 2.9730 | | 2.7603 | 2.93 | 120000 | 2.9301 | | 2.6343 | 3.66 | 150000 | 2.9188 | | 2.5094 | 4.39 | 180000 | 2.9064 | | 2.391 | 5.12 | 210000 | 2.9073 | | 2.3592 | 5.85 | 240000 | 2.9022 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
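A hedged generation sketch for the card above: the checkpoint is a T5 seq2seq model, so it runs under the text2text-generation task. The prompt format is an assumption, since the card does not document one.

```python
# Hedged usage sketch: sample a short generation from the fine-tuned Spanish T5 model.
from transformers import pipeline

poet = pipeline(
    "text2text-generation",
    model="DrishtiSharma/poem-gen-spanish-t5-small-d2",
)

# The prompt below is an assumed format, not one documented by the card.
print(poet("poema: el mar y la luna", max_length=64, do_sample=True)[0]["generated_text"])
```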
lgris/wav2vec2-large-xlsr-open-brazilian-portuguese-v2
lgris
2022-04-01T20:35:26Z
858
18
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "hf-asr-leaderboard", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "arxiv:2012.03411", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch - hf-asr-leaderboard model-index: - name: wav2vec2-large-xlsr-open-brazilian-portuguese-v2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice type: common_voice args: pt metrics: - name: Test WER type: wer value: 10.69 license: apache-2.0 --- # Wav2vec 2.0 With Open Brazilian Portuguese Datasets v2 This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets: - [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus. - [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in the public domain, such as [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers. - [VoxForge](http://www.voxforge.org/): a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz. - [Common Voice 6.1](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages to train ASR models. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt). The Portuguese set (mostly Brazilian variant) used in this work is the 6.1 version (pt_63h_2020-12-11), which contains about 50 validated hours and 1,120 unique speakers. - [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control. These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. __NOTE: The Common Voice test set reports a WER of about 10%; however, this model was trained using all validated instances of Common Voice except those in the test set. 
This means that some speakers of the train set can be present on the test set.__ ## Imports and dependencies ```python %%capture !pip install datasets !pip install jiwer !pip install torchaudio !pip install transformers !pip install soundfile ``` ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys ``` ## Preparation ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 wer = load_metric("wer") device = "cuda" ``` ```python model_name = 'lgris/wav2vec2-large-xlsr-open-brazilian-portuguese-v2' model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ``` ```python def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["predicted"] = [pred.lower() for pred in batch["predicted"]] batch["target"] = batch["sentence"] return batch ``` ## Tests ### Test against Common Voice (In-domain) ```python dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) for pred, target in zip(result["predicted"][:10], result["target"][:10]): print(pred, "|", target) ``` **Result**: 10.69% ### Test against [TEDx](http://www.openslr.org/100/) (Out-of-domain) ```python !gdown --id 1HJEnvthaGYwcV_whHEywgH2daIN4bQna !tar -xf tedx.tar.gz ``` ```python dataset = load_dataset('csv', data_files={'test': 'test.csv'})['test'] def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) for pred, target in zip(result["predicted"][:10], result["target"][:10]): print(pred, "|", target) ``` **Result**: 34.53%
lgris/bp500-base10k_voxpopuli
lgris
2022-04-01T20:34:35Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "hf-asr-leaderboard", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "arxiv:2012.03411", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge - tedx - sid metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch - hf-asr-leaderboard model-index: - name: bp500-base10k_voxpopuli results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice type: common_voice args: pt metrics: - name: Test WER type: wer value: 24.9 license: apache-2.0 --- # bp500-base10k_voxpopuli: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets: - [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus. - [Common Voice 7.0](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt). - [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control. - [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in the public domain, such as [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers. - [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech. - [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old, with fields such as place of birth, age, gender, education, and occupation; - [VoxForge](http://www.voxforge.org/): a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz. These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. We also made test sets for all the gathered datasets. | Dataset | Train | Valid | Test | |--------------------------------|-------:|------:|------:| | CETUC | 94.0h | -- | 5.4h | | Common Voice | 37.8h | 8.9h | 9.5h | | LaPS BM | 0.8h | -- | 0.1h | | MLS | 161.0h | -- | 3.7h | | Multilingual TEDx (Portuguese) | 148.9h | -- | 1.8h | | SID | 7.2h | -- | 1.0h | | VoxForge | 3.9h | -- | 0.1h | | Total | 453.6h | 8.9h | 21.6h | The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). 
This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/19kkENi8uvczmw9OLSdqnjvKqBE53cl_W/view?usp=sharing). #### Summary | | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG | |----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------| | bp\_500-base10k_voxpopuli (demonstration below) | 0.120 | 0.249 | 0.039 | 0.227 | 0.169 | 0.349 | 0.116 | 0.181 | | bp\_500-base10k_voxpopuli + 4-gram (demonstration below) | 0.074 | 0.174 | 0.032 | 0.182 | 0.181 | 0.349 | 0.111 | 0.157 | #### Transcription examples | Text | Transcription | |------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| |suco de uva e água misturam bem|suco **deúva** e água **misturão** bem| |culpa do dinheiro|**cupa** do dinheiro| |eu amo shooters call of duty é o meu favorito|eu **omo** **shúters cofedete** é meu favorito| |você pode explicar por que isso acontece|você pode explicar *por* que isso **ontece**| |no futuro você desejará ter começado a investir hoje|no futuro você desejará **a** ter começado a investir hoje| ## Demonstration ```python MODEL_NAME = "lgris/bp500-base10k_voxpopuli" ``` ### Imports and dependencies ```python %%capture !pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html !pip install datasets !pip install jiwer !pip install transformers !pip install soundfile !pip install pyctcdecode !pip install https://github.com/kpu/kenlm/archive/master.zip ``` ```python import jiwer import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) from pyctcdecode import build_ctcdecoder import torch import re import sys ``` ### Helpers ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = 16_000 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") batch["target"] = batch["sentence"] return batch ``` ```python def calc_metrics(truths, hypos): wers = [] mers = [] wils = [] for t, h in zip(truths, hypos): try: wers.append(jiwer.wer(t, h)) mers.append(jiwer.mer(t, h)) wils.append(jiwer.wil(t, h)) except: # Empty string? 
pass wer = sum(wers)/len(wers) mer = sum(mers)/len(mers) wil = sum(wils)/len(wils) return wer, mer, wil ``` ```python def load_data(dataset): data_files = {'test': f'{dataset}/test.csv'} dataset = load_dataset('csv', data_files=data_files)["test"] return dataset.map(map_to_array) ``` ### Model ```python class STT: def __init__(self, model_name, device='cuda' if torch.cuda.is_available() else 'cpu', lm=None): self.model_name = model_name self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) self.processor = Wav2Vec2Processor.from_pretrained(model_name) self.vocab_dict = self.processor.tokenizer.get_vocab() self.sorted_dict = { k.lower(): v for k, v in sorted(self.vocab_dict.items(), key=lambda item: item[1]) } self.device = device self.lm = lm if self.lm: self.lm_decoder = build_ctcdecoder( list(self.sorted_dict.keys()), self.lm ) def batch_predict(self, batch): features = self.processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(self.device) with torch.no_grad(): logits = self.model(input_values).logits if self.lm: logits = logits.cpu().numpy() batch["predicted"] = [] for sample_logits in logits: batch["predicted"].append(self.lm_decoder.decode(sample_logits)) else: pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = self.processor.batch_decode(pred_ids) return batch ``` ### Download datasets ```python %%capture !gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI !mkdir bp_dataset !unzip bp_dataset -d bp_dataset/ ``` ```python %cd bp_dataset ``` /content/bp_dataset ### Tests ```python stt = STT(MODEL_NAME) ``` #### CETUC ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.12096759949218888 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.24977003159495725 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.039769570707070705 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.2269637077788063 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.1691680138494731 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.34908555859018014 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.11649350649350651 ### Tests with LM ```python !rm -rf ~/.cache !gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa') # !gdown --id 
1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp # stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa') ``` ### Cetuc ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.07499558425787961 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.17442648452610307 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.032774621212121206 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.18213620321569274 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.18102544972868206 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.3491402028105601 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.11189529220779222
lgris/bp500-xlsr
lgris
2022-04-01T20:33:47Z
15
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "hf-asr-leaderboard", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "arxiv:2012.03411", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge - tedx - sid metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch - hf-asr-leaderboard model-index: - name: bp400-xlsr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice type: common_voice args: pt metrics: - name: Test WER type: wer value: 13.6 license: apache-2.0 --- # bp500-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets: - [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus; - [Common Voice 7.0](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt); - [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control; - [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in the public domain, such as [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers; - [VoxForge](http://www.voxforge.org/): a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz. These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. We also made test sets for all the gathered datasets. | Dataset | Train | Valid | Test | |--------------------------------|-------:|------:|------:| | CETUC | 93.9h | -- | 5.4h | | Common Voice | 37.6h | 8.9h | 9.5h | | LaPS BM | 0.8h | -- | 0.1h | | MLS | 161.0h | -- | 3.7h | | Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h | | SID | 5.0h | -- | 1.0h | | VoxForge | 2.8h | -- | 0.1h | | Total | 437.2h | 8.9h | 21.6h | The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/1J8aR1ltDLQFe-dVrGuyxoRm2uyJjCWgf/view?usp=sharing). 
#### Summary | | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG | |----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------| | bp\_500 (demonstration below) | 0.051 | 0.136 | 0.032 | 0.118 | 0.095 | 0.248 | 0.082 | 0.108 | | bp\_500 + 4-gram (demonstration below) | 0.032 | 0.097 | 0.022 | 0.114 | 0.125 | 0.246 | 0.065 | 0.100 | #### Transcription examples | Text | Transcription | |------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| |não há um departamento de mediadores independente das federações e das agremiações|não há um **dearamento** de mediadores independente das federações e das **agrebiações**| |mas que bodega|**masque** bodega| |a cortina abriu o show começou|a cortina abriu o **chô** começou| |por sorte havia uma passadeira|**busote avinhoa** **passadeiro**| |estou maravilhada está tudo pronto|**stou** estou maravilhada está tudo pronto| ## Demonstration ```python MODEL_NAME = "lgris/bp500-xlsr" ``` ### Imports and dependencies ```python %%capture !pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html !pip install datasets !pip install jiwer !pip install transformers !pip install soundfile !pip install pyctcdecode !pip install https://github.com/kpu/kenlm/archive/master.zip ``` ```python import jiwer import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) from pyctcdecode import build_ctcdecoder import torch import re import sys ``` ### Helpers ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = 16_000 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") batch["target"] = batch["sentence"] return batch ``` ```python def calc_metrics(truths, hypos): wers = [] mers = [] wils = [] for t, h in zip(truths, hypos): try: wers.append(jiwer.wer(t, h)) mers.append(jiwer.mer(t, h)) wils.append(jiwer.wil(t, h)) except: # Empty string? 
pass wer = sum(wers)/len(wers) mer = sum(mers)/len(mers) wil = sum(wils)/len(wils) return wer, mer, wil ``` ```python def load_data(dataset): data_files = {'test': f'{dataset}/test.csv'} dataset = load_dataset('csv', data_files=data_files)["test"] return dataset.map(map_to_array) ``` ### Model ```python class STT: def __init__(self, model_name, device='cuda' if torch.cuda.is_available() else 'cpu', lm=None): self.model_name = model_name self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) self.processor = Wav2Vec2Processor.from_pretrained(model_name) self.vocab_dict = self.processor.tokenizer.get_vocab() self.sorted_dict = { k.lower(): v for k, v in sorted(self.vocab_dict.items(), key=lambda item: item[1]) } self.device = device self.lm = lm if self.lm: self.lm_decoder = build_ctcdecoder( list(self.sorted_dict.keys()), self.lm ) def batch_predict(self, batch): features = self.processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(self.device) attention_mask = features.attention_mask.to(self.device) with torch.no_grad(): logits = self.model(input_values, attention_mask=attention_mask).logits if self.lm: logits = logits.cpu().numpy() batch["predicted"] = [] for sample_logits in logits: batch["predicted"].append(self.lm_decoder.decode(sample_logits)) else: pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = self.processor.batch_decode(pred_ids) return batch ``` ### Download datasets ```python %%capture !gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI !mkdir bp_dataset !unzip bp_dataset -d bp_dataset/ ``` ```python %cd bp_dataset ``` /content/bp_dataset ### Tests ```python stt = STT(MODEL_NAME) ``` #### CETUC ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.05159097808687998 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.13659981509705973 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.03196969696969697 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.1178481066463896 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.09544588416964224 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.24868046340420813 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.08246076839826841 ### Tests with LM ```python !rm -rf ~/.cache !gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with 
wikipedia stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa') # !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp # stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa') ``` ### Cetuc ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.03222801788375573 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.09713866021093655 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.022310606060606065 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.11408590958696524 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.12502797252979136 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.24603179403904793 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.06542207792207791
lgris/wav2vec2-large-xlsr-open-brazilian-portuguese
lgris
2022-04-01T20:32:58Z
268
9
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "hf-asr-leaderboard", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "arxiv:2012.03411", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch - hf-asr-leaderboard license: apache-2.0 model-index: - name: Lucas Gris XLSR Wav2Vec2 Large 53 Brazilian Portuguese results: - task: name: Speech Recognition type: automatic-speech-recognition metrics: - name: Test WER type: wer value: 12.905054857823264% --- # Wav2vec 2.0 With Open Brazilian Portuguese Datasets This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets: - [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus. - [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in the public domain, such as [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers. - [VoxForge](http://www.voxforge.org/): a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz. - [Common Voice 6.1](https://commonvoice.mozilla.org/pt) (_only train_): a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages to train ASR models. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt). The Portuguese set (mostly Brazilian variant) used in this work is the 6.1 version (pt_63h_2020-12-11), which contains about 50 validated hours and 1,120 unique speakers. - [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control. These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/drive/folders/1XTKIUB4kp3oYOavwH97wq8IPFsxP5sNz?usp=sharing). This model was trained for 80k updates. 
#### Datasets in number of instances and number of frames The following image shows the overall distribution of the dataset: ![datasets](https://drive.google.com/uc?export=view&id=1DF2_PehB2pZlEJLcBA7yeZQ9EAuLGh_r) #### Transcription examples | Text | Transcription | |------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| | É comum os usuários confundirem software livre com software livre | É comum os __usuares__ __confunder em__ __softwerlivr__ com __softwerlivre__ | | Ele fez tanto ghostwriting que ele começa a se sentir como um fantasma também | Ele fez tanto __golstraitn__ que ele __começou__ a se sentir como um fantasma também | | Arnold apresentou um gráfico mostrando quantas cegonhas ele havia contado nos últimos dez anos | Arnold apresentou um gráfico mostrando quantas __segonhas__ ele havia contado nos últimos dez anos | | Mais cedo ou mais tarde eles descobrirão como ler esses hieróglifos | Mais __sedo__ ou mais tarde eles descobriram como __de__ esses __ierogrôficos__ | | Viver juntos compartilhar objetivos e ter um bom relacionamento | __E ver__ juntos __signafica__ viver juntos ou __fartlhar__ objetivos ter um bom __relacionamentoo__ | | Da mesma forma uma patente pode impedir que concorrentes desenvolvam produtos similares | Da mesma forma uma patente pode impedir que concorrentes __desenvolva__ produtos similares | | Duas mulheres e uma menina levantam com troféus | Duas mulheres e uma menina levantam com __trofés__ | | Esse acrobata de circo deve ter um sistema vestibular bem treinado pensou o espectador | Esse acrobata de __cirko__ deve ter um sistema vestibular __bemtreinado__ pensou o espectador | | Durante a exposição o tribunal pode fazer quaisquer perguntas ou esclarecimentos que considere apropriados | Durante a exposição o tribunal pode fazer quaisquer perguntas ou esclarecimentos que considere __apropriado__ | ## Imports and dependencies ```python %%capture !pip install datasets !pip install jiwer !pip install torchaudio !pip install transformers !pip install soundfile ``` ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys ``` ## Preparation ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 wer = load_metric("wer") device = "cuda" ``` ```python model_name = 'lgris/wav2vec2-large-xlsr-open-brazilian-portuguese' model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ``` ```python def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["predicted"] = [pred.lower() for pred in batch["predicted"]] batch["target"] = batch["sentence"] return batch ``` ## Tests ### Test against Common Voice (In-domain) ```python dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = 
torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) for pred, target in zip(result["predicted"][:10], result["target"][:10]): print(pred, "|", target) ``` 0.12905054857823264 nem o varanin os altros influmindo os de teterno um bombederster | nem o radar nem os outros instrumentos detectaram o bombardeiro stealth pedir dinheiro é emprestado das pessoas do aldeia | pedir dinheiro emprestado às pessoas da aldeia oito | oito teno calcos | trancá-los realizaram a investigação para resolver o problema | realizar uma investigação para resolver o problema iotube ainda é a melhor plataforma de vídeos | o youtube ainda é a melhor plataforma de vídeos menina e menino beijando nas sombras | menina e menino beijando nas sombras eu sou o senhor | eu sou o senhor duas metcas sentam-se para baixo randes jornais | duas mulheres que sentam-se para baixo lendo jornais eu originalmente esperava | eu originalmente esperava **Result**: 12.90% ### Test against [TEDx](http://www.openslr.org/100/) (Out-of-domain) ```python !gdown --id 1HJEnvthaGYwcV_whHEywgH2daIN4bQna !tar -xf tedx.tar.gz ``` ```python dataset = load_dataset('csv', data_files={'test': 'tedx/test.csv'})['test'] def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) for pred, target in zip(result["predicted"][:10], result["target"][:10]): print(pred, "|", target) ``` 0.35215851987208774 com isso a gente vê que essa rede de pactuação de de deparcerias nos remete a um raciocínio lógico que ao que a gente crê que é a prevenção | com isso a gente vê que essa rede de pactuação de parcerias nos remete a um raciocínio lógico que é o que a gente crê que é a prevenção ente vai para o resultado | e aí a gente vai pro resultado curiosidade hé o que eu descobri desde que comecei a fazer pesquisa lá no ensino médio | e a curiosidade é algo que descobri desde que comecei a fazer pesquisa lá no ensino médio val des quemesho | há vários caminhos que é uma opcissão por comer soldado | que é uma obsessão por comer saudável isso é tão é forte algoltão universal que existem dados que mostram que setenta e cinco por cento das reuniões são dominadas pela voz masculina | e isso é tão forte é algo tão universal que existem dados que mostram que das reuniões são dominadas pela voz masculina não era exatamente isso não estávamos deveto | e não era exatamente isso que nós estávamos a ver durante meci do médio ofiz pesquisa estudei numa escola que chamam a fundação liberate ficava relativamente próximo daqui | durante o ensino médio eu fiz pesquisa estudei numa escola que se chama fundação liberato que fica relativamente próxima daqui oito anos atrás eu fui apresentado por uma doença que até então eu não conhecia e que é bem provável 
que a maior parte de nós todos aqui não conheçamos | oito anos atrás fui apresentado para uma doença que até então eu não conhecia e que é bem provável que a maior parte de nós todos aqui não conheçamos o terceiro é o museu do ripiopeco | o terceiro é o museu do hip hop **Result**: 35.21%
anwarvic/distilbert-base-uncased-for-fakenews
anwarvic
2022-04-01T19:12:49Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T21:56:17Z
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # DistilBERT (uncased) for Fake News Classification This is a classification model built by fine-tuning the [DistilBERT base model](https://huggingface.co/distilbert-base-uncased). It was trained on the [fake-and-real-news-dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset) for five epochs. > **NOTE:** This model is just a POC (proof-of-concept) for a fellowship I was applying for. ## Intended uses & limitations Note that this model is primarily aimed at classifying an article as either "Fake" or "Real". ### How to use Check this [notebook](https://www.kaggle.com/code/mohamedanwarvic/fakenewsclassifier-fatima-fellowship) on Kaggle.
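Beyond the linked notebook, here is a minimal classification sketch for the card above; the usage is assumed rather than taken from the card.

```python
# Hedged usage sketch: score a headline with the fine-tuned classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="anwarvic/distilbert-base-uncased-for-fakenews",
)

# The "Fake"/"Real" label names come from the card's description; the
# checkpoint's actual id2label mapping may use different strings.
print(classifier("Scientists confirm the moon is made of cheese."))
```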
juaner/distilbert-base-uncased-finetuned-cola
juaner
2022-04-01T18:20:42Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T17:59:52Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: juaner/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juaner/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1909 - Validation Loss: 0.5553 - Train Matthews Correlation: 0.5279 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5191 | 0.4491 | 0.4718 | 0 | | 0.3270 | 0.4571 | 0.5196 | 1 | | 0.1909 | 0.5553 | 0.5279 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
vicl/canine-c-finetuned-cola
vicl
2022-04-01T17:38:35Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "canine", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T17:13:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: canine-c-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0990441507705203 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-c-finetuned-cola This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6246 - Matthews Correlation: 0.0990 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6142 | 1.0 | 535 | 0.6268 | 0.0 | | 0.607 | 2.0 | 1070 | 0.6234 | 0.0 | | 0.6104 | 3.0 | 1605 | 0.6226 | 0.0 | | 0.5725 | 4.0 | 2140 | 0.6246 | 0.0990 | | 0.5426 | 5.0 | 2675 | 0.6866 | 0.0495 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
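For readers who want to reproduce the setup above, here is a sketch of how the listed hyperparameters map onto `TrainingArguments`; dataset loading, tokenization, and the Matthews-correlation metric are omitted, and this is not the exact script that produced the checkpoint.

```python
# Sketch of the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="canine-c-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,  # Adam betas/epsilon are left at their defaults, matching the card
)
```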
bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection
bitsanlp
2022-04-01T17:17:55Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T16:12:00Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-distilbert-fakenews-detection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilbert-fakenews-detection This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | 0.0125 | 1.0 | 978 | 0.0000 | 1.0 | 1.0 | | 0.0 | 2.0 | 1956 | 0.0000 | 1.0 | 1.0 | | 0.0 | 3.0 | 2934 | 0.0000 | 1.0 | 1.0 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
ahmedzaky91/Fatima-Fake_news_calssifier
ahmedzaky91
2022-04-01T16:54:24Z
0
0
null
[ "region:us" ]
null
2022-04-01T00:00:39Z
## This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the Fake and Real News dataset from Kaggle

## The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- num_epochs: 2
vicl/canine-c-finetuned-mrpc
vicl
2022-04-01T16:33:28Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "canine", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-01T16:05:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: canine-c-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8627450980392157 - name: F1 type: f1 value: 0.9014084507042254 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-c-finetuned-mrpc This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4066 - Accuracy: 0.8627 - F1: 0.9014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.5014 | 0.7696 | 0.8479 | | No log | 2.0 | 460 | 0.4755 | 0.7892 | 0.8622 | | 0.5096 | 3.0 | 690 | 0.3645 | 0.8431 | 0.8869 | | 0.5096 | 4.0 | 920 | 0.4066 | 0.8627 | 0.9014 | | 0.2619 | 5.0 | 1150 | 0.4551 | 0.8431 | 0.8877 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
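The card reports MRPC accuracy and F1 but no usage; a minimal sketch for scoring a sentence pair, assuming the checkpoint loads as a standard sequence-classification model and that its config carries an `id2label` mapping:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vicl/canine-c-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("vicl/canine-c-finetuned-mrpc")

# CANINE works directly on characters, so the sentence pair is passed in raw.
inputs = tokenizer(
    "The company said profits rose in the last quarter.",
    "Quarterly profits increased, the company reported.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
label_id = logits.argmax(dim=-1).item()
print(model.config.id2label[label_id])
```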
avialfont/ner-dummy-model
avialfont
2022-04-01T14:59:22Z
5
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-01T10:59:27Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ner-dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ner-dummy-model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.6
notexist/ttt
notexist
2022-04-01T13:16:50Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-01T12:45:30Z
--- license: apache-2.0 ---
bmichele/poetry-generation-nextline-mbart-ws-fi-single
bmichele
2022-04-01T11:51:32Z
0
0
null
[ "pytorch", "region:us" ]
null
2022-04-01T11:35:07Z
# poetry-generation-nextline-mbart-ws-fi-single * `nextline`: generates a poem line from previous line(s) * `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) * `ws`: trained on Wikisource data * `fi`: Finnish language * `single`: uses only last poem line as input for generation
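Given the description above (mBART base model, the last poem line as input), a minimal generation sketch. Loading the fine-tuned weights with `MBartForConditionalGeneration` is an assumption about the repo layout (only a PyTorch file is advertised), and the example line is illustrative:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Tokenizer from the base checkpoint named in the card; fi_FI is mBART's Finnish language code.
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang="fi_FI", tgt_lang="fi_FI")
model = MBartForConditionalGeneration.from_pretrained("bmichele/poetry-generation-nextline-mbart-ws-fi-single")

previous_line = "Yö on pimeä ja tähdet loistavat"  # the last line of the poem so far
inputs = tokenizer(previous_line, return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["fi_FI"],
    num_beams=5,
    max_length=32,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```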
blacktree/distilbert-base-uncased-finetuned-cola
blacktree
2022-04-01T09:00:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T15:48:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5285676961321106 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4883 - Matthews Correlation: 0.5286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5269 | 1.0 | 535 | 0.5197 | 0.4187 | | 0.3477 | 2.0 | 1070 | 0.4883 | 0.5286 | | 0.2333 | 3.0 | 1605 | 0.6530 | 0.5079 | | 0.17 | 4.0 | 2140 | 0.7567 | 0.5272 | | 0.1271 | 5.0 | 2675 | 0.8887 | 0.5259 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
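The hyperparameters above map almost one-to-one onto `TrainingArguments`; a sketch of how such a run could be reproduced, assuming GLUE CoLA from `datasets` and default `Trainer` preprocessing (the original run's exact script is not documented):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

cola = load_dataset("glue", "cola")
encoded = cola.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,              # matches the card
    per_device_train_batch_size=16,  # matches the card
    per_device_eval_batch_size=16,   # matches the card
    num_train_epochs=5,              # matches the card
    seed=42,                         # matches the card
    evaluation_strategy="epoch",     # assumption: the card reports per-epoch validation metrics
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,             # enables dynamic padding via the default data collator
)
trainer.train()
```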
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-10
yy642
2022-04-01T06:04:00Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T23:51:06Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-mnli-rte-wnli-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-mnli-rte-wnli-10 This model is a fine-tuned version of [yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5](https://huggingface.co/yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5876 - Accuracy: 0.9206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0641 | 1.0 | 16558 | 0.4528 | 0.9138 | | 0.0479 | 2.0 | 33116 | 0.5116 | 0.9153 | | 0.0363 | 3.0 | 49674 | 0.5660 | 0.9138 | | 0.0244 | 4.0 | 66232 | 0.5876 | 0.9206 | | 0.0145 | 5.0 | 82790 | 0.6156 | 0.9192 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.0.0 - Tokenizers 0.11.6
Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm
Yaxin
2022-04-01T05:28:33Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "generated_from_trainer", "dataset:Yaxin/amazon_reviews_multi", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-31T14:56:00Z
--- license: mit tags: - generated_from_trainer datasets: - Yaxin/amazon_reviews_multi metrics: - accuracy model-index: - name: xlm-roberta-base-amazon-en-es-fr-mlm results: - task: name: Masked Language Modeling type: fill-mask dataset: name: Yaxin/amazon_reviews_multi type: Yaxin/amazon_reviews_multi metrics: - name: Accuracy type: accuracy value: 0.6951035447140035 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-amazon-en-es-fr-mlm This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the Yaxin/amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.3936 - Accuracy: 0.6951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.6
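Since the model was trained with a masked-language-modeling objective, a minimal fill-mask sketch (the example sentence is illustrative, not from the training data):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm")

# XLM-RoBERTa uses "<mask>" as its mask token.
for prediction in fill_mask("This product is really <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```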
dchung117/distilbert-base-uncased-finetuned-squad-d5716d28
dchung117
2022-04-01T02:02:28Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2022-04-01T01:51:41Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
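The card above shows how the training data is loaded but not how to query the model; a minimal question-answering sketch, assuming the checkpoint is saved with its QA head and tokenizer:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="dchung117/distilbert-base-uncased-finetuned-squad-d5716d28")

result = qa(
    question="What acts as the teacher in the second distillation step?",
    context=(
        "A DistilBERT student is fine-tuned on SQuAD v1.1 with a BERT model, "
        "also fine-tuned on SQuAD v1.1, acting as a teacher."
    ),
)
print(result["answer"], round(result["score"], 3))
```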
arjundd/vortex-release
arjundd
2022-03-31T21:54:43Z
0
0
null
[ "mri", "reconstruction", "artifact correction", "en", "arxiv:2111.02549", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 tags: - mri - reconstruction - artifact correction --- # VORTEX <div align="center"> <img src="https://drive.google.com/uc?export=view&id=1q0jAm6Kg5ZhRg3h0w0ZbtIgcRF3_-Vgb" alt="Vortex Schematic" width="700px" /> </div> > **VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction**\ > Arjun Desai, Beliz Gunel, Batu Ozturkler, Harris Beg, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\ > https://arxiv.org/abs/2111.02549 This repository contains the artifacts for the VORTEX paper. To use our code and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
anisdismail/celebA-orientation-detection
anisdismail
2022-03-31T21:51:37Z
0
2
null
[ "image-classification", "pytorch", "en", "dataset:nielsr/CelebA-faces", "license:cc-by-nc-4.0", "model-index", "region:us" ]
image-classification
2022-03-31T19:48:26Z
--- language: - en license: cc-by-nc-4.0 tags: - image-classification - pytorch datasets: - nielsr/CelebA-faces model-index: - name: celebA_orientation_detection_model results: - task: type: image_classification # Required. Example: automatic-speech-recognition name: Image Classification # Optional. Example: Speech Recognition dataset: type: nielsr/CelebA-faces name: CelebA-faces metrics: - type: f1score # Required. Example: wer value: 0.97 # Required. Example: 20.90 name: Val F1 Score # Optional. Example: Test WER --- ## Detecting the Orientation of CelebA pictures using Deep Learning This model has been trained on a modified version of the CelebA-faces dataset, which was made from flipping 20,000 images upside down and keeping 20,000 images intact.<br> The model relies on Resnet-18 as a backbone and is connected to one output node to classify whether the images are flipped upside down (1) or not (0).
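The card describes the architecture (a ResNet-18 backbone feeding a single output node) but ships no loading code; a minimal PyTorch sketch of that architecture. The checkpoint filename, image size and preprocessing are placeholders, since the repo layout is not documented:

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# ResNet-18 backbone with one output node, as described in the card.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 1)
# Placeholder filename; check the repository for the actual weight file.
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed input size
    transforms.ToTensor(),
])

image = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    flipped_prob = torch.sigmoid(model(image)).item()
print("flipped" if flipped_prob > 0.5 else "upright", round(flipped_prob, 3))
```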
arjundd/noise2recon-release
arjundd
2022-03-31T21:50:44Z
0
1
null
[ "mri", "reconstruction", "denoising", "en", "arxiv:2110.00075", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 tags: - mri - reconstruction - denoising --- # Noise2Recon > **Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising**\ > Arjun Desai, Batu Ozturkler, Christopher Sandino, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\ > https://arxiv.org/abs/2110.00075 This repository contains the artifacts for the Noise2Recon paper. To use our code and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
magitz/distilbert-base-uncased-finetuned-emotion
magitz
2022-03-31T20:48:43Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T20:41:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9267965474109292 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2235 - Accuracy: 0.9265 - F1: 0.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8101 | 1.0 | 250 | 0.3177 | 0.9045 | 0.9010 | | 0.2472 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9268 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.8.1 - Datasets 1.18.3 - Tokenizers 0.11.0
arampacha/gpt-neo-therapist-small
arampacha
2022-03-31T20:34:26Z
17
1
transformers
[ "transformers", "pytorch", "tensorboard", "onnx", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-30T08:40:54Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: gpt-neo-therapist-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-therapist-small This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.6731 - Rouge1: 39.5028 - Rouge2: 6.43 - Rougel: 24.0091 - Rougelsum: 35.4481 - Gen Len: 204.1329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 24 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 9.9955 | 0.97 | 7 | 6.8195 | 18.6047 | 1.0194 | 14.8565 | 17.9774 | 212.0983 | | 6.9729 | 1.97 | 14 | 5.6783 | 26.3789 | 3.0779 | 18.5195 | 24.8592 | 203.0925 | | 5.2614 | 2.97 | 21 | 5.0506 | 34.9428 | 4.921 | 21.9741 | 32.1122 | 206.2775 | | 5.0599 | 3.97 | 28 | 4.7372 | 38.5235 | 6.2251 | 23.5923 | 34.5633 | 204.2428 | | 4.5479 | 4.97 | 35 | 4.6731 | 39.5028 | 6.43 | 24.0091 | 35.4481 | 204.1329 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
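No generation example is provided; a minimal sketch with the text-generation pipeline (the prompt format is a guess, the card does not document one):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="arampacha/gpt-neo-therapist-small")

prompt = "I have been feeling anxious about work lately."
print(generator(prompt, max_length=100, do_sample=True, top_p=0.95)[0]["generated_text"])
```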
novarac23/distilbert-base-uncased-finetuned-emotion
novarac23
2022-03-31T19:39:15Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T19:05:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.925 - name: F1 type: f1 value: 0.9251919899321654 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2234 - Accuracy: 0.925 - F1: 0.9252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8213 | 1.0 | 250 | 0.3210 | 0.9025 | 0.8989 | | 0.2463 | 2.0 | 500 | 0.2234 | 0.925 | 0.9252 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
Tahsin-Mayeesha/distilbert-finetuned-fakenews
Tahsin-Mayeesha
2022-03-31T17:11:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T15:58:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-finetuned-fakenews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-finetuned-fakenews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0049 - Accuracy: 0.9995 - F1: 0.9995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0392 | 1.0 | 500 | 0.0059 | 0.999 | 0.999 | | 0.002 | 2.0 | 1000 | 0.0047 | 0.9995 | 0.9995 | | 0.0001 | 3.0 | 1500 | 0.0047 | 0.9995 | 0.9995 | | 0.0001 | 4.0 | 2000 | 0.0049 | 0.9995 | 0.9995 | | 0.0 | 5.0 | 2500 | 0.0049 | 0.9995 | 0.9995 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.0
eren23/pneumonia-bielefeld-dl-course
eren23
2022-03-31T15:55:27Z
61
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-27T12:17:21Z
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pneumonia-bielefeld-dl-course
  results:
  - task:
      name: Image Classification
      type: image-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8456632494926453
---

# pneumonia-bielefeld-dl-course

This repository contains a model for making pneumonia predictions and was prepared as homework for the Bielefeld University Deep Learning course.

The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, a ready-made pipeline for fine-tuning models with Hugging Face and PyTorch Lightning on another dataset.
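The card points to the HuggingPics training pipeline but includes no inference snippet; a minimal sketch, assuming the ViT feature extractor and labels were saved with the checkpoint (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="eren23/pneumonia-bielefeld-dl-course")

# Placeholder path to a chest X-ray image.
for prediction in classifier("chest_xray.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```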
huggingtweets/youtube
huggingtweets
2022-03-31T14:06:33Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-31T14:05:50Z
--- language: en thumbnail: http://www.huggingtweets.com/youtube/1648735587597/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1427292844612595720/RC1YSvuT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">YouTube</div> <div style="text-align: center; font-size: 14px;">@youtube</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from YouTube. | Data | YouTube | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 23 | | Short tweets | 104 | | Tweets kept | 3123 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dx34obn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youtube's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/youtube') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Edresson/wav2vec2-large-xlsr-coraa-portuguese
Edresson
2022-03-31T13:28:43Z
632
15
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "hf-asr-leaderboard", "PyTorch", "dataset:CORAA", "arxiv:2110.15731", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
---
language: pt
datasets:
- CORAA
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- hf-asr-leaderboard
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova XLSR Wav2Vec2 Large 53 Portuguese
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: CORAA
      type: CORAA
      args: pt
    metrics:
    - name: Test CORAA WER
      type: wer
      value: 25.26
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: pt
    metrics:
    - name: Test WER on Common Voice 7
      type: wer
      value: 20.08
---

# Wav2vec 2.0 trained with CORAA Portuguese Dataset

This is a demonstration of a fine-tuned Wav2vec 2.0 model for Portuguese, trained on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).

# Use this model

```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")

model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```

# Results
For the results check the [CORAA article](https://arxiv.org/abs/2110.15731)

# Example test with Common Voice Dataset

```python
import re

import torchaudio
from datasets import load_dataset

# chars_to_ignore_regex is never defined in the original card; this punctuation set is an assumption.
chars_to_ignore_regex = '[,?.!;:"-]'

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    # Load the clip, resample 48 kHz -> 16 kHz and normalize the reference transcript.
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
# map_to_pred and the wer metric are not defined in the original card; see the sketch below.
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
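The snippet above calls `map_to_pred` and a `wer` metric that the original card never defines. A minimal sketch of what they could look like, assuming the checkpoint ships a full `Wav2Vec2Processor` (the card itself only loads a tokenizer, so you may need to pair a feature extractor with that tokenizer instead) and the `wer` metric from `datasets`:

```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
wer = load_metric("wer")

def map_to_pred(batch):
    # Batch the 16 kHz waveforms, run the CTC model and greedy-decode the logits.
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch
```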
scasutt/wav2vec2-base_toy_train_data_slow_10pct
scasutt
2022-03-31T13:12:54Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-27T02:28:24Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_slow_10pct results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_slow_10pct This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3248 - Wer: 0.7175 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0663 | 2.1 | 500 | 3.0725 | 0.9982 | | 1.1679 | 4.2 | 1000 | 1.3620 | 0.8889 | | 0.6789 | 6.3 | 1500 | 1.2182 | 0.8160 | | 0.5764 | 8.4 | 2000 | 1.2469 | 0.7667 | | 0.4603 | 10.5 | 2500 | 1.2851 | 0.7533 | | 0.4085 | 12.6 | 3000 | 1.2351 | 0.7401 | | 0.3583 | 14.7 | 3500 | 1.2455 | 0.7367 | | 0.3158 | 16.81 | 4000 | 1.3663 | 0.7261 | | 0.2817 | 18.91 | 4500 | 1.3248 | 0.7175 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
mustapha/flipped-image-ViT
mustapha
2022-03-31T12:30:19Z
61
2
transformers
[ "transformers", "pytorch", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-30T21:57:42Z
Hello world! This model has been created in the context of the `Fatima Fellowship Programme`. It was trained on the CIFAR-10 dataset and reached a good final accuracy of around 98%. The model determines whether an image is flipped or not.
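A minimal inference sketch for the description above, assuming an image processor is stored next to the ViT weights (if not, the extractor of the base ViT checkpoint would be the natural fallback) and using a placeholder image path:

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, ViTForImageClassification

feature_extractor = AutoFeatureExtractor.from_pretrained("mustapha/flipped-image-ViT")
model = ViTForImageClassification.from_pretrained("mustapha/flipped-image-ViT")

image = Image.open("example.png").convert("RGB")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label_id = logits.argmax(-1).item()
print(model.config.id2label.get(label_id, label_id))  # flipped vs. not flipped, per the card
```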
Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test
Khalsuu
2022-03-31T12:09:32Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T08:45:25Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: 2nd-wav2vec2-l-xls-r-300m-turkish-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2nd-wav2vec2-l-xls-r-300m-turkish-test This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6019 - Wer: 0.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0522 | 3.67 | 400 | 0.7773 | 0.7296 | | 0.5369 | 7.34 | 800 | 0.6282 | 0.5888 | | 0.276 | 11.01 | 1200 | 0.5998 | 0.5330 | | 0.1725 | 14.68 | 1600 | 0.5859 | 0.4908 | | 0.1177 | 18.35 | 2000 | 0.6019 | 0.4444 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
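For completeness, a minimal transcription sketch with the ASR pipeline, assuming the processor is bundled with the checkpoint; the audio path is a placeholder and the pipeline decodes and resamples the file itself (ffmpeg required):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test")

# Placeholder path to a Turkish speech clip.
print(asr("turkish_sample.mp3")["text"])
```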
Neulvo/bert-finetuned-squad
Neulvo
2022-03-31T12:08:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-31T10:54:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
YiTian/wav2vec2-common_voice-tr-demo
YiTian
2022-03-31T11:40:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T09:39:08Z
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-tr-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-tr-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 2.9841 - Wer: 0.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 7.14 | 100 | 3.6689 | 1.0 | | No log | 14.29 | 200 | 3.0280 | 0.9999 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.0 - Datasets 1.18.0 - Tokenizers 0.11.6
frtna/jwt300_mt-Italian-to-Spanish_transformers
frtna
2022-03-31T11:18:09Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:new_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T09:49:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - new_dataset metrics: - sacrebleu model-index: - name: jwt300_mt-Italian-to-Spanish_transformers results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: new_dataset type: new_dataset args: jwt300_mt metrics: - name: Sacrebleu type: sacrebleu value: 0.9057 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jwt300_mt-Italian-to-Spanish_transformers This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the new_dataset dataset. It achieves the following results on the evaluation set: - Loss: 2.4425 - Sacrebleu: 0.9057 - Gen Len: 18.1276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 2.7545 | 1.0 | 2229 | 2.4425 | 0.9057 | 18.1276 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.6
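The card never shows a translation call; a minimal sketch with the text2text-generation pipeline. Whether the fine-tuning used a T5-style task prefix is not documented, so the prefix below is only a guess:

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="frtna/jwt300_mt-Italian-to-Spanish_transformers")

# Try with and without the prefix; the training prompt format is not documented in the card.
print(translator("translate Italian to Spanish: Il libro è sul tavolo.")[0]["generated_text"])
```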
scasutt/wav2vec2-base_toy_train_data_random_low_pass
scasutt
2022-03-31T10:42:02Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T08:21:35Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_low_pass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_low_pass This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3227 - Wer: 0.7288 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0795 | 2.1 | 500 | 3.2227 | 0.9982 | | 1.21 | 4.2 | 1000 | 1.3713 | 0.8879 | | 0.742 | 6.3 | 1500 | 1.2660 | 0.8296 | | 0.5877 | 8.4 | 2000 | 1.2921 | 0.7794 | | 0.4823 | 10.5 | 2500 | 1.2899 | 0.7565 | | 0.4036 | 12.6 | 3000 | 1.3486 | 0.7494 | | 0.391 | 14.7 | 3500 | 1.2701 | 0.7466 | | 0.3426 | 16.81 | 4000 | 1.3570 | 0.7279 | | 0.3015 | 18.91 | 4500 | 1.3227 | 0.7288 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
nikhil6041/wav2vec2-commonvoice-tamil
nikhil6041
2022-03-31T09:24:01Z
18
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-31T04:00:23Z
--- license: mit tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-commonvoice-tamil results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-commonvoice-tamil This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-tamil-tam-250](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-tamil-tam-250) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 3.3415 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 5.384 | 1.69 | 200 | 3.3400 | 1.0 | | 3.3085 | 3.39 | 400 | 3.3609 | 1.0 | | 3.3008 | 5.08 | 600 | 3.3331 | 1.0 | | 3.2852 | 6.78 | 800 | 3.3492 | 1.0 | | 3.2908 | 8.47 | 1000 | 3.3318 | 1.0 | | 3.2865 | 10.17 | 1200 | 3.3501 | 1.0 | | 3.2826 | 11.86 | 1400 | 3.3403 | 1.0 | | 3.2875 | 13.56 | 1600 | 3.3335 | 1.0 | | 3.2899 | 15.25 | 1800 | 3.3311 | 1.0 | | 3.2755 | 16.95 | 2000 | 3.3617 | 1.0 | | 3.2877 | 18.64 | 2200 | 3.3317 | 1.0 | | 3.2854 | 20.34 | 2400 | 3.3560 | 1.0 | | 3.2878 | 22.03 | 2600 | 3.3332 | 1.0 | | 3.2766 | 23.73 | 2800 | 3.3317 | 1.0 | | 3.2943 | 25.42 | 3000 | 3.3737 | 1.0 | | 3.2845 | 27.12 | 3200 | 3.3347 | 1.0 | | 3.2765 | 28.81 | 3400 | 3.3415 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
emiyasstar/ch-w2v-conformer
emiyasstar
2022-03-31T08:48:13Z
0
2
null
[ "region:us" ]
null
2022-03-29T15:44:56Z
The ch-w2v-conformer model uses the following datasets for pretraining:

ISML datasets (6 languages, 70k hours): the internal dataset contains 40k hours of Chinese, Cantonese, Tibetan, Inner Mongolian, Inner Kazakh, and Uighur.

Babel datasets (17 languages, 2k hours): Assamese, Bengali, Cantonese, Cebuano, Georgian, Haitian, Kazakh, Kurmanji, Lao, Pashto, Swahili, Tagalog, Tamil, Tok, Turkish, Vietnamese, Zulu.

After pretraining, we build an ASR system based on a CTC-Attention structure. In very low-resource tasks, we find that stacking too many randomly initialized layers on top of the pre-trained conformer encoder destroys the transfer performance of the pre-trained model, so we only build a single-layer transformer decoder for joint training.

pretrained model link:

## constrained-plus Task Performance

* Languages: Cantonese, Mongolian, Kazakh
* config: conf/train_conformer_large_10h.yaml
* Feature info: using mfcc feature, with dither 1.0, without cmvn
* Training info: lr 0.001, batch size 10, 4 gpus on V100, acc_grad 1, 80 epochs
* Decoding info: ctc_weight 0.5, average_num 35

Dev set results below were obtained with only a 10-hour training set.

## w2v-Conformer

| decoding_method | Cantonese(CER) | mongolian(WER) |
|:-------------------:|:----:|:----:|
| ctc_greedy_search | 31.46 | 53.64 |
| ctc_prefix_search | 31.47 | 53.50 |
| attention_rescoring | 31.45 | 52.96 |

## Conformer (train from scratch)

| decoding_method | Cantonese(CER) | mongolian(WER) |
|:-------------------:|:----:|:----:|
| ctc_greedy_search | 61.43 | 89.38 |
| ctc_prefix_search | 61.37 | 89.53 |
| attention_rescoring | 60.61 | 89.60 |
thaind/layoutlmv2-jaen-gemai
thaind
2022-03-31T08:13:42Z
4
0
transformers
[ "transformers", "pytorch", "layoutlmv2", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-31T07:38:07Z
This model is fine-tuned from the LayoutLMv2 model for the Japanese and English languages.
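A minimal token-classification sketch for the model described above. Both the processor choice (the base `microsoft/layoutlmv2-base-uncased` processor, which runs Tesseract OCR on the page image) and the label handling are assumptions, since the card documents neither:

```python
import torch
from PIL import Image
from transformers import LayoutLMv2ForTokenClassification, LayoutLMv2Processor

# The processor requires pytesseract; the LayoutLMv2 model itself needs detectron2.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("thaind/layoutlmv2-jaen-gemai")

image = Image.open("document_page.png").convert("RGB")  # placeholder scan
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print(predictions)  # label ids per token; the names depend on the fine-tuned config
```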
snehatyagi/wav2vec2_test
snehatyagi
2022-03-31T07:21:45Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-26T09:11:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_test This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 91.1661 - Wer: 0.5714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 11.9459 | 100.0 | 100 | 46.9901 | 1.0 | | 3.2175 | 200.0 | 200 | 73.0950 | 1.0 | | 1.8117 | 300.0 | 300 | 78.4884 | 0.6735 | | 1.3694 | 400.0 | 400 | 84.0168 | 0.6327 | | 1.1392 | 500.0 | 500 | 85.2083 | 0.5918 | | 0.979 | 600.0 | 600 | 88.9109 | 0.5918 | | 0.8917 | 700.0 | 700 | 89.0310 | 0.5918 | | 0.8265 | 800.0 | 800 | 90.0659 | 0.6122 | | 0.769 | 900.0 | 900 | 91.8476 | 0.5714 | | 0.7389 | 1000.0 | 1000 | 91.1661 | 0.5714 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.6
unjustify/autotrain-commonsence-689620825
unjustify
2022-03-31T06:38:08Z
7
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain", "en", "dataset:unjustify/autotrain-data-commonsence", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-31T06:18:51Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - unjustify/autotrain-data-commonsence co2_eq_emissions: 20.656741915705204 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 689620825 - CO2 Emissions (in grams): 20.656741915705204 ## Validation Metrics - Loss: 0.7315372824668884 - Accuracy: 0.6354949675117849 - Precision: 0.63792194092827 - Recall: 0.6191451241361658 - AUC: 0.6912165223485615 - F1: 0.6283932978308872 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-commonsence-689620825 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
lazyturtl/roomclassifier
lazyturtl
2022-03-31T01:09:57Z
2,692
16
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-31T01:09:48Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: roomclassifier results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9402984976768494 --- # roomclassifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Bathroom ![Bathroom](images/Bathroom.jpg) #### Bedroom ![Bedroom](images/Bedroom.jpg) #### DinningRoom ![DinningRoom](images/DinningRoom.jpg) #### Kitchen ![Kitchen](images/Kitchen.jpg) #### Laundry room ![Laundry room](images/Laundry_room.jpg) #### Livingroom ![Livingroom](images/Livingroom.jpg)
michiyasunaga/BioLinkBERT-large
michiyasunaga
2022-03-31T00:54:57Z
4,470
33
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "exbert", "linkbert", "biolinkbert", "fill-mask", "question-answering", "text-classification", "token-classification", "en", "dataset:pubmed", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2022-03-08T06:20:38Z
--- license: apache-2.0 language: en datasets: - pubmed tags: - bert - exbert - linkbert - biolinkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification widget: - text: "Sunitinib is a tyrosine kinase inhibitor" --- ## BioLinkBERT-large BioLinkBERT-large model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-large') model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-large') inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art. 
| | BLURB score | PubMedQA | BioASQ | MedQA-USMLE |
| ---------------------- | -------- | -------- | ------- | -------- |
| PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 |
| **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** |
| **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** |

| | MMLU-professional medicine |
| ---------------------- | -------- |
| GPT-3 (175B params) | 38.7 |
| UnifiedQA (11B params) | 43.2 |
| **BioLinkBERT-large (340M params)** | **50.7** |

## Citation

If you find LinkBERT useful in your project, please cite the following:

```bibtex
@InProceedings{yasunaga2022linkbert,
  author =  {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
  title =   {LinkBERT: Pretraining Language Models with Document Links},
  year =    {2022},
  booktitle = {Association for Computational Linguistics (ACL)},
}
```
michiyasunaga/BioLinkBERT-base
michiyasunaga
2022-03-31T00:51:21Z
6,225
36
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "exbert", "linkbert", "biolinkbert", "fill-mask", "question-answering", "text-classification", "token-classification", "en", "dataset:pubmed", "arxiv:2203.15827", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2022-03-08T07:22:12Z
--- license: apache-2.0 language: en datasets: - pubmed tags: - bert - exbert - linkbert - biolinkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification widget: - text: "Sunitinib is a tyrosine kinase inhibitor" --- ## BioLinkBERT-base BioLinkBERT-base model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-base') model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-base') inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art. 
|                        | BLURB score | PubMedQA | BioASQ | MedQA-USMLE |
| ---------------------- | -------- | -------- | ------- | -------- |
| PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 |
| **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** |
| **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** |

|                        | MMLU-professional medicine |
| ---------------------- | -------- |
| GPT-3 (175B params) | 38.7 |
| UnifiedQA (11B params) | 43.2 |
| **BioLinkBERT-large (340M params)** | **50.7** |

## Citation

If you find LinkBERT useful in your project, please cite the following:

```bibtex
@InProceedings{yasunaga2022linkbert,
  author =  {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
  title =   {LinkBERT: Pretraining Language Models with Document Links},
  year =    {2022},
  booktitle = {Association for Computational Linguistics (ACL)},
}
```
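As a complement to the feature-extraction snippet above, the encoder can also be loaded with a task head for fine-tuning. Below is a minimal sketch for sequence classification; the two-label head and the example sentence are illustrative assumptions, and the head weights are freshly initialized, so the outputs are meaningless until the model is fine-tuned.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch: wrap the pretrained encoder with a new 2-way classification head.
# The head is randomly initialized; fine-tune before trusting any predictions.
tokenizer = AutoTokenizer.from_pretrained("michiyasunaga/BioLinkBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "michiyasunaga/BioLinkBERT-base", num_labels=2
)

inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```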
GleamEyeBeast/ascend_with_english
GleamEyeBeast
2022-03-30T23:35:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:timit_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-30T22:09:15Z
--- tags: - generated_from_trainer datasets: - timit_asr model-index: - name: ascend_with_english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ascend_with_english This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on the timit_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.3049 - Wer: 0.2251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 289 | 0.3524 | 0.3016 | | 0.4246 | 2.0 | 578 | 0.3132 | 0.2607 | | 0.4246 | 3.0 | 867 | 0.3044 | 0.2373 | | 0.2008 | 4.0 | 1156 | 0.3075 | 0.2302 | | 0.2008 | 5.0 | 1445 | 0.3049 | 0.2251 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
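This trainer-generated card does not include inference code. As a hedged sketch, a fine-tuned wav2vec 2.0 CTC checkpoint like this can usually be run through the automatic-speech-recognition pipeline; the audio path below is a placeholder for a 16 kHz mono recording, not something from the card.

```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file with the fine-tuned checkpoint.
# "speech.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="GleamEyeBeast/ascend_with_english")
result = asr("speech.wav")
print(result["text"])
```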
yinde/fatimah_fake_news_bert
yinde
2022-03-30T22:41:12Z
16
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T20:54:21Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fatimah_fake_news_bert
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# fatimah_fake_news_bert

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the Fake and Real News dataset from Kaggle.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 0.9998

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3298        | 0.06  | 200  | 0.0094          | 0.9987   |
| 0.0087        | 0.11  | 400  | 0.0091          | 0.9988   |
| 0.0126        | 0.17  | 600  | 0.0132          | 0.9965   |
| 0.0081        | 0.22  | 800  | 0.0100          | 0.9987   |
| 0.0132        | 0.28  | 1000 | 0.0086          | 0.9990   |
| 0.0131        | 0.33  | 1200 | 0.0070          | 0.9986   |
| 0.0086        | 0.39  | 1400 | 0.0079          | 0.9990   |
| 0.0041        | 0.45  | 1600 | 0.0057          | 0.9991   |
| 0.0069        | 0.5   | 1800 | 0.0083          | 0.9989   |
| 0.0052        | 0.56  | 2000 | 0.0043          | 0.9993   |
| 0.0           | 0.61  | 2200 | 0.0047          | 0.9993   |
| 0.003         | 0.67  | 2400 | 0.0052          | 0.9994   |
| 0.0126        | 0.72  | 2600 | 0.0028          | 0.9997   |
| 0.0047        | 0.78  | 2800 | 0.0018          | 0.9996   |
| 0.0           | 0.84  | 3000 | 0.0027          | 0.9996   |
| 0.0001        | 0.89  | 3200 | 0.0029          | 0.9996   |
| 0.0079        | 0.95  | 3400 | 0.0010          | 0.9998   |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
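The card does not show how to run the classifier. A quick way to try it is the text-classification pipeline; the headline below is an invented example, and the label names returned depend on the id2label mapping stored in the model's config.

```python
from transformers import pipeline

# Minimal sketch: classify a news snippet; the input text is illustrative only.
classifier = pipeline("text-classification", model="yinde/fatimah_fake_news_bert")
print(classifier("Scientists announce a breakthrough in renewable energy storage."))
```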
UBC-NLP/MARBERTv2
UBC-NLP
2022-03-30T21:52:31Z
3,124
8
transformers
[ "transformers", "pytorch", "tf", "bert", "fill-mask", "Arabic BERT", "MSA", "Twitter", "Masked Langauge Model", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- ar
tags:
- Arabic BERT
- MSA
- Twitter
- Masked Langauge Model
widget:
- text: "اللغة العربية هي لغة [MASK]."
---

<img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/>

**MARBERTv2** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://aclanthology.org/2021.acl-long.551.pdf)**.

We find that results with ARBERT and MARBERT on QA are not competitive, a clear discrepancy from what we have observed thus far on other tasks. We hypothesize this is because the two models are pre-trained with a sequence length of only 128, which does not allow them to sufficiently capture both a question and its likely answer within the same sequence window during pre-training. To rectify this, we further pre-train the stronger model, MARBERT, on the same MSA data as ARBERT in addition to the AraNews dataset, but with a larger sequence length of 512 tokens for 40 epochs. We call this further pre-trained model **MARBERTv2**, noting it has seen **29B tokens**. MARBERTv2 achieves the best performance on all but one test set, where XLM-R Large marginally outperforms it (only in F1).

For more information, please visit our own GitHub [repo](https://github.com/UBC-NLP/marbert).

# BibTex

If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):

```bibtex
@inproceedings{abdul-mageed-etal-2021-arbert,
    title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
    author = "Abdul-Mageed, Muhammad  and
      Elmadany, AbdelRahim  and
      Nagoudi, El Moatez Billah",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.551",
    doi = "10.18653/v1/2021.acl-long.551",
    pages = "7088--7105",
    abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large (~3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
```

## Acknowledgments

We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
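Since MARBERTv2 is a masked language model, a quick smoke test is the fill-mask pipeline with the widget sentence from the card; a minimal sketch:

```python
from transformers import pipeline

# Minimal sketch: query the MLM head with the card's widget sentence.
unmasker = pipeline("fill-mask", model="UBC-NLP/MARBERTv2")
for prediction in unmasker("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```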
mrm8488/biomedtra-small-es
mrm8488
2022-03-30T21:07:50Z
3
2
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "pretraining", "Spanish", "Electra", "Bio", "Medical", "es", "dataset:cowese", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: es
tags:
- Spanish
- Electra
- Bio
- Medical
datasets:
- cowese
---

## 🦠 BIOMEDtra 🏥

**BIOMEDtra** (small) is an ELECTRA-like model (the discriminator, in this case) trained on the [Spanish Biomedical Crawled Corpus](https://zenodo.org/record/5510033#.Yhdk1ZHMLJx).

As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).

## Training details

The model was trained using the Electra base code for 3 days on 1 GPU (Tesla V100 16GB).

## Dataset details

The largest Spanish biomedical and health corpus to date was gathered with a massive Spanish health-domain crawler: more than 3,000 URLs were downloaded and preprocessed. The collected data were processed into **CoWeSe** (Corpus Web Salud Español), a large-scale, high-quality corpus intended for biomedical and health NLP in Spanish.

## Model details ⚙

|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M |

## Evaluation metrics (for discriminator) 🧾

|Metric | # Score |
|-------|---------|
|Accuracy| 0.9561|
|Precision| 0.808|
|Recall | 0.531 |
|AUC | 0.949|

## Benchmarks 🔨

WIP 🚧

## How to use the discriminator in `transformers`

```py
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("mrm8488/biomedtra-small-es")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/biomedtra-small-es")

sentence = "Los españoles tienden a sufrir déficit de vitamina c"
fake_sentence = "Los españoles tienden a déficit sufrir de vitamina c"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# Print each token next to the discriminator's real(0)/fake(1) prediction for it,
# skipping the [CLS]/[SEP] positions so the columns line up with fake_tokens.
[print("%7s" % token, end="") for token in fake_tokens]
print()
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()[1:-1]]
```

## Acknowledgments

TBA

## Citation

If you want to cite this model you can use this:

```bibtex
@misc{mromero2022biomedtra,
  title={Spanish BioMedical Electra (small)},
  author={Romero, Manuel},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/mrm8488/biomedtra-small-es}},
  year={2022}
}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
vlsb/autotrain-security-text-classification-albert-688320769
vlsb
2022-03-30T20:59:32Z
15
2
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autotrain", "unk", "dataset:vlsb/autotrain-data-security-text-classification-albert", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T20:55:59Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - vlsb/autotrain-data-security-text-classification-albert co2_eq_emissions: 3.670416179055797 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 688320769 - CO2 Emissions (in grams): 3.670416179055797 ## Validation Metrics - Loss: 0.3046899139881134 - Accuracy: 0.8826530612244898 - Precision: 0.9181818181818182 - Recall: 0.8782608695652174 - AUC: 0.9423510466988727 - F1: 0.8977777777777778 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-text-classification-albert-688320769 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
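The Python example in the card stops at the raw model outputs. Below is a hedged continuation of that example showing how the logits are typically turned into a label via softmax and the config's id2label mapping; it assumes the AutoTrain checkpoint stores that mapping, as AutoTrain models usually do.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hedged sketch: map the classifier's logits to a human-readable label.
model = AutoModelForSequenceClassification.from_pretrained(
    "vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(
    "vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probabilities = torch.softmax(logits, dim=-1)[0]
predicted_id = int(probabilities.argmax())
print(model.config.id2label[predicted_id], float(probabilities[predicted_id]))
```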
mrm8488/longformer-base-4096-spanish
mrm8488
2022-03-30T20:36:36Z
49
16
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "Long documents", "longformer", "bertin", "spanish", "es", "dataset:spanish_large_corpus", "arxiv:2004.05150", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - es license: mit widget: - text: "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos." tags: - Long documents - longformer - bertin - spanish datasets: - spanish_large_corpus --- # longformer-base-4096-spanish ## [Longformer](https://arxiv.org/abs/2004.05150) is a Transformer model for long documents. `longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint (**BERTIN** in this case) and pre-trained for *MLM* on long documents (from BETO's `all_wikis`). It supports sequences of length up to 4,096! **Longformer** uses a combination of a sliding window (*local*) attention and *global* attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations. This model was made following the research done by [Iz Beltagy and Matthew E. Peters and Arman Cohan](https://arxiv.org/abs/2004.05150). ## Citation If you want to cite this model you can use this: ```bibtex @misc{mromero2022longformer-base-4096-spanish, title={Spanish LongFormer by Manuel Romero}, author={Romero, Manuel}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/mrm8488/longformer-base-4096-spanish}}, year={2022} } ```
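Since the checkpoint was pre-trained for MLM, the quickest way to poke at it is the fill-mask pipeline with the card's widget sentence; the sketch below is a minimal smoke test (fine-tuning on long documents would follow the usual RoBERTa recipes).

```python
from transformers import pipeline

# Minimal sketch: fill the <mask> token in the card's widget sentence.
unmasker = pipeline("fill-mask", model="mrm8488/longformer-base-4096-spanish")
text = "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos."
for prediction in unmasker(text):
    print(prediction["token_str"], round(prediction["score"], 3))
```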
sc2qa/msmarco_qa_classifier
sc2qa
2022-03-30T18:33:34Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "arxiv:2109.04689", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
For details, please refer to the following links. Github repo: https://github.com/amazon-research/SC2QA-DRIL Paper: [Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning](https://arxiv.org/pdf/2109.04689.pdf)
SAGAR4REAL/wav2vec2hindiasr
SAGAR4REAL
2022-03-30T17:32:46Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-30T14:51:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2hindiasr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2hindiasr This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
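The card does not include inference code. Below is a hedged sketch of the usual wav2vec 2.0 greedy CTC decoding loop; the use of librosa, the placeholder file path, and the assumption that this repo ships processor/tokenizer files are ours, not the card's.

```python
import librosa
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hedged sketch: greedy CTC decoding; "speech.wav" is a placeholder 16 kHz mono recording.
speech, _ = librosa.load("speech.wav", sr=16_000)

processor = Wav2Vec2Processor.from_pretrained("SAGAR4REAL/wav2vec2hindiasr")
model = Wav2Vec2ForCTC.from_pretrained("SAGAR4REAL/wav2vec2hindiasr")

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```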
hoangbinhmta99/wav2vec-demo
hoangbinhmta99
2022-03-30T17:18:48Z
9
2
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
Convert a fairseq wav2vec 2.0 `.pt` checkpoint to the Transformers format.
Link: https://huggingface.co/tommy19970714/wav2vec2-base-960h

Bash:
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt -O ./dict/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
    --pytorch_dump_folder_path ./outputs \
    --checkpoint_path ./wav2vec_small.pt \
    --dict_path ./dict/dict.ltr.txt \
    --not_finetuned
```

# Install git-lfs and upload the model

```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo
cd wav2vec-demo/
git status
git add .
git config --global user.email [your email]
git config --global user.name [your name]
git commit -m "First model version"
git push
```
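To sanity-check the conversion, the dumped folder can be loaded back with the Auto classes; this is a hedged sketch (the exact head class written to `./outputs` depends on the conversion flags used above).

```python
from transformers import AutoConfig, AutoModel

# Hedged sketch: inspect the checkpoint the conversion script wrote to ./outputs.
config = AutoConfig.from_pretrained("./outputs")
model = AutoModel.from_pretrained("./outputs")
print(config.model_type, sum(p.numel() for p in model.parameters()), "parameters")
```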
scasutt/wav2vec2-base_toy_train_data_random_high_pass
scasutt
2022-03-30T16:37:23Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-30T13:17:36Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_high_pass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_high_pass This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2841 - Wer: 0.7222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.061 | 2.1 | 500 | 3.0551 | 1.0 | | 1.1294 | 4.2 | 1000 | 1.3102 | 0.8777 | | 0.7051 | 6.3 | 1500 | 1.2081 | 0.8092 | | 0.5421 | 8.4 | 2000 | 1.2280 | 0.7684 | | 0.448 | 10.5 | 2500 | 1.2459 | 0.7506 | | 0.3777 | 12.6 | 3000 | 1.3533 | 0.7631 | | 0.3611 | 14.7 | 3500 | 1.2058 | 0.7291 | | 0.3177 | 16.81 | 4000 | 1.3168 | 0.7185 | | 0.279 | 18.91 | 4500 | 1.2841 | 0.7222 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
facebook/stylenerf-ffhq-config-basic
facebook
2022-03-30T14:59:16Z
0
2
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2022-03-20T23:34:44Z
--- license: cc-by-nc-4.0 --- ## StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis **Abstract:** *We propose StyleNeRF, a 3D-aware generative model for photo-realistic high-resolution image synthesis with high multi-view consistency, which can be trained on unstructured 2D images. Existing approaches either cannot synthesize high-resolution images with fine details or yield noticeable 3D-inconsistent artifacts. In addition, many of them lack control over style attributes and explicit 3D camera poses. StyleNeRF integrates the neural radiance field (NeRF) into a style-based generator to tackle the aforementioned challenges, i.e., improving rendering efficiency and 3D consistency for high-resolution image generation. We perform volume rendering only to produce a low-resolution feature map and progressively apply upsampling in 2D to address the first issue. To mitigate the inconsistencies caused by 2D upsampling, we propose multiple designs, including a better upsampler and a new regularization loss. With these designs, StyleNeRF can synthesize high-resolution images at interactive rates while preserving 3D consistency at high quality. StyleNeRF also enables control of camera poses and different levels of styles, which can generalize to unseen views. It also supports challenging tasks, including zoom-in and-out, style mixing, inversion, and semantic editing.* ## Model description This is a pre-trained StyleNeRF checkpoint at a resolution of 512^2 based on the basic configuration used in the original paper. ## How to use Please check the official opensource code at [here](https://github.com/facebookresearch/StyleNeRF).
manu/lilt-camembert-base
manu
2022-03-30T14:49:30Z
5
1
transformers
[ "transformers", "pytorch", "liltrobertalike", "fill-mask", "token-classification", "fr", "dataset:iit-cdip", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-28T13:16:58Z
---
language:
- fr
tags:
- token-classification
- fill-mask
license: mit
datasets:
- iit-cdip
---

This model is the combined camembert-base model, with the pretrained LiLT checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding".

Original repository: https://github.com/jpWang/LiLT

To use it, it is necessary to fork the modeling and configuration files from the original repository, and load the pretrained model from the corresponding classes (LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel). They can also be preloaded with the AutoConfig/model factories as such:

```python
from transformers import AutoConfig, AutoModel, AutoModelForTokenClassification, AutoTokenizer
from path_to_custom_classes import (
    LiLTRobertaLikeConfig,
    LiLTRobertaLikeForRelationExtraction,
    LiLTRobertaLikeForTokenClassification,
    LiLTRobertaLikeModel
)


def patch_transformers():
    AutoConfig.register("liltrobertalike", LiLTRobertaLikeConfig)
    AutoModel.register(LiLTRobertaLikeConfig, LiLTRobertaLikeModel)
    AutoModelForTokenClassification.register(LiLTRobertaLikeConfig, LiLTRobertaLikeForTokenClassification)
    # etc...
```

To load the model, it is then possible to use:

```python
# patch_transformers() must have been executed beforehand
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModel.from_pretrained("manu/lilt-camembert-base")
model = AutoModelForTokenClassification.from_pretrained("manu/lilt-camembert-base")  # to be fine-tuned on a token classification task
```
GioReg/ita1
GioReg
2022-03-30T14:42:06Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-28T20:17:13Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: ita1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ita1 This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5892 - Accuracy: 0.776 - F1: 0.5912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
abdusah/aradia-ctc-v1
abdusah
2022-03-30T13:48:41Z
23
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "abdusahmbzuai/arabic_speech_massive_300hrs", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-23T10:58:05Z
--- tags: - automatic-speech-recognition - abdusahmbzuai/arabic_speech_massive_300hrs - generated_from_trainer model-index: - name: aradia-ctc-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aradia-ctc-v1 This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.7171 - Wer: 0.3336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.22 | 100 | 5.1889 | 1.0 | | No log | 0.43 | 200 | 3.1129 | 1.0 | | No log | 0.65 | 300 | 3.0503 | 1.0 | | No log | 0.87 | 400 | 3.0279 | 1.0 | | 6.2756 | 1.09 | 500 | 2.9965 | 1.0 | | 6.2756 | 1.3 | 600 | 2.3618 | 0.9993 | | 6.2756 | 1.52 | 700 | 1.2715 | 0.8758 | | 6.2756 | 1.74 | 800 | 0.9971 | 0.7156 | | 6.2756 | 1.96 | 900 | 0.8927 | 0.6382 | | 1.712 | 2.17 | 1000 | 0.8252 | 0.5926 | | 1.712 | 2.39 | 1100 | 0.7794 | 0.5434 | | 1.712 | 2.61 | 1200 | 0.7557 | 0.5092 | | 1.712 | 2.83 | 1300 | 0.7347 | 0.5203 | | 1.712 | 3.04 | 1400 | 0.7189 | 0.4929 | | 0.9305 | 3.26 | 1500 | 0.6820 | 0.4595 | | 0.9305 | 3.48 | 1600 | 0.6792 | 0.4504 | | 0.9305 | 3.69 | 1700 | 0.6596 | 0.4442 | | 0.9305 | 3.91 | 1800 | 0.6756 | 0.4432 | | 0.9305 | 4.13 | 1900 | 0.6663 | 0.4392 | | 0.737 | 4.35 | 2000 | 0.6479 | 0.4372 | | 0.737 | 4.56 | 2100 | 0.6353 | 0.4203 | | 0.737 | 4.78 | 2200 | 0.6251 | 0.4088 | | 0.737 | 5.0 | 2300 | 0.6209 | 0.4177 | | 0.737 | 5.22 | 2400 | 0.6639 | 0.4094 | | 0.6247 | 5.43 | 2500 | 0.6408 | 0.3970 | | 0.6247 | 5.65 | 2600 | 0.6373 | 0.3932 | | 0.6247 | 5.87 | 2700 | 0.6411 | 0.3928 | | 0.6247 | 6.09 | 2800 | 0.6378 | 0.3897 | | 0.6247 | 6.3 | 2900 | 0.6396 | 0.3929 | | 0.5443 | 6.52 | 3000 | 0.6544 | 0.3864 | | 0.5443 | 6.74 | 3100 | 0.6218 | 0.3786 | | 0.5443 | 6.96 | 3200 | 0.6200 | 0.3784 | | 0.5443 | 7.17 | 3300 | 0.6157 | 0.3791 | | 0.5443 | 7.39 | 3400 | 0.6317 | 0.3798 | | 0.4845 | 7.61 | 3500 | 0.6540 | 0.3771 | | 0.4845 | 7.83 | 3600 | 0.6436 | 0.3670 | | 0.4845 | 8.04 | 3700 | 0.6335 | 0.3695 | | 0.4845 | 8.26 | 3800 | 0.6579 | 0.3610 | | 0.4845 | 8.48 | 3900 | 0.6170 | 0.3613 | | 0.4279 | 8.69 | 4000 | 0.6523 | 0.3617 | | 0.4279 | 8.91 | 4100 | 0.6349 | 0.3577 | | 0.4279 | 9.13 | 4200 | 0.6344 | 0.3673 | | 0.4279 | 9.35 | 4300 | 0.6215 | 0.3641 | | 0.4279 | 9.56 | 4400 | 0.6513 | 0.3608 | | 0.3825 | 9.78 | 4500 | 0.6386 | 0.3605 | | 0.3825 | 10.0 | 4600 | 0.6724 | 0.3549 | | 0.3825 | 10.22 | 4700 | 0.6776 | 0.3602 | | 0.3825 | 10.43 | 4800 | 0.6739 | 0.3544 | | 0.3825 | 10.65 | 4900 | 0.6688 | 0.3557 | | 0.3477 | 10.87 | 5000 | 0.6674 | 0.3564 | | 0.3477 | 11.09 | 5100 | 0.6786 | 
0.3476 | | 0.3477 | 11.3 | 5200 | 0.6818 | 0.3478 | | 0.3477 | 11.52 | 5300 | 0.6874 | 0.3470 | | 0.3477 | 11.74 | 5400 | 0.6993 | 0.3424 | | 0.3101 | 11.96 | 5500 | 0.6950 | 0.3404 | | 0.3101 | 12.17 | 5600 | 0.6872 | 0.3406 | | 0.3101 | 12.39 | 5700 | 0.6846 | 0.3424 | | 0.3101 | 12.61 | 5800 | 0.7051 | 0.3405 | | 0.3101 | 12.83 | 5900 | 0.7051 | 0.3378 | | 0.2859 | 13.04 | 6000 | 0.6955 | 0.3403 | | 0.2859 | 13.26 | 6100 | 0.7115 | 0.3390 | | 0.2859 | 13.48 | 6200 | 0.7074 | 0.3384 | | 0.2859 | 13.69 | 6300 | 0.7002 | 0.3376 | | 0.2859 | 13.91 | 6400 | 0.7171 | 0.3360 | | 0.2714 | 14.13 | 6500 | 0.7193 | 0.3341 | | 0.2714 | 14.35 | 6600 | 0.7132 | 0.3347 | | 0.2714 | 14.56 | 6700 | 0.7184 | 0.3353 | | 0.2714 | 14.78 | 6800 | 0.7171 | 0.3331 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
javilonso/classificationEsp3_Attraction
javilonso
2022-03-30T12:09:19Z
5
0
transformers
[ "transformers", "tf", "gpt2", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T11:07:40Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: javilonso/classificationEsp3_Attraction results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # javilonso/classificationEsp3_Attraction This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-base-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0055 - Validation Loss: 0.0515 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0964 | 0.0662 | 0 | | 0.0265 | 0.0500 | 1 | | 0.0055 | 0.0515 | 2 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Datasets 2.0.0 - Tokenizers 0.11.6
joe5campbell/Horovod_Tweet_Sentiment_1K_4eps
joe5campbell
2022-03-30T11:38:32Z
5
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-24T12:35:50Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Horovod_Tweet_Sentiment_1K_4eps results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Horovod_Tweet_Sentiment_1K_4eps This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6803332 - Train Accuracy: 0.57187504 - Validation Loss: 0.6883397 - Validation Accuracy: 0.54375 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 0.0003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.70931095 | 0.5078125 | 0.81717503 | 0.528125 | 0 | | 0.77384466 | 0.5296875 | 0.68696874 | 0.51875 | 1 | | 0.68944424 | 0.53125 | 0.6837756 | 0.53125 | 2 | | 0.6803332 | 0.57187504 | 0.6883397 | 0.54375 | 3 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Tokenizers 0.11.6
jeniakim/hedgehog
jeniakim
2022-03-30T09:27:38Z
51
2
transformers
[ "transformers", "pytorch", "bert", "token-classification", "en", "license:mit", "autotrain_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: en license: mit inference: false --- 🦔 HEDGEhog 🦔: BERT-based multi-class uncertainty cues recognition ==================================================================== # Description A fine-tuned multi-class classification model that detects four different types of uncertainty cues (a.k.a hedges) on a token level. # Uncertainty types label | type | description | example ---| ---| ---| --- E | Epistemic | The proposition is possible, but its truth-value cannot be decided at the moment. | She **may** be already asleep. I | Investigation | The proposition is in the process of having its truth-value determined. | She **examined** the role of NF-kappaB in protein activation. D | Doxatic | The proposition expresses beliefs and hypotheses, which may be known as true or false by others. | She **believes** that the Earth is flat. N | Condition | The proposition is true or false based on the truth-value of another proposition. | **If** she gets the job, she will move to Utrecht. C | *certain* | *n/a* | *n/a* # Intended uses and limitations - The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled. # How to use To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library: ``` from simpletransformers.ner import NERModel model = NERModel( 'bert', 'jeniakim/hedgehog', use_cuda=False, labels=["C", "D", "E", "I", "N"], ) example = "As much as I definitely enjoy solitude, I wouldn't mind perhaps spending little time with you (Björk)" predictions, raw_outputs = model.predict([example]) ``` The predictions look like this: ``` [[{'As': 'C'}, {'much': 'C'}, {'as': 'C'}, {'I': 'C'}, {'definitely': 'C'}, {'enjoy': 'C'}, {'solitude,': 'C'}, {'I': 'C'}, {"wouldn't": 'C'}, {'mind': 'C'}, {'perhaps': 'E'}, {'spending': 'C'}, {'little': 'C'}, {'time': 'C'}, {'with': 'C'}, {'you': 'C'}, {'(Björk)': 'C'}]] ``` In other words, the token 'perhaps' is recognized as an **epistemic uncertainty cue** and all the other tokens are not uncertainty cues. # Training Data HEDGEhog is trained and evaluated on the [Szeged Uncertainty Corpus](https://rgai.inf.u-szeged.hu/node/160) (Szarvas et al. 2012<sup>1</sup>). The original sentence-level XML version of this dataset is available [here](https://rgai.inf.u-szeged.hu/node/160). The token-level version that was used for the training can be downloaded from [here](https://1drv.ms/u/s!AvPkt_QxBozXk7BiazucDqZkVxLo6g?e=IisuM6) in a form of pickled pandas DataFrame's. You can download either the split sets (```train.pkl``` 137MB, ```test.pkl``` 17MB, ```dev.pkl``` 17MB) or the full dataset (```szeged_fixed.pkl``` 172MB). Each row in the df contains a token, its features (these are not relevant for HEDGEhog; they were used to train the baseline CRF model, see [here](https://github.com/vanboefer/uncertainty_crf)), its sentence ID, and its label. 
# Training Procedure The following training parameters were used: - Optimizer: AdamW - Learning rate: 4e-5 - Num train epochs: 1 - Train batch size: 16 # Evaluation Results class | precision | recall | F1-score | support ---|---|---|---|--- Epistemic | 0.90 | 0.85 | 0.88 | 624 Doxatic | 0.88 | 0.92 | 0.90 | 142 Investigation | 0.83 | 0.86 | 0.84 | 111 Condition | 0.85 | 0.87 | 0.86 | 86 Certain | 1.00 | 1.00 | 1.00 | 104,751 **macro average** | **0.89** | **0.90** | **0.89** | 105,714 # References <sup>1</sup> Szarvas, G., Vincze, V., Farkas, R., Móra, G., & Gurevych, I. (2012). Cross-genre and cross-domain detection of semantic uncertainty. *Computational Linguistics, 38*(2), 335-367.
Peltarion/xlm-roberta-longformer-base-4096
Peltarion
2022-03-30T09:23:58Z
75
8
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "longformer", "multilingual", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
tags:
- longformer
language: multilingual
license: apache-2.0
datasets:
- wikitext
---

## XLM-R Longformer Model

XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus.

The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model came as a result of a master thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r).

Since both XLM-R and Longformer are large models, it is recommended to run them with NVIDIA Apex (16-bit precision), a large GPU, and several gradient accumulation steps.

## How to Use

The model can be used as expected to fine-tune on a downstream task. For instance for QA:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MAX_SEQUENCE_LENGTH = 4096
MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    max_length=MAX_SEQUENCE_LENGTH,
    padding="max_length",
    truncation=True,
)

model = AutoModelForQuestionAnswering.from_pretrained(
    MODEL_NAME_OR_PATH,
    max_length=MAX_SEQUENCE_LENGTH,
)
```

## Training Procedure

The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information.

```sh
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
unzip wikitext-103-raw-v1.zip
export DATA_DIR=./wikitext-103-raw

scripts/run_long_lm.py \
    --model_name_or_path xlm-roberta-base \
    --model_name xlm-roberta-to-longformer \
    --output_dir ./output \
    --logging_dir ./logs \
    --val_file_path $DATA_DIR/wiki.valid.raw \
    --train_file_path $DATA_DIR/wiki.train.raw \
    --seed 42 \
    --max_pos 4096 \
    --adam_epsilon 1e-8 \
    --warmup_steps 500 \
    --learning_rate 3e-5 \
    --weight_decay 0.01 \
    --max_steps 6000 \
    --evaluate_during_training \
    --logging_steps 50 \
    --eval_steps 50 \
    --save_steps 6000 \
    --max_grad_norm 1.0 \
    --per_device_eval_batch_size 2 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 64 \
    --overwrite_output_dir \
    --fp16 \
    --do_train \
    --do_eval
```
Aureliano/electra-if
Aureliano
2022-03-30T09:07:27Z
6
0
transformers
[ "transformers", "pytorch", "tf", "electra", "feature-extraction", "en", "arxiv:1406.2661", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-11T15:40:21Z
--- language: en license: apache-2.0 --- ## ELECTRA for IF **ELECTRA** is a method for self-supervised language representation learning. They are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). For a detailed description and experimental results, please refer to the original paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains a small ELECTRA discriminator finetuned on a corpus of interactive fiction commands labelled with the WordNet synset offset of the verb in the sentence. The original dataset has been collected from the list of action in the walkthroughs for the game included in the [Jericho](https://github.com/microsoft/jericho) framework and manually annotated. For more information visit https://github.com/aporporato/electra and https://github.com/aporporato/jericho-corpora. ## How to use the discriminator in `transformers` (Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb) ```python import math import numpy as np import tensorflow as tf from datasets import load_metric, Dataset, DatasetDict from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer from transformers.keras_callbacks import KerasMetricCallback # This example shows how this model can be used: # you should finetune the model of your specific corpus if commands, bigger than this dict_train = { "idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20"], "sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book", "inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich", "drop sandwich", "x sandwich", "agin"], "label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"] } dict_val = { "idx": ["0", "1", "2", "3", "4", "5"], "sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"], "label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"] } raw_train_dataset = Dataset.from_dict(dict_train) raw_val_dataset = Dataset.from_dict(dict_val) raw_dataset = DatasetDict() raw_dataset["train"] = raw_train_dataset raw_dataset["val"] = raw_val_dataset raw_dataset = raw_dataset.class_encode_column("label") print(raw_dataset) print(raw_dataset["train"].features) print(raw_dataset["val"].features) print(raw_dataset["train"][1]) label2id = {} id2label = {} for i, l in enumerate(raw_dataset["train"].features["label"].names): label2id[l] = i id2label[i] = l discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/electra-if", label2id=label2id, id2label=id2label) tokenizer = AutoTokenizer.from_pretrained("Aureliano/electra-if") tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True) pre_tokenizer_columns = set(raw_dataset["train"].features) encoded_dataset = raw_dataset.map(tokenize_function, batched=True) tokenizer_columns = list(set(encoded_dataset["train"].features) - 
pre_tokenizer_columns) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") batch_size = len(encoded_dataset["train"]) tf_train_dataset = encoded_dataset["train"].to_tf_dataset( columns=tokenizer_columns, label_cols=["labels"], shuffle=True, batch_size=batch_size, collate_fn=data_collator ) tf_validation_dataset = encoded_dataset["val"].to_tf_dataset( columns=tokenizer_columns, label_cols=["labels"], shuffle=False, batch_size=batch_size, collate_fn=data_collator ) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) num_epochs = 25 batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size) total_train_steps = int(batches_per_epoch * num_epochs) optimizer, schedule = create_optimizer( init_lr=5e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps ) metric = load_metric("accuracy") def compute_metrics(eval_predictions): logits, labels = eval_predictions predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset) callbacks = [metric_callback] discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"]) discriminator.fit( tf_train_dataset, epochs=num_epochs, validation_data=tf_validation_dataset, callbacks=callbacks ) print("Evaluate on test data") results = discriminator.evaluate(tf_validation_dataset) print("test loss, test acc:", results) text = "i" encoded_input = tokenizer(text, return_tensors='tf') output = discriminator(encoded_input) prediction = tf.nn.softmax(output["logits"][0], -1) label = id2label[tf.math.argmax(prediction).numpy()] print("\n", text, ":", label, "\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset text = "get lamp" encoded_input = tokenizer(text, return_tensors='tf') output = discriminator(encoded_input) prediction = tf.nn.softmax(output["logits"][0], -1) label = id2label[tf.math.argmax(prediction).numpy()] print("\n", text, ":", label, "\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset text = "w" encoded_input = tokenizer(text, return_tensors='tf') output = discriminator(encoded_input) prediction = tf.nn.softmax(output["logits"][0], -1) label = id2label[tf.math.argmax(prediction).numpy()] print("\n", text, ":", label, "\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset ```
javilonso/classificationPolEsp1
javilonso
2022-03-30T09:02:50Z
3
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T07:49:20Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: javilonso/classificationPolEsp1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # javilonso/classificationPolEsp1 This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3728 - Validation Loss: 0.6217 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6282 | 0.6017 | 0 | | 0.5129 | 0.6177 | 1 | | 0.3728 | 0.6217 | 2 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Datasets 2.0.0 - Tokenizers 0.11.6
neibla/distilbert-base-uncased-finetuned-emotion
neibla
2022-03-30T08:56:26Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T08:22:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9255 - name: F1 type: f1 value: 0.9254917237562972 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2187 - Accuracy: 0.9255 - F1: 0.9255 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.855 | 1.0 | 250 | 0.3211 | 0.905 | 0.9017 | | 0.2561 | 2.0 | 500 | 0.2187 | 0.9255 | 0.9255 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
loulou/distilbert-base-uncased-finetuned-emotion
loulou
2022-03-30T04:57:58Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-22T04:55:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9221931901873676 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2285 - Accuracy: 0.922 - F1: 0.9222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8366 | 1.0 | 250 | 0.3212 | 0.9025 | 0.8990 | | 0.2588 | 2.0 | 500 | 0.2285 | 0.922 | 0.9222 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio
scasutt
2022-03-30T03:35:01Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-29T11:30:40Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53_toy_train_data_masked_audio This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6445 - Wer: 0.4938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3761 | 1.05 | 250 | 3.4022 | 0.9954 | | 3.0858 | 2.1 | 500 | 3.4684 | 0.9954 | | 2.6302 | 3.15 | 750 | 1.7989 | 0.9865 | | 1.1292 | 4.2 | 1000 | 0.8558 | 0.7355 | | 0.8371 | 5.25 | 1250 | 0.7319 | 0.6621 | | 0.5992 | 6.3 | 1500 | 0.6848 | 0.6147 | | 0.5189 | 7.35 | 1750 | 0.6522 | 0.5742 | | 0.454 | 8.4 | 2000 | 0.6601 | 0.5531 | | 0.3896 | 9.45 | 2250 | 0.6138 | 0.5439 | | 0.3678 | 10.5 | 2500 | 0.6436 | 0.5320 | | 0.3232 | 11.55 | 2750 | 0.5920 | 0.5174 | | 0.2926 | 12.6 | 3000 | 0.6615 | 0.5107 | | 0.3041 | 13.65 | 3250 | 0.6311 | 0.5015 | | 0.2882 | 14.7 | 3500 | 0.6182 | 0.5004 | | 0.2868 | 15.75 | 3750 | 0.6266 | 0.4943 | | 0.2508 | 16.81 | 4000 | 0.6587 | 0.4965 | | 0.2563 | 17.86 | 4250 | 0.6634 | 0.4939 | | 0.2213 | 18.91 | 4500 | 0.6441 | 0.4925 | | 0.2255 | 19.96 | 4750 | 0.6445 | 0.4938 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
cammiemw/bert-marco-hdct
cammiemw
2022-03-30T01:21:38Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T01:09:55Z
--- license: cc-by-nc-4.0 ---
DrishtiSharma/poem-gen-spanish-t5-small-v7
DrishtiSharma
2022-03-30T00:34:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T19:14:40Z
--- license: mit tags: - generated_from_trainer model-index: - name: poem-gen-spanish-t5-small-v7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-spanish-t5-small-v7 This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000333 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.1716 | 0.73 | 30000 | 3.1114 | | 2.9666 | 1.46 | 60000 | 3.0271 | | 2.8292 | 2.19 | 90000 | 2.9531 | | 2.7264 | 2.93 | 120000 | 2.9126 | | 2.6057 | 3.66 | 150000 | 2.9175 | | 2.4876 | 4.39 | 180000 | 2.9077 | | 2.3791 | 5.12 | 210000 | 2.9240 | | 2.3515 | 5.85 | 240000 | 2.9169 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
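A minimal generation sketch, assuming the tokenizer is bundled with the checkpoint; the prompt format expected by the underlying Spanish poem generator is not documented here, so the input below is purely illustrative:

```python
from transformers import pipeline

generator = pipeline("text2text-generation",
                     model="DrishtiSharma/poem-gen-spanish-t5-small-v7")
# Illustrative prompt only; adjust it to whatever format the base poem generator expects.
print(generator("poema: la luna sobre el mar", max_length=64, do_sample=True, top_p=0.95))
```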
DrishtiSharma/poem-gen-spanish-t5-small-v6
DrishtiSharma
2022-03-29T23:45:09Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T18:58:46Z
--- license: mit tags: - generated_from_trainer model-index: - name: poem-gen-spanish-t5-small-v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-spanish-t5-small-v6 This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.8551 | 0.73 | 30000 | 2.9296 | | 2.6961 | 1.46 | 60000 | 2.9005 | | 2.5756 | 2.19 | 90000 | 2.8786 | | 2.5095 | 2.93 | 120000 | 2.8621 | | 2.4061 | 3.66 | 150000 | 2.8830 | | 2.3161 | 4.39 | 180000 | 2.8865 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
efederici/sentence-it5-base
efederici
2022-03-29T23:09:01Z
35
4
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "feature-extraction", "sentence-similarity", "transformers", "it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-29T19:57:59Z
--- pipeline_tag: sentence-similarity language: - it tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-IT5-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-base)) base model. It is trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)), tags/news-article pairs, headline/text pairs ([change-it](https://huggingface.co/datasets/gsarti/change_it)) and on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train). ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] model = SentenceTransformer('efederici/sentence-IT5-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-base') model = AutoModel.from_pretrained('efederici/sentence-IT5-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
espnet/bur_openslr80_hubert
espnet
2022-03-29T22:19:50Z
0
0
null
[ "region:us" ]
null
2022-03-28T22:04:54Z
<!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Mar 21 22:59:35 UTC 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.10.1` - Git hash: `7ae4efd81778436a98b822483e8123adba6aa430` - Commit date: `Tue Mar 15 20:11:18 2022 -0400` ## asr_train_asr_hubert_transformer_adam_specaug_raw_bpe150 ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|4227|39.1|50.4|10.5|6.1|67.0|99.8| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|33345|82.2|7.6|10.1|3.6|21.4|99.8| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|18237|70.7|17.7|11.6|2.5|31.8|99.8|
Chikashi/t5-small-finetuned-cnndm_3epoch
Chikashi
2022-03-29T19:28:09Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-29T00:14:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cnn_dailymail metrics: - rouge model-index: - name: t5-small-finetuned-cnndm_3epoch results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cnn_dailymail type: cnn_dailymail args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 24.5435 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-cnndm_3epoch This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6622 - Rouge1: 24.5435 - Rouge2: 11.7919 - Rougel: 20.2929 - Rougelsum: 23.1661 - Gen Len: 18.9996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9113 | 0.14 | 5000 | 1.7162 | 24.4374 | 11.6932 | 20.1741 | 23.0427 | 18.9997 | | 1.8772 | 0.28 | 10000 | 1.7008 | 24.3715 | 11.6699 | 20.1387 | 22.9772 | 18.9997 | | 1.8609 | 0.42 | 15000 | 1.6911 | 24.4174 | 11.6986 | 20.1756 | 23.0205 | 18.9997 | | 1.8564 | 0.56 | 20000 | 1.6871 | 24.4374 | 11.6801 | 20.1663 | 23.0366 | 18.9995 | | 1.8495 | 0.7 | 25000 | 1.6796 | 24.4019 | 11.6901 | 20.177 | 23.034 | 18.999 | | 1.8448 | 0.84 | 30000 | 1.6787 | 24.4813 | 11.7227 | 20.1985 | 23.0847 | 18.999 | | 1.8427 | 0.98 | 35000 | 1.6762 | 24.4905 | 11.7591 | 20.2548 | 23.1006 | 18.9993 | | 1.8341 | 1.11 | 40000 | 1.6747 | 24.4743 | 11.7124 | 20.1782 | 23.0726 | 18.9996 | | 1.822 | 1.25 | 45000 | 1.6753 | 24.4797 | 11.7292 | 20.2319 | 23.0816 | 18.9993 | | 1.8262 | 1.39 | 50000 | 1.6713 | 24.4865 | 11.7079 | 20.2214 | 23.0919 | 18.9986 | | 1.8281 | 1.53 | 55000 | 1.6702 | 24.5095 | 11.7364 | 20.2534 | 23.1264 | 18.9991 | | 1.8228 | 1.67 | 60000 | 1.6678 | 24.5153 | 11.7595 | 20.2544 | 23.1138 | 18.9993 | | 1.824 | 1.81 | 65000 | 1.6662 | 24.5324 | 11.7804 | 20.2671 | 23.1498 | 18.9997 | | 1.8265 | 1.95 | 70000 | 1.6648 | 24.5795 | 11.7917 | 20.2935 | 23.1855 | 18.9992 | | 1.8179 | 2.09 | 75000 | 1.6658 | 24.5426 | 11.804 | 20.2861 | 23.1586 | 18.9996 | | 1.8147 | 2.23 | 80000 | 1.6646 | 24.5429 | 11.7914 | 20.2889 | 23.1542 | 18.9993 | | 1.8026 | 2.37 | 85000 | 1.6632 | 24.5451 | 11.8045 | 20.2781 | 23.1555 | 18.9996 | | 1.8141 | 2.51 | 90000 | 1.6643 | 24.5078 | 11.7781 | 20.2631 | 23.121 | 18.9996 | | 1.8124 | 2.65 | 95000 | 1.6628 | 24.5728 | 11.7958 | 20.2875 | 23.178 | 18.9996 | | 1.8098 | 2.79 | 100000 | 1.6635 | 24.5534 | 11.7998 | 20.2979 | 23.169 | 18.9996 | | 1.8153 | 2.93 | 105000 | 1.6622 | 24.5435 | 11.7919 | 20.2929 | 23.1661 | 18.9996 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
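Since this is a T5 summarizer fine-tuned on cnn_dailymail, a minimal usage sketch (assuming the checkpoint keeps the standard `summarize:` task prefix in its config) could look like:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm_3epoch")

article = (
    "The city council met on Tuesday to debate the new transit plan. "
    "After several hours of discussion, members voted to expand bus service "
    "and to study a light-rail corridor connecting the airport to downtown."
)
print(summarizer(article, max_length=60, min_length=15, do_sample=False))
```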
GleamEyeBeast/ascend
GleamEyeBeast
2022-03-29T16:49:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-29T01:37:59Z
--- tags: - generated_from_trainer model-index: - name: ascend results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ascend This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3718 - Wer: 0.6412 - Cer: 0.2428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 0.5769 | 1.0 | 688 | 1.1864 | 0.7716 | 0.3159 | | 0.5215 | 2.0 | 1376 | 1.1613 | 0.7504 | 0.2965 | | 0.4188 | 3.0 | 2064 | 1.1644 | 0.7389 | 0.2950 | | 0.3695 | 4.0 | 2752 | 1.1937 | 0.7184 | 0.2815 | | 0.3404 | 5.0 | 3440 | 1.1947 | 0.7083 | 0.2719 | | 0.2885 | 6.0 | 4128 | 1.2314 | 0.7108 | 0.2685 | | 0.2727 | 7.0 | 4816 | 1.2243 | 0.6850 | 0.2616 | | 0.2417 | 8.0 | 5504 | 1.2506 | 0.6767 | 0.2608 | | 0.2207 | 9.0 | 6192 | 1.2804 | 0.6922 | 0.2595 | | 0.2195 | 10.0 | 6880 | 1.2582 | 0.6818 | 0.2575 | | 0.1896 | 11.0 | 7568 | 1.3101 | 0.6814 | 0.2545 | | 0.1961 | 12.0 | 8256 | 1.2793 | 0.6706 | 0.2526 | | 0.1752 | 13.0 | 8944 | 1.2643 | 0.6584 | 0.2509 | | 0.1638 | 14.0 | 9632 | 1.3152 | 0.6588 | 0.2482 | | 0.1522 | 15.0 | 10320 | 1.3098 | 0.6433 | 0.2439 | | 0.1351 | 16.0 | 11008 | 1.3253 | 0.6537 | 0.2447 | | 0.1266 | 17.0 | 11696 | 1.3394 | 0.6365 | 0.2418 | | 0.1289 | 18.0 | 12384 | 1.3718 | 0.6412 | 0.2443 | | 0.1204 | 19.0 | 13072 | 1.3708 | 0.6433 | 0.2433 | | 0.1189 | 20.0 | 13760 | 1.3718 | 0.6412 | 0.2428 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
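A minimal transcription sketch using the lower-level CTC interface, assuming the repository includes a `Wav2Vec2Processor` alongside the weights and that the audio is resampled to 16 kHz:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "GleamEyeBeast/ascend"  # assumes processor files are stored with the checkpoint
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16000)  # resample to the 16 kHz the model expects
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```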
tbosse/bert-base-german-cased-finetuned-subj_v1
tbosse
2022-03-29T15:59:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-29T14:22:30Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-german-cased-finetuned-subj_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-finetuned-subj_v1 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1594 - Precision: 0.1875 - Recall: 0.0077 - F1: 0.0147 - Accuracy: 0.9508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 136 | 0.1591 | 1.0 | 0.0051 | 0.0102 | 0.9523 | | No log | 2.0 | 272 | 0.1571 | 0.375 | 0.0077 | 0.015 | 0.9518 | | No log | 3.0 | 408 | 0.1594 | 0.1875 | 0.0077 | 0.0147 | 0.9508 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
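A minimal tagging sketch, assuming the fine-tuned subjectivity label set is stored in the model config (the meaning of the labels is not documented above, so the output is only illustrative):

```python
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="tbosse/bert-base-german-cased-finetuned-subj_v1",
                  aggregation_strategy="simple")
print(tagger("Der Film war meiner Meinung nach ein absolutes Meisterwerk."))
```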
sayef/fsner-bert-base-uncased
sayef
2022-03-29T14:20:35Z
9
6
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2008.10570", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# FSNER Implemented by [sayef](https://huggingface.co/sayef). # Overview The FSNER model was proposed in [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) by Morteza Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, Weizhu Chen. To identify entity spans in a new domain, it uses a train-free few-shot learning approach inspired by question-answering. ## Abstract > We present a novel approach to named entity recognition (NER) in the presence of scarce data that we call example-based NER. Our train-free few-shot learning approach takes inspiration from question-answering to identify entity spans in a new and unseen domain. In comparison with the current state-of-the-art, the proposed method performs significantly better, especially when using a low number of support examples. ## Model Training Details | identifier | epochs | datasets | | ---------- |:------:|:-----------------------------------------------------------------------------------------------:| | [sayef/fsner-bert-base-uncased](https://huggingface.co/sayef/fsner-bert-base-uncased) | 25 | ontonotes5, conll2003, wnut2017, mit_movie_trivia, mit_restaurant and fin (Alvarado et al.). | ## Installation and Example Usage You can use the FSNER model in 3 ways: 1. Install directly from PyPI: `pip install fsner` and import the model as shown in the code example below or 2. Install from source: `python install .` and import the model as shown in the code example below or 3. Clone [repo](https://github.com/sayef/fsner) and add absolute path of `fsner/src` directory to your PYTHONPATH and import the model as shown in the code example below ```python import json from fsner import FSNERModel, FSNERTokenizerUtils, pretty_embed query_texts = [ "Does Luke's serve lunch?", "Chang does not speak Taiwanese very well.", "I like Berlin." ] # Each list in supports are the examples of one entity type # Wrap entities around with [E] and [/E] in the examples. # Each sentence should have only one pair of [E] ... [/E] support_texts = { "Restaurant": [ "What time does [E] Subway [/E] open for breakfast?", "Is there a [E] China Garden [/E] restaurant in newark?", "Does [E] Le Cirque [/E] have valet parking?", "Is there a [E] McDonalds [/E] on main street?", "Does [E] Mike's Diner [/E] offer huge portions and outdoor dining?" ], "Language": [ "Although I understood no [E] French [/E] in those days , I was prepared to spend the whole day with Chien - chien .", "like what the hell 's that called in [E] English [/E] ? I have to register to be here like since I 'm a foreigner .", "So , I 'm also working on an [E] English [/E] degree because that 's my real interest .", "Al - Jazeera TV station , established in November 1996 in Qatar , is an [E] Arabic - language [/E] news TV station broadcasting global news and reports nonstop around the clock .", "They think it 's far better for their children to be here improving their [E] English [/E] than sitting at home in front of a TV . \"", "The only solution seemed to be to have her learn [E] French [/E] .", "I have to read sixty pages of [E] Russian [/E] today ." 
] } device = 'cpu' tokenizer = FSNERTokenizerUtils("sayef/fsner-bert-base-uncased") queries = tokenizer.tokenize(query_texts).to(device) supports = tokenizer.tokenize(list(support_texts.values())).to(device) model = FSNERModel("sayef/fsner-bert-base-uncased") model.to(device) p_starts, p_ends = model.predict(queries, supports) # One can prepare supports once and reuse multiple times with different queries # ------------------------------------------------------------------------------ # start_token_embeddings, end_token_embeddings = model.prepare_supports(supports) # p_starts, p_ends = model.predict(queries, start_token_embeddings=start_token_embeddings, # end_token_embeddings=end_token_embeddings) output = tokenizer.extract_entity_from_scores(query_texts, queries, p_starts, p_ends, entity_keys=list(support_texts.keys()), thresh=0.50) print(json.dumps(output, indent=2)) # install displacy for pretty embed pretty_embed(query_texts, output, list(support_texts.keys())) ``` <!DOCTYPE html> <html lang="en"> <head> <title>displaCy</title> </head> <body style="font-size: 16px; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol'; padding: 4rem 2rem; direction: ltr"> <figure style="margin-bottom: 6rem"> <div class="entities" style="line-height: 2.5; direction: ltr"> <div class="entities" style="line-height: 2.5; direction: ltr">Does <mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Luke's <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Restaurant</span> </mark> serve lunch?</div> <div class="entities" style="line-height: 2.5; direction: ltr">Chang does not speak <mark class="entity" style="background: #bfeeb7; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Taiwanese <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Language</span> </mark> very well.</div> <div class="entities" style="line-height: 2.5; direction: ltr">I like Berlin.</div> </div> </figure> </body> </html> ## Datasets preparation 1. We need to convert dataset into the following format. Let's say we have a dataset file train.json like following. 2. Each list in supports are the examples of one entity type 3. Wrap entities around with [E] and [/E] in the examples. 4. Each example should have only one pair of [E] ... [/E]. ```json { "CARDINAL_NUMBER": [ "Washington , cloudy , [E] 2 [/E] to 6 degrees .", "New Dehli , sunny , [E] 6 [/E] to 19 degrees .", "Well this is number [E] two [/E] .", "....." ], "LANGUAGE": [ "They do n't have the Quicken [E] Dutch [/E] version ?", "they learned a lot of [E] German [/E] .", "and then [E] Dutch [/E] it 's Mifrau", "...." ], "MONEY": [ "Per capita personal income ranged from $ [E] 11,116 [/E] in Mississippi to $ 23,059 in Connecticut ... .", "The trade surplus was [E] 582 million US dollars [/E] .", "It settled with a loss of 4.95 cents at $ [E] 1.3210 [/E] a pound .", "...." ] } ``` 2. Converted ontonotes5 dataset can be found here: 1. [train](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.train.json) 2. 
[dev](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.dev.json) 3. Then trainer script can be used to train/evaluate your fsner model. ```bash fsner trainer --pretrained-model bert-base-uncased --mode train --train-data train.json --val-data val.json \ --train-batch-size 6 --val-batch-size 6 --n-examples-per-entity 10 --neg-example-batch-ratio 1/3 --max-epochs 25 --device gpu \ --gpus -1 --strategy ddp ```
Rishav-hub/xlm-roberta-base-finetuned-panx-de
Rishav-hub
2022-03-29T11:05:37Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-29T10:26:12Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8591260810195721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.257 | 1.0 | 525 | 0.1512 | 0.8302 | | 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 | | 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
beston91/gpt2-xl_ft_logits_5k_experiment
beston91
2022-03-29T10:27:12Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-29T03:13:26Z
--- tags: - generated_from_trainer model-index: - name: gpt2-xl_ft_logits_5k_experiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-xl_ft_logits_5k_experiment This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.8601 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100.0 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.9 | 7 | 6.1556 | | No log | 1.9 | 14 | 6.3365 | | No log | 2.9 | 21 | 6.5909 | | No log | 3.9 | 28 | 6.8601 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6 ### Perplexity Score: 17.589759826660156
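Since the card reports a perplexity score, a minimal sketch of how such a number can be computed for a single sample (assuming the usual causal-LM cross-entropy loss; the evaluation text used above is not specified, so the sentence below is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beston91/gpt2-xl_ft_logits_5k_experiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "The quick brown fox jumps over the lazy dog."  # illustrative sample only
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean token-level cross-entropy
print(torch.exp(loss).item())  # perplexity of this sample
```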
KeithHorgan/TweetClimateAnalysis
KeithHorgan
2022-03-29T10:01:24Z
4
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain", "unk", "dataset:KeithHorgan98/autotrain-data-TweetClimateAnalysis", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T10:16:42Z
--- tags: autotrain language: unk widget: - text: "Climate Change is a hoax" - text: "It is freezing, where is global warming" datasets: - KeithHorgan98/autotrain-data-TweetClimateAnalysis co2_eq_emissions: 133.19491276284793 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 678720226 - CO2 Emissions (in grams): 133.19491276284793 ## Validation Metrics - Loss: 0.4864234924316406 - Accuracy: 0.865424430641822 - Macro F1: 0.7665472174344069 - Micro F1: 0.8654244306418221 - Weighted F1: 0.8586375445115083 - Macro Precision: 0.8281449061702826 - Micro Precision: 0.865424430641822 - Weighted Precision: 0.8619727477790186 - Macro Recall: 0.736576343905098 - Micro Recall: 0.865424430641822 - Weighted Recall: 0.865424430641822 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KeithHorgan98/autotrain-TweetClimateAnalysis-678720226 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
ai4bharat/MultiIndicWikiBioSS
ai4bharat
2022-03-29T09:22:47Z
4
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "wikibio", "multilingual", "nlp", "indicnlp", "as", "bn", "hi", "kn", "ml", "or", "pa", "ta", "te", "dataset:ai4bharat/IndicWikiBio", "arxiv:2203.05437", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-16T11:36:23Z
--- tags: - wikibio - multilingual - nlp - indicnlp datasets: - ai4bharat/IndicWikiBio language: - as - bn - hi - kn - ml - or - pa - ta - te licenses: - cc-by-nc-4.0 widget: - text: <TAG> name </TAG> राम नरेश पांडेय <TAG> office </TAG> विधायक - 205 - कुशीनगर विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1967 से 1968 <TAG> nationality </TAG> भारतीय </s> <2hi> --- # MultiIndicWikiBioSS MultiIndicWikiBioSS is a multilingual, sequence-to-sequence pre-trained model, a [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint fine-tuned on the 9 languages of [IndicWikiBio](https://huggingface.co/datasets/ai4bharat/IndicWikiBio) dataset. For fine-tuning details, see the [paper](https://arxiv.org/abs/2203.05437). You can use MultiIndicWikiBioSS to build biography generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of the MultiIndicWikiBioSS are: <ul> <li >Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li> <li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li> <li> Fine-tuned on an Indic language corpora (34,653 examples). </li> <li> Unlike ai4bharat/MultiIndicWikiBioUnified, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li> </ul> You can read more about MultiIndicWikiBioSS in this <a href="https://arxiv.org/abs/2203.05437">paper</a>. ## Using this model in `transformers` ``` from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM from transformers import AlbertTokenizer, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioSS", do_lower_case=False, use_fast=False, keep_accents=True) # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioSS", do_lower_case=False, use_fast=False, keep_accents=True) model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioSS") # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioSS") # Some initial mapping bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>") eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>") pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>") # To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>'] # First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>". inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:]) # For loss model_outputs.loss ## This is not label smoothed. # For logits model_outputs.logits # For generation. Pardon the messiness. Note the decoder_start_token_id. 
model.eval() # Set dropouts to zero model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>")) # Decode to get output strings decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # __भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। ``` ## Benchmarks Scores on the `IndicWikiBio` test sets are as follows: Language | RougeL ---------|---------------------------- as | 56.50 bn | 56.58 hi | 67.34 kn | 39.37 ml | 38.42 or | 70.71 pa | 52.78 ta | 51.11 te | 51.72 ## Citation If you use this model, please cite the following paper: ``` @inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" } ``` # License The model is available under the MIT License.
Davlan/m2m100_418M-eng-yor-mt
Davlan
2022-03-29T09:21:53Z
820
1
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- language: - yo - en datasets: - JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) --- # m2m100_418M-eng-yor-mt ## Model description **m2m100_418M-eng-yor-mt** is a **machine translation** model from English to Yorùbá based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá. Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). #### Limitations and bias This model is limited by its training dataset and may not generalize well to all use cases in different domains. ## Training data This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset. ## Training procedure This model was trained on an NVIDIA V100 GPU. ## Eval results on Test set (BLEU score) Fine-tuning m2m100_418M achieves **13.39 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82. ### BibTeX entry and citation info By David Adelani ``` ```
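A minimal translation sketch, assuming the fine-tuned checkpoint keeps the standard M2M100 tokenizer and its language codes (`en`, `yo`):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Davlan/m2m100_418M-eng-yor-mt"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"  # source language: English
inputs = tokenizer("Good morning, how are you today?", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("yo"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```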
Davlan/m2m100_418M-yor-eng-mt
Davlan
2022-03-29T09:21:03Z
5
0
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- language: - yo - en datasets: - JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) --- # m2m100_418M-yor-eng-mt ## Model description **m2m100_418M-yor-eng-mt** is a **machine translation** model from Yorùbá to English based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English. Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). #### Limitations and bias This model is limited by its training dataset and may not generalize well to all use cases in different domains. ## Training data This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset. ## Training procedure This model was trained on an NVIDIA V100 GPU. ## Eval results on Test set (BLEU score) Fine-tuning m2m100_418M achieves **16.76 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 15.57. ### BibTeX entry and citation info By David Adelani ``` ```
PereLluis13/wav2vec2-xls-r-1b-ca
PereLluis13
2022-03-29T08:44:49Z
17
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ca license: apache-2.0 tags: - automatic-speech-recognition - collectivat/tv3_parla - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - projecte-aina/parlament_parla - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - collectivat/tv3_parla - projecte-aina/parlament_parla model-index: - name: wav2vec2-xls-r-1b-ca results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_8_0 ca type: mozilla-foundation/common_voice_8_0 args: ca metrics: - name: Test WER type: wer value: 11.030639657300516 - name: Test CER type: cer value: 2.8405630530040634 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: projecte-aina/parlament_parla ca type: projecte-aina/parlament_parla args: clean metrics: - name: Test WER type: wer value: 6.483115660665961 - name: Test CER type: cer value: 2.0212863746191828 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: collectivat/tv3_parla ca type: collectivat/tv3_parla args: ca metrics: - name: Test WER type: wer value: 17.917773414943988 - name: Test CER type: cer value: 8.872589572206396 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Catalan Dev Data type: speech-recognition-community-v2/dev_data args: ca metrics: - name: Test WER type: wer value: 27.126683954209097 - name: Test CER type: cer value: 14.213308815078726 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ca metrics: - name: Test WER type: wer value: 18.7 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-ca This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets. ## Model description Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model. ## Intended uses & limitations As any model trained on crowdsourced data, this model can show the biases and particularities of the data and model used to train this model. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects for the catalan language. ## Training and evaluation data ## Training procedure The data is preprocessed to remove characters not on the catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found on the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py). ### Training results Check the Tensorboard tab to check the training profile and evaluation results along training. The model was evaluated on the test splits for each of the datasets used during training. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 # Thanks Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible.
johnowhitaker/sketchy_unet_rn34
johnowhitaker
2022-03-29T08:02:43Z
0
0
null
[ "license:cc-by-4.0", "region:us" ]
null
2022-03-29T07:57:40Z
--- license: cc-by-4.0 --- This is the exported model for a small project I'm working on, to test integration with Spaces. It is a fastai model and needs some custom code to work. For now, please ignore it :)
gayanin/t5-small-med-term-conditional-masking-0
gayanin
2022-03-29T03:19:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-28T22:04:47Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-med-term-conditional-masking-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-med-term-conditional-masking-0 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6688 - Rouge2 Precision: 0.694 - Rouge2 Recall: 0.4781 - Rouge2 Fmeasure: 0.5479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.9525 | 1.0 | 13915 | 0.8148 | 0.6657 | 0.4581 | 0.5252 | | 0.8541 | 2.0 | 27830 | 0.7562 | 0.6779 | 0.4694 | 0.5371 | | 0.8183 | 3.0 | 41745 | 0.7268 | 0.6827 | 0.4722 | 0.5405 | | 0.8033 | 4.0 | 55660 | 0.7074 | 0.6861 | 0.4729 | 0.5419 | | 0.7727 | 5.0 | 69575 | 0.6934 | 0.6872 | 0.4726 | 0.5419 | | 0.7704 | 6.0 | 83490 | 0.6832 | 0.6901 | 0.4742 | 0.544 | | 0.7485 | 7.0 | 97405 | 0.6771 | 0.6926 | 0.4772 | 0.5469 | | 0.7528 | 8.0 | 111320 | 0.6722 | 0.6934 | 0.4782 | 0.5478 | | 0.7535 | 9.0 | 125235 | 0.6696 | 0.6944 | 0.4782 | 0.5481 | | 0.7444 | 10.0 | 139150 | 0.6688 | 0.694 | 0.4781 | 0.5479 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9
DrishtiSharma
2022-03-29T00:52:52Z
5
2
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-29T00:13:34Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-sentiment-mesd-v9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-sentiment-mesd-v9 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3500 - Accuracy: 0.9154 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 40 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.86 | 3 | 1.7825 | 0.1846 | | 1.9553 | 1.86 | 6 | 1.7212 | 0.4308 | | 1.9553 | 2.86 | 9 | 1.6164 | 0.3769 | | 2.002 | 3.86 | 12 | 1.4904 | 0.3769 | | 1.6191 | 4.86 | 15 | 1.4426 | 0.4385 | | 1.6191 | 5.86 | 18 | 1.3516 | 0.5231 | | 1.6209 | 6.86 | 21 | 1.2176 | 0.5538 | | 1.6209 | 7.86 | 24 | 1.1683 | 0.5692 | | 1.371 | 8.86 | 27 | 1.0885 | 0.5923 | | 1.1568 | 9.86 | 30 | 1.0152 | 0.6385 | | 1.1568 | 10.86 | 33 | 0.9289 | 0.6385 | | 1.1023 | 11.86 | 36 | 0.9141 | 0.6308 | | 1.1023 | 12.86 | 39 | 0.8526 | 0.6462 | | 0.9448 | 13.86 | 42 | 0.8420 | 0.6769 | | 0.7972 | 14.86 | 45 | 0.7976 | 0.6692 | | 0.7972 | 15.86 | 48 | 0.8192 | 0.7308 | | 0.7793 | 16.86 | 51 | 0.7108 | 0.7615 | | 0.7793 | 17.86 | 54 | 0.6712 | 0.7769 | | 0.6468 | 18.86 | 57 | 0.6684 | 0.7923 | | 0.5083 | 19.86 | 60 | 0.6922 | 0.7385 | | 0.5083 | 20.86 | 63 | 0.6148 | 0.7923 | | 0.4988 | 21.86 | 66 | 0.5846 | 0.7923 | | 0.4988 | 22.86 | 69 | 0.6050 | 0.8154 | | 0.4123 | 23.86 | 72 | 0.5506 | 0.7846 | | 0.3511 | 24.86 | 75 | 0.6095 | 0.7846 | | 0.3511 | 25.86 | 78 | 0.5916 | 0.8154 | | 0.3268 | 26.86 | 81 | 0.5912 | 0.8077 | | 0.3268 | 27.86 | 84 | 0.5142 | 0.8538 | | 0.3036 | 28.86 | 87 | 0.5492 | 0.8077 | | 0.3066 | 29.86 | 90 | 0.6007 | 0.8231 | | 0.3066 | 30.86 | 93 | 0.5748 | 0.8231 | | 0.2538 | 31.86 | 96 | 0.6027 | 0.7692 | | 0.2538 | 32.86 | 99 | 0.6979 | 0.7462 | | 0.2281 | 33.86 | 102 | 0.7002 | 0.7615 | | 0.2183 | 34.86 | 105 | 0.6650 | 0.7769 | | 0.2183 | 35.86 | 108 | 0.5192 | 0.8462 | | 0.2202 | 36.86 | 111 | 0.5389 | 0.8308 | | 0.2202 | 37.86 | 114 | 0.5050 | 0.8385 | | 0.1906 | 38.86 | 117 | 0.5722 | 0.7769 | | 0.154 | 39.86 | 120 | 0.5239 | 0.8308 | | 0.154 | 40.86 | 123 | 0.4448 | 0.8615 | | 0.1474 | 41.86 | 126 | 0.4623 | 0.8615 | | 0.1474 | 42.86 | 129 | 0.4282 | 0.8615 | | 0.1345 | 43.86 | 132 | 0.5087 | 0.8615 | | 0.1567 | 44.86 | 135 | 0.4859 | 0.8385 | | 0.1567 | 45.86 | 138 | 0.6603 | 0.8077 | | 0.1731 | 46.86 | 141 | 0.5379 | 0.8385 | | 0.1731 | 47.86 | 144 | 0.8666 | 0.7538 | | 0.1606 | 48.86 | 147 | 0.7518 | 0.8 | | 0.1484 | 49.86 | 150 | 0.5986 | 0.8385 | | 0.1484 | 50.86 | 153 | 0.6368 | 0.8231 | | 0.2256 | 51.86 | 156 | 0.4639 | 0.8692 | | 0.2256 | 52.86 | 159 | 0.5533 | 0.8462 | | 0.1178 | 53.86 | 162 | 
0.5038 | 0.8615 | | 0.0815 | 54.86 | 165 | 0.5052 | 0.8692 | | 0.0815 | 55.86 | 168 | 0.4337 | 0.8846 | | 0.0998 | 56.86 | 171 | 0.4422 | 0.8769 | | 0.0998 | 57.86 | 174 | 0.4317 | 0.8692 | | 0.0855 | 58.86 | 177 | 0.4025 | 0.8923 | | 0.0962 | 59.86 | 180 | 0.4605 | 0.8769 | | 0.0962 | 60.86 | 183 | 0.4356 | 0.8769 | | 0.0763 | 61.86 | 186 | 0.4614 | 0.8769 | | 0.0763 | 62.86 | 189 | 0.4382 | 0.8846 | | 0.0902 | 63.86 | 192 | 0.4701 | 0.8692 | | 0.0654 | 64.86 | 195 | 0.4922 | 0.8692 | | 0.0654 | 65.86 | 198 | 0.5413 | 0.8538 | | 0.0651 | 66.86 | 201 | 0.5759 | 0.8615 | | 0.0651 | 67.86 | 204 | 0.4238 | 0.9 | | 0.0822 | 68.86 | 207 | 0.3500 | 0.9154 | | 0.0625 | 69.86 | 210 | 0.3878 | 0.8923 | | 0.0625 | 70.86 | 213 | 0.4952 | 0.8615 | | 0.0548 | 71.86 | 216 | 0.4544 | 0.8615 | | 0.0548 | 72.86 | 219 | 0.5497 | 0.8769 | | 0.054 | 73.86 | 222 | 0.4434 | 0.8846 | | 0.0543 | 74.86 | 225 | 0.4732 | 0.8769 | | 0.0543 | 75.86 | 228 | 0.4425 | 0.8923 | | 0.0881 | 76.86 | 231 | 0.4788 | 0.8769 | | 0.0881 | 77.86 | 234 | 0.5448 | 0.8769 | | 0.061 | 78.86 | 237 | 0.4221 | 0.9077 | | 0.0567 | 79.86 | 240 | 0.4404 | 0.8769 | | 0.0567 | 80.86 | 243 | 0.4099 | 0.9 | | 0.052 | 81.86 | 246 | 0.5259 | 0.8769 | | 0.052 | 82.86 | 249 | 0.5874 | 0.8692 | | 0.0444 | 83.86 | 252 | 0.5555 | 0.8846 | | 0.0332 | 84.86 | 255 | 0.5156 | 0.8615 | | 0.0332 | 85.86 | 258 | 0.4564 | 0.8615 | | 0.0449 | 86.86 | 261 | 0.4826 | 0.8692 | | 0.0449 | 87.86 | 264 | 0.4726 | 0.8615 | | 0.0385 | 88.86 | 267 | 0.4206 | 0.8846 | | 0.0356 | 89.86 | 270 | 0.4050 | 0.8769 | | 0.0356 | 90.86 | 273 | 0.4161 | 0.8923 | | 0.0391 | 91.86 | 276 | 0.4100 | 0.9077 | | 0.0391 | 92.86 | 279 | 0.4047 | 0.9 | | 0.0249 | 93.86 | 282 | 0.4044 | 0.9 | | 0.0399 | 94.86 | 285 | 0.3968 | 0.8846 | | 0.0399 | 95.86 | 288 | 0.3802 | 0.9 | | 0.031 | 96.86 | 291 | 0.3689 | 0.9 | | 0.031 | 97.86 | 294 | 0.3616 | 0.9077 | | 0.036 | 98.86 | 297 | 0.3584 | 0.9077 | | 0.0386 | 99.86 | 300 | 0.3574 | 0.9077 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
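A minimal speech-sentiment inference sketch, assuming the repository stores the feature extractor and label mapping saved at training time and that ffmpeg is available to decode the audio file:

```python
from transformers import pipeline

clf = pipeline("audio-classification",
               model="DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9")
print(clf("speech_sample.wav", top_k=3))  # top-3 predicted sentiment labels with scores
```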
i-was-neo-first/hubert-large-ami-shard-experiment-colab
i-was-neo-first
2022-03-29T00:39:37Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-20T02:10:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: hubert-large-ami-shard-experiment-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hubert-large-ami-shard-experiment-colab This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: nan - eval_wer: 1.0 - eval_runtime: 6.0682 - eval_samples_per_second: 16.479 - eval_steps_per_second: 2.142 - epoch: 1.02 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
sanchit-gandhi/wav2vec2-2-bart-large-cnn
sanchit-gandhi
2022-03-29T00:24:41Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-22T16:26:40Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.3524 - Wer: 0.1042 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.7605 | 4.5 | 500 | 2.6299 | 1.4451 | | 0.1177 | 9.01 | 1000 | 0.3524 | 0.1042 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
frtna/ted_mt-Spanish-to-Italian
frtna
2022-03-28T22:04:21Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:new_dataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - new_dataset model-index: - name: ted_mt-Spanish-to-Italian results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ted_mt-Spanish-to-Italian This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-it](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) on the new_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | No log | 1.0 | 46 | 1.4873 | 29.6133 | 26.9081 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.6
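A minimal usage sketch for this Spanish-to-Italian Marian fine-tune, assuming the tokenizer is bundled with the checkpoint:

```python
from transformers import pipeline

translator = pipeline("translation", model="frtna/ted_mt-Spanish-to-Italian")
print(translator("El clima en Roma es muy agradable en primavera."))
```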
jorge-henao/spanish-t5-small-disco-poetry
jorge-henao
2022-03-28T21:26:45Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-28T18:15:25Z
--- license: mit tags: - generated_from_trainer model-index: - name: spanish-t5-small-disco-poetry results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-t5-small-disco-poetry This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1417 | 1.0 | 1284 | 0.0577 | | 0.0902 | 2.0 | 2568 | 0.0516 | | 0.0803 | 3.0 | 3852 | 0.0494 | | 0.0733 | 4.0 | 5136 | 0.0488 | | 0.0683 | 5.0 | 6420 | 0.0480 | | 0.067 | 6.0 | 7704 | 0.0477 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
hf-test/xls-r-300m-sv
hf-test
2022-03-28T20:07:57Z
28
3
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hello", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "sv", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hello
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- sv
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Swedish
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: sv-SE
    metrics:
    - name: Test WER
      type: wer
      value: 16.98
    - name: Test CER
      type: cer
      value: 5.66
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: sv
    metrics:
    - name: Test WER
      type: wer
      value: 27.01
    - name: Test CER
      type: cer
      value: 13.14
---

<!-- This model card has been generated automatically according to the information the Trainer
had access to. You should probably proofread and complete it, then remove this comment. -->

# XLS-R-300m-SV

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3171
- Wer: 0.2468

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 3.3349 | 1.45 | 500 | 3.2858 | 1.0 |
| 2.9298 | 2.91 | 1000 | 2.9225 | 1.0000 |
| 2.0839 | 4.36 | 1500 | 1.1546 | 0.8295 |
| 1.7093 | 5.81 | 2000 | 0.6827 | 0.5701 |
| 1.5855 | 7.27 | 2500 | 0.5597 | 0.4947 |
| 1.4831 | 8.72 | 3000 | 0.4923 | 0.4527 |
| 1.4416 | 10.17 | 3500 | 0.4670 | 0.4270 |
| 1.3848 | 11.63 | 4000 | 0.4341 | 0.3980 |
| 1.3749 | 13.08 | 4500 | 0.4203 | 0.4011 |
| 1.3311 | 14.53 | 5000 | 0.4310 | 0.3961 |
| 1.317 | 15.99 | 5500 | 0.3898 | 0.4322 |
| 1.2799 | 17.44 | 6000 | 0.3806 | 0.3572 |
| 1.2771 | 18.89 | 6500 | 0.3828 | 0.3427 |
| 1.2451 | 20.35 | 7000 | 0.3702 | 0.3359 |
| 1.2182 | 21.8 | 7500 | 0.3685 | 0.3270 |
| 1.2152 | 23.26 | 8000 | 0.3650 | 0.3308 |
| 1.1837 | 24.71 | 8500 | 0.3568 | 0.3187 |
| 1.1721 | 26.16 | 9000 | 0.3659 | 0.3249 |
| 1.1764 | 27.61 | 9500 | 0.3547 | 0.3145 |
| 1.1606 | 29.07 | 10000 | 0.3514 | 0.3104 |
| 1.1431 | 30.52 | 10500 | 0.3469 | 0.3062 |
| 1.1047 | 31.97 | 11000 | 0.3313 | 0.2979 |
| 1.1315 | 33.43 | 11500 | 0.3298 | 0.2992 |
| 1.1022 | 34.88 | 12000 | 0.3296 | 0.2973 |
| 1.0935 | 36.34 | 12500 | 0.3278 | 0.2926 |
| 1.0676 | 37.79 | 13000 | 0.3208 | 0.2868 |
| 1.0571 | 39.24 | 13500 | 0.3322 | 0.2885 |
| 1.0536 | 40.7 | 14000 | 0.3245 | 0.2831 |
| 1.0525 | 42.15 | 14500 | 0.3285 | 0.2826 |
| 1.0464 | 43.6 | 15000 | 0.3223 | 0.2796 |
| 1.0415 | 45.06 | 15500 | 0.3166 | 0.2774 |
| 1.0356 | 46.51 | 16000 | 0.3177 | 0.2746 |
| 1.04 | 47.96 | 16500 | 0.3150 | 0.2735 |
| 1.0209 | 49.42 | 17000 | 0.3175 | 0.2731 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3

#### Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`

```bash
python eval.py --model_id hf-test/xls-r-300m-sv --dataset mozilla-foundation/common_voice_7_0 --config sv-SE --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id hf-test/xls-r-300m-sv --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```

### Inference With LM

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "hf-test/xls-r-300m-sv"

sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "sv-SE", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

transcription = processor.batch_decode(logits.numpy()).text
# => "jag lämnade grovjobbet åt honom"
```

### Eval results on Common Voice 7 "test" (WER):

| Without LM | With LM (run `./eval.py`) |
|---|---|
| 24.68 | 16.98 |
Symbermine/rare-puppers
Symbermine
2022-03-28T19:38:23Z
57
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-28T19:38:13Z
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
  results:
  - task:
      name: Image Classification
      type: image-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9285714030265808
---

# rare-puppers

Autogenerated by HuggingPics🤗🖼️

Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).

Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).

## Example Images

#### Husky siberiano

![Husky siberiano](images/Husky_siberiano.jpg)

#### cocker spaniel

![cocker spaniel](images/cocker_spaniel.jpg)

#### galgo

![galgo](images/galgo.jpg)

#### labrador

![labrador](images/labrador.jpg)

#### pastor aleman

![pastor aleman](images/pastor_aleman.jpg)
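A minimal usage sketch (not part of the autogenerated card), assuming a local image file as input; the path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned ViT classifier
classifier = pipeline("image-classification", model="Symbermine/rare-puppers")

# "dog.jpg" stands in for a local photo of one of the five breeds above
for prediction in classifier("dog.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```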
joniponi/distilbert-base-uncased-finetuned-emotion
joniponi
2022-03-28T19:06:11Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-28T15:57:55Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer
had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8357
- Accuracy: 0.6309
- F1: 0.6469

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.9559 | 1.0 | 78 | 0.8585 | 0.6223 | 0.6363 |
| 0.7998 | 2.0 | 156 | 0.8472 | 0.6202 | 0.6354 |
| 0.7207 | 3.0 | 234 | 0.8357 | 0.6309 | 0.6469 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
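A minimal usage sketch (not part of the auto-generated card); note that the label names depend on the undocumented training dataset, so treat them as placeholders until verified:

```python
from transformers import pipeline

# Emotion classification with the fine-tuned DistilBERT checkpoint
classifier = pipeline("text-classification", model="joniponi/distilbert-base-uncased-finetuned-emotion")

print(classifier("I am so happy the appointment went well!"))
```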
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v2
DrishtiSharma
2022-03-28T19:04:20Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-28T17:20:20Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-sentiment-mesd-v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer
had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-finetuned-sentiment-mesd-v2

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7213
- Accuracy: 0.3923

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 0.86 | 3 | 1.7961 | 0.1462 |
| 1.9685 | 1.86 | 6 | 1.7932 | 0.1692 |
| 1.9685 | 2.86 | 9 | 1.7891 | 0.2 |
| 2.1386 | 3.86 | 12 | 1.7820 | 0.2923 |
| 1.9492 | 4.86 | 15 | 1.7750 | 0.2923 |
| 1.9492 | 5.86 | 18 | 1.7684 | 0.2846 |
| 2.1143 | 6.86 | 21 | 1.7624 | 0.3231 |
| 2.1143 | 7.86 | 24 | 1.7561 | 0.3308 |
| 2.0945 | 8.86 | 27 | 1.7500 | 0.3462 |
| 1.9121 | 9.86 | 30 | 1.7443 | 0.3385 |
| 1.9121 | 10.86 | 33 | 1.7386 | 0.3231 |
| 2.0682 | 11.86 | 36 | 1.7328 | 0.3231 |
| 2.0682 | 12.86 | 39 | 1.7272 | 0.3769 |
| 2.0527 | 13.86 | 42 | 1.7213 | 0.3923 |
| 1.8705 | 14.86 | 45 | 1.7154 | 0.3846 |
| 1.8705 | 15.86 | 48 | 1.7112 | 0.3846 |
| 2.0263 | 16.86 | 51 | 1.7082 | 0.3769 |
| 2.0263 | 17.86 | 54 | 1.7044 | 0.3846 |
| 2.0136 | 18.86 | 57 | 1.7021 | 0.3846 |
| 1.8429 | 19.86 | 60 | 1.7013 | 0.3846 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
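A minimal usage sketch (not part of the auto-generated card), assuming a local speech recording; the file name is a placeholder:

```python
from transformers import pipeline

# Speech emotion classification with the fine-tuned wav2vec2 checkpoint
classifier = pipeline("audio-classification", model="DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v2")

# "clip.wav" stands in for a short mono recording; the pipeline decodes and resamples it to 16 kHz
print(classifier("clip.wav"))
```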
DrishtiSharma/xls-r-es-test-lm-finetuned-sentiment-mesd
DrishtiSharma
2022-03-28T19:03:37Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-28T14:54:48Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xls-r-es-test-lm-finetuned-sentiment-mesd
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer
had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-es-test-lm-finetuned-sentiment-mesd

This model is a fine-tuned version of [glob-asr/xls-r-es-test-lm](https://huggingface.co/glob-asr/xls-r-es-test-lm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7851
- Accuracy: 0.2385

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 0.86 | 3 | 1.7876 | 0.1923 |
| 1.9709 | 1.86 | 6 | 1.7869 | 0.2 |
| 1.9709 | 2.86 | 9 | 1.7859 | 0.2308 |
| 2.146 | 3.86 | 12 | 1.7851 | 0.2385 |
| 1.9622 | 4.86 | 15 | 1.7842 | 0.1923 |
| 1.9622 | 5.86 | 18 | 1.7834 | 0.1769 |
| 2.137 | 6.86 | 21 | 1.7823 | 0.1923 |
| 2.137 | 7.86 | 24 | 1.7812 | 0.1923 |
| 2.1297 | 8.86 | 27 | 1.7800 | 0.1846 |
| 1.9502 | 9.86 | 30 | 1.7787 | 0.1846 |
| 1.9502 | 10.86 | 33 | 1.7772 | 0.1846 |
| 2.1234 | 11.86 | 36 | 1.7760 | 0.1846 |
| 2.1234 | 12.86 | 39 | 1.7748 | 0.1846 |
| 2.1186 | 13.86 | 42 | 1.7736 | 0.1846 |
| 1.9401 | 14.86 | 45 | 1.7725 | 0.1846 |
| 1.9401 | 15.86 | 48 | 1.7715 | 0.1923 |
| 2.112 | 16.86 | 51 | 1.7706 | 0.1923 |
| 2.112 | 17.86 | 54 | 1.7701 | 0.1923 |
| 2.1094 | 18.86 | 57 | 1.7697 | 0.2 |
| 1.934 | 19.86 | 60 | 1.7696 | 0.2 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
scasutt/wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
scasutt
2022-03-28T18:53:54Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-28T12:30:15Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer
had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xlsr-53_toy_train_data_fast_10pct

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6983
- Wer: 0.5026

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 3.3619 | 1.05 | 250 | 3.4334 | 1.0 |
| 3.0818 | 2.1 | 500 | 3.4914 | 1.0 |
| 2.3245 | 3.15 | 750 | 1.6483 | 0.9486 |
| 1.0233 | 4.2 | 1000 | 0.8817 | 0.7400 |
| 0.7522 | 5.25 | 1250 | 0.7374 | 0.6529 |
| 0.5343 | 6.3 | 1500 | 0.6972 | 0.6068 |
| 0.4452 | 7.35 | 1750 | 0.6757 | 0.5740 |
| 0.4275 | 8.4 | 2000 | 0.6789 | 0.5551 |
| 0.3688 | 9.45 | 2250 | 0.6468 | 0.5394 |
| 0.3363 | 10.5 | 2500 | 0.6798 | 0.5358 |
| 0.3036 | 11.55 | 2750 | 0.6439 | 0.5265 |
| 0.3173 | 12.6 | 3000 | 0.6898 | 0.5196 |
| 0.2985 | 13.65 | 3250 | 0.6791 | 0.5169 |
| 0.288 | 14.7 | 3500 | 0.6442 | 0.5090 |
| 0.2673 | 15.75 | 3750 | 0.6984 | 0.5119 |
| 0.2575 | 16.81 | 4000 | 0.7146 | 0.5084 |
| 0.239 | 17.86 | 4250 | 0.6847 | 0.5040 |
| 0.2266 | 18.91 | 4500 | 0.6900 | 0.5028 |
| 0.22 | 19.96 | 4750 | 0.6983 | 0.5026 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
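A minimal usage sketch (not part of the auto-generated card), assuming a local audio file; the file name is a placeholder:

```python
from transformers import pipeline

# Speech-to-text with the fine-tuned XLSR-53 checkpoint
asr = pipeline("automatic-speech-recognition", model="scasutt/wav2vec2-large-xlsr-53_toy_train_data_fast_10pct")

# "audio.wav" stands in for a short mono recording; the pipeline decodes and resamples it to 16 kHz
print(asr("audio.wav")["text"])
```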
aapot/wav2vec2-large-xlsr-53-finnish
aapot
2022-03-28T17:56:36Z
9
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "fi", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: fi
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Aapo Tanskanen
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice fi
      type: common_voice
      args: fi
    metrics:
    - name: Test WER
      type: wer
      value: 32.378771
---

# NOTE: this is an old model and should not be used anymore!!

There are much better, newer models available at our organization hub: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) and [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)

# Wav2Vec2-Large-XLSR-53-Finnish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10 Finnish](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")

resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Finnish test data of Common Voice.

```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\...\…\–\é]'
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 32.378771 %

## Training

The Common Voice `train`, `validation` and `other` datasets were used for training, as well as the `CSS10 Finnish` and `Finnish parliament session 2` datasets.

The script used for training can be found on [Google Colab](https://colab.research.google.com/drive/1vnEGC9BnNRmVyIHj-0UsVulh_cUYSGWA?usp=sharing)
aapot/wav2vec2-xlsr-300m-finnish-lm
aapot
2022-03-28T17:22:08Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "arxiv:2111.09296", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-300m-finnish-lm
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: fi
    metrics:
    - name: Test WER
      type: wer
      value: 8.16
    - name: Test CER
      type: cer
      value: 1.97
---

# Wav2Vec2 XLS-R for Finnish ASR

This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in [this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).

This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.

**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm) model; it has just been copied/moved to the `Finnish-NLP` Hugging Face organization.

## Model description

Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.

You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).

This model is a fine-tuned version of the pretrained model (the 300 million parameter variant) for Finnish ASR.

## Intended uses & limitations

You can use this model for the Finnish ASR (speech-to-text) task.

### How to use

Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.

### Limitations and bias

This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio of similar length. However, you can try it on much longer audio too and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).

The vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in these datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.

The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language, for example everyday spoken language with dialects (especially because the Wikipedia data contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain language and use that in the decoding.

## Training data

This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:

| Dataset | Hours | % of total hours |
|:--------|:-----:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |

Datasets were filtered to include only audio samples with a maximum length of 20 seconds.

## Training procedure

This model was trained during the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.

The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.

For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of the cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.973 | 0.17 | 500 | 0.5750 | 0.6844 |
| 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 |
| 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 |
| 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 |
| 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 |
| 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 |
| 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 |
| 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 |
| 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 |
| 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 |
| 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 |
| 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 |
| 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 |
| 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 |
| 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 |
| 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 |
| 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 |
| 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 |
| 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 |
| 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 |
| 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 |
| 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 |
| 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 |
| 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 |
| 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 |
| 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 |
| 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 |
| 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 |
| 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 |
| 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 |
| 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 |
| 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 |
| 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 |
| 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 |
| 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 |
| 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 |
| 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 |
| 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 |
| 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 |
| 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 |
| 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 |
| 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 |
| 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 |
| 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 |
| 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 |
| 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 |
| 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 |
| 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 |
| 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 |
| 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 |
| 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 |
| 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 |
| 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 |
| 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 |
| 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 |
| 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 |
| 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 |
| 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 |
| 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0

## Evaluation results

Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

To evaluate this model, run the `eval.py` script in this repository:

```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```

This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:

| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|:----------------------------------------|:-------------:|:----------------:|:-------------:|:----------------:|
| aapot/wav2vec2-xlsr-1b-finnish-lm-v2 | **4.09** | **9.73** | **0.88** | **1.65** |
| aapot/wav2vec2-xlsr-1b-finnish-lm | 5.65 | 13.11 | 1.20 | 2.23 |
| aapot/wav2vec2-xlsr-300m-finnish-lm | 8.16 | 17.92 | 1.97 | 3.36 |

## Team Members

- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)

Feel free to contact us for more details 🤗