modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
toasthans/Facebook_and_Twitter_Ohne_HPS | toasthans | 2021-12-23T14:55:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Facebook_and_Twitter_Ohne_HPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Facebook_and_Twitter_Ohne_HPS
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9218
- Accuracy: 0.8512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough code equivalent is sketched after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
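For readers who want to reproduce this configuration, the list above maps roughly onto the 🤗 `TrainingArguments` below. This is a hedged sketch, not the author's actual training script; the output directory name is a placeholder, and the Adam betas/epsilon listed above are the Trainer defaults.
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="facebook_and_twitter_ohne_hps",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```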
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4364 | 1.0 | 713 | 0.4107 | 0.8302 |
| 0.2843 | 2.0 | 1426 | 0.4316 | 0.8495 |
| 0.0869 | 3.0 | 2139 | 0.7700 | 0.8558 |
| 0.0443 | 4.0 | 2852 | 0.9218 | 0.8512 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Monsia/test-model-lg-data | Monsia | 2021-12-23T14:03:38Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: test-model-lg-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-model-lg-data
This model is a fine-tuned version of [Monsia/test-model-lg-data](https://huggingface.co/Monsia/test-model-lg-data) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3354
- Wer: 0.4150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0236 | 0.67 | 100 | 0.4048 | 0.4222 |
| 0.0304 | 1.35 | 200 | 0.4266 | 0.4809 |
| 0.0545 | 2.03 | 300 | 0.4309 | 0.4735 |
| 0.0415 | 2.7 | 400 | 0.4269 | 0.4595 |
| 0.033 | 3.38 | 500 | 0.4085 | 0.4537 |
| 0.0328 | 4.05 | 600 | 0.3642 | 0.4224 |
| 0.0414 | 4.73 | 700 | 0.3354 | 0.4150 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
redbloodyknife/DialoGPT-medium-shayo | redbloodyknife | 2021-12-23T12:17:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
# Shayo Bot by Shogun
# AI Chatbot Testing based on GPT-2 and DialoGPT-medium by Microsoft
# shoguπ#9999 |
toasthans/Facebook_Mit_HPS_5_Epoch | toasthans | 2021-12-23T08:27:55Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Facebook_Mit_HPS_5_Epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Facebook_Mit_HPS_5_Epoch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4774
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.546392051994155e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 292 | 0.2181 | 0.9264 |
| 0.2411 | 2.0 | 584 | 0.2571 | 0.9289 |
| 0.2411 | 3.0 | 876 | 0.5712 | 0.8947 |
| 0.0558 | 4.0 | 1168 | 0.4675 | 0.9332 |
| 0.0558 | 5.0 | 1460 | 0.4774 | 0.9315 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
YYJ/KunquChat | YYJ | 2021-12-23T07:21:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # Classical Kunqu Opera Appreciation: Final Assignment
## KunquChat
Author: 1900012921 俞跃江
|
BigSalmon/InformalToFormalLincolnDistilledGPT2 | BigSalmon | 2021-12-23T03:39:15Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincolnDistilledGPT2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincolnDistilledGPT2")
```
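A hedged generation sketch (not part of the original card), using the prompt format shown at the bottom of this card:
```python
# Hypothetical usage: generate a formal rewrite with the prompt format below.
prompt = ("informal english: space is huge and needs to be explored.\n"
          "Translated into the Style of Abraham Lincoln:")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```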
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
``` |
Ayham/albert_gpt2_summarization_cnndm | Ayham | 2021-12-23T01:36:49Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: albert_large_gpt2_summarization_cnndm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_large_gpt2_summarization_cnndm
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
alecmullen/autonlp-group-classification-441411446 | alecmullen | 2021-12-22T23:03:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:alecmullen/autonlp-data-group-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alecmullen/autonlp-data-group-classification
co2_eq_emissions: 0.4362732160754736
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 441411446
- CO2 Emissions (in grams): 0.4362732160754736
## Validation Metrics
- Loss: 0.7598486542701721
- Accuracy: 0.8222222222222222
- Macro F1: 0.2912091747693842
- Micro F1: 0.8222222222222222
- Weighted F1: 0.7707160863181806
- Macro Precision: 0.29631463146314635
- Micro Precision: 0.8222222222222222
- Weighted Precision: 0.7341339689524508
- Macro Recall: 0.30174603174603176
- Micro Recall: 0.8222222222222222
- Weighted Recall: 0.8222222222222222
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alecmullen/autonlp-group-classification-441411446
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
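# Hedged extension (not in the original card): turn the logits into a label.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))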
``` |
BigSalmon/InformalToFormalLincoln14 | BigSalmon | 2021-12-22T22:40:51Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln14")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln14")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
``` |
s3h/opus-mt-ar-en-finetuned-src-to-trg-testing | s3h | 2021-12-22T20:20:22Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-src-to-trg-testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-src-to-trg-testing
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3973
- Bleu: 0.1939
- Gen Len: 37.6364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 5 | 3.4353 | 0.1994 | 36.6364 |
| No log | 2.0 | 10 | 3.4015 | 0.1994 | 36.0909 |
| No log | 3.0 | 15 | 3.3973 | 0.1939 | 37.6364 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.5.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
SajjadAyoubi/clip-fa-text | SajjadAyoubi | 2021-12-22T19:02:56Z | 1,578 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2103.00020",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | # CLIPfa: Connecting Farsi Text and Images
OpenAI released the paper [`Learning Transferable Visual Models From Natural Language Supervision`](https://arxiv.org/abs/2103.00020), in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder, which were trained on 400 million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used [`Farahani's RoBERTa-fa`](https://huggingface.co/m3hrdadfi/roberta-zwnj-wnli-mean-tokens) as the text encoder and the [`ViT`](https://huggingface.co/openai/clip-vit-base-patch32) from the original CLIP as the vision encoder, and fine-tuned both.
- It should be noted that only 400K pairs were used for this training, whereas the original CLIP used 400 million pairs, and its training took 30 days across 592 V100 GPUs.
## How to use?
Both models generate vectors with 768 dimensions.
```python
import PIL.Image
from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor
# download pre-trained models
vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision')
preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision')
text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text')
tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text')
# define input image and input text
text = 'something'
image = PIL.Image.open('my_favorite_image.jpg')
# compute embeddings
text_embedding = text_encoder(**tokenizer(text,
return_tensors='pt')).pooler_output
image_embedding = vision_encoder(**preprocessor(image,
return_tensors='pt')).pooler_output
text_embedding.shape == image_embedding.shape
```
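Since both encoders produce 768-dimensional vectors, matching a text to an image reduces to comparing the two embeddings. A minimal sketch (not from the original card), reusing the variables computed above:
```python
import torch

# Cosine similarity between the text and image embeddings computed above;
# higher values indicate a better text-image match.
similarity = torch.nn.functional.cosine_similarity(text_embedding, image_embedding)
print(similarity.item())
```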
## Demo:
The followings are just some use cases of CLIPfa on 25K [`Unsplash images`](https://github.com/unsplash/datasets)
- use `pip install -q git+https://github.com/sajjjadayobi/clipfa.git`
```python
from clipfa import CLIPDemo
demo = CLIPDemo(vision_encoder, text_encoder, tokenizer)
demo.compute_text_embeddings(['گاو' ,'اسب' ,'ماهی'])
demo.compute_image_embeddings(test_df.image_path.to_list())
```
## Online Demo: [CLIPfa at Huggingface🤗 spaces](https://huggingface.co/spaces/SajjadAyoubi/CLIPfa-Demo)
We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database.
> Made with ❤️ in my basement🤫
|
gngpostalsrvc/BERiTmodel2 | gngpostalsrvc | 2021-12-22T17:25:25Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiTmodel2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiTmodel2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 280
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.1924 | 1.0 | 2854 | 3.4329 |
| 3.0936 | 2.0 | 5708 | 3.5036 |
| 2.9998 | 3.0 | 8562 | 3.1906 |
| 2.9064 | 4.0 | 11416 | 3.4867 |
| 2.8493 | 5.0 | 14270 | 3.2027 |
| 2.7538 | 6.0 | 17124 | 2.9772 |
| 2.7273 | 7.0 | 19978 | 2.9950 |
| 2.7399 | 8.0 | 22832 | 2.9690 |
| 2.67 | 9.0 | 25686 | 3.0311 |
| 2.6388 | 10.0 | 28540 | 3.1508 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
deepparag/DumBot-Beta | deepparag | 2021-12-22T16:32:40Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
thumbnail: https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png
tags:
- conversational
license: mit
---
A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
Trained on:
https://www.kaggle.com/Cornell-University/movie-dialog-corpus
https://www.kaggle.com/jef1056/discord-data
Important:
The AI can be a bit weird at times, as it is still undergoing training!
At times it sends text like :<random_weird_words>:, since these are Discord emotes.
It also sends random @RandomName mentions, as it is trying to ping people.
This works well on Discord, but less so on the web; such artifacts are easy enough to remove with [re.sub](https://docs.python.org/3/library/re.html#re.sub), as sketched below.
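A minimal cleanup sketch (my example, not the author's code; the regexes are assumptions about the emote and mention formats):
```python
import re

def clean_reply(text: str) -> str:
    # Strip Discord-style :emote: tokens and @mentions from a generated reply.
    text = re.sub(r":[^\s:]+:", " ", text)
    text = re.sub(r"@\S+", " ", text)
    return re.sub(r"\s{2,}", " ", text).strip()

print(clean_reply("hi :pepe_laugh: @RandomName how are you"))  # -> "hi how are you"
```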
Issues:
Like all conversational AIs, the model lacks a stable persona and changes its name far too often. This can be solved by pairing it with an AIML chatbot that gives it a fixed character!
[Live Demo](https://dumbot-331213.uc.r.appspot.com/)
Example:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot")
model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=4,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
NbAiLabArchive/test_w5 | NbAiLabArchive | 2021-12-22T16:11:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | Just for performing some experiments. Do not use. |
huggingartists/100-gecs | huggingartists | 2021-12-22T15:23:59Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/100-gecs",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/100-gecs
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9fd98af9a817af8cd78636f71895b6ad.500x500x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">100 gecs</div>
<a href="https://genius.com/artists/100-gecs">
<div style="text-align: center; font-size: 14px;">@100-gecs</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from 100 gecs.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/100-gecs).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/100-gecs")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3c9j4tvq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 100 gecs's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1v0ffa4e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1v0ffa4e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/100-gecs')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/100-gecs")
model = AutoModelWithLMHead.from_pretrained("huggingartists/100-gecs")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
dtomas/roberta-base-bne-irony | dtomas | 2021-12-22T13:55:36Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"irony",
"sarcasm",
"spanish",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- es
tags:
- irony
- sarcasm
- spanish
widget:
- text: "¡Cómo disfruto peleándome con los Transformers!"
example_title: "Ironic"
- text: "Madrid es la capital de España"
example_title: "Non ironic"
---
# RoBERTa base finetuned for Spanish irony detection
## Model description
Model to perform irony detection in Spanish. This is a finetuned version of the [RoBERTa-base-bne model](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the [IroSvA](https://www.autoritas.net/IroSvA2019/) corpus. Only the Spanish from Spain variant was used in the training process. It comprises 2,400 tweets labeled as ironic/non-ironic.
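A hedged usage sketch (not included in the original card); the exact label names depend on the model's config:
```python
from transformers import pipeline

# Minimal irony-detection example using the widget texts from this card.
classifier = pipeline("text-classification", model="dtomas/roberta-base-bne-irony")
print(classifier("¡Cómo disfruto peleándome con los Transformers!"))  # ironic
print(classifier("Madrid es la capital de España"))                   # non-ironic
```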
|
CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper | CheonggyeMountain-Sherpa | 2021-12-22T09:29:39Z | 1 | 0 | null | [
"gpt2",
"ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language:
- ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
## Model based on
[Ko-GPT-Trinity 1.2B (v0.5)](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5)
## Example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper",
revision="punct_wrapper-related_words-overfit", # or punct_wrapper-related_words-minevalloss
bos_token="<s>",
eos_token="</s>",
unk_token="<unk>",
pad_token="<pad>",
mask_token="<mask>",
)
model = AutoModelForCausalLM.from_pretrained(
"CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper",
revision="punct_wrapper-related_words-overfit", # or punct_wrapper-related_words-minevalloss
pad_token_id=tokenizer.eos_token_id,
).to(device="cuda")
model.eval()
prompt = "석양이 보이는 경치"
wrapped_prompt = f"@{prompt}@<usr>\n"
with torch.no_grad():
tokens = tokenizer.encode(wrapped_prompt, return_tensors="pt").to(device="cuda")
gen_tokens = model.generate(
tokens,
max_length=64,
repetition_penalty=2.0,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
top_k=16,
top_p=0.8,
)
generated = tokenizer.decode(gen_tokens[0][len(tokens[0]):])
print(generated)
# 해가 지고 있을 무렵 (Around the time the sun is setting)
# 나는 석양을 보러 간다 (I go out to watch the sunset)
# 붉은 하늘과 하얀 구름이 나를 반겨줄 것 같아서리 (Because the red sky and white clouds seem to welcome me)
# 하지만 내가 본 해는 저물어만 가고 (But the sun I saw only keeps sinking)
# 구름마저 자취를 감춘 어둠만이 남아있을 뿐이네 (Only darkness remains, even the clouds have vanished)
# 내가 탄 배는 보이지도 않고 (And the boat I boarded is nowhere to be seen)
``` |
ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa | ayameRushia | 2021-12-22T08:52:47Z | 78,229 | 14 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"id",
"dataset:indonlu",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9373015873015873
language: id
widget:
- text: "Saya mengapresiasi usaha anda"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa
This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3390
- Accuracy: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2864 | 1.0 | 688 | 0.2154 | 0.9286 |
| 0.1648 | 2.0 | 1376 | 0.2238 | 0.9357 |
| 0.0759 | 3.0 | 2064 | 0.3351 | 0.9365 |
| 0.044 | 4.0 | 2752 | 0.3390 | 0.9373 |
| 0.0308 | 5.0 | 3440 | 0.4346 | 0.9365 |
| 0.0113 | 6.0 | 4128 | 0.4708 | 0.9365 |
| 0.006 | 7.0 | 4816 | 0.5533 | 0.9325 |
| 0.0047 | 8.0 | 5504 | 0.5888 | 0.9310 |
| 0.0001 | 9.0 | 6192 | 0.5961 | 0.9333 |
| 0.0 | 10.0 | 6880 | 0.5992 | 0.9357 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
hrdipto/wav2vec2-base-timit-demo-colab | hrdipto | 2021-12-22T08:25:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4241
- Wer: 0.3381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7749 | 4.0 | 500 | 2.0639 | 1.0018 |
| 0.9252 | 8.0 | 1000 | 0.4853 | 0.4821 |
| 0.3076 | 12.0 | 1500 | 0.4507 | 0.4044 |
| 0.1732 | 16.0 | 2000 | 0.4315 | 0.3688 |
| 0.1269 | 20.0 | 2500 | 0.4481 | 0.3559 |
| 0.1087 | 24.0 | 3000 | 0.4354 | 0.3464 |
| 0.0832 | 28.0 | 3500 | 0.4241 | 0.3381 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
dpasch01/finetune-clm-employment | dpasch01 | 2021-12-22T07:59:51Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetune-clm-employment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-clm-employment
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3283 | 1.0 | 3989 | 1.9578 |
| 2.0824 | 2.0 | 7978 | 1.9013 |
| 1.9936 | 3.0 | 11967 | 1.8625 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
hrdipto/wav2vec2-xls-r-timit-tokenizer-base | hrdipto | 2021-12-22T07:19:26Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-timit-tokenizer-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-timit-tokenizer-base
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0828
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.3134 | 4.03 | 500 | 3.0814 | 1.0 |
| 2.9668 | 8.06 | 1000 | 3.0437 | 1.0 |
| 2.9604 | 12.1 | 1500 | 3.0337 | 1.0 |
| 2.9619 | 16.13 | 2000 | 3.0487 | 1.0 |
| 2.9588 | 20.16 | 2500 | 3.0859 | 1.0 |
| 2.957 | 24.19 | 3000 | 3.0921 | 1.0 |
| 2.9555 | 28.22 | 3500 | 3.0828 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17 | csukuangfj | 2021-12-22T04:24:10Z | 0 | 1 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | # Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17
cd icefall-asr-librispeech-transducer-bpe-500-2021-12-17
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo is trained using the commit `cb04c8a7509425ab45fae888b0ca71bbbd23f0de`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout cb04c8a7509425ab45fae888b0ca71bbbd23f0de
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/cb04c8a7509425ab45fae888b0ca71bbbd23f0de/egs/librispeech/ASR/transducer/train.py#L196>
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer, plus a 4-layer LSTM with hidden size 512.
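As a rough illustration (my sketch, not the actual icefall code), the decoder described above corresponds to something like:
```python
import torch.nn as nn

# Hedged sketch of the transducer decoder described above:
# a 1024-dim embedding followed by a 4-layer LSTM with hidden size 512.
class TransducerDecoder(nn.Module):
    def __init__(self, vocab_size: int = 500):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, 1024)
        self.rnn = nn.LSTM(input_size=1024, hidden_size=512,
                           num_layers=4, batch_first=True)

    def forward(self, y, states=None):
        out, states = self.rnn(self.embedding(y), states)
        return out, states
```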
-----
## Description
This repo provides pre-trained RNN-T Conformer model for the librispeech dataset
using [icefall][icefall].
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer/train.py \
--world-size 4 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir transducer/exp-lr-2.5-full \
--full-libri 1 \
--max-duration 250 \
--lr-factor 2.5
```
The command for decoding is:
```
epoch=26
avg=12
./transducer/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer/exp-lr-2.5-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100
```
You can find the decoding log for the above command in this
repo: [log/log-decode-epoch-26-avg-12-2021-12-17-09-33-04](log/log-decode-epoch-26-avg-12-2021-12-17-09-33-04).
The best WER using greedy search is:
| | test-clean | test-other |
|-----|------------|------------|
| WER | 3.16 | 7.71 |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer/export.py \
--epoch 26 \
--avg 12 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer/exp-lr-2.5-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-bpe-500-2021-12-17/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/log
|
huggingtweets/_luisinhobr-beckvencido | huggingtweets | 2021-12-22T02:57:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/_luisinhobr-beckvencido/1640141850327/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1470914400764715012/YO9XqA0n_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1390224220643278850/LcIZLss-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">agrummgit ag😜 & luisfer nando</div>
<div style="text-align: center; font-size: 14px;">@_luisinhobr-beckvencido</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from agrummgit ag😜 & luisfer nando.
| Data | agrummgit ag😜 | luisfer nando |
| --- | --- | --- |
| Tweets downloaded | 3226 | 2366 |
| Retweets | 379 | 367 |
| Short tweets | 672 | 503 |
| Tweets kept | 2175 | 1496 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34idoh6o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_luisinhobr-beckvencido's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1w6ipjqa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1w6ipjqa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_luisinhobr-beckvencido')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tingtingyuli/wav2vec2-base-timit-demo-colab | tingtingyuli | 2021-12-21T22:26:02Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4371
- Wer: 0.3402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6515 | 4.0 | 500 | 1.9481 | 0.9825 |
| 0.8007 | 8.0 | 1000 | 0.4364 | 0.4424 |
| 0.2559 | 12.0 | 1500 | 0.4188 | 0.3848 |
| 0.1483 | 16.0 | 2000 | 0.4466 | 0.3524 |
| 0.1151 | 20.0 | 2500 | 0.4492 | 0.3519 |
| 0.0971 | 24.0 | 3000 | 0.4568 | 0.3453 |
| 0.0765 | 28.0 | 3500 | 0.4371 | 0.3402 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
iliketurtles/distilgpt2-finetuned-wikitext2 | iliketurtles | 2021-12-21T19:51:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab | akashsivanandan | 2021-12-21T18:26:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tamil-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8072
- Wer: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.0967 | 1.0 | 118 | 4.6437 | 1.0 |
| 3.4973 | 2.0 | 236 | 3.2588 | 1.0 |
| 3.1305 | 3.0 | 354 | 2.6566 | 1.0 |
| 1.2931 | 4.0 | 472 | 0.9156 | 0.9944 |
| 0.6851 | 5.0 | 590 | 0.7474 | 0.8598 |
| 0.525 | 6.0 | 708 | 0.6649 | 0.7995 |
| 0.4325 | 7.0 | 826 | 0.6740 | 0.7752 |
| 0.3766 | 8.0 | 944 | 0.6220 | 0.7628 |
| 0.3256 | 9.0 | 1062 | 0.6316 | 0.7322 |
| 0.2802 | 10.0 | 1180 | 0.6442 | 0.7305 |
| 0.2575 | 11.0 | 1298 | 0.6885 | 0.7280 |
| 0.2248 | 12.0 | 1416 | 0.6702 | 0.7197 |
| 0.2089 | 13.0 | 1534 | 0.6781 | 0.7173 |
| 0.1893 | 14.0 | 1652 | 0.6981 | 0.7049 |
| 0.1652 | 15.0 | 1770 | 0.7154 | 0.7436 |
| 0.1643 | 16.0 | 1888 | 0.6798 | 0.7023 |
| 0.1472 | 17.0 | 2006 | 0.7381 | 0.6947 |
| 0.1372 | 18.0 | 2124 | 0.7240 | 0.7065 |
| 0.1318 | 19.0 | 2242 | 0.7305 | 0.6714 |
| 0.1211 | 20.0 | 2360 | 0.7288 | 0.6597 |
| 0.1178 | 21.0 | 2478 | 0.7417 | 0.6699 |
| 0.1118 | 22.0 | 2596 | 0.7476 | 0.6753 |
| 0.1016 | 23.0 | 2714 | 0.7973 | 0.6647 |
| 0.0998 | 24.0 | 2832 | 0.8027 | 0.6633 |
| 0.0917 | 25.0 | 2950 | 0.8045 | 0.6680 |
| 0.0907 | 26.0 | 3068 | 0.7884 | 0.6565 |
| 0.0835 | 27.0 | 3186 | 0.8009 | 0.6622 |
| 0.0749 | 28.0 | 3304 | 0.8123 | 0.6536 |
| 0.0755 | 29.0 | 3422 | 0.8006 | 0.6555 |
| 0.074 | 30.0 | 3540 | 0.8072 | 0.6531 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
espnet/ftshijt_espnet2_asr_totonac_transformer | espnet | 2021-12-21T16:10:01Z | 1 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:totonac",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- totonac
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/ftshijt_espnet2_asr_totonac_transformer`
This model was trained by ftshijt using totonac recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/totonac/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_totonac_transformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Nov 7 09:22:09 EST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_transformer_specaug_raw_bpe250_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|3547|59.8|32.9|7.3|6.5|46.7|87.4|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|5018|55.5|35.7|8.8|6.1|50.6|92.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|22510|88.1|4.4|7.4|3.9|15.8|87.4|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|32990|86.9|4.3|8.8|4.0|17.1|92.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|9360|70.3|15.8|13.8|4.3|34.0|87.4|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|13835|70.5|16.0|13.6|4.4|33.9|92.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_specaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_specaug_raw_bpe250_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 15
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe250_sp/train/speech_shape
- exp/asr_stats_raw_bpe250_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe250_sp/valid/speech_shape
- exp/asr_stats_raw_bpe250_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /tmp/jiatong-7359.okvPvI3Z/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-7359.okvPvI3Z/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /tmp/jiatong-7359.okvPvI3Z/raw/dev/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-7359.okvPvI3Z/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- ':'
- ▁N
- NI
- N
- ▁IYMA
- ▁NA
- NA
- ▁WA
- WA
- ▁
- ''''
- KA
- ▁MA
- MA
- T
- ▁XA
- TA
- NCHU
- WI
- ▁LI
- ▁NI
- PA
- YI
- ▁PUS
- K
- ▁PI
- ▁X
- S
- ▁TA
- YA
- ▁LA
- Q
- QA
- TI
- ▁KA
- QO
- W
- ▁KAH
- ▁PALA
- H
- X
- XA
- ▁KI
- A
- LH
- I
- LA
- ▁CHA
- ▁A
- ▁XLI
- ▁LHI
- U
- ▁K
- KANI
- KU
- Y
- ▁LU
- Á
- ▁CHU
- O
- KI
- ▁KIWI
- NTLA
- ▁TLA
- M
- ▁TAWA
- ▁TI
- ▁S
- WANI
- CHA
- LHI
- LI
- ▁TU
- ▁PALHA
- Í
- ▁CHANÁ
- ▁KILHWAMPA
- KÁN
- ▁WAYMA
- E
- SA
- ▁E
- ▁LHU
- LHA
- PU
- ▁LHA
- ▁PA
- ▁LAK
- ▁ANTA
- ▁KITI
- NCHÚ
- SI
- TLA
- PI
- ▁KINI
- CHI
- ▁PEROH
- ▁PU
- QÓ
- QALHCHIWINA
- TU
- ▁TLHA
- ▁WI
- NÁ
- ▁KAN
- ▁NAYI
- CH
- 'NO'
- ▁U
- TSA
- MÁ
- NQO
- ▁ANA
- ▁LIKWA
- ▁XTA
- J
- ▁QALH
- TO
- TÁ
- ▁USA
- ▁PORQUE
- ▁MI
- L
- ▁TAWÁ
- XI
- LHAQAPASA
- P
- CHIWI
- WÁ
- NTI
- ▁JKA
- Ú
- NTLHA
- R
- TSI
- C
- STA
- ▁LH
- LHU
- MPI
- ▁I
- ▁NILH
- ▁KATSI
- ▁LHAK
- MAKLHAKASKI
- ▁WANIKÁN
- ▁WIXI
- ▁TSI
- KÚ
- NÍ
- ▁PAKS
- NU
- TLHA
- YÁ
- KUCHAN
- XAQATLI
- ▁MAX
- ▁LAQAPASA
- ▁LAQ
- QALH
- KATSI
- Ó
- LAQAPASA
- ▁J
- ▁QAMA
- NTU
- MI
- KIWI
- ▁KIN
- ▁XANAT
- ▁CHI
- JA
- ▁IY
- ▁TSU
- MAKLAKAS
- ▁MAQA
- LÁ
- ▁KATSIYA
- ▁TLANKA
- ▁STAK
- ▁XLA
- ▁LHIKWA
- ▁SQA
- ▁P
- TAHNA
- ▁TLAQ
- ▁JKATSI
- MAKLAKASKINKA
- YÁW
- WATIYA
- CHÁ
- ▁IPORQUEI
- ▁AKXNI
- TSU
- ▁TSINÓ
- ▁STAKA
- ▁AKXNÍ
- LAKATA
- KATSÍ
- ▁XALHAK
- TLAWAYA
- SPUT
- ▁XATAWA
- QALHCHIWI
- PÁ
- JU
- ▁XAXANAT
- ▁PÉREZ
- ▁AKTSU
- ▁JKI
- NTÚ
- ▁KATSIYÁ
- ▁IESTEI
- LAQAPASÁ
- ▁MASKI
- ▁LAQSQATÁ
- ▁TLHANKA
- ▁WANIKANI
- ▁LÓPEZ
- MAKLAKASKINKÁN
- ▁ANTÁ
- ▁TACHIWÍ
- ▁SEBAST
- ▁CANO
- ▁XKUTNI
- ▁UKXILH
- TANKAH
- LAKASKINQO
- LAKAPASTAK
- ▁XCHACHAT
- TAKAWANÍ
- ▁TLÁ
- ▁TSINOH
- KAXTLAWA
- ▁NÚÑEZ
- ▁XLAKASKINKA
- ▁WÁTIYA
- ONCE
- Z
- É
- D
- Ñ
- V
- F
- G
- '1'
- B
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram250/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_bpe250_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer | espnet | 2021-12-21T15:43:26Z | 4 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:puebla_nahuatl",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- puebla_nahuatl
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer`
This model was trained by ftshijt using the puebla_nahuatl recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/puebla_nahuatl/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer
```
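Outside the recipe, inference from Python should look roughly like this (a minimal sketch, assuming the `espnet_model_zoo` package is installed and that the input audio is 16 kHz mono; the file name is hypothetical):
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# download and build the model from the Hub
speech2text = Speech2Text.from_pretrained(
    "espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer"
)

speech, rate = sf.read("sample.wav")  # hypothetical file, expected 16 kHz mono
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]  # best hypothesis first
print(text)
```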
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Nov 7 18:16:55 EST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_transformer_hubert_raw_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|90532|77.0|17.0|6.0|3.6|26.6|74.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|590273|92.2|2.1|5.7|3.0|10.8|74.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|242435|86.0|7.3|6.8|3.5|17.5|74.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_hubert.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_hubert_raw_bpe500_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 15
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /tmp/jiatong-150390.uytFFbyG/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-150390.uytFFbyG/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /tmp/jiatong-150390.uytFFbyG/raw/dev/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-150390.uytFFbyG/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ':'
- N
- ▁A
- ▁WA
- ▁KE
- ▁YO
- ▁NE
- ▁SE
- H
- MO
- WA
- ''''
- ▁NO
- ▁I
- ▁N
- S
- ▁KI
- K
- ▁
- MAH
- KA
- TA
- L
- ▁POS
- PA
- ▁KA
- ▁TA
- ▁MO
- T
- ▁YEHWA
- I
- MEH
- ▁YA
- ▁DE
- MA
- A
- ▁TE
- TI
- TSI
- NI
- CHI
- ▁PERO
- KI
- LI
- TO
- WI
- ▁PARA
- KO
- E
- ▁O
- ▁IKA
- TE
- O
- W
- ▁NEH
- ▁NOCHI
- CH
- ▁TI
- ▁TIK
- LO
- ▁SAH
- ▁MAH
- NA
- LA
- ▁OMPA
- ▁IHKÓ
- YA
- ▁NI
- ▁PORQUE
- ▁MA
- YO
- ▁TEIN
- LIA
- ▁E
- MPA
- ▁NIKA
- X
- YAH
- ▁KWALTSI
- SA
- TSA
- ▁MOCHI
- ▁NIK
- ▁WE
- ▁TO
- TSÍ
- ▁SEMI
- ▁KITA
- WAK
- KWI
- MI
- ▁MM
- ▁XO
- ▁SEKI
- JÓ
- AH
- ▁KOMO
- R
- NE
- ▁OK
- ▁KWALI
- ▁CHI
- ▁YEH
- ▁NELI
- SE
- PO
- WAH
- PI
- ME
- KWA
- ▁PA
- ▁ONKAK
- KE
- ▁YE
- ▁T
- LTIK
- ▁TEHWA
- TAH
- ▁TIKI
- ▁QUE
- ▁NIKI
- PE
- ▁IWKI
- XI
- TOK
- ▁TAMAN
- ▁KO
- TSO
- LE
- RA
- SI
- WÍ
- MAN
- ▁TIMO
- 'NO'
- SO
- ▁MIAK
- U
- ▁TEH
- ▁KICHI
- ▁XA
- WE
- ▁KOW
- KEH
- NÍ
- LIK
- ▁ITECH
- TIH
- ▁PE
- ▁KIPIA
- ▁CUANDO
- ▁KWALTIA
- ▁HASTA
- LOWA
- ▁ENTÓ
- ▁NA
- XO
- RO
- TIA
- ▁NIKITA
- CHIHCHI
- ▁SEPA
- ▁MAHYÁ
- ▁PAHTI
- ▁K
- LIAH
- ▁SAYOH
- MATI
- ▁PI
- TS
- ▁MÁS
- XMATI
- KAH
- ▁XI
- M
- ▁ESTE
- HKO
- KOWIT
- MIKI
- CHO
- ▁TAK
- Á
- ▁KILIAH
- CHIO
- ▁KIHTOWA
- ▁KITE
- NEKI
- ▁ME
- XA
- ▁TEL
- B
- ▁KOWIT
- ▁ATA
- TIK
- ▁EKINTSI
- ▁IMA
- ▁KWA
- ▁OSO
- ▁NEHJÓ
- ▁ITEYO
- Y
- SKEH
- ▁ISTA
- ▁NIKILIA
- LIH
- ▁TIKWI
- ▁PANÉ
- KOWA
- ▁OX
- TEKI
- ▁SA
- NTE
- ▁KIKWI
- TSITSI
- NOH
- AHSI
- ▁IXO
- WIA
- LTSI
- ▁KIMA
- C
- ▁WEHWEI
- ▁TEPITSI
- ▁IHK
- ▁XIWIT
- YI
- LIS
- ▁CA
- XMATTOK
- SÁ
- ▁MOTA
- RE
- ▁TIKIHTO
- ▁MI
- ▁X
- D
- ▁SAN
- WIH
- ▁WEHKA
- KWE
- CHA
- ▁SI
- KTIK
- ▁YETOK
- ▁MOKA
- NEMI
- LILIA
- ▁¿
- TIW
- ▁KIHTOWAH
- LTI
- Ó
- MASÁ
- ▁POR
- ▁TIKITA
- KETSA
- ▁IWA
- METS
- YOH
- ▁TAKWA
- HKEH
- ▁KIKWIH
- ▁KIKWA
- NIA
- ▁ACHI
- ▁KIKWAH
- ▁KACHI
- ▁PO
- ▁IGUAL
- NAL
- ▁PILI
- ▁NIMAN
- YE
- ▁NIKMATI
- WIAH
- ▁KIPA
- ▁M
- J
- ▁KWI
- ▁WI
- WAYA
- Z
- ▁KITEKI
- G
- ▁'
- ▁IHKO
- CE
- ▁TONI
- ▁TSIKITSI
- P
- DO
- TOKEH
- NIK
- ▁TIKILIAH
- ▁KOWTAH
- ▁TAI
- ▁TATA
- TIAH
- CA
- PIL
- CHOWA
- ▁KIMATI
- ▁TAMA
- XKA
- XIWIT
- TOS
- KILIT
- ILWI
- SKI
- YEH
- DA
- WAYO
- ▁TAPA
- ▁NIMO
- CHIT
- ▁NIMITS
- ▁KINA
- PAHTI
- RI
- ▁BUENO
- ▁ESKI
- WAYAH
- PANO
- KOW
- WEYAK
- LPAN
- LTIA
- ▁KITO
- CO
- ▁TINE
- KIH
- JO
- ▁KATKA
- ▁TIKTA
- PAHTIA
- ▁XIWTSI
- ▁CHIKA
- ▁KANAH
- ▁KOYO
- MPI
- ▁IXIWYO
- IHTIK
- ▁KWE
- ▁XIW
- WILIA
- XTIK
- ▁VE
- ▁TIKMATI
- ▁KOKOLIS
- LKWI
- ▁AHKO
- MEKAT
- ▁TIKMA
- ▁NIMITSILIA
- ▁MITS
- XTA
- ▁CO
- ▁KOMA
- ▁KOMOHKÓ
- F
- ▁OKSEKI
- ▁TEISÁ
- ▁ESO
- ▁IKOWYO
- ▁ES
- TOHTO
- XTI
- ▁TSI
- ▁TIKO
- PIHPI
- ▁OKSÉ
- ▁WEHKAPAN
- KALAKI
- ▁WEL
- ▁MIGUEL
- TEKITI
- ▁TOKNI
- ROWA
- ▁MOSKALTIA
- Í
- XOKO
- ▁TIKCHI
- ▁EHE
- ▁KWO
- LPI
- HTOK
- TSTI
- TÍ
- ▁TEIHSÁ
- KILO
- ▁PUES
- SKIA
- HTIW
- LILIAH
- ▁IHWA
- ▁KOSTIK
- ▁TIKIHTOWAH
- ▁CHA
- ▁COMO
- ▁KIMANA
- CU
- TAMAN
- WITS
- ▁KOKO
- ILPIA
- ▁NIMONO
- ▁WELI
- ▁NIKWI
- WTOK
- ▁KINEKI
- KOKOH
- ▁P
- LTIAH
- XKO
- ▁ONKAYA
- TAPOWI
- MATTOK
- ▁MISMO
- ▁NIKIHTO
- ▁NIKMATTOK
- MESKIA
- ▁SOH
- KWOWIT
- XTIA
- WELITA
- ▁DESPUÉS
- ▁IXWA
- ZA
- TSAPOT
- SKAL
- ▁SIEMPRE
- TINEMI
- Ñ
- ▁ESKIA
- NELOWA
- ▁TZINACAPAN
- ▁DI
- XIWYO
- ▁AHA
- ▁AHWIA
- É
- ▁KIKWIAH
- MATTOKEH
- ▁ACHTO
- XTILIA
- TAPAL
- ▁KIHTO
- TEHTE
- ▁PORIN
- ▁TSOPE
- ▁KAHFE
- GU
- ▁NIMITSTAHTANI
- ▁TAHTA
- ▁KOWTATI
- ISWAT
- ▁TIKPIA
- ▁KOMEKAT
- TIOWIH
- ▁TIMONOHNO
- ▁TIEMPO
- WEHKA
- QUI
- ▁TIHTI
- ▁XOXOKTIK
- ▁TAXKAL
- EHE
- ▁AJÁ
- NANAKAT
- NIWKI
- ▁CI
- ▁ITSMOL
- ▁NIKPIA
- TEKPA
- ▁BO
- ▁TASOHKA
- Ú
- ¡
- '8'
- '9'
- '0'
- '1'
- '2'
- ¿
- Ò
- '4'
- À
- '7'
- '5'
- '3'
- ́
- V
- ̈
- Ï
- '6'
- Q
- Ì
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/_luisinhobr-nomesdegato-nomesdj | huggingtweets | 2021-12-21T14:04:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/_luisinhobr-nomesdegato-nomesdj/1640095484918/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1390224220643278850/LcIZLss-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1175884636624510976/KtBI_1GE_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1245550936807874560/j_zCtKSJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">luisfer nando & nomes foda de dj & nomes de gato</div>
<div style="text-align: center; font-size: 14px;">@_luisinhobr-nomesdegato-nomesdj</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from luisfer nando & nomes foda de dj & nomes de gato.
| Data | luisfer nando | nomes foda de dj | nomes de gato |
| --- | --- | --- | --- |
| Tweets downloaded | 2357 | 3250 | 3211 |
| Retweets | 365 | 6 | 69 |
| Short tweets | 503 | 632 | 1710 |
| Tweets kept | 1489 | 2612 | 1432 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mwm543c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_luisinhobr-nomesdegato-nomesdj's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nbxg8c7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nbxg8c7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_luisinhobr-nomesdegato-nomesdj')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hrdipto/wav2vec2-xls-r-timit-tokenizer | hrdipto | 2021-12-21T11:49:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-timit-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-timit-tokenizer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4285
- Wer: 0.3662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` mapping is sketched after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
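For reference, these settings map roughly onto the `transformers` `TrainingArguments` API as sketched below (not the exact training script; the output directory name is hypothetical):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-timit-tokenizer",  # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```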
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1571 | 4.03 | 500 | 0.5235 | 0.5098 |
| 0.2001 | 8.06 | 1000 | 0.4172 | 0.4375 |
| 0.0968 | 12.1 | 1500 | 0.4562 | 0.4016 |
| 0.0607 | 16.13 | 2000 | 0.4640 | 0.4050 |
| 0.0409 | 20.16 | 2500 | 0.4688 | 0.3914 |
| 0.0273 | 24.19 | 3000 | 0.4414 | 0.3763 |
| 0.0181 | 28.22 | 3500 | 0.4285 | 0.3662 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bhavikardeshna/multilingual-bert-base-cased-english | bhavikardeshna | 2021-12-21T11:42:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/multilingual-bert-base-cased-chinese | bhavikardeshna | 2021-12-21T11:41:47Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/multilingual-bert-base-cased-arabic | bhavikardeshna | 2021-12-21T11:41:30Z | 27 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/xlm-roberta-base-arabic | bhavikardeshna | 2021-12-21T11:41:04Z | 27 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/xlm-roberta-base-german | bhavikardeshna | 2021-12-21T11:40:35Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/xlm-roberta-base-spanish | bhavikardeshna | 2021-12-21T11:39:52Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
patrickvonplaten/xls-r-300m-it-phoneme | patrickvonplaten | 2021-12-21T11:15:39Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_3_0",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_3_0
- generated_from_trainer
model-index:
- name: xls-r-300m-it-phoneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-it-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the mozilla-foundation/common_voice_3_0 - IT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3899
- Wer: 0.0770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000075
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
kwang1993/wav2vec2-base-timit-demo | kwang1993 | 2021-12-21T04:54:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | This demo model follows the fine-tuning walkthrough at https://huggingface.co/blog/fine-tune-wav2vec2-english.
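A minimal inference sketch (hedged; per the note below, the processor is assumed to come from `facebook/wav2vec2-base`, and the audio file name is hypothetical):
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# processor from the base checkpoint, fine-tuned weights from this repo
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForCTC.from_pretrained("kwang1993/wav2vec2-base-timit-demo")

audio, rate = sf.read("sample.wav")  # expected: 16 kHz mono
inputs = processor(audio, sampling_rate=rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```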
Use the processor from https://huggingface.co/facebook/wav2vec2-base |
vuiseng9/pegasus-billsum | vuiseng9 | 2021-12-21T01:41:33Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | This model is developed with transformers v4.13 with minor patch in this [fork](https://github.com/vuiseng9/transformers/tree/pegasus-v4p13).
# Setup
```bash
git clone https://github.com/vuiseng9/transformers
cd transformers
git checkout pegasus-v4p13 && git reset --hard 41eeb07
# installation, set summarization dependency
# . . .
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
NEPOCH=10
RUNID=pegasus-billsum-${NEPOCH}eph-run1
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path google/pegasus-large \
--dataset_name billsum \
--do_train \
--adafactor \
--learning_rate 2e-4 \
--label_smoothing_factor 0.1 \
--num_train_epochs $NEPOCH \
--per_device_train_batch_size 2 \
--do_eval \
--per_device_eval_batch_size 2 \
--num_beams 8 \
--max_source_length 1024 \
--max_target_length 256 \
--evaluation_strategy steps \
--eval_steps 1000 \
--save_strategy steps \
--save_steps 2000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
# Eval
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=3
DT=$(date +%F_%H-%M)
RUNID=pegasus-billsum-${DT}
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-test/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path vuiseng9/pegasus-billsum \
--dataset_name billsum \
--max_source_length 1024 \
--max_target_length 256 \
--do_predict \
--per_device_eval_batch_size 8 \
--predict_with_generate \
--num_beams 8 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
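For a quick check outside these scripts, the checkpoint can presumably also be driven through the plain `transformers` API (a sketch; the input text and generation settings are illustrative, not the exact evaluation setup):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_id = "vuiseng9/pegasus-billsum"
tokenizer = PegasusTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

text = "The bill amends the Internal Revenue Code to ..."  # made-up snippet
batch = tokenizer(text, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=8, max_length=256)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```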
Although fine-tuning was carried out for 10 epochs, this model is the checkpoint (at 12000 steps, ~6.6 epochs, 210 minutes) with the lowest eval loss during training. Testing/predicting with this checkpoint should give the results below.
```
***** predict metrics *****
predict_gen_len = 179.7363
predict_loss = 1.2452
predict_rouge1 = 56.8657
predict_rouge2 = 38.6531
predict_rougeL = 44.8399
predict_rougeLsum = 51.6266
predict_runtime = 1:19:28.20
predict_samples = 3269
predict_samples_per_second = 0.686
predict_steps_per_second = 0.086
``` |
patrickvonplaten/wavlm-libri-clean-100h-base-plus | patrickvonplaten | 2021-12-20T12:59:01Z | 14,635 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"wavlm_libri_finetune",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- wavlm_libri_finetune
model-index:
- name: wavlm-libri-clean-100h-base-plus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-libri-clean-100h-base-plus
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0819
- Wer: 0.0683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.8877 | 0.34 | 300 | 2.8649 | 1.0 |
| 0.2852 | 0.67 | 600 | 0.2196 | 0.1830 |
| 0.1198 | 1.01 | 900 | 0.1438 | 0.1273 |
| 0.0906 | 1.35 | 1200 | 0.1145 | 0.1035 |
| 0.0729 | 1.68 | 1500 | 0.1055 | 0.0955 |
| 0.0605 | 2.02 | 1800 | 0.0936 | 0.0859 |
| 0.0402 | 2.35 | 2100 | 0.0885 | 0.0746 |
| 0.0421 | 2.69 | 2400 | 0.0848 | 0.0700 |
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-common_voice-tr-demo | patrickvonplaten | 2021-12-20T12:54:39Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- tr
license: apache-2.0
tags:
- speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- Wer: 0.3556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7391 | 0.92 | 100 | 3.5760 | 1.0 |
| 2.927 | 1.83 | 200 | 3.0796 | 0.9999 |
| 0.9009 | 2.75 | 300 | 0.9278 | 0.8226 |
| 0.6529 | 3.67 | 400 | 0.5926 | 0.6367 |
| 0.3623 | 4.59 | 500 | 0.5372 | 0.5692 |
| 0.2888 | 5.5 | 600 | 0.4407 | 0.4838 |
| 0.285 | 6.42 | 700 | 0.4341 | 0.4694 |
| 0.0842 | 7.34 | 800 | 0.4153 | 0.4302 |
| 0.1415 | 8.26 | 900 | 0.4317 | 0.4136 |
| 0.1552 | 9.17 | 1000 | 0.4145 | 0.4013 |
| 0.1184 | 10.09 | 1100 | 0.4115 | 0.3844 |
| 0.0556 | 11.01 | 1200 | 0.4182 | 0.3862 |
| 0.0851 | 11.93 | 1300 | 0.3985 | 0.3688 |
| 0.0961 | 12.84 | 1400 | 0.4030 | 0.3665 |
| 0.0596 | 13.76 | 1500 | 0.3880 | 0.3631 |
| 0.0917 | 14.68 | 1600 | 0.3878 | 0.3582 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist | patrickvonplaten | 2021-12-20T12:53:43Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: wav2vec2-librispeech-clean-100h-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-librispeech-clean-100h-demo-dist
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0572
- Wer: 0.0417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.399 | 0.11 | 100 | 3.6153 | 1.0 |
| 2.8892 | 0.22 | 200 | 2.8963 | 1.0 |
| 2.8284 | 0.34 | 300 | 2.8574 | 1.0 |
| 0.7347 | 0.45 | 400 | 0.6158 | 0.4850 |
| 0.1138 | 0.56 | 500 | 0.2038 | 0.1560 |
| 0.248 | 0.67 | 600 | 0.1274 | 0.1024 |
| 0.2586 | 0.78 | 700 | 0.1108 | 0.0876 |
| 0.0733 | 0.9 | 800 | 0.0936 | 0.0762 |
| 0.044 | 1.01 | 900 | 0.0834 | 0.0662 |
| 0.0393 | 1.12 | 1000 | 0.0792 | 0.0622 |
| 0.0941 | 1.23 | 1100 | 0.0769 | 0.0627 |
| 0.036 | 1.35 | 1200 | 0.0731 | 0.0603 |
| 0.0768 | 1.46 | 1300 | 0.0713 | 0.0559 |
| 0.0518 | 1.57 | 1400 | 0.0686 | 0.0537 |
| 0.0815 | 1.68 | 1500 | 0.0639 | 0.0515 |
| 0.0603 | 1.79 | 1600 | 0.0636 | 0.0500 |
| 0.056 | 1.91 | 1700 | 0.0609 | 0.0480 |
| 0.0265 | 2.02 | 1800 | 0.0621 | 0.0465 |
| 0.0496 | 2.13 | 1900 | 0.0607 | 0.0449 |
| 0.0436 | 2.24 | 2000 | 0.0591 | 0.0446 |
| 0.0421 | 2.35 | 2100 | 0.0590 | 0.0428 |
| 0.0641 | 2.47 | 2200 | 0.0603 | 0.0443 |
| 0.0466 | 2.58 | 2300 | 0.0580 | 0.0429 |
| 0.0132 | 2.69 | 2400 | 0.0574 | 0.0423 |
| 0.0073 | 2.8 | 2500 | 0.0586 | 0.0417 |
| 0.0021 | 2.91 | 2600 | 0.0574 | 0.0412 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/hubert-librispeech-clean-100h-demo-dist | patrickvonplaten | 2021-12-20T12:53:35Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: hubert-librispeech-clean-100h-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-librispeech-clean-100h-demo-dist
This model is a fine-tuned version of [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0984
- Wer: 0.0883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9031 | 0.11 | 100 | 2.9220 | 1.0 |
| 2.6437 | 0.22 | 200 | 2.6268 | 1.0 |
| 0.3934 | 0.34 | 300 | 0.4860 | 0.4182 |
| 0.3531 | 0.45 | 400 | 0.3088 | 0.2894 |
| 0.2255 | 0.56 | 500 | 0.2568 | 0.2426 |
| 0.3379 | 0.67 | 600 | 0.2073 | 0.2011 |
| 0.2419 | 0.78 | 700 | 0.1849 | 0.1838 |
| 0.2128 | 0.9 | 800 | 0.1662 | 0.1690 |
| 0.1341 | 1.01 | 900 | 0.1600 | 0.1541 |
| 0.0946 | 1.12 | 1000 | 0.1431 | 0.1404 |
| 0.1643 | 1.23 | 1100 | 0.1373 | 0.1304 |
| 0.0663 | 1.35 | 1200 | 0.1293 | 0.1307 |
| 0.162 | 1.46 | 1300 | 0.1247 | 0.1266 |
| 0.1433 | 1.57 | 1400 | 0.1246 | 0.1262 |
| 0.1581 | 1.68 | 1500 | 0.1219 | 0.1154 |
| 0.1036 | 1.79 | 1600 | 0.1127 | 0.1081 |
| 0.1352 | 1.91 | 1700 | 0.1087 | 0.1040 |
| 0.0471 | 2.02 | 1800 | 0.1085 | 0.1005 |
| 0.0945 | 2.13 | 1900 | 0.1066 | 0.0973 |
| 0.0843 | 2.24 | 2000 | 0.1102 | 0.0964 |
| 0.0774 | 2.35 | 2100 | 0.1079 | 0.0940 |
| 0.0952 | 2.47 | 2200 | 0.1056 | 0.0927 |
| 0.0635 | 2.58 | 2300 | 0.1026 | 0.0920 |
| 0.0665 | 2.69 | 2400 | 0.1012 | 0.0905 |
| 0.034 | 2.8 | 2500 | 0.1009 | 0.0900 |
| 0.0251 | 2.91 | 2600 | 0.0993 | 0.0883 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
abhishek/autonlp-prodigy-10-3362554 | abhishek | 2021-12-20T11:11:03Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-prodigy-10",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-prodigy-10
co2_eq_emissions: 5.340540212393564
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 3362554
- CO2 Emissions (in grams): 5.340540212393564
## Validation Metrics
- Loss: 0.14167872071266174
- Accuracy: 0.9587076867229332
- Precision: 0.7351351351351352
- Recall: 0.7923728813559322
- F1: 0.7626816212082591
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-prodigy-10-3362554
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("abhishek/autonlp-prodigy-10-3362554", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-prodigy-10-3362554", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
juliusco/biobert-base-cased-v1.1-squad-finetuned-covbiobert | juliusco | 2021-12-20T07:58:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: biobert-base-cased-v1.1-squad-finetuned-covbiobert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-squad-finetuned-covbiobert
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1-squad](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-squad) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3959
## Model description
More information needed
## Intended uses & limitations
More information needed
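In the meantime, the checkpoint can presumably be queried through the standard question-answering pipeline, as in this sketch (the question and context strings are made-up examples):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="juliusco/biobert-base-cased-v1.1-squad-finetuned-covbiobert",
)
result = qa(
    question="What is the incubation period of the virus?",  # hypothetical
    context="The median incubation period was estimated to be about 5 days.",  # hypothetical
)
print(result["answer"])
```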
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 486 | 0.3787 |
| 0.161 | 2.0 | 972 | 0.3959 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Amalq/roberta-base-finetuned-schizophreniaReddit2 | Amalq | 2021-12-20T05:41:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-schizophreniaReddit2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-schizophreniaReddit2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7785
## Model description
More information needed
## Intended uses & limitations
More information needed
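That said, since this is a RoBERTa masked-language model, a minimal fill-mask sketch should work (the example sentence is made up, and it assumes the checkpoint ships its tokenizer files):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Amalq/roberta-base-finetuned-schizophreniaReddit2")
print(fill_mask("I have been feeling <mask> lately."))  # hypothetical prompt
```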
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 490 | 1.8093 |
| 1.9343 | 2.0 | 980 | 1.7996 |
| 1.8856 | 3.0 | 1470 | 1.7966 |
| 1.8552 | 4.0 | 1960 | 1.7844 |
| 1.8267 | 5.0 | 2450 | 1.7839 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
addy88/wav2vec2-assamese-stt | addy88 | 2021-12-19T16:55:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-assamese-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-assamese-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
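# example usage (hypothetical file name; expects a 16 kHz mono WAV)
parse_transcription("sample.wav")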
``` |
addy88/wav2vec2-bengali-stt | addy88 | 2021-12-19T16:52:02Z | 4 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-bengali-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-bengali-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
addy88/wav2vec2-bhojpuri-stt | addy88 | 2021-12-19T16:48:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-bhojpuri-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-bhojpuri-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
addy88/wav2vec2-marathi-stt | addy88 | 2021-12-19T16:31:22Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-marathi-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-marathi-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
addy88/wav2vec2-rajsthani-stt | addy88 | 2021-12-19T15:52:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-rajsthani-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-rajsthani-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
addy88/wav2vec2-nepali-stt | addy88 | 2021-12-19T15:36:06Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-nepali-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-nepali-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
Ayham/bert_gpt2_summarization_cnndm_new | Ayham | 2021-12-19T15:09:12Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: bert_gpt2_summarization_cnndm_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_gpt2_summarization_cnndm_new
This model is an encoder-decoder model (RoBERTa encoder, GPT-2 decoder) fine-tuned on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
addy88/wav2vec2-english-stt | addy88 | 2021-12-19T15:08:42Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-english-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-english-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
addy88/wav2vec2-kannada-stt | addy88 | 2021-12-19T13:35:26Z | 248 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load the pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-kannada-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-kannada-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
rlagusrlagus123/XTC4096 | rlagusrlagus123 | 2021-12-19T11:19:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
12 epochs, batch size 4, gradient accumulation steps 1, tail 4096.
This seems to be the optimal setup. |
rlagusrlagus123/XTC20000 | rlagusrlagus123 | 2021-12-19T11:00:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
12 epochs, batch size 2, gradient accumulation steps 2, tail 20000. |
NbAiLabArchive/test_w5_long_roberta_tokenizer | NbAiLabArchive | 2021-12-19T10:36:40Z | 41 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | Just for performing some experiments. Do not use. |
haotieu/en-vi-mt-model | haotieu | 2021-12-19T10:17:03Z | 14 | 1 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # Helsinki-NLP/opus-mt-en-vi
- This model is a fine-tuned checkpoint of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi).
- This model reaches a BLEU score of 33.086 on the test set of the IWSLT'15 English-Vietnamese data (see the usage sketch below).
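A minimal translation sketch with the standard Marian API (hedged; it assumes the repo ships the tokenizer files, and the input sentence is made up):
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "haotieu/en-vi-mt-model"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["Machine translation is fun."], return_tensors="pt")  # hypothetical input
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```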
# Fine-tuning hyper-parameters
- learning_rate = 1e-4
- batch_size = 4
- num_train_epochs = 3.0 |
Langame/gpt2-waiting | Langame | 2021-12-19T09:02:26Z | 11 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"en",
"dataset:waiting-messages",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language:
- en
license: mit
tags:
- text-generation
datasets:
- waiting-messages
widget:
- text: 'List of funny waiting messages:'
example_title: 'Funny waiting messages'
---
# Langame/gpt2-waiting
This fine-tuned model can generate funny waiting messages.
[Langame](https://langa.me) uses these within its platform 😛.
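A minimal generation sketch (it reuses this card's widget prompt; generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Langame/gpt2-waiting")
# prompt taken from the widget example above
print(generator("List of funny waiting messages:", max_length=60, num_return_sequences=1))
```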
|
Ayham/roberta_gpt2_summarization_cnn_dailymail | Ayham | 2021-12-19T06:58:26Z | 14 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: roberta_gpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_gpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
This model uses a RoBERTa encoder and a GPT-2 decoder, fine-tuned on the summarization task. It achieves the following ROUGE scores:
- ROUGE-1: 35.886
- ROUGE-2: 16.292
- ROUGE-L: 23.499
## Intended uses & limitations
To use its API:
```python
from transformers import RobertaTokenizerFast, GPT2Tokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("Ayham/roberta_gpt2_summarization_cnn_dailymail")
input_tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
output_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

article = """Your Input Text"""
input_ids = input_tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(output_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Ayham/roberta_gpt2_summarization_xsum | Ayham | 2021-12-19T06:35:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: roberta_gpt2_summarization_xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_gpt2_summarization_xsum
This model is a fine-tuned version of [](https://huggingface.co/) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Ayham/xlnet_gpt2_summarization_xsum | Ayham | 2021-12-19T04:50:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: xlnet_gpt2_summarization_xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet_gpt2_summarization_xsum
This model is a fine-tuned version of [](https://huggingface.co/) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
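Although this card leaves usage unspecified, the sibling CNN/DailyMail card above loads its checkpoint as an `EncoderDecoderModel`; a comparable inference sketch for this XLNet/GPT2 pairing might look as follows. The tokenizer checkpoints (`xlnet-base-cased` for the encoder, `gpt2` for the decoder) are assumptions, not confirmed by this card:
```python
from transformers import XLNetTokenizer, GPT2Tokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("Ayham/xlnet_gpt2_summarization_xsum")
input_tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")  # assumed encoder tokenizer
output_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # assumed decoder tokenizer

article = """Your Input Text"""
input_ids = input_tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(output_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```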
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
zaccharieramzi/UNet-fastmri | zaccharieramzi | 2021-12-19T02:05:48Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- TensorFlow
- MRI reconstruction
- MRI
datasets:
- fastMRI
---
# UNet-fastmri
This model can be used to reconstruct single coil fastMRI data with an acceleration factor of 4.
## Model description
For more details, see https://www.mdpi.com/2076-3417/10/5/1816.
This section is WIP.
## Intended uses and limitations
This model can be used to reconstruct single coil knee data from Siemens scanner at acceleration factor 4.
It cannot be used on multi-coil data.
## How to use
This model can be loaded using the following repo: https://github.com/zaccharieramzi/fastmri-reproducible-benchmark.
After cloning the repo, `git clone https://github.com/zaccharieramzi/fastmri-reproducible-benchmark`, you can install the package via `pip install fastmri-reproducible-benchmark`.
The framework is TensorFlow.
You can initialize and load the model weights as follows:
```python
from fastmri_recon.models.functional_models.unet import unet
model = unet(n_layers=4, layers_n_channels=[16, 32, 64, 128], layers_n_non_lins=2,)
model.load_weights('UNet-fastmri/model_weights.h5')
```
Using the model is then as simple as:
```python
model(zero_filled_recon)
```
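For context, `zero_filled_recon` is the inverse-Fourier reconstruction of the undersampled k-space. The exact input shape is not documented here; a minimal sketch under the assumption of batched single-channel magnitude images would be:
```python
import tensorflow as tf

# hypothetical zero-filled input: [n_slices, n_rows, n_cols, 1]
zero_filled_recon = tf.zeros([1, 320, 320, 1], dtype=tf.float32)
reconstruction = model(zero_filled_recon)  # reconstructed images, same batch layout
```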
## Limitations and bias
The limitations and bias of this model have not been properly investigated.
## Training data
This model was trained using the [fastMRI dataset](https://fastmri.org/dataset/).
## Training procedure
The training procedure is described in https://www.mdpi.com/2076-3417/10/5/1816 for brain data.
This section is WIP.
## Evaluation results
This model was evaluated using the [fastMRI dataset](https://fastmri.org/dataset/).
| Contrast | PD | PDFS |
|----------|-------|--------|
| PSNR | 33.64 | 29.89 |
| SSIM | 0.807 | 0.6334 |
## Bibtex entry
```
@article{ramzi2020benchmarking,
title={Benchmarking MRI reconstruction neural networks on large public datasets},
author={Ramzi, Zaccharie and Ciuciu, Philippe and Starck, Jean-Luc},
journal={Applied Sciences},
volume={10},
number={5},
pages={1816},
year={2020},
publisher={Multidisciplinary Digital Publishing Institute}
}
```
|
zaccharieramzi/KIKI-net-OASIS | zaccharieramzi | 2021-12-19T01:59:51Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- TensorFlow
- MRI reconstruction
- MRI
datasets:
- OASIS
---
# KIKI-net-OASIS
This model can be used to reconstruct single coil OASIS data with an acceleration factor of 4.
## Model description
For more details, see https://www.mdpi.com/2076-3417/10/5/1816.
This section is WIP.
## Intended uses and limitations
This model can be used to reconstruct single coil brain retrospective data from the OASIS database at acceleration factor 4.
It cannot be used on multi-coil data.
## How to use
This model can be loaded using the following repo: https://github.com/zaccharieramzi/fastmri-reproducible-benchmark.
After cloning the repo, `git clone https://github.com/zaccharieramzi/fastmri-reproducible-benchmark`, you can install the package via `pip install fastmri-reproducible-benchmark`.
The framework is TensorFlow.
You can initialize and load the model weights as follows:
```python
from fastmri_recon.models.functional_models.kiki_sep import full_kiki_net
from fastmri_recon.models.utils.non_linearities import lrelu
model = full_kiki_net(n_convs=16, n_filters=48, activation=lrelu)
model.load_weights('model_weights.h5')
```
Using the model is then as simple as:
```python
model([
kspace, # shape: [n_slices, n_rows, n_cols, 1]
mask, # shape: [n_slices, n_rows, n_cols]
])
```
## Limitations and bias
The limitations and bias of this model have not been properly investigated.
## Training data
This model was trained using the [OASIS dataset](https://www.oasis-brains.org/).
## Training procedure
The training procedure is described in https://www.mdpi.com/2076-3417/10/5/1816 for brain data.
This section is WIP.
## Evaluation results
This model was evaluated using the [OASIS dataset](https://www.oasis-brains.org/).
- PSNR: 30.08
- SSIM: 0.853
## Bibtex entry
```
@article{ramzi2020benchmarking,
title={Benchmarking MRI reconstruction neural networks on large public datasets},
author={Ramzi, Zaccharie and Ciuciu, Philippe and Starck, Jean-Luc},
journal={Applied Sciences},
volume={10},
number={5},
pages={1816},
year={2020},
publisher={Multidisciplinary Digital Publishing Institute}
}
```
|
zaccharieramzi/NCPDNet-singlecoil-spiral | zaccharieramzi | 2021-12-19T00:47:15Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- TensorFlow
- MRI reconstruction
- MRI
datasets:
- fastMRI
---
# NCPDNet-singlecoil-spiral
This is a non-Cartesian MRI reconstruction model for spiral trajectories at acceleration factor 4.
The model uses 10 iterations and a small vanilla CNN.
## Model description
For more details, see https://hal.inria.fr/hal-03188997.
This section is WIP.
## Intended uses and limitations
This model can be used to reconstruct knee data from Siemens scanner at acceleration factor 4 in a spiral acquisition setting.
## How to use
This model can be loaded using the following repo: https://github.com/zaccharieramzi/fastmri-reproducible-benchmark.
After cloning the repo, `git clone https://github.com/zaccharieramzi/fastmri-reproducible-benchmark`, you can install the package via `pip install fastmri-reproducible-benchmark`.
The framework is TensorFlow.
You can initialize and load the model weights as follows:
```python
import tensorflow as tf
from fastmri_recon.models.subclassed_models.ncpdnet import NCPDNet
model = NCPDNet(
im_size=(640, 400),
dcomp=True,
)
kspace_shape = 1
inputs = [
tf.zeros([1, 1, kspace_shape, 1], dtype=tf.complex64),
tf.zeros([1, 2, kspace_shape], dtype=tf.float32),
(tf.constant([320]), tf.ones([1, kspace_shape], dtype=tf.float32)),
]
model(inputs)
model.load_weights('model_weights.h5')
```
Using the model is then as simple as:
```python
model([
kspace, # shape: [n_slices, 1, n_kspace_samples, 1]
traj, # shape: [n_slices, 1, 2, n_kspace_samples]
(
output_shape, # shape: [n_slices, 1]
dcomp, # shape: [n_slices, n_kspace_samples]
)
])
```
## Limitations and bias
The limitations and bias of this model have not been properly investigated.
## Training data
This model was trained using the [fastMRI dataset](https://fastmri.org/dataset/).
## Training procedure
The training procedure is described in https://hal.inria.fr/hal-03188997.
This section is WIP.
## Evaluation results
On the fastMRI validation dataset:
- PSNR: 33.08
- SSIM: 0.7534
## Bibtex entry
```
@unpublished{ramzi:hal-03188997,
TITLE = {{NC-PDNet: a Density-Compensated Unrolled Network for 2D and 3D non-Cartesian MRI Reconstruction}},
AUTHOR = {Ramzi, Zaccharie and G R, Chaithya and Starck, Jean-Luc and Ciuciu, Philippe},
YEAR = {2021},
MONTH = Sep,
}
```
|
tasosk/bert-base-uncased-airlines | tasosk | 2021-12-18T20:20:24Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-airlines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-airlines
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3458
- Accuracy: 0.9021
- F1: 0.9022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 405 | 0.3230 | 0.8754 | 0.8750 |
| 0.4658 | 2.0 | 810 | 0.2738 | 0.8986 | 0.8985 |
| 0.2473 | 3.0 | 1215 | 0.2944 | 0.9110 | 0.9111 |
| 0.2498 | 4.0 | 1620 | 0.3322 | 0.8950 | 0.8949 |
| 0.2174 | 5.0 | 2025 | 0.3342 | 0.9021 | 0.9021 |
| 0.2174 | 6.0 | 2430 | 0.3526 | 0.8986 | 0.8985 |
| 0.2055 | 7.0 | 2835 | 0.3458 | 0.9021 | 0.9022 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
IlyaGusev/rut5_base_headline_gen_telegram | IlyaGusev | 2021-12-18T19:27:52Z | 13,204 | 8 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:04Z | ---
language:
- ru
tags:
- summarization
license: apache-2.0
widget:
- text: "Комиссия Совета Федерации по информационной политике и взаимодействию со СМИ совместно с заинтересованными ведомствами думает над разработкой национального законодательства в области налогообложения глобальных интернет-компаний, таких как Google и Facebook. Об этом сообщил ТАСС председатель комиссии Алексей Пушков. «В настоящее время по линии ОЭСР [Организация экономического сотрудничества и развития] ведется разработка международной конвенции, однако работа над ней еще не завершена. В этих условиях мы исходим из того, что самая разумная позиция - начать разработку национального законодательства, не дожидаясь конвенции», — пояснил сенатор. Пушков отметил, что по такому пути пошли еще несколько стран, в числе которых Франция, Австралия и Турция. По его словам, в России важно задействовать в этой работе Минфин, ФНС, МИД РФ и Роскомнадзор. «Интернет-платформы не фигурируют у нас сейчас как отдельный объект налогообложения. Когда они откроют в России свои представительства в рамках закона о «приземлении», возникнет вопрос: как их официальное присутствие на территории России, которого сейчас нет, будет соотноситься с нашим налоговым режимом. Мы сейчас продумываем, как установить эту взаимосвязь», — сказал Пушков, добавляя, что вопрос внесения изменений в российское законодательство в части налогообложения крупных IT-компаний находится «на первой стадии изучения». Сам сенатор выступает за введение прогрессивной ставки налога в зависимости от прибыли IT-компаний на территории страны. При этом, подчеркнул он, одна из задач национальной системы налогообложения будет заключаться в подсчете налогооблагаемой базы. Сейчас крупные ИТ-компании самостоятельно отчитываются о своей прибыли. Однако России нужна собственная система подсчета их доходов, которая позволит определить их «реальную налогооблагаемую базу», считает Пушков. (https://www.gazeta.ru/tech/news/2021/12/17/n_17024239.shtml)"
example_title: "Новость про налоги в IT"
- text: "Первую многоножку, у которой более тысячи ног, обнаружили в австралийских пещерах биологи, изучавшие там подземные воды. Предыдущей рекордсменкой по количеству ног была 700-ногая многоножка. Новый вид имеет длинное тонкое тело, похожее на нить, и большое количество конечностей, по-видимому, дает преимущества для быстрого перемещения и проникновения в труднодоступные места — ученые полагают, такая многоножка может спокойно перемещаться по трещинам в камнях. Австралия известна своими огромными и жутковатыми животными вроде 25-сантиметровых пауков. Теперь список пугающих членистоногих пополнился самой «многоногой» в мире многоножкой, у которой более тысячи ног. Необычное животное обнаружила группа исследователей из Австралии и США в пещерах на западе страны. Подробнее многоножку ученые описали в статье в журнале Scientific Reports. Исследователи занимались оценкой воздействия подземных вод на окружающую среду в зоне добычи полезных ископаемых на западе страны, когда наткнулись на новый вид многоножек. В отличие от большинства сородичей, живущих на поверхности, эти многоножки обитали в пещерах на глубине до 60 метров. Новый вид исследователи назвали Eumillipes persephone, в честь Персефоны — древнегреческой богини подземного мира. У многоножки оказалось 1306 ног — больше, чем у любого другого известного вида. Предыдущей рекордсменкой была калифорнийская Illacme plenipes, у которой насчитывалось до 750 ног. «Эти животные были настолько уникальны, — говорит биолог Бруно Бузатто. — Как только я понял, какой длины они были... Стало ясно, что это что-то совершенно новое». У Е. persephone нитевидное тело длиной около 9,5 см и шириной всего миллиметр, состоящее из 330 сегментов, короткие ноги и конусообразная голова. Как и другие животные, живущие в постоянной темноте, эти многоножки бледны и слепы. Энтомолог Пол Марек сравнивает ее с белой нитью, выдернутой из рубашки. Чтобы посчитать количество ног, ученым пришлось сначала снять многоножку в высоком разрешении, а затем закрашивать на фото каждый десяток ног другим цветом. (https://www.gazeta.ru/science/2021/12/17_a_14325355.shtml)"
example_title: "Новость про многоножку"
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."
example_title: "Википедия"
---
# RuT5TelegramHeadlines
## Model description
Based on [rut5-base](https://huggingface.co/cointegrated/rut5-base) model
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "IlyaGusev/rut5_base_headline_gen_telegram"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
## Training data
- Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py) |
tasosk/distilbert-base-uncased-airlines | tasosk | 2021-12-18T19:25:39Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-airlines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-airlines
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tasosk/airlines dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3174
- Accuracy: 0.9288
- F1: 0.9289
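While usage is left unspecified below, a minimal inference sketch with the `pipeline` API could look like the following; the label names returned depend on this model's config and are not documented in the card:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tasosk/distilbert-base-uncased-airlines")
print(classifier("The flight was delayed for three hours and the crew was unhelpful."))
```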
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.2281 | 0.9164 | 0.9164 |
| No log | 2.0 | 406 | 0.2676 | 0.9164 | 0.9164 |
| 0.2314 | 3.0 | 609 | 0.3117 | 0.9217 | 0.9217 |
| 0.2314 | 4.0 | 812 | 0.3175 | 0.9270 | 0.9271 |
| 0.08 | 5.0 | 1015 | 0.3174 | 0.9288 | 0.9289 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
zaccharieramzi/UPDNet-knee-af8 | zaccharieramzi | 2021-12-18T18:08:29Z | 0 | 0 | null | [
"arxiv:2010.07290",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- TensorFlow
- MRI reconstruction
- MRI
datasets:
- fastMRI
---
# UPDNet-knee-af8
This model was used for the 9th-highest submission in terms of PSNR on the fastMRI leaderboard (see https://fastmri.org/leaderboards/), 0.2dB behind the 2nd-place submission.
It is a base model for acceleration factor 8.
The model uses 25 iterations and a medium-ca-prelu U-net, and a medium sensitivity maps refiner.
## Model description
For more details, see https://arxiv.org/abs/2010.07290.
This section is WIP.
## Intended uses and limitations
This model can be used to reconstruct knee data from Siemens scanner at acceleration factor 8.
## How to use
This model can be loaded using the following repo: https://github.com/zaccharieramzi/fastmri-reproducible-benchmark.
After cloning the repo, `git clone https://github.com/zaccharieramzi/fastmri-reproducible-benchmark`, you can install the package via `pip install fastmri-reproducible-benchmark`.
The framework is TensorFlow.
You can initialize and load the model weights as follows:
```python
import tensorflow as tf
from fastmri_recon.models.subclassed_models.updnet import UPDNet
model = UPDNet(
multicoil=True,
n_dual=1,
primal_only=True,
n_layers=4,
n_iter=25,
channel_attention_kwargs={'dense': True},
refine_smaps=True,
non_linearity='prelu',
layers_n_channels=[16 * 2**i for i in range(4)],
)
kspace_size = [1, 1, 320, 320]
inputs = [
tf.zeros(kspace_size + [1], dtype=tf.complex64), # kspace
tf.zeros(kspace_size, dtype=tf.complex64), # mask
tf.zeros(kspace_size, dtype=tf.complex64), # smaps
]
model(inputs)
model.load_weights('model_weights.h5')
```
Using the model is then as simple as:
```python
model([
kspace, # shape: [n_slices, n_coils, n_rows, n_cols, 1]
mask, # shape: [n_slices, n_coils, n_rows, n_cols]
smaps, # shape: [n_slices, n_coils, n_rows, n_cols]
])
```
## Limitations and bias
The limitations and bias of this model have not been properly investigated.
## Training data
This model was trained using the [fastMRI dataset](https://fastmri.org/dataset/).
## Training procedure
The training procedure is described in https://arxiv.org/abs/2010.07290.
This section is WIP.
## Evaluation results
No evaluation available outside the one from the fastMRI leaderboard (id: `updnet_v3`).
## Bibtex entry
```
@inproceedings{Ramzi2020d,
archivePrefix = {arXiv},
arxivId = {2010.07290},
author = {Ramzi, Zaccharie and Ciuciu, Philippe and Starck, Jean-Luc},
booktitle = {ISMRM},
eprint = {2010.07290},
pages = {1--4},
title = {{XPDNet for MRI Reconstruction: an application to the 2020 fastMRI challenge}},
url = {http://arxiv.org/abs/2010.07290},
year = {2021}
}
```
|
zaccharieramzi/UPDNet-knee-af4 | zaccharieramzi | 2021-12-18T18:08:04Z | 0 | 0 | null | [
"arxiv:2010.07290",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- TensorFlow
- MRI reconstruction
- MRI
datasets:
- fastMRI
---
# UPDNet-knee-af4
This model was used for the 9th-highest submission in terms of PSNR on the fastMRI leaderboard (see https://fastmri.org/leaderboards/), 0.2dB behind the 2nd-place submission.
It is a base model for acceleration factor 4.
The model uses 25 iterations and a medium-ca-prelu U-net, and a medium sensitivity maps refiner.
## Model description
For more details, see https://arxiv.org/abs/2010.07290.
This section is WIP.
## Intended uses and limitations
This model can be used to reconstruct knee data from Siemens scanner at acceleration factor 4.
## How to use
This model can be loaded using the following repo: https://github.com/zaccharieramzi/fastmri-reproducible-benchmark.
After cloning the repo, `git clone https://github.com/zaccharieramzi/fastmri-reproducible-benchmark`, you can install the package via `pip install fastmri-reproducible-benchmark`.
The framework is TensorFlow.
You can initialize and load the model weights as follows:
```python
import tensorflow as tf
from fastmri_recon.models.subclassed_models.updnet import UPDNet
model = UPDNet(
multicoil=True,
n_dual=1,
primal_only=True,
n_layers=4,
n_iter=25,
channel_attention_kwargs={'dense': True},
refine_smaps=True,
non_linearity='prelu',
layers_n_channels=[16 * 2**i for i in range(4)],
)
kspace_size = [1, 1, 320, 320]
inputs = [
tf.zeros(kspace_size + [1], dtype=tf.complex64), # kspace
tf.zeros(kspace_size, dtype=tf.complex64), # mask
tf.zeros(kspace_size, dtype=tf.complex64), # smaps
]
model(inputs)
model.load_weights('model_weights.h5')
```
Using the model is then as simple as:
```python
model([
kspace, # shape: [n_slices, n_coils, n_rows, n_cols, 1]
mask, # shape: [n_slices, n_coils, n_rows, n_cols]
smaps, # shape: [n_slices, n_coils, n_rows, n_cols]
])
```
## Limitations and bias
The limitations and bias of this model have not been properly investigated.
## Training data
This model was trained using the [fastMRI dataset](https://fastmri.org/dataset/).
## Training procedure
The training procedure is described in https://arxiv.org/abs/2010.07290.
This section is WIP.
## Evaluation results
No evaluation available outside the one from the fastMRI leaderboard (id: `updnet_v3`).
## Bibtex entry
```
@inproceedings{Ramzi2020d,
archivePrefix = {arXiv},
arxivId = {2010.07290},
author = {Ramzi, Zaccharie and Ciuciu, Philippe and Starck, Jean-Luc},
booktitle = {ISMRM},
eprint = {2010.07290},
pages = {1--4},
title = {{XPDNet for MRI Reconstruction: an application to the 2020 fastMRI challenge}},
url = {http://arxiv.org/abs/2010.07290},
year = {2021}
}
```
|
jcsilva/wav2vec2-base-timit-demo-colab | jcsilva | 2021-12-18T13:45:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7665
- Wer: 0.6956
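Usage is not documented, but a minimal transcription sketch with the `pipeline` API (assuming 16kHz mono audio, the standard input for wav2vec2 models) could be:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jcsilva/wav2vec2-base-timit-demo-colab")
print(asr("sample.wav"))  # hypothetical path to a 16kHz mono recording
```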
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.052 | 0.8 | 100 | 3.0167 | 1.0 |
| 2.7436 | 1.6 | 200 | 1.9369 | 1.0006 |
| 1.4182 | 2.4 | 300 | 0.7665 | 0.6956 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiho0304/bad-korean-tokenizer | jiho0304 | 2021-12-18T04:17:15Z | 6 | 0 | transformers | [
"transformers",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | KcELECTRA([https://github.com/Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA))의 Tokenizer에서 [UNK]로 대체되는 토큰들을 추가했습니다. |
microsoft/unispeech-sat-large-sd | microsoft | 2021-12-17T18:42:36Z | 72 | 1 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"audio-frame-classification",
"speech",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.05752",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- speech
---
# UniSpeech-SAT-Large for Speaker Diarization
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Fine-tuning details
The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.
# Usage
## Speaker Diarization
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-large-sd')
model = UniSpeechSatForAudioFrameClassification.from_pretrained('microsoft/unispeech-sat-large-sd')
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```
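As a follow-up, the one-hot `labels` tensor can be summarized per speaker channel; the axis order below follows the shape comment in the snippet above:
```python
# rows are frames, columns are speaker channels
num_frames, num_speakers = labels.shape
for speaker in range(num_speakers):
    active_frames = int(labels[:, speaker].sum())
    print(f"speaker {speaker}: active in {active_frames}/{num_frames} frames")
```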
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
Eyvaz/wav2vec2-base-russian-modified-kaggle | Eyvaz | 2021-12-17T18:39:50Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-russian-modified-kaggle
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-modified-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
microsoft/unispeech-sat-base-sd | microsoft | 2021-12-17T18:39:23Z | 38 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"audio-frame-classification",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2110.05752",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
datasets:
- librispeech_asr
tags:
- speech
---
# UniSpeech-SAT-Base for Speaker Diarization
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 960 hours of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Fine-tuning details
The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.
# Usage
## Speaker Diarization
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-sd')
model = UniSpeechSatForAudioFrameClassification.from_pretrained('microsoft/unispeech-sat-base-sd')
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
microsoft/unispeech-sat-base-sv | microsoft | 2021-12-17T18:11:05Z | 200 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"audio-xvector",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2110.05752",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
datasets:
- librispeech_asr
tags:
- speech
---
# UniSpeech-SAT-Base for Speaker Verification
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 960 hours of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Fine-tuning details
The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss
[X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)
# Usage
## Speaker Verification
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-sv')
model = UniSpeechSatForXVector.from_pretrained('microsoft/unispeech-sat-base-sv')
# audio files are decoded on the fly
inputs = feature_extractor([d["array"] for d in dataset[:2]["audio"]], padding=True, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.86 # the optimal threshold is dataset-dependent
if similarity < threshold:
print("Speakers are not the same!")
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
butchland/bert-finetuned-ner | butchland | 2021-12-17T15:53:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9389679126695336
- name: Recall
type: recall
value: 0.9554022214742511
- name: F1
type: f1
value: 0.9471137804471137
- name: Accuracy
type: accuracy
value: 0.9873138282215812
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0586
- Precision: 0.9390
- Recall: 0.9554
- F1: 0.9471
- Accuracy: 0.9873
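A minimal inference sketch with the `pipeline` API (the example sentence is illustrative; CoNLL-2003 covers PER, ORG, LOC and MISC entities):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="butchland/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```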
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0877 | 1.0 | 1756 | 0.0662 | 0.9081 | 0.9344 | 0.9210 | 0.9827 |
| 0.0376 | 2.0 | 3512 | 0.0599 | 0.9362 | 0.9502 | 0.9431 | 0.9862 |
| 0.0209 | 3.0 | 5268 | 0.0586 | 0.9390 | 0.9554 | 0.9471 | 0.9873 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
osanseviero/fastai_cat_vs_dog_fork2 | osanseviero | 2021-12-17T14:27:39Z | 33 | 0 | generic | [
"generic",
"image-classification",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
library_name: generic
---
# Dog vs Cat Image Classification with FastAI CNN
Training is based on the fastai [Quick Start](https://docs.fast.ai/quick_start.html) example.
## Training
The model was trained as follows
```python
from fastai.vision.all import *  # provides untar_data, URLs, ImageDataLoaders, cnn_learner, etc.

path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
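
# Inference sketch (illustrative, not part of the original card):
# learn.predict returns (label, label_index, probabilities).
# img = PILImage.create('my_photo.jpg')  # hypothetical local image path
# is_cat_pred, _, probs = learn.predict(img)
# print(f"Is cat: {is_cat_pred}; probability: {probs[1].item():.4f}")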
``` |
Rocketknight1/gbert-base-germaner | Rocketknight1 | 2021-12-17T14:04:59Z | 5 | 1 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/gbert-base-germaner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/gbert-base-germaner
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0340
- Validation Loss: 0.0881
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4176, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1345 | 0.0865 | 0 |
| 0.0550 | 0.0878 | 1 |
| 0.0340 | 0.0881 | 2 |
### Framework versions
- Transformers 4.15.0.dev0
- TensorFlow 2.6.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
llange/xlm-roberta-large-spanish-clinical | llange | 2021-12-17T10:27:39Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2112.08754",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | # CLIN-X-ES: a pre-trained language model for the Spanish clinical domain
Details on the model, the pre-training corpus and the downstream task performance are given in the paper: "CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain" by Lukas Lange, Heike Adel, Jannik Strötgen and Dietrich Klakow.
The paper can be found [here](https://arxiv.org/abs/2112.08754).
In case of questions, please contact the authors as listed on the paper.
Please cite the above paper when reporting, reproducing or extending the results.
```
@misc{lange-etal-2021-clin-x,
    author = {Lukas Lange and
              Heike Adel and
              Jannik Str{\"{o}}tgen and
              Dietrich Klakow},
    title = {CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain},
    year={2021},
    eprint={2112.08754},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2112.08754}
}
```
## Training details
The model is based on the multilingual XLM-R transformer `(xlm-roberta-large)`, which was trained on 100 languages and showed superior performance in many different tasks across languages and can even outperform monolingual models in certain settings (Conneau et al. 2020).
Even though XLM-R was pre-trained on 53GB of Spanish documents, this was only 2% of the overall training data. To steer this model towards the Spanish clinical domain, we sample documents from the Scielo archive (https://scielo.org/)
and the MeSpEn resources (Villegas et al. 2018). The resulting corpus has a size of 790MB and is highly specific for the clinical domain.
We initialize CLIN-X using the pre-trained XLM-R weights and train masked language modeling (MLM) on the Spanish clinical corpus for 3 epochs which roughly corresponds to 32k steps. This allows researchers and practitioners to address
the Spanish clinical domain with an out-of-the-box tailored model.
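Since this repository is tagged for the `fill-mask` pipeline, the pre-trained model can be probed directly for masked-token prediction; the Spanish example sentence below is illustrative (XLM-R models use `<mask>` as the mask token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="llange/xlm-roberta-large-spanish-clinical")
print(unmasker("El paciente fue diagnosticado con <mask> tipo 2."))
```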
## Results for Spanish concept extraction
We apply CLIN-X-ES to five Spanish concept extraction tasks from the clinical domain in a standard sequence labeling architecture similar to Devlin et al. 2019 and compare to a Spanish BERT model called BETO. In addition, we perform experiments with an improved architecture `(+ OurArchitecture)` as described in the paper linked above. The code for our model architecture can be found [here](https://github.com/boschresearch/clin_x).
| | Cantemist | Meddocan | Meddoprof (NER) | Meddoprof (CLASS) | Pharmaconer |
|------------------------------------------|-----------|----------|-----------------|-------------------|-------------|
| BETO (Spanish BERT) | 81.30 | 96.81 | 79.19 | 74.59 | 87.70 |
| CLIN-X (ES) | 83.22 | 97.08 | 79.54 | 76.95 | 90.05 |
| CLIN-X (ES) + OurArchitecture | **88.24** | **98.00** | **81.68** | **80.54** | **92.27** |
### Results for English concept extraction
As the CLIN-X-ES model is based on XLM-R, the model is still multilingual and we demonstrate the positive impact of cross-language domain adaptation by applying this model to five different English sequence labeling tasks from i2b2.
We found that further transfer from related concept extraction is particularly helpful in this cross-language setting. For a detailed description of the transfer process and all other models, we refer to our paper.
| | i2b2 2006 | i2b2 2010 | i2b2 2012 (Concept) | i2b2 2012 (Time) | i2b2 2014 |
|------------------------------------------|-----------|-----------|---------------|---------------|-----------|
| BERT | 94.80 | 85.25 | 76.51 | 75.28 | 94.86 |
| ClinicalBERT | 94.8 | 87.8 | 78.9 | 76.6 | 93.0 |
| CLIN-X (ES) | 95.49 | 87.94 | 79.58 | 77.57 | 96.80 |
| CLIN-X (ES) + OurArchitecture | 98.30 | 89.10 | 80.42 | 78.48 | **97.62** |
| CLIN-X (ES) + OurArchitecture + Transfer | **89.50** | **89.74** | **80.93** | **79.60** | 97.46 |
## Purpose of the project
This software is a research prototype, solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way.
## License
The CLIN-X models are open-sourced under the CC-BY 4.0 license.
See the [LICENSE](LICENSE) file for details. |
digio/Twitter4SSE | digio | 2021-12-17T09:01:29Z | 17 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"Pytorch",
"Sentence Transformers",
"Transformers",
"sentence-similarity",
"en",
"arxiv:2110.02030",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
language:
- en
pipeline_tag: sentence-similarity
tags:
- Pytorch
- Sentence Transformers
- Transformers
license: "apache-2.0"
---
# Twitter4SSE
This model maps texts to 768 dimensional dense embeddings that encode semantic similarity.
It was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset.
It was initialized from [BERTweet](https://huggingface.co/vinai/bertweet-base) and trained with [Sentence-transformers](https://www.sbert.net/).
## Usage
The model is easiest to use with the sentence-transformers library:
```
pip install -U sentence-transformers
```
```
from sentence_transformers import SentenceTransformer
sentences = ["This is the first tweet", "This is the second tweet"]
model = SentenceTransformer('digio/Twitter4SSE')
embeddings = model.encode(sentences)
print(embeddings)
```
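Since the embeddings encode semantic similarity, a natural follow-up is scoring the pair with cosine similarity via the `util` helpers bundled with sentence-transformers:
```python
from sentence_transformers import util

# cosine similarity between the two tweet embeddings computed above
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```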
Without the sentence-transformers library, please refer to [this repository](https://huggingface.co/sentence-transformers) for detailed instructions on how to use Sentence Transformers models on Hugging Face.
## Citing & Authors
The official paper [Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings](https://arxiv.org/abs/2110.02030) will be presented at EMNLP 2021. Further details will be available soon.
```
@inproceedings{di-giovanni-brambilla-2021-exploiting,
title = "Exploiting {T}witter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings",
author = "Di Giovanni, Marco and
Brambilla, Marco",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.780",
pages = "9902--9910",
}
```
The official code is available on [GitHub](https://github.com/marco-digio/Twitter4SSE)
|
jamescalam/bert-stsb-gold | jamescalam | 2021-12-17T08:57:06Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Gold-only BERT STSb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is used as a demo model within the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp), for the chapter on [In-domain Data Augmentation with BERT](https://www.pinecone.io/learn/data-augmentation/).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jamescalam/bert-stsb-gold')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jamescalam/bert-stsb-gold')
model = AutoModel.from_pretrained('jamescalam/bert-stsb-gold')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
jamescalam/bert-stsb-cross-encoder | jamescalam | 2021-12-17T08:54:27Z | 1,081 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"text-classification",
"sentence-similarity",
"transformers",
"cross-encoder",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
- cross-encoder
---
# Augmented SBERT STSb
This is a [sentence-transformers](https://www.SBERT.net) cross encoder model.
It is used as a demo model within the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp), for the chapter on [In-domain Data Augmentation with BERT](https://www.pinecone.io/learn/data-augmentation/).
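The card ships no usage snippet; a minimal sketch with the `CrossEncoder` class from sentence-transformers (the sentence pair is illustrative) could be:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("jamescalam/bert-stsb-cross-encoder")
scores = model.predict([("A man is eating food.", "A man is eating a meal.")])
print(scores)  # STSb-style similarity score per pair
```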
|
jamescalam/bert-stsb-aug | jamescalam | 2021-12-17T08:52:21Z | 4 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Augmented SBERT STSb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is used as a demo model within the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp), for the chapter on [In-domain Data Augmentation with BERT](https://www.pinecone.io/learn/data-augmentation/).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bert-stsb-aug')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bert-stsb-aug')
model = AutoModel.from_pretrained('bert-stsb-aug')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2059 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 308,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
huggingtweets/bladeefan91 | huggingtweets | 2021-12-17T07:39:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/bladeefan91/1639726754777/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1470642032851009537/LWrcZk48_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sweetie p1e</div>
<div style="text-align: center; font-size: 14px;">@bladeefan91</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sweetie p1e.
| Data | sweetie p1e |
| --- | --- |
| Tweets downloaded | 2249 |
| Retweets | 351 |
| Short tweets | 547 |
| Tweets kept | 1351 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/cacbnxbr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bladeefan91's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kupw7ab) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kupw7ab/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bladeefan91')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nvidia/qdqbert-base-uncased | nvidia | 2021-12-17T06:31:27Z | 0 | 1 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | <!---
Copyright 2021 NVIDIA Corporation. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# QDQBERT base model (uncased)
## Model description
[QDQBERT](https://huggingface.co/docs/transformers/model_doc/qdqbert) inserts fake quantization operations (pairs of QuantizeLinear/DequantizeLinear operators) into (i) linear layer inputs and weights, (ii) matmul inputs, and (iii) residual add inputs of the BERT model.
The QDQBERT model can be loaded from any checkpoint of a HuggingFace BERT model (for example bert-base-uncased) and used to perform Quantization Aware Training or Post Training Quantization.
In this model card, **qdqbert-base-uncased** corresponds to the **bert-base-uncased** model with QuantizeLinear/DequantizeLinear ops (**Q/DQ nodes**). Similarly, one can use the QDQBERT model as qdqbert-large-cased corresponding to bert-large-cased, and so on.
## How to run QDQBERT using Transformers
### Prerequisites
QDQBERT depends on the [PyTorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/main/tools/pytorch-quantization). To install it, run
```
pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com
```
### Set default quantizers
The QDQBERT model inserts Q/DQ nodes into BERT via **TensorQuantizer** from the PyTorch Quantization Toolkit. **TensorQuantizer** is the module for quantizing tensors, with **QuantDescriptor** defining how a tensor should be quantized. Refer to the [PyTorch Quantization Toolkit user guide](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/userguide.html) for more details.
Before creating a QDQBERT model, one has to set the default **QuantDescriptor**, which defines the default tensor quantizers. Example:
```python
import pytorch_quantization.nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor
# The default tensor quantizer is set to use Max calibration method
input_desc = QuantDescriptor(num_bits=8, calib_method="max")
# The default tensor quantizer is set to be per-channel quantization for weights
weight_desc = QuantDescriptor(num_bits=8, axis=((0,)))
quant_nn.QuantLinear.set_default_quant_desc_input(input_desc)
quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc)
```
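With the default quantizers set, a QDQBERT model can then be created directly from a regular BERT checkpoint. A minimal sketch (class names per the transformers QDQBERT documentation; the checkpoint choice is illustrative):
```python
from transformers import BertTokenizer, QDQBertModel

# Q/DQ nodes are inserted automatically when the BERT weights are loaded
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = QDQBertModel.from_pretrained('bert-base-uncased')
```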
### Calibration
Calibration is the process of passing data samples to the quantizer and deciding the best scaling factors for the tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model:
```python
# Find the TensorQuantizer and enable calibration
for name, module in model.named_modules():
if name.endswith('_input_quantizer'):
module.enable_calib()
module.disable_quant() # Use full precision data to calibrate
# Feeding data samples
model(x)
# ...
# Finalize calibration
for name, module in model.named_modules():
if name.endswith('_input_quantizer'):
module.load_calib_amax()
module.enable_quant()
# If running on GPU, it needs to call .cuda() again because new tensors will be created by calibration process
model.cuda()
# Keep running the quantized model
# ...
```
### Export to ONNX
The goal of exporting to ONNX is to deploy inference with [TensorRT](https://developer.nvidia.com/tensorrt). Fake quantization is broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting the static member `use_fb_fake_quant` of **TensorQuantizer** to use PyTorch's own fake quantization functions, the fake-quantized model can be exported to ONNX by following the instructions in [torch.onnx](https://pytorch.org/docs/stable/onnx.html). Example:
```python
from pytorch_quantization.nn import TensorQuantizer
TensorQuantizer.use_fb_fake_quant = True
# Load the calibrated model
...
# ONNX export
torch.onnx.export(...)
```
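For reference, the elided export call might look roughly like the sketch below; the dummy input shapes, file name, and opset choice are assumptions, not part of the original card:
```python
import torch

# Dummy inputs matching BERT's expected shapes (batch_size=1, seq_len=128)
dummy_input_ids = torch.ones(1, 128, dtype=torch.long).cuda()
dummy_attention_mask = torch.ones(1, 128, dtype=torch.long).cuda()

torch.onnx.export(
    model,
    (dummy_input_ids, dummy_attention_mask),
    "qdqbert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["output"],
    opset_version=13,  # QuantizeLinear/DequantizeLinear require opset >= 10
)
```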
## Complete example
A complete example of using the QDQBERT model to perform Quantization Aware Training and Post Training Quantization for the SQuAD task can be found at [transformers/examples/research_projects/quantization-qdqbert](https://github.com/huggingface/transformers/tree/master/examples/research_projects/quantization-qdqbert) |
HenryAI/KerasBERTv1 | HenryAI | 2021-12-17T03:20:18Z | 6 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | Thanks for checking this out! <br />
This video explains the ideas behind KerasBERT (still very much a work in progress):
https://www.youtube.com/watch?v=J3P8WLAELqk |
baffo32/t5-base-ptmap | baffo32 | 2021-12-16T23:38:12Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
airKlizz/mt5-small-wikinewssum-test | airKlizz | 2021-12-16T16:18:08Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-wikinewssum-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-wikinewssum-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9354
- Rouge1: 6.8433
- Rouge2: 2.5498
- Rougel: 5.6114
- Rougelsum: 6.353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
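For reproduction, these hyperparameters map roughly onto `Seq2SeqTrainingArguments` as sketched below (the output directory and any arguments not listed above are assumptions):
```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters; output_dir is a placeholder
args = Seq2SeqTrainingArguments(
    output_dir='mt5-small-wikinewssum-test',
    learning_rate=5.6e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type='linear',
    num_train_epochs=8,
)
```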
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 661 | 3.2810 | 6.4161 | 2.403 | 5.3674 | 6.0329 |
| No log | 2.0 | 1322 | 3.1515 | 6.9291 | 2.6826 | 5.6839 | 6.4359 |
| No log | 3.0 | 1983 | 3.0565 | 6.7939 | 2.6113 | 5.6133 | 6.3126 |
| No log | 4.0 | 2644 | 2.9815 | 6.0279 | 2.1637 | 4.9892 | 5.5962 |
| No log | 5.0 | 3305 | 2.9645 | 6.3926 | 2.339 | 5.2716 | 5.9443 |
| 3.9937 | 6.0 | 3966 | 2.9476 | 6.4739 | 2.3615 | 5.3473 | 6.0089 |
| 3.9937 | 7.0 | 4627 | 2.9405 | 6.615 | 2.4309 | 5.4493 | 6.1445 |
| 3.9937 | 8.0 | 5288 | 2.9354 | 6.8433 | 2.5498 | 5.6114 | 6.353 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
lewtun/xlm-roberta-base-finetuned-marc-en-hslu | lewtun | 2021-12-16T14:55:28Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en-hslu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en-hslu
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8826
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
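As a rough guide, the model can be queried through the text-classification pipeline. A hedged sketch (reading the predicted label as a 1-5 star rating is an assumption based on the amazon_reviews_multi task):
```python
from transformers import pipeline

classifier = pipeline('text-classification',
                      model='lewtun/xlm-roberta-base-finetuned-marc-en-hslu')
# Labels correspond to review star ratings in the MARC setup
print(classifier("I absolutely loved this product, would buy again!"))
```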
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1121 | 1.0 | 235 | 0.9400 | 0.5732 |
| 0.9487 | 2.0 | 470 | 0.8826 | 0.5 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mateocolina/xlm-roberta-base-finetuned-marc-en | mateocolina | 2021-12-16T14:39:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9276
- Mae: 0.5366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0992 | 1.0 | 235 | 0.9340 | 0.5122 |
| 0.945 | 2.0 | 470 | 0.9276 | 0.5366 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Giannipinelli/xlm-roberta-base-finetuned-marc-en | Giannipinelli | 2021-12-16T14:34:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9161
- Mae: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1217 | 1.0 | 235 | 0.9396 | 0.4878 |
| 0.9574 | 2.0 | 470 | 0.9161 | 0.4634 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
TomO/xlm-roberta-base-finetuned-marc-en | TomO | 2021-12-16T14:31:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9237
- Mae: 0.5122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1089 | 1.0 | 235 | 0.9380 | 0.4878 |
| 0.9546 | 2.0 | 470 | 0.9237 | 0.5122 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rafiulrumy/wav2vec2-large-xlsr-53-demo-colab | rafiulrumy | 2021-12-16T05:09:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7860
- Wer: 1.1067
## Model description
More information needed
## Intended uses & limitations
More information needed
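A hedged transcription sketch via the ASR pipeline; the audio path is a placeholder, and the high WER above suggests limited practical accuracy:
```python
from transformers import pipeline

asr = pipeline('automatic-speech-recognition',
               model='rafiulrumy/wav2vec2-large-xlsr-53-demo-colab')
# 'sample.wav' is a placeholder for a 16 kHz mono audio file
print(asr('sample.wav'))
```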
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.2273 | 44.42 | 400 | 3.3544 | 1.0 |
| 0.9228 | 88.84 | 800 | 4.7054 | 1.1601 |
| 0.1423 | 133.32 | 1200 | 5.9489 | 1.1578 |
| 0.0751 | 177.74 | 1600 | 5.5939 | 1.1717 |
| 0.0554 | 222.21 | 2000 | 6.1230 | 1.1717 |
| 0.0356 | 266.63 | 2400 | 6.2845 | 1.1613 |
| 0.0288 | 311.11 | 2800 | 6.6109 | 1.2100 |
| 0.0223 | 355.53 | 3200 | 6.5605 | 1.1299 |
| 0.0197 | 399.95 | 3600 | 7.1242 | 1.1682 |
| 0.0171 | 444.42 | 4000 | 7.2452 | 1.1578 |
| 0.0149 | 488.84 | 4400 | 7.4048 | 1.0684 |
| 0.0118 | 533.32 | 4800 | 6.6227 | 1.1172 |
| 0.011 | 577.74 | 5200 | 6.7909 | 1.1566 |
| 0.0095 | 622.21 | 5600 | 6.8088 | 1.1102 |
| 0.0077 | 666.63 | 6000 | 7.4451 | 1.1311 |
| 0.0062 | 711.11 | 6400 | 6.8486 | 1.0777 |
| 0.0051 | 755.53 | 6800 | 6.8812 | 1.1241 |
| 0.0051 | 799.95 | 7200 | 6.9987 | 1.1450 |
| 0.0041 | 844.42 | 7600 | 7.3048 | 1.1323 |
| 0.0044 | 888.84 | 8000 | 6.6644 | 1.1125 |
| 0.0031 | 933.32 | 8400 | 6.6298 | 1.1148 |
| 0.0027 | 977.74 | 8800 | 6.7860 | 1.1067 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/ai_hexcrawl | huggingtweets | 2021-12-15T19:46:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ai_hexcrawl/1639597537705/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1467327234365181953/gFho8YCv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AI Hexcrawl</div>
<div style="text-align: center; font-size: 14px;">@ai_hexcrawl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AI Hexcrawl.
| Data | AI Hexcrawl |
| --- | --- |
| Tweets downloaded | 1164 |
| Retweets | 42 |
| Short tweets | 2 |
| Tweets kept | 1120 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vdxugbwr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ai_hexcrawl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/r9ejkubu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/r9ejkubu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ai_hexcrawl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
shainahub/covid_qa_distillbert | shainahub | 2021-12-15T19:10:48Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
metrics:
- squad_v2
widget:
- text: "What is COVID-19?"
context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic."
- text: "Where was COVID-19 first discovered?"
context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event."
- text: "What is Post-COVID syndrome?"
context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0976
## Model description
More information needed
## Intended uses & limitations
More information needed
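A minimal extractive-QA sketch with the transformers pipeline (the example question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline('question-answering', model='shainahub/covid_qa_distillbert')
result = qa(
    question="Where was COVID-19 first identified?",
    context="The first known case of COVID-19 was identified in Wuhan, China, in December 2019.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```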
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2502 | 1.0 | 3880 | 0.1824 |
| 0.2007 | 2.0 | 7760 | 0.1250 |
| 0.1338 | 3.0 | 11640 | 0.0976 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Ayham/xlnet_gpt2_summarization_cnn_dailymail | Ayham | 2021-12-15T18:08:27Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: xlnet_gpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet_gpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nguyenvulebinh/spelling-oov | nguyenvulebinh | 2021-12-15T17:00:58Z | 672 | 1 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ```python
from transformers import EncoderDecoderModel
from importlib.machinery import SourceFileLoader
from transformers.file_utils import cached_path, hf_bucket_url
import torch
import os
## Load model & tokenizer
cache_dir='./cache'
model_name='nguyenvulebinh/spelling-oov'
def download_tokenizer_files():
resources = ['envibert_tokenizer.py', 'dict.txt', 'sentencepiece.bpe.model']
for item in resources:
if not os.path.exists(os.path.join(cache_dir, item)):
tmp_file = hf_bucket_url(model_name, filename=item)
tmp_file = cached_path(tmp_file,cache_dir=cache_dir)
os.rename(tmp_file, os.path.join(cache_dir, item))
download_tokenizer_files()
spell_tokenizer = SourceFileLoader("envibert.tokenizer",os.path.join(cache_dir,'envibert_tokenizer.py')).load_module().RobertaTokenizer(cache_dir)
spell_model = EncoderDecoderModel.from_pretrained(model_name)
def oov_spelling(word, num_candidate=1):
result = []
inputs = spell_tokenizer([word.lower()])
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
inputs = {
"input_ids": torch.tensor(input_ids),
"attention_mask": torch.tensor(attention_mask)
}
outputs = spell_model.generate(**inputs, num_return_sequences=num_candidate)
for output in outputs.cpu().detach().numpy().tolist():
result.append(spell_tokenizer.sp_model.DecodePieces(spell_tokenizer.decode(output, skip_special_tokens=True).split()))
return result
oov_spelling('spacespeaker')
# output: ['x pây x pếch cơ']
``` |
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09 | Jeska | 2021-12-15T16:50:47Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09
This model is a fine-tuned version of [outputDAQonly09/](https://huggingface.co/outputDAQonly09/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4978
- Accuracy: 0.9031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 330 | 3.9692 | 0.2249 |
| 4.3672 | 2.0 | 660 | 3.1312 | 0.4031 |
| 4.3672 | 3.0 | 990 | 2.5068 | 0.5658 |
| 3.1495 | 4.0 | 1320 | 2.0300 | 0.6600 |
| 2.2491 | 5.0 | 1650 | 1.6517 | 0.7450 |
| 2.2491 | 6.0 | 1980 | 1.3604 | 0.7943 |
| 1.622 | 7.0 | 2310 | 1.1328 | 0.8327 |
| 1.1252 | 8.0 | 2640 | 0.9484 | 0.8611 |
| 1.1252 | 9.0 | 2970 | 0.8212 | 0.8757 |
| 0.7969 | 10.0 | 3300 | 0.7243 | 0.8830 |
| 0.5348 | 11.0 | 3630 | 0.6597 | 0.8867 |
| 0.5348 | 12.0 | 3960 | 0.5983 | 0.8857 |
| 0.3744 | 13.0 | 4290 | 0.5635 | 0.8976 |
| 0.2564 | 14.0 | 4620 | 0.5437 | 0.8985 |
| 0.2564 | 15.0 | 4950 | 0.5124 | 0.9013 |
| 0.1862 | 16.0 | 5280 | 0.5074 | 0.9022 |
| 0.1349 | 17.0 | 5610 | 0.5028 | 0.9049 |
| 0.1349 | 18.0 | 5940 | 0.4876 | 0.9077 |
| 0.0979 | 19.0 | 6270 | 0.4971 | 0.9049 |
| 0.0763 | 20.0 | 6600 | 0.4941 | 0.9022 |
| 0.0763 | 21.0 | 6930 | 0.4957 | 0.9049 |
| 0.0602 | 22.0 | 7260 | 0.4989 | 0.9049 |
| 0.0504 | 23.0 | 7590 | 0.4959 | 0.9040 |
| 0.0504 | 24.0 | 7920 | 0.4944 | 0.9031 |
| 0.0422 | 25.0 | 8250 | 0.4985 | 0.9040 |
| 0.0379 | 26.0 | 8580 | 0.4970 | 0.9049 |
| 0.0379 | 27.0 | 8910 | 0.4949 | 0.9040 |
| 0.0351 | 28.0 | 9240 | 0.4971 | 0.9040 |
| 0.0321 | 29.0 | 9570 | 0.4967 | 0.9031 |
| 0.0321 | 30.0 | 9900 | 0.4978 | 0.9031 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|