| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-29 06:27:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 502 values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-29 06:23:06 |
| card | string | length 11 to 1.01M |
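These columns describe a Hugging Face Hub dataset of model-card records; each record below lists its fields in this order. As a minimal sketch for inspecting such a dataset with the `datasets` library (the repository ID below is a placeholder, since the dump does not name the dataset):

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the dataset this dump was taken from.
ds = load_dataset("some-namespace/model-cards", split="train")

# Column names and feature types, matching the schema above.
print(ds.column_names)
print(ds.features)

# Peek at one record's metadata without dumping the full card text.
row = ds[0]
print(row["modelId"], row["author"], row["downloads"], row["likes"], row["pipeline_tag"])
```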
quangtran199hust/layoutlmv2_roige
quangtran199hust
2021-10-28T07:32:00Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2_roige results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2_roige This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.0+cu101 - Datasets 1.14.0 - Tokenizers 0.10.3
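Not part of the card above, but as a hedged sketch, the reported hyperparameters map onto `transformers.TrainingArguments` roughly as follows (`output_dir` is a placeholder and all unlisted options are left at their defaults):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the reported layoutlmv2_roige training setup.
training_args = TrainingArguments(
    output_dir="layoutlmv2_roige",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=1000,
    fp16=True,                       # "Native AMP" mixed precision
)
```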
aditeyabaral/sentencetransformer-indic-bert
aditeyabaral
2021-10-28T02:17:50Z
8
0
sentence-transformers
[ "sentence-transformers", "pytorch", "albert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-indic-bert This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-indic-bert') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-indic-bert') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-indic-bert') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-indic-bert) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 9234 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: AlbertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
patrickvonplaten/sew-d-mid-400k-librispeech-clean-100h-ft
patrickvonplaten
2021-10-27T23:44:33Z
6
1
transformers
[ "transformers", "pytorch", "tensorboard", "sew-d", "automatic-speech-recognition", "librispeech_asr", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - automatic-speech-recognition - librispeech_asr - generated_from_trainer model-index: - name: sew-d-mid-400k-librispeech-clean-100h-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sew-d-mid-400k-librispeech-clean-100h-ft This model is a fine-tuned version of [asapp/sew-d-mid-400k](https://huggingface.co/asapp/sew-d-mid-400k) on the LIBRISPEECH_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 2.3540 - Wer: 1.0536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.319 | 0.11 | 100 | 11.0572 | 1.0 | | 3.6726 | 0.22 | 200 | 4.2003 | 1.0 | | 2.981 | 0.34 | 300 | 3.5742 | 0.9919 | | 2.9411 | 0.45 | 400 | 3.2599 | 1.0 | | 2.903 | 0.56 | 500 | 2.9350 | 1.0 | | 2.8597 | 0.67 | 600 | 2.9514 | 1.0 | | 2.7771 | 0.78 | 700 | 2.8521 | 1.0 | | 2.7926 | 0.9 | 800 | 2.7821 | 1.0120 | | 2.6623 | 1.01 | 900 | 2.7027 | 0.9924 | | 2.5893 | 1.12 | 1000 | 2.6667 | 1.0240 | | 2.5733 | 1.23 | 1100 | 2.6341 | 1.0368 | | 2.5455 | 1.35 | 1200 | 2.5928 | 1.0411 | | 2.4919 | 1.46 | 1300 | 2.5695 | 1.0817 | | 2.5182 | 1.57 | 1400 | 2.5559 | 1.1072 | | 2.4766 | 1.68 | 1500 | 2.5229 | 1.1257 | | 2.4267 | 1.79 | 1600 | 2.4991 | 1.1151 | | 2.3919 | 1.91 | 1700 | 2.4768 | 1.1139 | | 2.3883 | 2.02 | 1800 | 2.4452 | 1.0636 | | 2.3737 | 2.13 | 1900 | 2.4304 | 1.0594 | | 2.3569 | 2.24 | 2000 | 2.4095 | 1.0539 | | 2.3641 | 2.35 | 2100 | 2.3997 | 1.0511 | | 2.3281 | 2.47 | 2200 | 2.3856 | 1.0414 | | 2.2912 | 2.58 | 2300 | 2.3750 | 1.0696 | | 2.3028 | 2.69 | 2400 | 2.3684 | 1.0436 | | 2.2906 | 2.8 | 2500 | 2.3613 | 1.0538 | | 2.2822 | 2.91 | 2600 | 2.3558 | 1.0506 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.13.4.dev0 - Tokenizers 0.10.3
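The card stops at the training summary; as a minimal inference sketch (an assumption, not from the card), the checkpoint can be queried through the `transformers` ASR pipeline, where `sample.wav` is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

# Load the fine-tuned SEW-D checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/sew-d-mid-400k-librispeech-clean-100h-ft",
)

print(asr("sample.wav"))  # placeholder audio path
```

Given the reported WER above 1.0, transcriptions from this checkpoint are expected to be of poor quality.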
jwuthri/autonlp-shipping_status_2-27366103
jwuthri
2021-10-27T21:34:42Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "unk", "dataset:jwuthri/autonlp-data-shipping_status_2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - jwuthri/autonlp-data-shipping_status_2 co2_eq_emissions: 32.912881644048 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 27366103 - CO2 Emissions (in grams): 32.912881644048 ## Validation Metrics - Loss: 0.18175844848155975 - Accuracy: 0.9437683592110785 - Precision: 0.9416809605488851 - Recall: 0.8459167950693375 - AUC: 0.9815242330050846 - F1: 0.8912337662337663 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/jwuthri/autonlp-shipping_status_2-27366103 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
anton-l/distilhubert-ft-common-language
anton-l
2021-10-27T21:29:13Z
12
2
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:common_language", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - audio-classification - generated_from_trainer datasets: - common_language metrics: - accuracy model-index: - name: distilhubert-ft-common-language results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-ft-common-language This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the common_language dataset. It achieves the following results on the evaluation set: - Loss: 2.7214 - Accuracy: 0.2797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 4 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.6543 | 1.0 | 173 | 3.7611 | 0.0491 | | 3.2221 | 2.0 | 346 | 3.4868 | 0.1352 | | 2.9332 | 3.0 | 519 | 3.2732 | 0.1861 | | 2.7299 | 4.0 | 692 | 3.0944 | 0.2172 | | 2.5638 | 5.0 | 865 | 2.9790 | 0.2400 | | 2.3871 | 6.0 | 1038 | 2.8668 | 0.2590 | | 2.3384 | 7.0 | 1211 | 2.7972 | 0.2653 | | 2.2648 | 8.0 | 1384 | 2.7625 | 0.2695 | | 2.2162 | 9.0 | 1557 | 2.7405 | 0.2782 | | 2.1915 | 10.0 | 1730 | 2.7214 | 0.2797 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
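As a hedged usage sketch (not in the original card), the language-identification head can be called through the audio-classification pipeline; `speech.wav` is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

# distilhubert fine-tuned on common_language for spoken language identification.
classifier = pipeline(
    "audio-classification",
    model="anton-l/distilhubert-ft-common-language",
)

for prediction in classifier("speech.wav", top_k=5):  # placeholder audio path
    print(prediction["label"], round(prediction["score"], 3))
```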
huggingtweets/void_vomicae
huggingtweets
2021-10-27T21:01:11Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/void_vomicae/1635368467642/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1452295981517742087/v8HfhHLT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">《 𝚟 o̶ 𝚒 𝚍 》</div> <div style="text-align: center; font-size: 14px;">@void_vomicae</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 《 𝚟 o̶ 𝚒 𝚍 》. | Data | 《 𝚟 o̶ 𝚒 𝚍 》 | | --- | --- | | Tweets downloaded | 2083 | | Retweets | 417 | | Short tweets | 422 | | Tweets kept | 1244 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fju0lp9t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @void_vomicae's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/void_vomicae') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/universal_lucas-void_vomicae
huggingtweets
2021-10-27T20:48:34Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/universal_lucas-void_vomicae/1635367710499/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1433429860358049800/y-stiIg9_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1452295981517742087/v8HfhHLT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">GWF HeyGirl & 《 𝚟 o̶ 𝚒 𝚍 》</div> <div style="text-align: center; font-size: 14px;">@universal_lucas-void_vomicae</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from GWF HeyGirl & 《 𝚟 o̶ 𝚒 𝚍 》. | Data | GWF HeyGirl | 《 𝚟 o̶ 𝚒 𝚍 》 | | --- | --- | --- | | Tweets downloaded | 292 | 2083 | | Retweets | 46 | 417 | | Short tweets | 30 | 422 | | Tweets kept | 216 | 1244 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3hd0b7j8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @universal_lucas-void_vomicae's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24n8knho) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24n8knho/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/universal_lucas-void_vomicae') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
prajjwal1/bert-small
prajjwal1
2021-10-27T18:31:52Z
442,830
23
transformers
[ "transformers", "pytorch", "BERT", "MNLI", "NLI", "transformer", "pre-training", "en", "arxiv:1908.08962", "arxiv:2110.01518", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en license: - mit tags: - BERT - MNLI - NLI - transformer - pre-training --- The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task. If you use the model, please consider citing both the papers: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{DBLP:journals/corr/abs-1908-08962, author = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation}, journal = {CoRR}, volume = {abs/1908.08962}, year = {2019}, url = {http://arxiv.org/abs/1908.08962}, eprinttype = {arXiv}, eprint = {1908.08962}, timestamp = {Thu, 29 Aug 2019 16:32:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Config of this model: - `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small) Other models to check out: - `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny) - `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini) - `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium) Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
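Since the card notes these checkpoints are meant to be trained on a downstream task, here is a minimal sketch (an illustration, not a released fine-tuned model) of attaching a fresh classification head for an MNLI-style NLI task:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the compact pre-trained encoder and attach a randomly initialized 3-way head.
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-small")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-small",
    num_labels=3,  # e.g. entailment / neutral / contradiction
)

inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 3]); meaningless until fine-tuned
```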
prajjwal1/bert-medium
prajjwal1
2021-10-27T18:30:16Z
37,177
3
transformers
[ "transformers", "pytorch", "BERT", "MNLI", "NLI", "transformer", "pre-training", "en", "arxiv:1908.08962", "arxiv:2110.01518", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en license: - mit tags: - BERT - MNLI - NLI - transformer - pre-training --- The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-small](https://huggingface.co/prajjwal1/bert-small). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task. If you use the model, please consider citing both the papers: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{DBLP:journals/corr/abs-1908-08962, author = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation}, journal = {CoRR}, volume = {abs/1908.08962}, year = {2019}, url = {http://arxiv.org/abs/1908.08962}, eprinttype = {arXiv}, eprint = {1908.08962}, timestamp = {Thu, 29 Aug 2019 16:32:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Config of this model: - `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium) Other models to check out: - `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny) - `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini) - `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small) Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
prajjwal1/bert-tiny
prajjwal1
2021-10-27T18:29:01Z
487,487
103
transformers
[ "transformers", "pytorch", "BERT", "MNLI", "NLI", "transformer", "pre-training", "en", "arxiv:1908.08962", "arxiv:2110.01518", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en license: - mit tags: - BERT - MNLI - NLI - transformer - pre-training --- The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task. If you use the model, please consider citing both the papers: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{DBLP:journals/corr/abs-1908-08962, author = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation}, journal = {CoRR}, volume = {abs/1908.08962}, year = {2019}, url = {http://arxiv.org/abs/1908.08962}, eprinttype = {arXiv}, eprint = {1908.08962}, timestamp = {Thu, 29 Aug 2019 16:32:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Config of this model: - `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny) Other models to check out: - `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini) - `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small) - `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium) Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
patrickvonplaten/sew-d-small-100k-timit
patrickvonplaten
2021-10-27T17:15:26Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "sew-d", "automatic-speech-recognition", "timit_asr", "generated_from_trainer", "dataset:timit_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: sew-d-small-100k-timit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sew-d-small-100k-timit This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 1.7541 - Wer: 0.8061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.2068 | 0.69 | 100 | 4.0802 | 1.0 | | 2.9805 | 1.38 | 200 | 2.9792 | 1.0 | | 2.9781 | 2.07 | 300 | 2.9408 | 1.0 | | 2.9655 | 2.76 | 400 | 2.9143 | 1.0 | | 2.8953 | 3.45 | 500 | 2.8775 | 1.0 | | 2.7718 | 4.14 | 600 | 2.7787 | 1.0 | | 2.6711 | 4.83 | 700 | 2.6401 | 0.9786 | | 2.6403 | 5.52 | 800 | 2.5435 | 1.0392 | | 2.4052 | 6.21 | 900 | 2.4580 | 1.0706 | | 2.1708 | 6.9 | 1000 | 2.2800 | 1.0090 | | 2.2555 | 7.59 | 1100 | 2.1493 | 0.9579 | | 2.3673 | 8.28 | 1200 | 2.0709 | 0.9051 | | 2.091 | 8.97 | 1300 | 2.0258 | 0.8926 | | 1.8433 | 9.66 | 1400 | 1.9645 | 0.8243 | | 1.6824 | 10.34 | 1500 | 1.9211 | 0.8707 | | 2.2282 | 11.03 | 1600 | 1.8914 | 0.8695 | | 1.9027 | 11.72 | 1700 | 1.8718 | 0.8343 | | 1.6303 | 12.41 | 1800 | 1.8646 | 0.8232 | | 1.648 | 13.1 | 1900 | 1.8297 | 0.8177 | | 2.0429 | 13.79 | 2000 | 1.8127 | 0.8642 | | 1.8833 | 14.48 | 2100 | 1.8005 | 0.8307 | | 1.5996 | 15.17 | 2200 | 1.7926 | 0.8467 | | 1.4876 | 15.86 | 2300 | 1.7795 | 0.8341 | | 1.8925 | 16.55 | 2400 | 1.7716 | 0.8199 | | 1.814 | 17.24 | 2500 | 1.7846 | 0.8086 | | 1.536 | 17.93 | 2600 | 1.7655 | 0.8019 | | 1.4476 | 18.62 | 2700 | 1.7599 | 0.8070 | | 1.7629 | 19.31 | 2800 | 1.7589 | 0.8119 | | 1.7646 | 20.0 | 2900 | 1.7541 | 0.8061 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
patrickvonplaten/wav2vec2-large-xlsr-129-turkish-colab
patrickvonplaten
2021-10-27T17:08:13Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-129-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-129-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-129](https://huggingface.co/facebook/wav2vec2-large-xlsr-129) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3149 - Wer: 0.4748 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.4837 | 3.67 | 400 | 3.2526 | 1.0 | | 3.0896 | 7.34 | 800 | 2.8037 | 1.0 | | 1.5604 | 11.01 | 1200 | 0.5688 | 0.6613 | | 0.6511 | 14.68 | 1600 | 0.3998 | 0.5580 | | 0.4798 | 18.35 | 2000 | 0.3505 | 0.5118 | | 0.4047 | 22.02 | 2400 | 0.3273 | 0.4858 | | 0.3519 | 25.69 | 2800 | 0.3224 | 0.4796 | | 0.343 | 29.36 | 3200 | 0.3149 | 0.4748 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
en/distilbert-base-uncased-finetuned-squad
en
2021-10-27T15:09:11Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2065 | 1.0 | 5577 | 1.1289 | | 0.9226 | 2.0 | 11154 | 1.1019 | | 0.7411 | 3.0 | 16731 | 1.1453 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
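As a hedged inference sketch (not part of the card above), the fine-tuned checkpoint can be used through the question-answering pipeline; the question and context below are illustrative:

```python
from transformers import pipeline

# Extractive question answering with the SQuAD-fine-tuned DistilBERT.
qa = pipeline(
    "question-answering",
    model="en/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```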
suwani/BERT_NER_Ep5_PAD_50-finetuned-ner
suwani
2021-10-27T13:13:15Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: BERT_NER_Ep5_PAD_50-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_NER_Ep5_PAD_50-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3893 - Precision: 0.6540 - Recall: 0.7348 - F1: 0.6920 - Accuracy: 0.9006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 288 | 0.3705 | 0.5852 | 0.6215 | 0.6028 | 0.8793 | | 0.4885 | 2.0 | 576 | 0.3351 | 0.5925 | 0.7317 | 0.6548 | 0.8865 | | 0.4885 | 3.0 | 864 | 0.3196 | 0.6471 | 0.7138 | 0.6788 | 0.8994 | | 0.2172 | 4.0 | 1152 | 0.3368 | 0.6454 | 0.7323 | 0.6861 | 0.8992 | | 0.2172 | 5.0 | 1440 | 0.3491 | 0.6507 | 0.7312 | 0.6886 | 0.9008 | | 0.1459 | 6.0 | 1728 | 0.3833 | 0.6715 | 0.7018 | 0.6863 | 0.9013 | | 0.1045 | 7.0 | 2016 | 0.3893 | 0.6540 | 0.7348 | 0.6920 | 0.9006 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
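A minimal inference sketch, not taken from the card; note that the entity label set of this checkpoint depends on its (undocumented) training data, so the printed groups may be generic labels:

```python
from transformers import pipeline

# Token classification (NER) with word-level aggregation of sub-token predictions.
ner = pipeline(
    "token-classification",
    model="suwani/BERT_NER_Ep5_PAD_50-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```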
patrickvonplaten/unispeech-sat-base-timit-ft
patrickvonplaten
2021-10-27T10:51:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "unispeech-sat", "automatic-speech-recognition", "timit_asr", "generated_from_trainer", "dataset:timit_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: unispeech-sat-base-timit-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unispeech-sat-base-timit-ft This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.6712 - Wer: 0.4101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2582 | 0.69 | 100 | 3.1651 | 1.0 | | 2.9542 | 1.38 | 200 | 2.9567 | 1.0 | | 2.9656 | 2.07 | 300 | 2.9195 | 1.0 | | 2.8946 | 2.76 | 400 | 2.8641 | 1.0 | | 1.9305 | 3.45 | 500 | 1.7680 | 1.0029 | | 1.0134 | 4.14 | 600 | 1.0184 | 0.6942 | | 0.8355 | 4.83 | 700 | 0.7769 | 0.6080 | | 0.8724 | 5.52 | 800 | 0.7182 | 0.6035 | | 0.5619 | 6.21 | 900 | 0.6823 | 0.5406 | | 0.4247 | 6.9 | 1000 | 0.6279 | 0.5237 | | 0.4257 | 7.59 | 1100 | 0.6056 | 0.5000 | | 0.5007 | 8.28 | 1200 | 0.5870 | 0.4918 | | 0.3854 | 8.97 | 1300 | 0.6200 | 0.4804 | | 0.264 | 9.66 | 1400 | 0.6030 | 0.4600 | | 0.1989 | 10.34 | 1500 | 0.6049 | 0.4588 | | 0.3196 | 11.03 | 1600 | 0.5946 | 0.4599 | | 0.2622 | 11.72 | 1700 | 0.6282 | 0.4422 | | 0.1697 | 12.41 | 1800 | 0.6559 | 0.4413 | | 0.1464 | 13.1 | 1900 | 0.6349 | 0.4328 | | 0.2277 | 13.79 | 2000 | 0.6133 | 0.4284 | | 0.221 | 14.48 | 2100 | 0.6617 | 0.4219 | | 0.1391 | 15.17 | 2200 | 0.6705 | 0.4235 | | 0.112 | 15.86 | 2300 | 0.6207 | 0.4218 | | 0.1717 | 16.55 | 2400 | 0.6749 | 0.4184 | | 0.2081 | 17.24 | 2500 | 0.6756 | 0.4169 | | 0.1244 | 17.93 | 2600 | 0.6750 | 0.4181 | | 0.0978 | 18.62 | 2700 | 0.6500 | 0.4115 | | 0.128 | 19.31 | 2800 | 0.6750 | 0.4106 | | 0.1791 | 20.0 | 2900 | 0.6712 | 0.4101 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
patrickvonplaten/unispeech-large-1500h-cv-timit
patrickvonplaten
2021-10-27T10:50:16Z
5,699
0
transformers
[ "transformers", "pytorch", "tensorboard", "unispeech", "automatic-speech-recognition", "timit_asr", "generated_from_trainer", "dataset:timit_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: unispeech-large-1500h-cv-timit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unispeech-large-1500h-cv-timit This model is a fine-tuned version of [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.3099 - Wer: 0.2196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.64 | 0.69 | 100 | 3.9717 | 0.9981 | | 2.6793 | 1.38 | 200 | 2.6264 | 1.0 | | 1.2221 | 2.07 | 300 | 0.9999 | 0.7167 | | 0.9009 | 2.76 | 400 | 0.6509 | 0.5570 | | 0.4352 | 3.45 | 500 | 0.4682 | 0.4332 | | 0.227 | 4.14 | 600 | 0.3661 | 0.3565 | | 0.2169 | 4.83 | 700 | 0.3244 | 0.3203 | | 0.2687 | 5.52 | 800 | 0.3137 | 0.2981 | | 0.127 | 6.21 | 900 | 0.3220 | 0.2828 | | 0.0922 | 6.9 | 1000 | 0.3075 | 0.2708 | | 0.0965 | 7.59 | 1100 | 0.2779 | 0.2576 | | 0.1298 | 8.28 | 1200 | 0.3111 | 0.2480 | | 0.0855 | 8.97 | 1300 | 0.3021 | 0.2421 | | 0.0629 | 9.66 | 1400 | 0.3122 | 0.2511 | | 0.0471 | 10.34 | 1500 | 0.2965 | 0.2368 | | 0.0871 | 11.03 | 1600 | 0.3247 | 0.2387 | | 0.0503 | 11.72 | 1700 | 0.3359 | 0.2363 | | 0.0402 | 12.41 | 1800 | 0.2976 | 0.2332 | | 0.0336 | 13.1 | 1900 | 0.3139 | 0.2321 | | 0.0634 | 13.79 | 2000 | 0.3188 | 0.2309 | | 0.0429 | 14.48 | 2100 | 0.3145 | 0.2335 | | 0.028 | 15.17 | 2200 | 0.3244 | 0.2242 | | 0.0255 | 15.86 | 2300 | 0.2914 | 0.2196 | | 0.0406 | 16.55 | 2400 | 0.3249 | 0.2202 | | 0.0512 | 17.24 | 2500 | 0.3037 | 0.2198 | | 0.0269 | 17.93 | 2600 | 0.3218 | 0.2242 | | 0.0287 | 18.62 | 2700 | 0.3106 | 0.2185 | | 0.0319 | 19.31 | 2800 | 0.3124 | 0.2217 | | 0.0494 | 20.0 | 2900 | 0.3099 | 0.2196 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
patrickvonplaten/wav2vec2-base-timit-fine-tuned
patrickvonplaten
2021-10-27T10:49:08Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "timit_asr", "generated_from_trainer", "dataset:timit_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: wav2vec2-base-timit-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-fine-tuned This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.3457 - Wer: 0.2151 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1621 | 0.69 | 100 | 3.1102 | 1.0 | | 2.9592 | 1.38 | 200 | 2.9603 | 1.0 | | 2.9116 | 2.07 | 300 | 2.8921 | 1.0 | | 2.1332 | 2.76 | 400 | 1.9718 | 0.9958 | | 0.8477 | 3.45 | 500 | 0.7813 | 0.5237 | | 0.4251 | 4.14 | 600 | 0.5166 | 0.3982 | | 0.3743 | 4.83 | 700 | 0.4400 | 0.3578 | | 0.4194 | 5.52 | 800 | 0.4077 | 0.3370 | | 0.23 | 6.21 | 900 | 0.4018 | 0.3142 | | 0.1554 | 6.9 | 1000 | 0.3623 | 0.2995 | | 0.1511 | 7.59 | 1100 | 0.3433 | 0.2697 | | 0.1983 | 8.28 | 1200 | 0.3539 | 0.2715 | | 0.1443 | 8.97 | 1300 | 0.3622 | 0.2551 | | 0.0971 | 9.66 | 1400 | 0.3580 | 0.2519 | | 0.0764 | 10.34 | 1500 | 0.3529 | 0.2437 | | 0.1203 | 11.03 | 1600 | 0.3455 | 0.2431 | | 0.0881 | 11.72 | 1700 | 0.3648 | 0.2415 | | 0.0521 | 12.41 | 1800 | 0.3564 | 0.2320 | | 0.0434 | 13.1 | 1900 | 0.3485 | 0.2270 | | 0.0864 | 13.79 | 2000 | 0.3517 | 0.2228 | | 0.0651 | 14.48 | 2100 | 0.3506 | 0.2285 | | 0.0423 | 15.17 | 2200 | 0.3428 | 0.2247 | | 0.0302 | 15.86 | 2300 | 0.3372 | 0.2198 | | 0.0548 | 16.55 | 2400 | 0.3496 | 0.2196 | | 0.0674 | 17.24 | 2500 | 0.3407 | 0.2166 | | 0.0291 | 17.93 | 2600 | 0.3512 | 0.2171 | | 0.0298 | 18.62 | 2700 | 0.3363 | 0.2158 | | 0.0419 | 19.31 | 2800 | 0.3493 | 0.2145 | | 0.046 | 20.0 | 2900 | 0.3457 | 0.2151 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
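Beyond the pipeline API, here is a hedged sketch of explicit CTC decoding with this checkpoint (assuming `soundfile` is installed and `sample.wav` is a placeholder for a 16 kHz mono recording):

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "patrickvonplaten/wav2vec2-base-timit-fine-tuned"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Read raw audio; TIMIT recordings are 16 kHz mono.
speech, sampling_rate = sf.read("sample.wav")  # placeholder path
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token at each frame, then collapse.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```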
doc2query/S2ORC-t5-base-v1
doc2query
2021-10-27T10:04:09Z
35
4
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:S2ORC", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - S2ORC widget: - text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." license: apache-2.0 --- # doc2query/S2ORC-t5-base-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration model_name = 'doc2query/S2ORC-t5-base-v1' tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=5) print("Text:") print(text) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ``` **Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it. ## Training This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 156k training steps. For the training script, see the `train_script.py` in this repository. The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. This model was trained on (title, abstract) pairs from [S2ORC](https://github.com/allenai/s2orc).
VariableZee/DialoGPT-small-ivylia03
VariableZee
2021-10-27T08:50:29Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - conversational ---
lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli
lighteternal
2021-10-27T07:47:56Z
188
4
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "textual-entailment", "nli", "en", "dataset:mnli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - textual-entailment - nli - pytorch datasets: - mnli license: mit widget: - text: "EpCAM is overexpressed in breast cancer. </s></s> EpCAM is downregulated in breast cancer." --- # BiomedNLP-PubMedBERT finetuned on textual entailment (NLI) The [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene) model finetuned on the MNLI dataset. It should be useful in textual entailment tasks involving biomedical corpora. ## Usage Given two sentences (a premise and a hypothesis), the model outputs the logits of entailment, neutral or contradiction. You can test the model using the HuggingFace model widget on the side: - Input two sentences (premise and hypothesis) one after the other. - The model returns the probabilities of 3 labels: entailment (LABEL:0), neutral (LABEL:1) and contradiction (LABEL:2) respectively. To use the model locally on your machine: ```python import numpy as np # import torch # device = torch.device("cuda" if torch.cuda.is_available() else "cpu") from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli") model = AutoModelForSequenceClassification.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli") premise = 'EpCAM is overexpressed in breast cancer' hypothesis = 'EpCAM is downregulated in breast cancer.' # run through model pre-trained on MNLI x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation_strategy='only_first') logits = model(x)[0] probs = logits.softmax(dim=1) print('Probabilities for entailment, neutral, contradiction \n', np.around(probs.cpu().detach().numpy(), 3)) # Probabilities for entailment, neutral, contradiction # 0.001 0.001 0.998 ``` ## Metrics Evaluation on classification accuracy (entailment, contradiction, neutral) on the MNLI test set: | Metric | Value | | --- | --- | | Accuracy | 0.8338 | See the Training Metrics tab for detailed info.
espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
espnet
2021-10-27T02:55:53Z
3
11
espnet
[ "espnet", "audio", "automatic-speech-recognition", "zh", "dataset:wenetspeech", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: zh datasets: - wenetspeech license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char` This model was trained by Pengcheng Guo using wenetspeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 5c21f63e45e0961a5d817017c282b0cafd68a3aa pip install -e . cd egs2/wenetspeech/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Oct 6 15:11:20 CST 2021` - python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]` - espnet version: `espnet 0.10.2a1` - pytorch version: `pytorch 1.9.0` - Git hash: `` - Commit date: `` ## asr_train_asr_conformer_raw_zh_char ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|7176|67.1|32.9|0.0|0.1|33.0|32.9| |decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|16684|32.1|54.1|13.8|0.1|68.0|64.2| |decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|8599|13.4|84.6|2.0|0.1|86.7|86.8| |decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|25995|46.2|50.4|3.4|1.1|54.9|52.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|104765|96.3|3.6|0.1|0.2|3.9|32.9| |decode_asr_rnn_asr_model_valid.acc.ave_10bestdev|13825|333357|90.7|3.4|5.9|0.4|9.7|64.2| |decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|220614|84.6|5.0|10.4|0.5|15.9|86.8| |decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|416968|91.8|5.3|2.9|0.6|8.8|52.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_raw_zh_char ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 8 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 44205 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 30 patience: null val_scheduler_criterion: - valid - acc early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 4 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 30000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_zh_char/train/speech_shape - exp/asr_stats_raw_zh_char/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_zh_char/valid/speech_shape - exp/asr_stats_raw_zh_char/valid/text_shape.char batch_type: numel valid_batch_type: null 
fold_length: - 51200 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_l/wav.scp - speech - sound - - dump/raw/train_l/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0015 scheduler: warmuplr scheduler_conf: warmup_steps: 30000 token_list: - <blank> - <unk> - 的 - 我 - 是 - 你 - 了 - 一 - 不 - 这 - 个 - 有 - 就 - 们 - 在 - 他 - 人 - 么 - 来 - 说 - 那 - 要 - 好 - 啊 - 大 - 到 - 上 - 也 - 没 - 都 - 去 - 能 - 子 - 会 - 为 - 得 - 时 - 还 - 可 - 以 - 什 - 家 - 后 - 看 - 呢 - 对 - 事 - 天 - 下 - 过 - 想 - 多 - 小 - 出 - 自 - 儿 - 生 - 给 - 里 - 现 - 着 - 然 - 吧 - 样 - 道 - 吗 - 心 - 跟 - 中 - 很 - 点 - 年 - 和 - 地 - 怎 - 知 - 十 - 老 - 当 - 把 - 话 - 别 - 所 - 之 - 情 - 实 - 开 - 面 - 回 - 行 - 国 - 做 - 己 - 经 - 如 - 真 - 起 - 候 - 些 - 让 - 发 - 她 - 觉 - 但 - 成 - 定 - 意 - 二 - 长 - 最 - 方 - 三 - 前 - 因 - 用 - 呀 - 种 - 只 - 走 - 其 - 问 - 再 - 果 - 而 - 分 - 两 - 打 - 学 - 间 - 您 - 本 - 于 - 明 - 手 - 公 - 听 - 比 - 作 - 女 - 太 - 今 - 从 - 关 - 妈 - 同 - 法 - 动 - 已 - 见 - 才 - 孩 - 感 - 吃 - 常 - 次 - 它 - 进 - 先 - 找 - 身 - 全 - 理 - 又 - 力 - 正 - 主 - 应 - 高 - 被 - 钱 - 快 - 等 - 头 - 重 - 车 - 谢 - 日 - 东 - 放 - 无 - 工 - 咱 - 哪 - 五 - 者 - 像 - 西 - 该 - 干 - 相 - 信 - 机 - 百 - 特 - 业 - 活 - 师 - 边 - 爱 - 友 - 新 - 外 - 位 - 更 - 直 - 几 - 第 - 非 - 四 - 题 - 接 - 少 - 哥 - 死 - 完 - 刚 - 电 - 气 - 安 - 爸 - 白 - 告 - 美 - 解 - 叫 - 月 - 带 - 欢 - 谁 - 体 - 喜 - 部 - 场 - 姐 - 军 - 万 - 结 - 合 - 难 - 八 - 每 - 目 - 亲 - 朋 - 认 - 总 - 加 - 通 - 办 - 马 - 件 - 受 - 任 - 请 - 住 - 王 - 思 - 门 - 名 - 平 - 系 - 文 - 帮 - 路 - 变 - 记 - 水 - 九 - 算 - 将 - 口 - 男 - 度 - 报 - 六 - 张 - 管 - 够 - 性 - 表 - 提 - 何 - 讲 - 期 - 拿 - 保 - 嘛 - 司 - 原 - 始 - 此 - 诉 - 处 - 清 - 内 - 产 - 金 - 晚 - 早 - 交 - 离 - 眼 - 队 - 七 - 入 - 山 - 代 - 市 - 海 - 物 - 零 - 望 - 世 - 婚 - 命 - 越 - 收 - 向 - 花 - 房 - 错 - 节 - 父 - 反 - 战 - 买 - 量 - 或 - 员 - 号 - 千 - 怕 - 底 - 且 - 品 - 民 - 化 - 爷 - 并 - 与 - 服 - 需 - 资 - 求 - 教 - 娘 - 医 - 数 - 院 - 书 - 利 - 往 - 确 - 各 - 单 - 风 - 送 - 必 - 条 - 包 - 准 - 光 - 整 - 病 - 弟 - 嗯 - 计 - 照 - 强 - 务 - 影 - 城 - 夫 - 俩 - 决 - 声 - 连 - 乐 - 息 - 远 - 北 - 至 - 饭 - 留 - 宝 - 神 - 近 - 考 - 备 - 案 - 界 - 容 - 况 - 母 - 较 - 持 - 证 - 选 - 制 - 程 - 喝 - 害 - 字 - 失 - 立 - 台 - 玩 - 查 - 块 - 便 - 挺 - 段 - 周 - 由 - 句 - 紧 - 李 - 据 - 杀 - 南 - 商 - 识 - 网 - 式 - 愿 - 传 - 流 - 消 - 伤 - 根 - 演 - 希 - 故 - 坐 - 建 - 注 - 许 - 调 - 共 - 空 - 半 - 却 - 酒 - 联 - 微 - 言 - 肯 - 赶 - 跑 - 笑 - 区 - 岁 - 红 - 达 - 官 - 轻 - 易 - 火 - 线 - 拉 - 首 - 导 - 团 - 慢 - 指 - 写 - 深 - 论 - 片 - 改 - 啥 - 满 - 步 - 音 - 功 - 聊 - 客 - 未 - 格 - 基 - 睡 - 观 - 份 - 视 - 色 - 价 - 政 - 转 - 终 - 复 - 啦 - 呃 - 阿 - 倒 - 义 - 警 - 林 - 使 - 科 - 运 - 苦 - 待 - 费 - 随 - 救 - 试 - 班 - 敢 - 精 - 及 - 术 - 造 - 续 - 养 - 展 - 答 - 绝 - 众 - 站 - 妹 - 差 - 谈 - 卖 - 播 - 创 - 领 - 象 - 志 - 投 - 习 - 兄 - 元 - 皇 - 专 - 态 - 急 - 局 - 兴 - 楚 - 飞 - 护 - 装 - 热 - 奶 - 取 - 设 - 游 - 读 - 福 - 药 - 担 - 历 - 忙 - 规 - 掉 - 刘 - 切 - 断 - 尽 - 社 - 久 - 支 - 板 - 星 - 姑 - 曾 - 突 - 除 - 华 - 责 - 排 - 京 - 值 - 士 - 统 - 换 - 德 - 衣 - 组 - 示 - 脸 - 刻 - 黑 - 遇 - 虽 - 顾 - 戏 - 怪 - 懂 - 叔 - 夜 - 陈 - 亮 - 江 - 兵 - 负 - 布 - 青 - 落 - 推 - 假 - 类 - 令 - 技 - 英 - 质 - 黄 - 治 - 形 - 助 - 球 - 歌 - 参 - 广 - 继 - 简 - 画 - 奇 - 陪 - 阳 - 险 - 须 - 念 - 迎 - 幸 - 抓 - 破 - 另 - 争 - 竟 - 户 - 律 - 择 - 究 - 龙 - 足 - 店 - 脑 - 斯 - 党 - 权 - 约 - 疑 - 议 - 严 - 密 - 克 - 存 - 穿 - 承 - 校 - 击 - 际 - 标 - 云 - 营 - 察 - 超 - 食 - 集 - 级 - 礼 - 静 - 背 - 武 - 初 - 拍 - 梦 - 验 - 响 - 角 - 石 - 股 - 追 - 怀 - 婆 - 适 - 独 - 忘 - 血 - 醒 - 具 - 罪 - 享 - 毛 - 香 - 状 - 配 - 靠 - 语 - 仅 - 低 - 细 - 米 - 既 - 钟 - 极 - 停 - 味 - 则 - 油 - 器 - 楼 - 菜 - 研 - 互 - 压 - 贵 - 村 - 属 - 派 - 乎 - 坏 - 控 - 显 - 图 - 双 - 职 - 永 - 哈 - 鬼 - 依 - 料 - 按 - 府 - 坚 - 某 - 甚 - 居 - 练 - 顺 - 模 - 即 - 州 - 引 - 乱 - 
速 - 庭 - 朝 - 室 - 似 - 付 - 划 - 尔 - 境 - 犯 - 烦 - 环 - 伙 - 巴 - 春 - 古 - 妇 - 势 - 款 - 增 - 财 - 河 - 守 - 虑 - 汉 - 枪 - 妻 - 爹 - 弄 - 委 - 企 - 冲 - 置 - 麻 - 育 - 项 - 防 - 胡 - 杨 - 致 - 辈 - 括 - 毕 - 卫 - 修 - 史 - 型 - 牌 - 嘴 - 苏 - 群 - 举 - 痛 - 座 - 概 - 搞 - 围 - 土 - 毒 - 唱 - 冷 - 累 - 玉 - 获 - 误 - 跳 - 脚 - 雨 - 剧 - 休 - 皮 - 止 - 济 - 肉 - 丽 - 借 - 铁 - 牛 - 哭 - 招 - 闹 - 银 - 优 - 温 - 狗 - 退 - 洗 - 拜 - 否 - 票 - 偷 - 抱 - 博 - 般 - 效 - 套 - 维 - 普 - 康 - 富 - 宫 - 索 - 罗 - 堂 - 智 - 省 - 介 - 孙 - 灵 - 评 - 藏 - 称 - 课 - 货 - 姨 - 艺 - 骗 - 雪 - 赛 - 景 - 昨 - 健 - 鱼 - 激 - 危 - 熟 - 圈 - 闻 - 监 - 替 - 君 - 恋 - 良 - 掌 - 草 - 松 - 供 - 努 - 例 - 短 - 帝 - 姓 - 率 - 族 - 亿 - 赵 - 蛋 - 判 - 预 - 频 - 卡 - 架 - 纪 - 弃 - 秀 - 兰 - 层 - 检 - 伴 - 抗 - 讨 - 源 - 夏 - 咋 - 惊 - 录 - 善 - 补 - 刀 - 充 - 升 - 章 - 午 - 若 - 私 - 吴 - 素 - 旅 - 临 - 挑 - 唐 - 露 - 树 - 斗 - 舞 - 左 - 叶 - 副 - 晓 - 厂 - 弹 - 印 - 秘 - 屋 - 田 - 木 - 困 - 园 - 封 - 逃 - 批 - 馆 - 疼 - 败 - 陆 - 敌 - 散 - 采 - 翻 - 缺 - 胜 - 免 - 销 - 鸡 - 降 - 波 - 测 - 限 - 释 - 忍 - 归 - 床 - 餐 - 茶 - 码 - 宁 - 乡 - 辛 - 彩 - 亚 - 浪 - 漂 - 庆 - 训 - 范 - 烧 - 词 - 吵 - 媳 - 探 - 余 - 恐 - 积 - 农 - 遍 - 舒 - 顶 - 构 - 呼 - 丝 - 执 - 雅 - 惯 - 右 - 脱 - 恩 - 野 - 折 - 趣 - 笔 - 谓 - 盘 - 贝 - 宣 - 绍 - 嘉 - 宋 - 抢 - 嫌 - 尊 - 碰 - 绪 - 丢 - 厉 - 沙 - 轮 - 施 - 织 - 托 - 县 - 策 - 杯 - 逼 - 傻 - 束 - 街 - 疗 - 益 - 骨 - 迷 - 姻 - 恶 - 默 - 寻 - 搜 - 哦 - 材 - 吸 - 劳 - 勇 - 占 - 暴 - 船 - 徐 - 虎 - 融 - 异 - 审 - 攻 - 雷 - 稳 - 呗 - 输 - 睛 - 臣 - 端 - 威 - 秋 - 欧 - 冰 - 韩 - 减 - <space> - 操 - 混 - 汽 - 暗 - 隐 - 嫂 - 沉 - 烟 - 顿 - 凭 - 洋 - 嫁 - 购 - 粉 - 遗 - 杂 - 协 - 尝 - 键 - 亡 - 秦 - 纸 - 拥 - 革 - 猫 - 伯 - 祝 - 签 - 傅 - 牙 - 湖 - 莫 - 杰 - 旁 - 港 - 劲 - 宗 - 偏 - 触 - 唯 - 吓 - 辆 - 沈 - 列 - 梅 - 祖 - 舍 - 尤 - 赚 - 疫 - 腾 - 拼 - 奖 - 刺 - 齐 - 诚 - 媒 - 戴 - 账 - 炸 - 骂 - 避 - 麦 - 爆 - 域 - 烈 - 暖 - 季 - 猜 - 佳 - 净 - 腿 - 磨 - 曲 - 虚 - 阵 - 荣 - 访 - 核 - 鲜 - 阶 - 镇 - 灯 - 估 - 剩 - 硬 - 租 - 敬 - 损 - 惜 - 挂 - 董 - 巨 - 忆 - 登 - 丈 - 帅 - 童 - 耳 - 央 - 软 - 移 - 略 - 额 - 厅 - 挥 - 透 - 络 - 弱 - 珍 - 恨 - 巧 - 丁 - 谋 - 孤 - 豆 - 诗 - 冒 - 狼 - 渐 - 峰 - 售 - 凡 - 聚 - 洞 - 抽 - 劝 - 闭 - 摆 - 冬 - 凶 - 魔 - 灭 - 雄 - 挣 - 搬 - 龄 - 朱 - 编 - 航 - 席 - 驾 - 授 - 鼓 - 握 - 隔 - 猪 - 仙 - 颜 - 镜 - 胖 - 赢 - 仇 - 晨 - 欺 - 刑 - 谷 - 旦 - 亏 - 盖 - 症 - 喊 - 蓝 - 讯 - 殿 - 梁 - 躲 - 旧 - 针 - 箱 - 丰 - 洲 - 鞋 - 征 - 蒙 - 伟 - 袋 - 庄 - 患 - 怨 - 佛 - 稍 - 朵 - 纳 - 吉 - 川 - 典 - 迹 - 瑞 - 废 - 搭 - 涨 - 汤 - 启 - 桌 - 摸 - 赔 - 宜 - 纯 - 贴 - 聪 - 熊 - 延 - 瓶 - 版 - 缘 - 距 - 甜 - 析 - 盛 - 孕 - 彻 - 桥 - 尚 - 染 - 撞 - 途 - 沟 - 疯 - 敏 - 瞧 - 漫 - 胆 - 诺 - 刷 - 饿 - 仍 - 喂 - 辞 - 迟 - 淡 - 郑 - 歉 - 扰 - 宾 - 圆 - 赞 - 肚 - 慧 - 泪 - 吹 - 拖 - 遭 - 穷 - 罚 - 悔 - 绿 - 忽 - 唉 - 毫 - 绩 - 暂 - 射 - 岛 - 拾 - 珠 - 欠 - 忠 - 陷 - 阴 - 尼 - 悲 - 糊 - 撤 - 徒 - 剑 - 币 - 娜 - 违 - 泡 - 仗 - 粮 - 培 - 趟 - 菲 - 拒 - 棒 - 脾 - 赏 - 窗 - 宇 - 闲 - 附 - 踏 - 彼 - 涉 - 锁 - 撒 - 魂 - 羊 - 述 - 屈 - 库 - 滚 - 凉 - 颗 - 寒 - 呐 - 墙 - 娃 - 序 - 迪 - 丹 - 扬 - 瞎 - 递 - 凤 - 碗 - 屁 - 锅 - 奔 - 幅 - 债 - 糖 - 奋 - 汇 - 圣 - 订 - 偶 - 残 - 宽 - 狂 - 鼠 - 狠 - 幕 - 固 - 竞 - 蜜 - 吐 - 摄 - 骑 - 篇 - 毁 - 尾 - 摇 - 奥 - 厚 - 妖 - 禁 - 逐 - 均 - 尸 - 冠 - 阅 - 辑 - 捕 - 载 - 郭 - 俺 - 诊 - 欲 - 扎 - 鸟 - 柔 - 迫 - 豪 - 踪 - 扔 - 碎 - 末 - 娶 - 扫 - 朕 - 励 - 乔 - 闺 - 档 - 厨 - 倍 - 湾 - 郎 - 幼 - 纷 - 奴 - 阻 - 饮 - 怒 - 妙 - 琴 - 曹 - 脏 - 牵 - 瓜 - 滴 - 炮 - 缓 - 含 - 献 - 柜 - 仔 - 艾 - 潜 - 赌 - 震 - 础 - 添 - 兔 - 焦 - 躺 - 森 - 肥 - 洪 - 孝 - 偿 - 悉 - 撑 - 甘 - 桃 - 苹 - 魏 - 鲁 - 池 - 狱 - 厌 - 纠 - 朗 - 贷 - 铺 - 殊 - 坦 - 爬 - 擦 - 酸 - 钢 - 咖 - 瞒 - 蛮 - 谅 - 耐 - 申 - 夸 - 欣 - 诶 - 驶 - 屏 - 烂 - 凌 - 甲 - 胎 - 仪 - 貌 - 番 - 涂 - 抬 - 舅 - 扯 - 鹿 - 摩 - 诸 - 秒 - 泽 - 埋 - 蒋 - 隆 - 赖 - 奸 - 咬 - 恢 - 宿 - 乖 - 邀 - 抵 - 臭 - 闪 - 莉 - 熬 - 链 - 盯 - 侦 - 灾 - 堆 - 灰 - 卷 - 盾 - 障 - 截 - 恰 - 佩 - 戒 - 莲 - 裁 - 芬 - 戚 - 匪 - 滑 - 趁 - 询 - 绑 - 辣 - 挖 - 俗 - 祸 - 符 - 扣 - 插 - 仁 - 壁 - 腰 - 斤 - 燕 - 筑 - 柱 - 夺 - 援 - 映 - 壮 - 杜 - 摔 - 润 - 恭 - 乌 - 慰 - 啡 - 著 - 井 - 跌 - 牢 - 荐 - 拔 - 惹 - 侯 - 玲 - 炎 - 胸 - 旗 - 牲 - 喽 - 涛 - 衡 - 矛 - 伍 - 贤 - 惨 - 糟 - 慌 - 伏 - 醉 - 仓 - 拆 - 乘 - 疾 - 鼻 - 潮 - 予 - 奉 - 伦 - 劫 - 伊 - 怜 - 孟 - 肺 - 忧 - 倾 - 矩 - 荒 - 奏 - 塔 - 塞 - 迅 - 轨 - 瞬 - 丫 - 狐 - 叛 - 
繁 - 眠 - 孔 - 谱 - 悄 - 泰 - 姜 - 侵 - 妃 - 冯 - 柳 - 洛 - 岸 - 凯 - 陛 - 幺 - 仿 - 氏 - 窝 - 曼 - 挡 - 浩 - 盟 - 轩 - 牺 - 贫 - 绕 - 谎 - 措 - 扶 - 梯 - 炼 - 勤 - 霸 - 横 - 罢 - 呆 - 税 - 桂 - 哎 - 慕 - 植 - 允 - 荡 - 洁 - 肖 - 耗 - 贼 - 艰 - 贺 - 幻 - 饱 - 胃 - 袭 - 廷 - 泥 - 丧 - 缩 - 砸 - 姥 - 拦 - 扮 - 糕 - 肤 - 猴 - 脆 - 炒 - 耀 - 盗 - 邓 - 扩 - 纵 - 振 - 敲 - 鹏 - 姆 - 湿 - 丑 - 召 - 苗 - 伸 - 惑 - 碍 - 萨 - 瘦 - 闯 - 迁 - 坑 - 弯 - 卑 - 尖 - 遥 - 侠 - 犹 - 押 - 冤 - 钻 - 汗 - 闷 - 邻 - 淘 - 抛 - 妆 - 贾 - 侧 - 傲 - 描 - 耍 - 猛 - 薇 - 裤 - 憾 - 督 - 贸 - 墨 - 勒 - 薄 - 嘞 - 渡 - 紫 - 悟 - 锦 - 溜 - 逆 - 惠 - 辉 - 贪 - 圾 - 垃 - 券 - 燃 - 虫 - 悠 - 伪 - 尿 - 懒 - 俊 - 寄 - 歇 - 盒 - 潘 - 储 - 愈 - 脉 - 粗 - 返 - 昌 - 泉 - 蔡 - 愧 - 赤 - 岳 - 婷 - 猎 - 饼 - 肩 - 勾 - 巡 - 竹 - 催 - 陌 - 踩 - 促 - 扭 - 堵 - 酷 - 芳 - 逛 - 陵 - 耽 - 凑 - 寿 - 缝 - 剪 - 郁 - 宅 - 抚 - 筹 - 沿 - 烤 - 奈 - 挨 - 晋 - 崩 - 浮 - 阁 - 彭 - 裂 - 崇 - 眉 - 桑 - 辩 - 漏 - 稀 - 液 - 汪 - 袁 - 掩 - 浑 - 坡 - 晕 - 缠 - 仰 - 挤 - 睁 - 羽 - 岗 - 捡 - 墓 - 综 - 矿 - 妥 - 厕 - 辱 - 惧 - 逗 - 帽 - 寸 - 搁 - 跨 - 渴 - 饰 - 璃 - 琳 - 爽 - 愤 - 饶 - 卧 - 誓 - 滋 - 鉴 - 腐 - 鸭 - 蛇 - 妮 - 莱 - 哟 - 钥 - 甄 - 肠 - 畅 - 慎 - 悬 - 逻 - 胁 - 辰 - 呈 - 棋 - 寨 - 萌 - 覆 - 姚 - 津 - 笨 - 轰 - 乏 - 匙 - 摊 - 陶 - 恼 - 昏 - 抑 - 姿 - 愁 - 誉 - 椅 - 羞 - 澡 - 踢 - 晶 - 萧 - 箭 - 罩 - 宠 - 羡 - 亦 - 祥 - 串 - 昆 - 煮 - 疏 - 纹 - 泄 - 痕 - 喷 - 册 - 跃 - 卢 - 岩 - 跪 - 兽 - 桶 - 飘 - 漠 - 堪 - 哄 - 寂 - 崔 - 腹 - 癌 - 拳 - 驻 - 霍 - 拨 - 诞 - 捐 - 御 - 榜 - 唤 - 荷 - 径 - 署 - 锋 - 玛 - 匆 - 恒 - 吕 - 邮 - 圳 - 黎 - 掏 - 莎 - 寞 - 佐 - 诈 - 牧 - 盐 - 叹 - 尬 - 匹 - 狸 - 膀 - 谨 - 尘 - 驱 - 乳 - 晒 - 宴 - 辜 - 哲 - 铜 - 薪 - 盆 - 割 - 忌 - 旋 - 翼 - 哀 - 咨 - 遵 - 夹 - 侣 - 译 - 胞 - 浅 - 邦 - 俄 - 弗 - 豫 - 甭 - 乃 - 扛 - 杭 - 瓦 - 槽 - 污 - 尴 - 琢 - 枝 - 详 - 柴 - 佑 - 盼 - 抖 - 惩 - 捷 - 葬 - 贡 - 艳 - 塑 - 茫 - 叨 - 浓 - 拐 - 捉 - 憋 - 稿 - 苍 - 葛 - 扑 - 娱 - 赋 - 杆 - 绘 - 聆 - 肌 - 婴 - 摘 - 岂 - 呵 - 冻 - 泳 - 揭 - 坤 - 盈 - 毅 - 撕 - 娇 - 唠 - 宏 - 吊 - 籍 - 楠 - 肃 - 抹 - 玄 - 湘 - 迈 - 酱 - 骄 - 咐 - 扇 - 幽 - 疲 - 邪 - 吞 - 趋 - 尺 - 玻 - 溃 - 诱 - 翠 - 兼 - 辅 - 岭 - 栏 - 柏 - 址 - 寺 - 逢 - 琪 - 慈 - 愣 - 契 - 渠 - 齿 - 薛 - 拟 - 填 - 坛 - 抄 - 痴 - 绳 - 役 - 擅 - 晃 - 斌 - 愉 - 届 - 悦 - 旨 - 砍 - 弥 - 挽 - 肝 - 鸣 - 庙 - 烫 - 聘 - 皆 - 婶 - 舌 - 枉 - 赫 - 蓉 - 瞅 - 阔 - 俱 - 循 - 鸿 - 彪 - 伺 - 堡 - 谦 - 剂 - 洒 - 赴 - 妨 - 磊 - 嘱 - 蝶 - 兆 - 豹 - 绣 - 篮 - 锻 - 陕 - 霉 - 涵 - 疆 - 丸 - 蠢 - 铃 - 浙 - 庞 - 萝 - 泛 - 芝 - 煤 - 甩 - 氛 - 页 - 逸 - 袖 - 携 - 躁 - 夕 - 匠 - 蹈 - 坊 - 雾 - 蹲 - 颠 - 脂 - 塌 - 棵 - 鹰 - 澳 - 哇 - 筋 - 纽 - 脖 - 棉 - 渣 - 寡 - 践 - 侄 - 披 - 魅 - 虹 - 肿 - 胶 - 霞 - 罐 - 晴 - 拓 - 卿 - 耻 - 砖 - 宪 - 歪 - 兜 - 衰 - 捧 - 歹 - 雕 - 穆 - 栋 - 瑶 - 毙 - 衷 - 膜 - 囊 - 莹 - 垫 - 吻 - 嘟 - 舰 - 虾 - 壳 - 穴 - 勉 - 裙 - 旺 - 柯 - 磕 - 贩 - 腻 - 蹦 - 卜 - 茹 - 驴 - 臂 - 删 - 菌 - 妾 - 蜂 - 祭 - 菊 - 咸 - 淑 - 笼 - 涯 - 碧 - 宙 - 骚 - 皓 - 赐 - 晰 - 腔 - 龟 - 泼 - 鹅 - 啪 - 巾 - 炉 - 沾 - 醋 - 澜 - 朴 - 棍 - 伞 - 雀 - 赠 - 妞 - 淋 - 刮 - 汁 - 椒 - 埃 - 嚷 - 盲 - 窃 - 辽 - 贱 - 滩 - 昭 - 贯 - 珊 - 涌 - 辨 - 捞 - 仲 - 拘 - 碑 - 侍 - 剿 - 搅 - 狮 - 藤 - 旭 - 翅 - 滨 - 禀 - 遮 - 瑟 - 斩 - 攒 - 犬 - 挫 - 僧 - 吩 - 渊 - 蒂 - 萍 - 庸 - 蓄 - 鼎 - 咪 - 姬 - 溪 - 郡 - 镖 - 怡 - 杉 - 畏 - 瓷 - 枚 - 煎 - 劣 - 饺 - 妄 - 卓 - 蔽 - 蒸 - 垂 - 嘲 - 慨 - 谊 - 蹭 - 逮 - 锐 - 钉 - 舟 - 沃 - 凝 - 翔 - 颈 - 靖 - 灌 - 膊 - 崖 - 娟 - 胳 - 铭 - 灿 - 亭 - 粒 - 卸 - 咕 - 坎 - 攀 - 婿 - 奢 - 茂 - 趴 - 耿 - 捏 - 怖 - 浴 - 婉 - 煌 - 霖 - 揍 - 昂 - 驰 - 壶 - 械 - 卦 - 粥 - 尹 - 瘾 - 雇 - 翰 - 肆 - 寇 - 曦 - 厢 - 杠 - 屠 - 芒 - 谣 - 沫 - 掘 - 酬 - 讼 - 乾 - 玫 - 瑰 - 逊 - 惦 - 儒 - 肾 - 粹 - 愚 - 渔 - 暑 - 伐 - 潇 - 喘 - 敦 - 翁 - 斥 - 帖 - 纱 - 梳 - 缴 - 茅 - 谭 - 氧 - 遣 - 履 - 刹 - 枕 - 婢 - 徽 - 轿 - 寓 - 咽 - 叉 - 嗓 - 捣 - 裹 - 览 - 拯 - 疚 - 蜀 - 丛 - 框 - 斑 - 宵 - 郝 - 蛙 - 熙 - 祁 - 哑 - 葱 - 唇 - 韦 - 媛 - 魄 - 锤 - 绵 - 炫 - 吨 - 稻 - 碌 - 刊 - 漆 - 搏 - 讶 - 痒 - 枫 - 妒 - 冥 - 郊 - 爵 - 逝 - 栽 - 叠 - 蚁 - 裕 - 帕 - 剥 - 谐 - 巫 - 颇 - 娥 - 廊 - 蕾 - 丘 - 丞 - 葡 - 坠 - 鸦 - 糗 - 虐 - 唬 - 屎 - 顽 - 巷 - 硅 - 罕 - 殖 - 嘿 - 韵 - 歧 - 垮 - 淮 - 馈 - 昊 - 宰 - 钦 - 霜 - 兑 - 萄 - 塘 - 胀 - 樱 - 枯 - 咳 - 窑 - 募 - 缸 - 昧 - 仑 - 恕 - 氓 - 叮 - 吼 - 坟 - 轴 - 贞 - 赎 - 帆 - 嫩 - 蚂 - 僵 - 颖 - 噜 - 咒 - 琐 - 勃 - 芯 - 绸 - 哼 - 仨 - 挪 - 狡 - 禅 - 粘 - 雯 - 扒 - 恳 - 蔬 - 匈 - 钓 - 桐 - 菇 
- 哒 - 稚 - 膏 - 纲 - 狄 - 硕 - 廉 - 衙 - 艘 - 廖 - 腊 - 蟹 - 邱 - 缉 - 曝 - 桩 - 啤 - 嫉 - 棚 - 矮 - 汰 - 衍 - 拽 - 削 - 彤 - 斜 - 揉 - 樊 - 馨 - 钩 - 浦 - 肢 - 敷 - 喻 - 鞭 - 瞪 - 耕 - 掐 - 屡 - 榴 - 勋 - 泊 - 竭 - 鹤 - 溢 - 淳 - 倩 - 驳 - 抠 - 捅 - 筒 - 窄 - 鄙 - 嗦 - 袍 - 劈 - 炖 - 裸 - 贬 - 敞 - 嘎 - 淹 - 耶 - 秩 - 舱 - 厦 - 叙 - 孽 - 筷 - 浇 - 饥 - 噩 - 蚊 - 兮 - 皱 - 侃 - 辟 - 弊 - 袜 - 吾 - 俘 - 芸 - 夷 - 芦 - 囚 - 倡 - 琦 - 哨 - 巢 - 烛 - 帐 - 燥 - 讽 - 俞 - 馅 - 柿 - 墅 - 妍 - 瘤 - 沦 - 衬 - 瑜 - 蒜 - 蛛 - 窟 - 勿 - 沛 - 磁 - 狭 - 栈 - 懵 - 酿 - 戈 - 邵 - 龚 - 衫 - 勺 - 哗 - 叽 - 畜 - 爪 - 惫 - 颁 - 浸 - 摧 - 勘 - 惕 - 蔓 - 馒 - 挠 - 陀 - 豁 - 帘 - 淀 - 藩 - 蜡 - 凳 - 蘑 - 琼 - 棺 - 蝴 - 骆 - 掰 - 枣 - 遂 - 飙 - 咧 - 掀 - 梨 - 杏 - 嗑 - 棠 - 绽 - 捆 - 舆 - 肇 - 葩 - 呦 - 膝 - 鹊 - 揣 - 瓣 - 靓 - 卵 - 鲍 - 炭 - 戳 - 颤 - 禄 - 菩 - 崛 - 驸 - 佣 - 眨 - 聂 - 乙 - 嘻 - 拧 - 喵 - 佟 - 靳 - 阎 - 拢 - 厘 - 凰 - 疤 - 螺 - 淇 - 涩 - 拎 - 嗨 - 魁 - 薯 - 歼 - 沪 - 筛 - 谍 - 揪 - 刁 - 秃 - 谜 - 撇 - 肪 - 绊 - 逞 - 滥 - 寝 - 麟 - 奕 - 侮 - 喉 - 柄 - 荆 - 撼 - 窦 - 姗 - 乞 - 艇 - 竖 - 剖 - 嗽 - 捂 - 腕 - 鸽 - 刃 - 弓 - 辙 - 粤 - 泣 - 梗 - 茄 - 茜 - 驼 - 冈 - 倔 - 啃 - 蹄 - 唧 - 祈 - 腺 - 焰 - 睿 - 崽 - A - 苛 - 窍 - 凿 - 倭 - 骤 - 槛 - 碳 - 诏 - 芽 - 浆 - 隶 - 搂 - 睦 - 彬 - 岔 - 诀 - 嚼 - 掺 - 殷 - 吁 - 啰 - 侈 - 亩 - 纤 - 倦 - 揽 - 媚 - 潭 - 莽 - 赃 - 睹 - 脊 - 逍 - 淼 - 沸 - 峡 - 仆 - 眷 - 屯 - 璐 - 雁 - 澄 - 渗 - 咔 - 啸 - 怂 - 娄 - 惶 - 恍 - 锡 - 秉 - 猾 - 挟 - 舔 - 弦 - 阱 - 俭 - 嚣 - 搓 - 懈 - 诡 - 隙 - 苟 - 倘 - 瘫 - 扁 - 鑫 - 撩 - 蓬 - 铲 - 峥 - 巅 - 葫 - 膳 - 狙 - 晏 - 祠 - 峻 - 尉 - 毯 - 沧 - 熏 - 咯 - 株 - 沐 - 奎 - 锣 - 霄 - 彦 - 叭 - 臻 - 昔 - 灶 - 傍 - 腥 - 屑 - 禾 - 彰 - 冉 - 矫 - 滞 - 瘩 - 匀 - 椎 - 槐 - 岚 - 跷 - 剔 - 倪 - 盏 - 泌 - 灸 - 隧 - 函 - 壤 - 剃 - 蹊 - 葵 - 拌 - 琅 - 炳 - 跋 - 瑾 - 哩 - 蔷 - 鳌 - 莺 - 诵 - 疙 - 吱 - 蓓 - 绎 - 匿 - 铮 - 怼 - 踹 - 嗅 - 焚 - 躯 - 蝇 - 橘 - 祟 - 辖 - 砂 - 韧 - 粪 - 诬 - 擒 - 黏 - 衔 - 溺 - 蜘 - 篷 - 贿 - 闫 - 焕 - 邢 - 兹 - 窖 - 旬 - 铸 - 咚 - 惭 - 佬 - 裴 - 裳 - 犀 - 弘 - 莓 - 钏 - 鄂 - 陋 - 伽 - 鞠 - 氪 - 垒 - 窜 - 橙 - 讳 - 甥 - 淫 - 拱 - 袱 - 坨 - 暧 - 渺 - 蕉 - 晗 - 茬 - 盔 - 妓 - 蚕 - 僻 - 朽 - 呛 - 挚 - 擎 - 绅 - 喇 - 鳄 - 巩 - 蜗 - 遛 - 俯 - 汹 - 猩 - 奠 - 钙 - 悍 - 躬 - 菱 - 翘 - 琉 - 虏 - 凄 - 稼 - 炕 - 皂 - 漱 - 斋 - 撂 - 敛 - 阮 - 芭 - 阀 - 缚 - 懦 - 亨 - 螃 - 侥 - 膨 - 筝 - 惟 - 黛 - 眯 - 茨 - 怠 - 辐 - 捎 - 殴 - 桓 - 瞄 - 冀 - 雍 - 霾 - 酵 - 檬 - 哺 - 裔 - 兢 - 麒 - 烹 - 绒 - 丐 - 娅 - 钞 - 垄 - 笛 - 赣 - 蕊 - 暮 - 噪 - 沮 - 肋 - 庇 - 橡 - 摁 - 痘 - 棘 - 拂 - 绷 - 刨 - 晾 - 蹬 - 鸥 - 璇 - 掠 - 瘟 - 俐 - 糙 - 骏 - 牡 - 撵 - 嘘 - 沥 - 庶 - 赁 - 喧 - 涡 - 瞳 - 迭 - 肘 - 颂 - 珑 - 觅 - 埔 - G - 跤 - 朔 - 詹 - 梭 - 暇 - 惺 - 甸 - 怯 - 聋 - 赦 - 屉 - 闸 - 坝 - 吟 - 凸 - 拴 - 堤 - 矣 - 斧 - 呸 - 啼 - 韬 - 钧 - 坞 - 纺 - 氢 - 嵩 - 镯 - 髓 - 檐 - 涕 - 剁 - 稽 - 烨 - 钮 - 闽 - 仕 - 驯 - 吭 - 漓 - 眸 - 鞅 - 枢 - 煞 - 昕 - 畔 - 疹 - 矶 - 呱 - 熄 - 吏 - 泻 - 拙 - 蛤 - 禽 - 甫 - 厮 - 乍 - 蝉 - 撬 - 嘀 - 衅 - 鲨 - 萱 - 霹 - 旷 - 辫 - 坷 - 眶 - 蟆 - 呜 - 猬 - 嬷 - 萎 - 靶 - 雳 - 煲 - 溯 - 蚀 - 狈 - 滤 - 恙 - 瑛 - 栓 - 嫣 - 碟 - 祷 - 驿 - 犊 - 灼 - 哆 - 宛 - 榨 - 寥 - 翟 - 栗 - 滔 - 馋 - 杖 - 茉 - 饲 - 庐 - 隋 - 旱 - 崎 - 颅 - 焉 - 墩 - 篱 - 晟 - 扳 - 咎 - 竿 - 僚 - 溶 - 俏 - 霆 - 堕 - 冕 - 叩 - 绰 - 洽 - 襄 - 蛊 - 缅 - 侨 - 伶 - 蕴 - 酥 - 坂 - 拇 - 庚 - 卒 - 诛 - 禧 - 瓢 - 锯 - 扉 - 饷 - 诅 - 烘 - 浏 - 痰 - 榆 - 窥 - 鲸 - 捋 - 戎 - 笋 - 璋 - 诫 - 珈 - 癫 - 囤 - 厥 - 癖 - 翩 - 芹 - 匣 - 噬 - 栖 - 蝎 - 锄 - 玺 - 疮 - 缕 - 猥 - 槿 - 蔑 - 汝 - 珂 - 撮 - 坪 - 蒲 - 倚 - 嗷 - 撰 - 荧 - 芙 - 豚 - 筱 - 敖 - 孵 - 猝 - D - 弈 - 徊 - 辗 - 赘 - 徘 - 烙 - 娲 - 嚎 - 迢 - 绥 - 羁 - 屌 - 铅 - 澎 - S - 嬛 - 晦 - 煽 - 逾 - 饵 - 虞 - 筐 - 哧 - 抒 - 醇 - 祀 - 瑕 - 岐 - 潼 - 惚 - C - 苑 - 靡 - 菠 - 赡 - 惰 - 梓 - 铛 - 澈 - 莞 - 呕 - 驭 - 邝 - 砰 - 轼 - 窒 - 慷 - 绞 - 絮 - 虔 - 惮 - 柬 - 嗡 - 拣 - 羲 - 蹋 - 隘 - 帜 - 卤 - 雌 - 唾 - 邹 - 俑 - 碾 - 婪 - 咏 - 粟 - 崭 - 钝 - 彝 - 陡 - 谛 - 秤 - 磅 - 淌 - 炊 - 鲤 - 羹 - 殉 - 曰 - 萤 - 阐 - 鬟 - 拭 - T - 沁 - 滇 - 梧 - 烁 - 瞻 - 淤 - 凹 - 撸 - 棕 - 腌 - 缪 - 祺 - 痊 - 忑 - 柠 - 矜 - 忐 - 讹 - 瀚 - 尧 - 昼 - 芊 - 憨 - 鳞 - 匮 - 鸳 - 鸯 - 湃 - 屿 - 馍 - 沽 - 栾 - 蝠 - 窘 - 绛 - 巍 - 悯 - 焊 - 谴 - 浊 - 娴 - 畴 - 湛 - 螂 - 韭 - 哮 - 拷 - 攥 - 凛 - 颓 - 恺 - 蝙 - 襟 - 粑 - 洼 - 笃 - 渝 - 骁 - 殃 - 酌 - 乒 - 臊 - 疵 - 诧 - 谬 - 锈 - 袄 - 膛 - 瘸 - 嫖 - 梢 - 沼 - 棱 - 嚓 - 耸 - 喳 - 舵 - 
橱 - 涮 - 檀 - 瞩 - 腑 - 岑 - 痪 - 墟 - 蔚 - 捍 - 徙 - 棣 - 猖 - 掷 - 恬 - 嫦 - 噔 - 饪 - 掂 - 恤 - 叱 - 芷 - 弩 - 楷 - 镶 - 茧 - 诠 - 咙 - 匡 - 擂 - 亵 - 杞 - 乓 - 渤 - 藉 - 憔 - 渭 - 禹 - 睐 - 趾 - 抉 - 悴 - 忒 - 茸 - 纬 - 懊 - 浚 - 溅 - 遏 - 琛 - 靴 - 戮 - 翎 - 谕 - 濒 - 锵 - 嬉 - 籽 - 殆 - 叼 - 苔 - 灏 - 嗖 - 俪 - 亢 - 冶 - 嗜 - 磋 - 汀 - 讪 - 萃 - 菁 - 镑 - 紊 - 脯 - 缆 - 哉 - 赂 - 婊 - B - 蕃 - 迄 - 蜓 - 舜 - 嚏 - 昱 - 黔 - 犟 - 汐 - 昵 - 嗣 - 唆 - 蛾 - 黯 - 绯 - 瀑 - 憬 - 狩 - 掖 - 崴 - 褪 - 髦 - 酝 - 弧 - 咄 - 吝 - 馄 - 娩 - 窿 - 蜻 - 袒 - 玮 - 阙 - 篡 - 邯 - 朦 - 邑 - 喃 - 粽 - 捶 - 嫔 - 钗 - 穗 - 骼 - 胭 - 寐 - 噎 - M - 碱 - 荤 - 笙 - 矢 - 芥 - 廓 - 扼 - 厄 - 毋 - 糯 - 惋 - 纶 - 碜 - 胧 - 懿 - 偃 - 沏 - 痹 - 慑 - 鹦 - 娠 - 铐 - 绢 - 傀 - 孜 - 饨 - 儡 - 孰 - 焱 - 峭 - 伎 - 幌 - 椰 - 譬 - 藕 - 坍 - 铝 - 鞍 - 蘸 - 貂 - 猿 - 炙 - 琊 - 峙 - 硝 - 幂 - 钰 - 眩 - 亥 - 簇 - 鹉 - 睫 - 斟 - 簧 - 颐 - 薰 - 癞 - 祛 - 燎 - 缎 - 簸 - 咣 - 绚 - 簿 - 邋 - 嵌 - 肮 - 稷 - 辍 - 闵 - 枸 - 撅 - 曙 - 苇 - K - 悼 - 汶 - 匕 - 皖 - 腮 - 琶 - 汲 - 鼹 - 礁 - 颊 - 怔 - 汕 - 喀 - 砌 - 釜 - 畸 - 鹃 - 峨 - 奄 - 骡 - 斐 - 芈 - 莘 - 蟑 - 荔 - 缇 - 犒 - 宓 - 汾 - 沌 - 宦 - 憧 - 咤 - 吆 - 攘 - 漩 - 梵 - 阂 - 吒 - 芜 - 缔 - 秧 - 翊 - 晌 - 剐 - 蜕 - 芋 - 彷 - 牟 - 诲 - 臀 - 徨 - Q - 杵 - 荫 - 榄 - 蹿 - 豌 - 迂 - 琵 - 拗 - 帷 - 楞 - 嘶 - 橄 - 胺 - 圭 - 砚 - 藻 - 凋 - 啄 - 褒 - 嗝 - 殡 - 嫡 - 恃 - 濡 - 缜 - 孺 - 泸 - 妊 - 衩 - 驹 - 榻 - 腆 - 鹂 - 箍 - 璧 - 熔 - 悚 - 遢 - 弛 - 诋 - 羚 - 鹭 - 嘚 - 骸 - 瘪 - 铠 - 瞿 - 屹 - 邸 - 痨 - 辘 - 浒 - 忏 - 钊 - 潦 - 怅 - 肴 - 蚯 - 胚 - 茵 - 蚓 - 戬 - 瘀 - 翡 - 恪 - 卉 - 蝌 - 雏 - 祯 - 谏 - 蚪 - 钵 - 馊 - 嗒 - 犁 - 寅 - V - 锥 - 娼 - 晖 - 啬 - 纣 - 淆 - 丙 - 夯 - 竣 - 褚 - 褥 - 轧 - 氨 - 褂 - 钳 - 轲 - 竺 - 疡 - 淞 - 胤 - 摹 - 鳅 - 珀 - 偕 - 匾 - 觑 - 扈 - 傣 - 绫 - 枷 - 阑 - 柚 - 烊 - 怦 - 腼 - 珺 - 缀 - 裘 - 碉 - 峪 - 俸 - 羯 - 姊 - 疟 - 砺 - 盎 - 嘣 - 釉 - 溥 - 熠 - 垢 - 摞 - 哽 - 槟 - 囧 - 胰 - 遁 - 痞 - 熹 - 忡 - 稠 - 顷 - 瑚 - 卯 - 渎 - 炅 - 褶 - 烽 - 瞑 - 嘈 - 硫 - 壹 - 悖 - 酪 - 跺 - 阜 - 帛 - 漪 - 蝗 - 迦 - 蟒 - 咀 - 谤 - 睬 - 辕 - 绮 - 搀 - 裆 - 鳖 - 囡 - 羔 - 痣 - 滕 - 佘 - 樟 - 韶 - 霓 - 劾 - 赈 - 唏 - 闰 - 脐 - 沓 - 瓮 - 篓 - 笠 - 暄 - 涅 - 诽 - 洱 - 栅 - 蚱 - 囔 - 攸 - 酣 - 阪 - 榕 - 骇 - 婧 - 陨 - 憎 - 沂 - 磷 - 壕 - 醺 - 惬 - 璀 - 璨 - 喋 - P - 炽 - 瘁 - 羿 - 褐 - 簪 - 冽 - 驮 - 芮 - 辄 - 咆 - 渍 - 觐 - 炷 - 蛰 - 驷 - 帚 - 蜷 - O - X - 邂 - 逅 - 缭 - 秽 - 琰 - 龌 - 龊 - 俨 - 涟 - 噼 - 掇 - 哔 - 炬 - 佯 - 粱 - 霁 - 鱿 - 夭 - 擀 - 陇 - 瞥 - 壑 - 盹 - 馁 - 蚌 - 焖 - 蛟 - 囱 - 蚝 - 抿 - 脓 - 蒿 - 飓 - 渲 - 宸 - 酗 - 荻 - 缥 - 弑 - 偎 - 宕 - 耘 - 瞌 - 瘴 - 溉 - 涝 - 咿 - 垛 - 垦 - 缈 - 苞 - 惆 - 汛 - 鹑 - 町 - 抡 - 慵 - 浣 - 耙 - 砥 - 噱 - 孬 - 札 - 弼 - 酋 - 镳 - 萦 - 泾 - 挞 - 钾 - 讷 - 圃 - 舶 - 穹 - 戾 - 汴 - 锂 - 昀 - 镀 - 眺 - 捺 - 猕 - 阚 - 骋 - 悸 - 蜚 - 咩 - 讥 - 篆 - 鸠 - 哐 - 锚 - 幢 - 翱 - 螳 - 徇 - 踞 - 蔗 - 蔼 - 漉 - 衲 - N - 漳 - 枭 - 漾 - 歆 - 烬 - 曳 - 岌 - 孚 - 戛 - 呲 - 箫 - 娓 - 桨 - 涓 - 獭 - 芃 - 摒 - 戍 - 踝 - 轱 - 沱 - 锢 - 堰 - 抨 - 昙 - 鹌 - 蔻 - 迸 - 泯 - 龈 - 痔 - 骛 - 淄 - 泵 - 烯 - 蔫 - F - 胥 - 忱 - 纫 - 搪 - 茎 - 暨 - 泞 - 踵 - 璞 - 佗 - 荃 - 鬓 - 蚣 - 罔 - 臆 - 贻 - 橇 - 麓 - 槌 - 琥 - I - 纥 - 薅 - 樵 - 苓 - 熨 - 钨 - 骞 - 诣 - 涤 - 踊 - 醛 - 碴 - 蹴 - 缤 - 赊 - 岖 - 戊 - 禺 - 坯 - 戟 - 楂 - 隅 - 酶 - 邃 - 蛀 - 皎 - 炯 - 垣 - 锹 - 镰 - 夙 - 甬 - 叵 - 茁 - 珞 - 妲 - 涸 - 兀 - 嘤 - 谙 - 噗 - 榔 - 稣 - 剽 - 奚 - 啕 - 袅 - 讧 - 钠 - 怄 - 晤 - 肛 - 氰 - 迥 - 唰 - 诩 - 籁 - 砒 - 谩 - 诟 - 斓 - 泷 - 幡 - 爻 - 痫 - 眈 - 漕 - 惘 - 挎 - 噶 - 喱 - 氯 - U - 跆 - 嗤 - 锏 - 睽 - 缮 - 蟋 - 蠕 - 扪 - 狞 - 飒 - 吮 - 弋 - 奘 - 蟠 - 梆 - 拈 - 帧 - 蟀 - 胯 - 掳 - 蝈 - 帼 - 瞰 - 嵇 - 阉 - 篝 - 笆 - 亘 - L - 喔 - 愕 - 谚 - 轶 - 岱 - 丕 - 婕 - 羌 - 毡 - 呻 - 鼾 - 蜥 - 偌 - 庵 - 敝 - 蛐 - 麝 - 鞘 - 拮 - 涣 - 葆 - 雹 - 踌 - 蜈 - 馥 - 跻 - 狰 - 桀 - 毗 - 皿 - 缨 - 磐 - 啾 - 牒 - 缰 - 躇 - 踮 - 糠 - 嗲 - 刽 - 咫 - 殇 - 瀛 - 胱 - 炀 - 虱 - 砾 - 獒 - 涎 - 袤 - 鄱 - 瓯 - 锭 - 塾 - 蹉 - 珏 - 豺 - 锌 - 蜿 - 牦 - 瓒 - 莆 - 蜴 - 氮 - 跎 - 咛 - 骜 - 郸 - 搐 - 堑 - 涞 - 寰 - 跛 - 鸵 - 毂 - 妩 - 铤 - 薏 - 烩 - 遐 - 煦 - 仃 - 髅 - 酮 - 榷 - 腋 - 珩 - 臃 - 愫 - 蜒 - 荼 - 侬 - 淬 - 婵 - 偻 - 焯 - 骊 - 恻 - 濮 - 泱 - 庖 - 惴 - 鲫 - 硌 - 肓 - 芪 - 礴 - 磺 - 腱 - 冢 - 谪 - 骷 - 哏 - 腩 - 蓦 - 焙 - 桢 - 阖 - 睾 - 疱 - 郴 - 铿 - 铡 - 祉 - 跄 - 桦 - 椭 - 拄 - 皙 - 膈 - 裱 - 髋 - 伢 - 罹 - 鳍 - 赝 - 嬴 - 痤 - 藿 - 镐 - 铎 - 瘠 - 簌 - 杳 - 铢 
- 阡 - 忤 - 舀 - 悻 - 媲 - 茗 - 湍 - 舫 - 瘙 - 瞟 - 擞 - 荀 - 刍 - J - 潍 - 莴 - 斛 - 郦 - 栩 - 绾 - 蕙 - 黜 - 湄 - 藓 - 躏 - 锱 - 捻 - 佼 - 砝 - E - 罡 - 忻 - 鹜 - 滟 - 傥 - 蛳 - W - 铀 - 魇 - 觎 - 蹂 - 佞 - 诃 - 灞 - 镣 - 痱 - 侏 - 峦 - 榛 - 饽 - 龋 - 嗔 - 芍 - 椿 - 璎 - 渥 - 蟾 - 骰 - 吠 - 挛 - 倜 - 鳝 - 糜 - 噢 - 黝 - 藐 - 绡 - 掣 - 鳗 - 璜 - 犷 - 痉 - 膺 - 罄 - 阄 - 纨 - 纭 - 彗 - 嵘 - 埠 - 潢 - 桔 - 耷 - 逵 - 诓 - 怵 - 蚤 - 苯 - 邈 - 谑 - 颌 - 珐 - 踱 - 髻 - 倏 - 啷 - 篑 - 冗 - 蹶 - 荥 - 涧 - 镂 - 踉 - 呷 - 衢 - 荟 - 箴 - 桧 - 恿 - 坳 - 瑙 - 珅 - 莅 - 膘 - 宥 - 氟 - 秆 - 诙 - 蹑 - 茴 - 翳 - 渚 - H - 唁 - 诿 - 窈 - 窕 - 膻 - 荨 - 蛔 - 筵 - 钛 - 獾 - 琏 - 箩 - 栀 - 隼 - 煸 - 罂 - 蛎 - 咂 - 谗 - 颦 - 佝 - 苣 - 搡 - 仄 - 垠 - 濂 - 泗 - 亟 - 蔺 - 蛆 - 霏 - 榈 - 裟 - 瑁 - 酚 - 蝼 - 怆 - 犄 - 沣 - 揖 - 斡 - 刎 - 鲟 - 峒 - 瞭 - 晁 - 袈 - 蓟 - 镁 - 骥 - 掸 - 玳 - 娑 - 馀 - 跚 - 槃 - 缄 - 猢 - 粕 - 隍 - 佃 - 獗 - 唢 - 菏 - 酰 - 腚 - 笈 - 哙 - 孢 - 飕 - 嘹 - 茱 - 蹒 - 殓 - 柩 - 谀 - 姣 - 戌 - 柑 - 粼 - 淅 - 啧 - 盅 - 鼬 - 啜 - 绉 - 咻 - 锲 - 铆 - Y - 螨 - 茯 - 憩 - 臼 - 谄 - 讴 - 濠 - 雎 - 噻 - 淦 - 懋 - 尕 - 氦 - 褛 - 颉 - 喆 - 铬 - 褴 - 燮 - 銮 - 侗 - 蹙 - 煜 - 邺 - 锃 - 麋 - 矗 - 娆 - 匐 - 噌 - 潸 - 碘 - 浔 - 檄 - 皈 - 铂 - 遨 - 炜 - 曜 - 饴 - 舷 - 胫 - 叟 - 祎 - 沅 - 潺 - 楣 - 埂 - 瞠 - 幔 - 稞 - 抻 - 匝 - 幄 - 殒 - 瑭 - 袂 - 囫 - 瓴 - 攫 - 鲈 - 箔 - 哝 - 馗 - 蜍 - 痧 - 脘 - 姘 - 苒 - 缢 - 觞 - 蛹 - 饬 - 胄 - 筏 - 鸾 - 儆 - 痿 - 矬 - 酊 - 纾 - 铖 - 荏 - 掬 - 膑 - 贮 - 觊 - 囵 - 泓 - 搔 - 汞 - 蚩 - 婀 - 谧 - 恣 - 霎 - 饕 - 赅 - 鲶 - 梏 - 獠 - 俶 - 龛 - 桅 - 鹄 - 旌 - 鲲 - 姒 - 蠡 - 繇 - 祜 - 诨 - 汩 - 觥 - 孀 - R - 谥 - 蕨 - 祐 - 榭 - 皑 - 纂 - 獐 - 覃 - 痂 - 孑 - 砧 - 圩 - 桎 - 啵 - 葚 - 嗫 - 浃 - 荠 - 阈 - 遴 - 枇 - 狒 - 秸 - 筠 - 硒 - 卞 - 玷 - 杈 - 狲 - 忿 - 俎 - 拚 - 颍 - 睢 - 颧 - 滦 - 霭 - 雉 - 毽 - 蓑 - 歙 - 鳃 - 鹬 - 墉 - 楔 - 舐 - 绔 - 弭 - 馏 - 挝 - 奂 - 嘭 - 忪 - 箕 - 诌 - 谒 - 颚 - 滂 - 醍 - 洵 - 鹫 - 虢 - 苋 - 玥 - 臾 - 蹩 - Z - 杷 - 痍 - 酉 - 疸 - 鄢 - 垩 - 烷 - 湮 - 钎 - 樽 - 旮 - 葭 - 邬 - 缱 - 糍 - 亳 - 咦 - 苷 - 伉 - 隽 - 伫 - 聒 - 匍 - 飚 - 桠 - 睑 - 脍 - 焘 - 谶 - 赳 - 萸 - 讣 - 疽 - 臧 - 巽 - 毓 - 鸢 - 纰 - 啐 - 噙 - 舛 - 敕 - 醐 - 痢 - 嚯 - 婺 - 勖 - 岷 - 溧 - 骅 - 犸 - 麾 - 嗟 - 诘 - 懑 - 貔 - 貅 - 啉 - 崂 - 鸩 - 镭 - 绻 - 逑 - 煨 - 褓 - 姝 - 藜 - 溟 - 儋 - 谡 - 欸 - 郢 - 荚 - 疝 - 遽 - 陂 - 饯 - 孪 - 巳 - 荞 - 泔 - 岿 - 谆 - 镍 - 洙 - 佻 - 盂 - 睨 - 铄 - 餮 - 酯 - 癣 - 浜 - 酩 - 焗 - 挲 - 鬃 - 鲠 - 仞 - 诰 - 谔 - 胛 - 萼 - 涿 - 莠 - 珲 - 旯 - 蜢 - 黍 - 肽 - 涪 - 髡 - 氙 - 陉 - 鬶 - 侩 - 糅 - 氤 - 芾 - 砷 - 鳕 - 钣 - 锒 - 闱 - 铵 - 镊 - 玑 - 砀 - 癜 - 颔 - 楹 - 螈 - 醚 - 琮 - 铩 - 笄 - 瓤 - 裨 - 潋 - 悌 - 聿 - 祢 - 郜 - 汨 - 棂 - 氲 - 嶙 - 聩 - 菅 - 腧 - 妯 - 龇 - 谲 - 耄 - 耋 - 囿 - 黢 - 揄 - 鲇 - 仝 - 個 - 忖 - 峋 - 揶 - 迩 - 诳 - 踽 - 骐 - 趸 - 颞 - 撺 - 辇 - 猷 - 铉 - 羸 - 徜 - 徉 - 襁 - 镌 - 孱 - 钒 - 铣 - 呤 - 遑 - 俾 - 皋 - 笕 - 笺 - 趔 - 趄 - 辋 - 鄞 - 殚 - 岫 - 跬 - 嘌 - 苻 - 绶 - 郅 - 瑄 - 萋 - 蘼 - 湎 - 砣 - 钜 - 捭 - 喹 - 恹 - 娌 - 螯 - 锰 - 祚 - 阆 - 矾 - 厩 - 龅 - 炝 - 黠 - 妁 - 濑 - 鞑 - 柒 - 滁 - 淖 - 鸬 - 鬣 - 晔 - 恸 - 赓 - 侉 - 溏 - 還 - 珮 - 鸨 - 嚅 - 笤 - 靥 - 啮 - 滓 - 俚 - 唳 - 苜 - 蓿 - 鹚 - 耦 - 莜 - 麸 - 粳 - 綦 - 盱 - 噤 - 遒 - 玟 - 魍 - 魉 - 旖 - 栉 - 锷 - 醴 - 泮 - 恁 - 甾 - 琬 - 丶 - 擤 - 桉 - 踟 - 誊 - 谟 - 澧 - 玖 - 畿 - 顼 - 兖 - 贰 - 茏 - 愎 - 豇 - 旎 - 蹰 - 蜃 - 屐 - 芡 - 鎏 - 癸 - 卅 - 枥 - 陟 - 琨 - 粝 - 掮 - 妪 - 姹 - 鏖 - 捯 - 钴 - 竽 - 恽 - 佰 - 胗 - 崧 - 磴 - 绺 - 鳏 - 槁 - 啖 - 矍 - 徕 - 忾 - 烃 - 喏 - 囹 - 圄 - 砭 - 邕 - 犍 - 鸮 - 剜 - 琚 - 瘢 - 魑 - 眦 - 锉 - 柘 - 痦 - 苕 - 牯 - 湟 - 厝 - 濛 - 赭 - 馐 - 蜇 - 嶂 - 贲 - 靼 - 臬 - 陲 - 潞 - 芩 - 腓 - 锨 - 寮 - 於 - 洇 - 愠 - 疖 - 鹧 - 鸪 - 茕 - 戕 - 壬 - 庾 - 莒 - 鹈 - 鹕 - 蠹 - 勐 - 疥 - 辎 - 耒 - 嗬 - 沔 - 睥 - 邙 - 篾 - 揩 - 肱 - 胍 - 磬 - 菟 - 豢 - 垓 - 唑 - 剌 - 阗 - 汜 - 佤 - 璟 - 麽 - 鬻 - 怏 - 蕤 - 茭 - 睚 - 淙 - 牍 - 榫 - 濯 - 稹 - 媾 - 悱 - 骶 - 蛭 - 鞣 - 椁 - 槊 - 擢 - 滢 - 佚 - 菡 - 沭 - 扦 - 镆 - 闾 - 缛 - 窠 - 疣 - 骠 - 俅 - 喙 - 蹼 - 硼 - 黩 - 腴 - 醮 - 邛 - 漯 - 豉 - 昶 - 刿 - 凇 - 鲅 - 舸 - 邳 - 俟 - 铰 - 翌 - 鳟 - 葳 - 寤 - 碣 - 秭 - 揠 - 熵 - 燧 - 靛 - 嵊 - 窨 - 鹗 - 芎 - 颢 - 佶 - 骢 - 圜 - 岘 - 燊 - 壅 - 畲 - 萘 - 煊 - 粲 - 倌 - 嗳 - 橹 - 椽 - 夔 - 鲑 - 赧 - 殄 - 沆 - 瀣 - 廪 - 舢 - 狍 - 挈 - 鹳 - 蚜 - 彧 - 羟 - 盥 - 镛 - 痈 - 蜊 - 皲 - 篦 - 喑 - 鲢 - 邡 - 蕲 - 僳 - 秣 - 蛉 - 讫 - 祗 - 鹩 - 撷 - 狎 - 郓 - 镕 - 榉 - 鲷 - 
娣 - 淝 - 桷 - 镉 - 郫 - 髌 - 醪 - 僭 - 伧 - 嵬 - 苁 - 鹘 - 徭 - 歃 - 阕 - 鸱 - 貉 - 闳 - 坻 - 缙 - 媪 - 莨 - 菪 - 绦 - 恫 - 崆 - 喟 - 葺 - 逶 - 迤 - 骈 - 馔 - 苎 - 溘 - 垭 - 樯 - 诤 - 魃 - 搽 - 绀 - 蚴 - 澶 - 蒺 - 罘 - 眙 - 怍 - 來 - 荪 - 贶 - 亓 - 唻 - 畈 - 谌 - 芨 - 鲀 - 窸 - 窣 - 荜 - 楫 - 衮 - 趵 - 勰 - 髯 - 椴 - 缶 - 荸 - 秫 - 菖 - 甙 - 翦 - 椟 - 峤 - 掼 - 謇 - 洄 - 鄯 - 妗 - 浐 - 颀 - 箸 - 畦 - 痼 - 橛 - 鲛 - 蝾 - 愍 - 蒹 - 嘁 - 韪 - 劭 - 垅 - 暹 - 僮 - 稗 - 筚 - 煅 - 嬅 - 蜉 - 骝 - 碚 - 冼 - 吶 - 洹 - 郧 - 炴 - 绌 - 泠 - 呓 - 簋 - 溴 - 篁 - 仟 - 锟 - 羧 - 鹞 - 嘬 - 渌 - 笸 - 霰 - 稔 - 钡 - 齁 - 胪 - 衾 - 尻 - 洮 - 蘅 - 鲳 - 殂 - 腭 - 涔 - 蝣 - 孳 - 澍 - 钼 - 蒡 - 枳 - 渑 - 茼 - 馕 - 埙 - 珣 - 菘 - 邰 - 樾 - 铱 - 鳐 - 唔 - 篙 - 箜 - 篌 - 耆 - 啫 - 枞 - 杼 - 嵋 - 舂 - 娉 - 铨 - 崃 - 笳 - 邗 - 逡 - 僖 - 泫 - 疴 - 捱 - 醅 - 堇 - 肄 - 荇 - 虬 - 谯 - 酞 - 桡 - 艮 - 膦 - 艹 - 啻 - 滏 - 茆 - 圪 - 磡 - 麼 - 闼 - 郯 - 仡 - 氐 - 贽 - 俦 - 蓖 - 跹 - 帏 - 氅 - 趿 - 暝 - 缟 - 棹 - 滹 - 毖 - 蝰 - 虻 - 缫 - 诮 - 闩 - ○ - 潴 - 樨 - 瘘 - 襦 - 妤 - 郾 - 衿 - 鸷 - 旰 - 镢 - 傈 - 倨 - 笏 - 蒽 - 醌 - 驽 - 浠 - 涠 - 蓁 - 柞 - 钺 - 蜮 - 诂 - 徵 - 锆 - 椋 - 叻 - 廿 - 藁 - 乜 - 摈 - 這 - 茌 - 辊 - 岬 - 郇 - 杓 - 轳 - 酎 - 蟥 - 時 - 镒 - 蚬 - 澹 - 赟 - 後 - 怿 - 箐 - 囍 - 揆 - 蹁 - 鬄 - 苫 - 蕖 - 卺 - 辔 - 偈 - 俳 - 吲 - 哚 - 瘆 - 蕞 - 笞 - 氩 - 嫘 - 墁 - 帔 - 褡 - 裢 - 乩 - 褊 - 颏 - 喒 - 錾 - 皌 - 戗 - 唪 - 啭 - 伥 - 茔 - 斫 - 齉 - 仵 - 赉 - 吡 - 啶 - 蹇 - 螅 - 汊 - 湓 - 凫 - 珙 - 腈 - 洌 - Ω - 憷 - 跶 - 抔 - 濞 - 崤 - 殍 - 浥 - 铳 - 酽 - 馑 - 髂 - 隗 - 韫 - 晷 - 诒 - 埭 - 鹪 - 蕻 - 昃 - 瓠 - 萁 - 癔 - 怩 - 疳 - 跖 - 疔 - 簟 - 汆 - 疠 - 卟 - 墒 - 穰 - 铍 - 珥 - 钤 - 隻 - 樓 - 墎 - 鳜 - 沒 - 岀 - 杪 - 単 - 鲧 - 呋 - 彀 - 祇 - 豸 - 胴 - 唷 - 丨 - 燚 - 麴 - 觇 - 缑 - 橐 - 蚡 - 朊 - 俣 - 垡 - <sos/eos> init: null input_size: null ctc_conf: ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true use_preprocessor_valid: false token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_utt_prefix: null rir_apply_prob: 1.0 noise_scp: null noise_utt_prefix: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish macaron_style: true use_cnn_module: true cnn_module_kernel: 15 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 required: - output_dir - token_list version: 0.10.2a1 distributed: true ``` </details> ## LM config <details><summary>expand</summary> ``` NONE ``` </details>
huggingtweets/cliobscure-mmmalign-weftofsoul
huggingtweets
2021-10-26T23:26:21Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1447655419430809609/PIJr1Fky_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1452658892132032513/m4mpoMLK_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1450907553769082881/spVYXld-_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">𝒟𝓇. 𝒞𝓁𝒾𝑜🌵🔪🌷🐍💕 & Marras 🖤 & 𝕄𝖆𝖑</div> <div style="text-align: center; font-size: 14px;">@cliobscure-mmmalign-weftofsoul</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 𝒟𝓇. 𝒞𝓁𝒾𝑜🌵🔪🌷🐍💕 & Marras 🖤 & 𝕄𝖆𝖑. | Data | 𝒟𝓇. 𝒞𝓁𝒾𝑜🌵🔪🌷🐍💕 | Marras 🖤 | 𝕄𝖆𝖑 | | --- | --- | --- | --- | | Tweets downloaded | 3051 | 3230 | 3247 | | Retweets | 2281 | 782 | 123 | | Short tweets | 133 | 284 | 893 | | Tweets kept | 637 | 2164 | 2231 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3turzf62/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cliobscure-mmmalign-weftofsoul's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1rw7flqz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1rw7flqz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cliobscure-mmmalign-weftofsoul') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingartists/arctic-monkeys
huggingartists
2021-10-26T17:28:49Z
5
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/arctic-monkeys", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/arctic-monkeys tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/12c27f4fbb06ef32dc1c1e432098f447.570x570x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Arctic Monkeys</div> <a href="https://genius.com/artists/arctic-monkeys"> <div style="text-align: center; font-size: 14px;">@arctic-monkeys</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Arctic Monkeys. The dataset is available [here](https://huggingface.co/datasets/huggingartists/arctic-monkeys) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/arctic-monkeys") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1x4ii6qz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Arctic Monkeys's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/bmnqvn53) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/bmnqvn53/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/arctic-monkeys') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/arctic-monkeys") model = AutoModelWithLMHead.from_pretrained("huggingartists/arctic-monkeys") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
chaitanya97/german_trained
chaitanya97
2021-10-26T12:37:19Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: german_trained results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german_trained This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9367 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 12.0352 | 5.0 | 5 | 12.6165 | 1.0 | | 4.0249 | 10.0 | 10 | 6.6453 | 1.0 | | 2.6661 | 15.0 | 15 | 5.7873 | 1.0 | | 2.4123 | 20.0 | 20 | 4.3250 | 1.0 | | 1.9481 | 25.0 | 25 | 3.9899 | 1.0 | | 1.7533 | 30.0 | 30 | 3.9367 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
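The hyperparameter table above maps directly onto a 🤗 `TrainingArguments` object. Below is a minimal sketch of that configuration; the output directory is only a placeholder, and anything not listed in the table (logging, evaluation strategy, etc.) is left at its default.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; "german_trained" is a placeholder output directory.
training_args = TrainingArguments(
    output_dir="german_trained",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # gives the effective train batch size of 32
    warmup_steps=5,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
)
```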
BSC-LT/RoBERTalex
BSC-LT
2021-10-26T10:10:38Z
12
5
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "legal", "spanish", "es", "dataset:legal_ES", "dataset:temu_legal", "arxiv:2110.12201", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - es license: apache-2.0 tags: - legal - spanish datasets: - legal_ES - temu_legal metrics: - ppl widget: - text: "La ley fue <mask> finalmente." - text: "El Tribunal <mask> desestimó el recurso de amparo." - text: "Hay base legal dentro del marco <mask> actual." --- **⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/RoBERTalex # Spanish Legal-domain RoBERTa There are few models trained for the Spanish language, and some of them have been trained on low-resource, unclean corpora. The ones derived from the Spanish National Plan for Language Technologies are proficient at solving several tasks and have been trained on large-scale, clean corpora. However, Spanish legal-domain language can be thought of as an independent language in its own right. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora. ## Citing ``` @misc{gutierrezfandino2021legal, title={Spanish Legalese Language Model and Corpora}, author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Aitor Gonzalez-Agirre and Marta Villegas}, year={2021}, eprint={2110.12201}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` For more information visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-legal-es) ## Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
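The widget sentences above can be reproduced locally with the standard fill-mask pipeline; a minimal sketch, assuming the `transformers` library and this repository id (the model has also been republished under `PlanTL-GOB-ES/RoBERTalex`):

```python
from transformers import pipeline

# Fill-mask sketch for the Spanish legal-domain RoBERTa; the sentence mirrors the widget example above.
unmasker = pipeline("fill-mask", model="BSC-LT/RoBERTalex")
for prediction in unmasker("La ley fue <mask> finalmente."):
    print(prediction["token_str"], round(prediction["score"], 3))
```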
owen99630/catexp2
owen99630
2021-10-26T04:58:10Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
{0: 'Anorexia', 1: 'Anxiety', 2: 'Bullying', 3: 'Care', 4: 'Creativity', 5: 'Culture', 6: 'Depression', 7: 'Friends', 8: 'Getting help', 9: 'Happiness', 10: 'Helping others', 11: 'Helping yourself', 12: 'Hope', 13: 'Learning', 14: 'Life Issues', 15: 'Mental Health', 16: 'Mental Health Matters', 17: 'Mental health awareness', 18: 'PTSD', 19: 'Positivity', 20: 'Resilience', 21: 'Self-care', 22: 'Sharing', 23: 'Support', 24: 'University'}
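The dictionary above appears to be the index-to-label mapping for this text-classification model. A hedged sketch of how it could be applied to the model's raw logits (assuming the repository's config does not already expose these label names):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Truncated copy of the mapping listed above; the full dictionary has 25 entries.
id2label = {0: "Anorexia", 1: "Anxiety", 2: "Bullying", 3: "Care", 4: "Creativity"}

tokenizer = AutoTokenizer.from_pretrained("owen99630/catexp2")
model = AutoModelForSequenceClassification.from_pretrained("owen99630/catexp2")

inputs = tokenizer("Talking to friends really helped me feel less alone.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(id2label.get(int(logits.argmax(dim=-1)), "other"))
```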
kornesh/xlm-roberta-base
kornesh
2021-10-26T01:25:22Z
146
1
transformers
[ "transformers", "tf", "xlm-roberta", "feature-extraction", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
Converted for TensorFlow ``` !pip install transformers sentencepiece from transformers import TFAutoModel, AutoTokenizer name = "xlm-roberta-base" model = TFAutoModel.from_pretrained(name, from_pt=True) tokenizer = AutoTokenizer.from_pretrained(name) model.save_pretrained("local-xlm-roberta-base") tokenizer.save_pretrained("local-xlm-roberta-base") ```
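Once converted, the saved copy loads like any other local TensorFlow checkpoint; a small usage sketch with the directory name used above:

```python
from transformers import TFAutoModel, AutoTokenizer

# Reload the locally saved TensorFlow weights and run a single forward pass.
tokenizer = AutoTokenizer.from_pretrained("local-xlm-roberta-base")
model = TFAutoModel.from_pretrained("local-xlm-roberta-base")

batch = tokenizer(["Hello world"], return_tensors="tf")
outputs = model(batch)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```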
espnet/siddhana_slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
espnet
2021-10-25T23:23:39Z
1
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:slurp", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - slurp license: cc-by-4.0 --- ## ESPnet2 SLU pretrained model ### `siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best` ♻️ Imported from https://zenodo.org/record/5590384 This model was trained by siddhana using slurp/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
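Until the official demo is filled in, here is a hedged sketch of ESPnet2 inference; the `Speech2Text.from_pretrained` helper and the decoding pattern are assumptions carried over from other ESPnet2 ASR/SLU models on the Hub, not something documented in this card:

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Assumed usage pattern; the model tag below is simply this repository id.
speech2text = Speech2Text.from_pretrained(
    "espnet/siddhana_slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best"
)

speech, rate = soundfile.read("speech.wav")  # 16 kHz mono audio is assumed
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```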
danielvasic/en_acnl_electra_pipeline
danielvasic
2021-10-25T18:45:15Z
4
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_acnl_electra_pipeline results: - task: name: POS type: token-classification metrics: - name: POS Accuracy type: accuracy value: 0.9769257272 - task: name: SENTER type: token-classification metrics: - name: SENTER Precision type: precision value: 0.9508884151 - name: SENTER Recall type: recall value: 0.94805839 - name: SENTER F Score type: f_score value: 0.9494712937 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Dependencies Accuracy type: accuracy value: 0.9577103137 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Dependencies Accuracy type: accuracy value: 0.9577103137 --- | Feature | Description | | --- | --- | | **Name** | `en_acnl_electra_pipeline` | | **Version** | `0.0.1` | | **spaCy** | `>=3.1.3,<3.2.0` | | **Default Pipeline** | `transformer`, `tagger`, `parser` | | **Components** | `transformer`, `tagger`, `parser` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | GPL | | **Author** | Daniel Vasić() | ### Label Scheme <details> <summary>View label scheme (87 labels for 2 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `VERB`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `dative`, `dep`, `det`, `dobj`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nummod`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `TAG_ACC` | 97.69 | | `DEP_UAS` | 95.77 | | `DEP_LAS` | 94.52 | | `SENTS_P` | 95.09 | | `SENTS_R` | 94.81 | | `SENTS_F` | 94.95 | | `TRANSFORMER_LOSS` | 6123357.72 | | `TAGGER_LOSS` | 338995.26 | | `PARSER_LOSS` | 4101825.66 |
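The pipeline is distributed as a packaged spaCy model, so after installing the wheel from this repository (the exact wheel filename is not given here, hence the placeholder) it loads by its package name:

```python
# Assumes the packaged wheel has been installed first, e.g.:
#   pip install https://huggingface.co/danielvasic/en_acnl_electra_pipeline/resolve/main/<wheel file>
import spacy

nlp = spacy.load("en_acnl_electra_pipeline")
doc = nlp("The agreement was signed after lengthy negotiations.")
for token in doc:
    print(token.text, token.tag_, token.dep_, token.head.text)
```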
chaitanya97/custom_german
chaitanya97
2021-10-25T16:27:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: custom_german results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # custom_german This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.6832 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 8.7718 | 5.0 | 5 | 8.5148 | 1.0 | | 3.7125 | 10.0 | 10 | 5.4304 | 1.0 | | 2.7679 | 15.0 | 15 | 5.0388 | 1.0 | | 2.0516 | 20.0 | 20 | 4.4628 | 1.0 | | 1.6702 | 25.0 | 25 | 4.5341 | 1.0 | | 1.515 | 30.0 | 30 | 4.6832 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
kwang2049/TSDAE-cqadupstack
kwang2049
2021-10-25T16:18:29Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.06979", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# kwang2049/TSDAE-cqadupstack This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model: 1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased); 2. Unsupervised training on cqadupstack with the TSDAE objective. The pooling method is CLS-pooling. ## Usage To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via: ```bash pip install sentence-transformers ``` And then load the model and use it to encode sentences: ```python from sentence_transformers import SentenceTransformer, models dataset = 'cqadupstack' model_name_or_path = f'kwang2049/TSDAE-{dataset}' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.']) ``` ## Evaluation To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb): ```bash pip install useb # Or git clone and pip install . python -m useb.downloading all # Download both training and evaluation data ``` And then do the evaluation: ```python from sentence_transformers import SentenceTransformer, models import torch from useb import run_on dataset = 'cqadupstack' model_name_or_path = f'kwang2049/TSDAE-{dataset}' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling @torch.no_grad() def semb_fn(sentences) -> torch.Tensor: return torch.Tensor(model.encode(sentences, show_progress_bar=False)) result = run_on( dataset, semb_fn=semb_fn, eval_type='test', data_eval_path='data-eval' ) ``` ## Training Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers. ## Cite & Authors If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979): ```bibtex @article{wang-2021-TSDAE, title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning", author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.06979", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.06979", } ```
kwang2049/TSDAE-askubuntu
kwang2049
2021-10-25T16:17:47Z
6
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.06979", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# kwang2049/TSDAE-askubuntu This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model: 1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased); 2. Unsupervised training on AskUbuntu with the TSDAE objective. The pooling method is CLS-pooling. ## Usage To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via: ```bash pip install sentence-transformers ``` And then load the model and use it to encode sentences: ```python from sentence_transformers import SentenceTransformer, models dataset = 'askubuntu' model_name_or_path = f'kwang2049/TSDAE-{dataset}' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.']) ``` ## Evaluation To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb): ```bash pip install useb # Or git clone and pip install . python -m useb.downloading all # Download both training and evaluation data ``` And then do the evaluation: ```python from sentence_transformers import SentenceTransformer, models import torch from useb import run_on dataset = 'askubuntu' model_name_or_path = f'kwang2049/TSDAE-{dataset}' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling @torch.no_grad() def semb_fn(sentences) -> torch.Tensor: return torch.Tensor(model.encode(sentences, show_progress_bar=False)) result = run_on( dataset, semb_fn=semb_fn, eval_type='test', data_eval_path='data-eval' ) ``` ## Training Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers. ## Cite & Authors If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979): ```bibtex @article{wang-2021-TSDAE, title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning", author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.06979", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.06979", } ```
kwang2049/TSDAE-scidocs2nli_stsb
kwang2049
2021-10-25T16:15:23Z
4
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.06979", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# kwang2049/TSDAE-scidocs2nli_stsb This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain scidocs. Training procedure of this model: 1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased); 2. Unsupervised training on scidocs with the TSDAE objective; 3. Supervised training on the NLI data with cross-entropy loss; 4. Supervised training on the STSb data with MSE loss. The pooling method is CLS-pooling. ## Usage To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via: ```bash pip install sentence-transformers ``` And then load the model and use it to encode sentences: ```python from sentence_transformers import SentenceTransformer, models dataset = 'scidocs' model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.']) ``` ## Evaluation To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb): ```bash pip install useb # Or git clone and pip install . python -m useb.downloading all # Download both training and evaluation data ``` And then do the evaluation: ```python from sentence_transformers import SentenceTransformer, models import torch from useb import run_on dataset = 'scidocs' model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling @torch.no_grad() def semb_fn(sentences) -> torch.Tensor: return torch.Tensor(model.encode(sentences, show_progress_bar=False)) result = run_on( dataset, semb_fn=semb_fn, eval_type='test', data_eval_path='data-eval' ) ``` ## Training Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers. ## Cite & Authors If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979): ```bibtex @article{wang-2021-TSDAE, title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning", author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.06979", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.06979", } ```
kwang2049/TSDAE-cqadupstack2nli_stsb
kwang2049
2021-10-25T16:14:19Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.06979", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# kwang2049/TSDAE-cqadupstack2nli_stsb This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model: 1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased); 2. Unsupervised training on cqadupstack with the TSDAE objective; 3. Supervised training on the NLI data with cross-entropy loss; 4. Supervised training on the STSb data with MSE loss. The pooling method is CLS-pooling. ## Usage To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via: ```bash pip install sentence-transformers ``` And then load the model and use it to encode sentences: ```python from sentence_transformers import SentenceTransformer, models dataset = 'cqadupstack' model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.']) ``` ## Evaluation To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb): ```bash pip install useb # Or git clone and pip install . python -m useb.downloading all # Download both training and evaluation data ``` And then do the evaluation: ```python from sentence_transformers import SentenceTransformer, models import torch from useb import run_on dataset = 'cqadupstack' model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling @torch.no_grad() def semb_fn(sentences) -> torch.Tensor: return torch.Tensor(model.encode(sentences, show_progress_bar=False)) result = run_on( dataset, semb_fn=semb_fn, eval_type='test', data_eval_path='data-eval' ) ``` ## Training Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers. ## Cite & Authors If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979): ```bibtex @article{wang-2021-TSDAE, title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning", author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.06979", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.06979", } ```
kwang2049/TSDAE-askubuntu2nli_stsb
kwang2049
2021-10-25T16:13:34Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.06979", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# kwang2049/TSDAE-askubuntu2nli_stsb This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model: 1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased); 2. Unsupervised training on AskUbuntu with the TSDAE objective; 3. Supervised training on the NLI data with cross-entropy loss; 4. Supervised training on the STSb data with MSE loss. The pooling method is CLS-pooling. ## Usage To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via: ```bash pip install sentence-transformers ``` And then load the model and use it to encode sentences: ```python from sentence_transformers import SentenceTransformer, models dataset = 'askubuntu' model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.']) ``` ## Evaluation To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb): ```bash pip install useb # Or git clone and pip install . python -m useb.downloading all # Download both training and evaluation data ``` And then do the evaluation: ```python from sentence_transformers import SentenceTransformer, models import torch from useb import run_on dataset = 'askubuntu' model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb' model = SentenceTransformer(model_name_or_path) model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling @torch.no_grad() def semb_fn(sentences) -> torch.Tensor: return torch.Tensor(model.encode(sentences, show_progress_bar=False)) result = run_on( dataset, semb_fn=semb_fn, eval_type='test', data_eval_path='data-eval' ) ``` ## Training Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers. ## Cite & Authors If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979): ```bibtex @article{wang-2021-TSDAE, title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning", author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.06979", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.06979", } ```
napoler/bart-chinese-6-960-words-pkuseg
napoler
2021-10-25T15:05:51Z
6
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Usage This model was trained on top of uer/bart-chinese-6-960-cluecorpussmall. The amount of training data is not very large, but the default tokenization was changed. It uses pkuseg for word segmentation and disables BertTokenizer's do_basic_tokenize; without disabling do_basic_tokenize, ordinary words are split character by character, while disabling it lets you plug in your own segmentation scheme. pip install git+https://github.com/napoler/tkit-AutoTokenizerPosition ```python import pkuseg from transformers import BertTokenizer from tkitAutoTokenizerPosition.AutoPos import AutoPos seg = pkuseg.pkuseg(model_name='medicine') # the matching domain-specific pkuseg model is downloaded automatically tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-2_H-128", do_basic_tokenize=False) ATP = AutoPos(seg, tokenizer) # clean up issues in the text ATP.getTokenize(text) ``` The segmentation output looks like this: ``` ['他', '##们', '的', '伤', '##害', ',', '以', '##及', '陷', '##阱', '能', '##力', '的', '组', '##合', ',', '猎', '##人', '对', '##于', '任', '##何', '团', '##队', '都', '是', '最', '##好', '的', '拉', '##怪', '##者', '.'], 'cut': ['他们', '的', '伤害', ',', '以及', '陷阱', '能力', '的', '组合', ',', '猎人', '对于', '任何', '团队', '都', '是', '最好', '的', '拉怪者', '.'] ``` https://www.kaggle.com/terrychanorg/napolerbartchinese6960wordspkuseg https://www.kaggle.com/terrychanorg/buliddataforbert-7803feff2 https://www.kaggle.com/terrychanorg/bart-notebook8wewew6eeb0f8af https://www.kaggle.com/terrychanorg/fork-of-bart-notebook8wewew6eeb0f8af/data?scriptVersionId=77962540
lvwerra/pegasus-samsum
lvwerra
2021-10-25T14:57:33Z
6
3
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 0.4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6092 | 0.03 | 500 | 1.6488 | | 1.9715 | 0.07 | 1000 | 1.5444 | | 1.8325 | 0.1 | 1500 | 1.5093 | | 1.876 | 0.14 | 2000 | 1.4890 | | 1.3081 | 0.17 | 2500 | 1.4737 | | 1.7769 | 0.2 | 3000 | 1.4496 | | 1.6276 | 0.24 | 3500 | 1.4430 | | 1.6624 | 0.27 | 4000 | 1.4288 | | 1.9202 | 0.31 | 4500 | 1.4235 | | 1.4404 | 0.34 | 5000 | 1.4189 | | 1.8016 | 0.37 | 5500 | 1.4177 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
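The auto-generated card above carries no inference example. A minimal hedged sketch using the standard summarization pipeline; the dialogue is an illustrative SAMSum-style input, not taken from the dataset:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lvwerra/pegasus-samsum")

# SAMSum-style dialogue: speaker-prefixed turns separated by newlines.
dialogue = (
    "Hannah: Hey, do you have Betty's number?\n"
    "Amanda: Lemme check... Sorry, I can't find it.\n"
    "Hannah: Ok, I'll ask Larry then."
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```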
patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps
patrickvonplaten
2021-10-25T13:15:45Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
https://wandb.ai/patrickvonplaten/test/reports/Wav2Vec2-Base--VmlldzoxMTUyODQ0?accessToken=rg6e8u9yizx964k8q47zctq1m4afpvtn1i3qi9exgdmzip6xwkfzvagfajpzj55n
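The card itself is only a link to the pretraining logs. For completeness, a hedged sketch of pulling hidden states from the checkpoint with the transformers API; it assumes the repo's PyTorch weights load into `Wav2Vec2Model` and borrows the feature-extractor config from `facebook/wav2vec2-base`, since it is unclear whether one is bundled here:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

checkpoint = "patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")  # assumed stand-in config
model = Wav2Vec2Model.from_pretrained(checkpoint)

# One second of silent 16 kHz audio stands in for a real waveform.
waveform = [0.0] * 16000
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (1, frames, 768) for a base-sized encoder
```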
teacookies/autonlp-more_fine_tune_24465520-26265899
teacookies
2021-10-25T09:51:18Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 124.66009281731397 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265899 - CO2 Emissions (in grams): 124.66009281731397 ## Validation Metrics - Loss: 0.7011443972587585 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265899 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265899", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265899", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265904
teacookies
2021-10-25T09:36:11Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 108.63800043275934 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265904 - CO2 Emissions (in grams): 108.63800043275934 ## Validation Metrics - Loss: 0.5807144045829773 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265904 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265904", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265904", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265907
teacookies
2021-10-25T09:35:36Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 103.5636883689371 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265907 - CO2 Emissions (in grams): 103.5636883689371 ## Validation Metrics - Loss: 0.6072460412979126 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265907 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265911
teacookies
2021-10-25T09:35:36Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 97.58591836686978 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265911 - CO2 Emissions (in grams): 97.58591836686978 ## Validation Metrics - Loss: 6.2383246421813965 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265911 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265905
teacookies
2021-10-25T09:32:48Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 103.35758036182682 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265905 - CO2 Emissions (in grams): 103.35758036182682 ## Validation Metrics - Loss: 0.5223112106323242 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265905 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265905", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265905", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265898
teacookies
2021-10-25T09:22:22Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 82.78379967029494 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265898 - CO2 Emissions (in grams): 82.78379967029494 ## Validation Metrics - Loss: 0.5732079148292542 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265898 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265902
teacookies
2021-10-25T09:22:00Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 83.78453848505326 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265902 - CO2 Emissions (in grams): 83.78453848505326 ## Validation Metrics - Loss: 0.5470030903816223 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265902 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265897
teacookies
2021-10-25T09:21:10Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 81.7509252560808 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265897 - CO2 Emissions (in grams): 81.7509252560808 ## Validation Metrics - Loss: 0.5754176378250122 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265897 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265897", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265897", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265901
teacookies
2021-10-25T09:21:03Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 80.04360178242067 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265901 - CO2 Emissions (in grams): 80.04360178242067 ## Validation Metrics - Loss: 0.5551259517669678 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265901 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265901", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265901", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
teacookies/autonlp-more_fine_tune_24465520-26265909
teacookies
2021-10-25T09:20:12Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-more_fine_tune_24465520", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - autonlp - question-answering language: unk widget: - text: "Who loves AutoNLP?" context: "Everyone loves AutoNLP" datasets: - teacookies/autonlp-data-more_fine_tune_24465520 co2_eq_emissions: 80.25874179679201 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 26265909 - CO2 Emissions (in grams): 80.25874179679201 ## Validation Metrics - Loss: 5.950643062591553 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265909 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265909", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265909", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
tftransformers/t5-small
tftransformers
2021-10-25T08:13:06Z
4
0
transformers
[ "transformers", "summarization", "translation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1910.10683", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr - ro - de datasets: - c4 tags: - summarization - translation license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?search=t5) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) ## Usage ``` from tf_transformers.models import T5Model # Any T5 model (t5-small, t5-base, t5-large etc) model_name = 't5-small' model = T5Model.from_pretrained(model_name) ```
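The usage block above only instantiates the tf_transformers wrapper. If you just want to sanity-check generation, a hedged sketch with the standard Hugging Face transformers API against the upstream `t5-small` checkpoint (not the tf_transformers class) looks like this:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 is text-to-text: the task is selected by the prefix of the input string.
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```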
yseop/distilbert-base-financial-relation-extraction
yseop
2021-10-25T07:33:13Z
24
5
transformers
[ "transformers", "pytorch", "feature-extraction", "text-classification", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
inference: true
pipeline_tag: text-classification
tags:
- feature-extraction
- text-classification
library: pytorch
---

<div style="clear: both;">
  <div style="float: left; margin-right: 1em;">
    <h1><strong>FReE (Financial Relation Extraction)</strong></h1>
  </div>
  <div>
    <h2><img src="https://pbs.twimg.com/profile_images/1333760924914753538/fQL4zLUw_400x400.png" alt="" width="25" height="25"></h2>
  </div>
</div>

We present FReE, a [DistilBERT](https://huggingface.co/distilbert-base-uncased) base model fine-tuned on a custom financial dataset for financial relation type detection and classification.

## Process

Detecting the presence of a relationship between financial terms and qualifying the relationship if one is present. Example use cases:

* An A-B trust is a joint trust created by a married couple for the purpose of minimizing estate taxes. (<em>Relationship **exists**, type: **is**</em>)
* There are no withdrawal penalties. (<em>Relationship **does not exist**, type: **x**</em>)

## Data

The data consists of financial definitions collected from different sources (Wikimedia, IFRS, Investopedia) for financial indicators. Each definition has been split up into sentences, and term relationships in a sentence have been extracted using the [Stanford Open Information Extraction](https://nlp.stanford.edu/software/openie.html) module. A typical row in the dataset consists of a definition sentence and its corresponding relationship label. The labels were restricted to the 5 most widely identified relationships, namely: **x** (no relationship), **has**, **is in**, **is** and **are**.

## Model

The model used is a standard DistilBERT-base transformer model from the Hugging Face library. See [HUGGING FACE DistilBERT base model](https://huggingface.co/distilbert-base-uncased) for more details about the model. In addition, the model has been pretrained to initialize weights that would otherwise be unused if loaded from an existing pretrained stock model.

## Metrics

The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.

| relation | precision | recall | f1-score | support |
| ------------- |:-------------:|:-------------:|:-------------:| -----:|
| has | 0.7416 | 0.9674 | 0.8396 | 2362 |
| is in | 0.7813 | 0.7925 | 0.7869 | 2362 |
| is | 0.8650 | 0.6863 | 0.7653 | 2362 |
| are | 0.8365 | 0.8493 | 0.8429 | 2362 |
| x | 0.9515 | 0.8302 | 0.8867 | 2362 |
| | | | | |
| macro avg | 0.8352 | 0.8251 | 0.8243 | 11810 |
| weighted avg | 0.8352 | 0.8251 | 0.8243 | 11810 |
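The card does not include a usage snippet. Below is a minimal hedged sketch of running the model through the standard text-classification pipeline; the exact label names returned depend on the uploaded config, so treat them as placeholders:

```python
from transformers import pipeline

# Minimal sketch: classify the relation type expressed (or not) in a sentence.
classifier = pipeline(
    "text-classification",
    model="yseop/distilbert-base-financial-relation-extraction",
)

sentences = [
    "An A-B trust is a joint trust created by a married couple for the purpose of minimizing estate taxes.",
    "There are no withdrawal penalties.",
]
for sentence, prediction in zip(sentences, classifier(sentences)):
    # Label names (e.g. 'is', 'x') come from the model config and may differ.
    print(sentence, "->", prediction["label"], round(prediction["score"], 3))
```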
Bhumika/roberta-base-finetuned-sst2
Bhumika
2021-10-25T06:17:25Z
38
4
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: sst2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.944954128440367
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-finetuned-sst2

This model was trained from scratch on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.3000
- Accuracy: 0.9450

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1106        | 1.0   | 4210  | 0.9255   | 0.3326          |
| 0.1497        | 2.0   | 8420  | 0.9369   | 0.2858          |
| 0.1028        | 3.0   | 12630 | 0.9335   | 0.3128          |
| 0.0872        | 4.0   | 16840 | 0.9450   | 0.3000          |
| 0.0571        | 5.0   | 21050 | 0.9427   | 0.3378          |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
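A minimal hedged usage sketch for the checkpoint via the standard text-classification pipeline; note the returned label names depend on the uploaded config and may simply be LABEL_0/LABEL_1:

```python
from transformers import pipeline

# Minimal sketch: binary SST-2-style sentiment classification with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="Bhumika/roberta-base-finetuned-sst2")

print(classifier("A gorgeous, witty, seductive movie."))
print(classifier("The plot is painfully predictable and the acting is flat."))
# Output labels may appear as LABEL_0 / LABEL_1 unless id2label was customized in the config.
```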
TransQuest/monotransquest-hter-en_any
TransQuest
2021-10-24T18:41:16Z
8
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "Quality Estimation", "monotransquest", "HTER", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en-multilingual tags: - Quality Estimation - monotransquest - HTER license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_any", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
ThatSkyFox/DialoGPT-small-joshua
ThatSkyFox
2021-10-24T17:12:13Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
tags:
- conversational
---

# This is a chatbot trained on the transcript of the game "The World Ends with You"
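The card ships without a usage example. Below is a hedged single-turn sketch using the usual DialoGPT-style generation loop from transformers; the prompt text is just an illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ThatSkyFox/DialoGPT-small-joshua")
model = AutoModelForCausalLM.from_pretrained("ThatSkyFox/DialoGPT-small-joshua")

# Encode one user turn followed by the EOS token, then let the model generate the reply.
user_input = "Hey, what's up?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
with torch.no_grad():
    reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Strip the prompt tokens so only the bot's reply is printed.
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```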
ydshieh/vit-gpt2-coco-en-ckpts
ydshieh
2021-10-24T12:01:42Z
32
11
generic
[ "generic", "pytorch", "jax", "tensorboard", "vision-encoder-decoder", "image-classification", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification library_name: generic --- ## Example The model is by no means a state-of-the-art model, but nevertheless produces reasonable image captioning results. It was mainly fine-tuned as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework. The model can be used as follows: ```python import requests from PIL import Image from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel loc = "ydshieh/vit-gpt2-coco-en" feature_extractor = ViTFeatureExtractor.from_pretrained(loc) tokenizer = AutoTokenizer.from_pretrained(loc) model = FlaxVisionEncoderDecoderModel.from_pretrained(loc) # We will verify our results on an image of cute cats url = "http://images.cocodataset.org/val2017/000000039769.jpg" with Image.open(requests.get(url, stream=True).raw) as img: pixel_values = feature_extractor(images=img, return_tensors="np").pixel_values def generate_step(pixel_values): output_ids = model.generate(pixel_values, max_length=16, num_beams=4).sequences preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds preds = generate_step(pixel_values) print(preds) # should produce # ['a cat laying on top of a couch next to another cat'] ```
Crasher222/kaggle-comp-test
Crasher222
2021-10-24T11:40:04Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:Crasher222/autonlp-data-kaggle-test", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Crasher222/autonlp-data-kaggle-test
co2_eq_emissions: 60.744727079482495
---

# Model Fine-tuned from BERT-base

- Problem type: Multi-class Classification
- Model ID: 25805800

## Validation Metrics

- Loss: 0.4422711133956909
- Accuracy: 0.8615328555811976
- Macro F1: 0.8642434650461513
- Micro F1: 0.8615328555811976
- Weighted F1: 0.8617743626671308
- Macro Precision: 0.8649112225076049
- Micro Precision: 0.8615328555811976
- Weighted Precision: 0.8625407179375096
- Macro Recall: 0.8640777539828228
- Micro Recall: 0.8615328555811976
- Weighted Recall: 0.8615328555811976

## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Crasher222/kaggle-comp-test")
tokenizer = AutoTokenizer.from_pretrained("Crasher222/kaggle-comp-test")

inputs = tokenizer("I am in love with you", return_tensors="pt")
outputs = model(**inputs)
```
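The snippet above stops at the raw forward pass. A small hedged continuation to turn the logits into a predicted class; the label names come from the AutoNLP-generated config, so treat them as placeholders:

```python
import torch

# Continues from the snippet above: `outputs.logits` has shape (batch, num_labels).
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], round(float(probs[0, pred_id]), 3))
```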
tftransformers/gpt2-medium
tftransformers
2021-10-24T08:42:17Z
3
0
transformers
[ "transformers", "exbert", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - exbert license: mit --- # GPT-2 Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python from tf_transformers.models import GPT2Model from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained("gpt2-medium") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] outputs_tf = model(inputs_tf) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. 
## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. ## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1,17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
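The how-to-use block above only runs a single forward pass through the tf_transformers wrapper, while the surrounding text talks about text generation. A hedged sketch of actual sampling, using the standard transformers pipeline on the upstream `gpt2-medium` checkpoint rather than the tf_transformers class:

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2-medium")
set_seed(42)  # generation is stochastic; fix a seed for reproducibility

for output in generator("Hello, I'm a language model,", max_length=30, num_return_sequences=2):
    print(output["generated_text"])
```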
tftransformers/gpt2
tftransformers
2021-10-24T08:41:46Z
1
0
transformers
[ "transformers", "exbert", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - exbert license: mit --- # GPT-2 Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python from tf_transformers.models import GPT2Model from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained("gpt2") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] outputs_tf = model(inputs_tf) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. 
## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. ## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1,17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
tftransformers/albert-xxlarge-v2
tftransformers
2021-10-24T08:39:00Z
3
0
transformers
[ "transformers", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT XXLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2') model = AlbertModel.from_pretrained("albert-xxlarge-v2") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
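As with the GPT-2 cards above, the usage block only shows a forward pass, although the text mentions a masked-language-modeling pipeline. A hedged fill-mask sketch against the upstream `albert-xxlarge-v2` checkpoint via the standard transformers API:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="albert-xxlarge-v2")

# ALBERT's mask token is "[MASK]"; the pipeline returns the top candidate fills.
for prediction in unmasker("Hello I'm a [MASK] model."):
    print(prediction["token_str"], round(prediction["score"], 3))
```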
tftransformers/albert-xlarge-v2
tftransformers
2021-10-24T08:37:58Z
1
0
transformers
[ "transformers", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT XLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2') model = AlbertModel.from_pretrained("albert-xlarge-v2") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
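To see the sentence-pair format from the Preprocessing section produced concretely, the tokenizer can be called on two sentences at once. A small sketch (the SentencePiece tokenizer lowercases and segments the text; the decoded string shows the `[CLS] ... [SEP] ... [SEP]` layout described above):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xlarge-v2")

# Encoding a sentence pair adds [CLS] and [SEP] automatically
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
```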
tftransformers/albert-xlarge-v1
tftransformers
2021-10-24T08:37:26Z
3
0
transformers
[ "transformers", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT XLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. This is the first version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 2048 hidden dimension - 16 attention heads - 58M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1') model = AlbertModel.from_pretrained("albert-xlarge-v1") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
tftransformers/albert-base-v2
tftransformers
2021-10-24T08:36:40Z
3
0
transformers
[ "transformers", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT Base v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') model = AlbertModel.from_pretrained("albert-base-v2") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
tftransformers/albert-base-v1
tftransformers
2021-10-24T08:34:54Z
2
0
transformers
[ "transformers", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - exbert language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT Base v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: In tf_transformers ```python from tf_transformers.models import AlbertModel from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1') model = AlbertModel.from_pretrained("albert-base-v1") text = "Replace me by any text you'd like." inputs_tf = {} inputs = tokenizer(text, return_tensors='tf') inputs_tf["input_ids"] = inputs["input_ids"] inputs_tf["input_type_ids"] = inputs["token_type_ids"] inputs_tf["input_mask"] = inputs["attention_mask"] outputs_tf = model(inputs_tf) ``` This bias will also affect all fine-tuned versions of this model. ## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=albert-base-v1"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
tftransformers/bart-large
tftransformers
2021-10-24T08:24:25Z
2
0
transformers
[ "transformers", "en", "arxiv:1910.13461", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 language: en --- # BART (large-sized model) BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in tf_transformers: ```python from tf_transformers.models import BartModel from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') model = BartModel.from_pretrained('facebook/bart-large') inputs_tf = {} inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") inputs_tf["encoder_input_ids"] = inputs["input_ids"] inputs_tf["encoder_input_mask"] = inputs["attention_mask"] inputs_tf["decoder_input_ids"] = inputs["input_ids"] outputs_tf = model(inputs_tf) ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
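Since the card notes that the raw model can be used for text infilling, here is a minimal sketch of doing so with the standard Hugging Face `transformers` classes (using the upstream `facebook/bart-large` weights; `<mask>` is BART's mask token):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Mask a span and let the model reconstruct the full sentence
batch = tokenizer("UN Chief Says There Is No <mask> in Syria", return_tensors="pt")
generated_ids = model.generate(batch["input_ids"], max_length=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```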
tftransformers/bart-base
tftransformers
2021-10-24T08:22:19Z
2
1
transformers
[ "transformers", "en", "arxiv:1910.13461", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 language: en --- # BART (base-sized model) BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in tf_transformers: ```python from tf_transformers.models import BartModel from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartModel.from_pretrained('facebook/bart-base') inputs_tf = {} inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") inputs_tf["encoder_input_ids"] = inputs["input_ids"] inputs_tf["encoder_input_mask"] = inputs["attention_mask"] inputs_tf["decoder_input_ids"] = inputs["input_ids"] outputs_tf = model(inputs_tf) ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
tftransformers/mt5-small
tftransformers
2021-10-24T08:18:10Z
4
0
transformers
[ "transformers", "multilingual", "dataset:mc4", "arxiv:2010.11934", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: multilingual datasets: - mc4 license: apache-2.0 --- [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available. ## Usage ``` from tf_transformers.models import MT5Model # Any MT5 model (mt5-small, mt5-base etc) model_name = 'mt5-small' model = MT5Model.from_pretrained(model_name) ```
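Because the checkpoint is pre-trained only on mC4 and needs fine-tuning before it is usable downstream, a common starting point is the Hugging Face `transformers` seq2seq classes. A minimal sketch of a single supervised training step, assuming the upstream `google/mt5-small` weights and an illustrative (input, target) text pair:

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# One illustrative (input, target) pair; real fine-tuning loops over a dataset
inputs = tokenizer("I love machine translation.", return_tensors="pt")
labels = tokenizer("J'adore la traduction automatique.", return_tensors="pt").input_ids

outputs = model(**inputs, labels=labels)  # seq2seq cross-entropy loss
outputs.loss.backward()
```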
huggingartists/sqwore
huggingartists
2021-10-24T04:23:45Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/sqwore", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/sqwore tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/3557a234d4c5912569afbea078a23eff.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sqwore</div> <a href="https://genius.com/artists/sqwore"> <div style="text-align: center; font-size: 14px;">@sqwore</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Sqwore. Dataset is available [here](https://huggingface.co/datasets/huggingartists/sqwore). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/sqwore") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3gzd5crq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Sqwore's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/vzeft23g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/vzeft23g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/sqwore') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/sqwore") model = AutoModelWithLMHead.from_pretrained("huggingartists/sqwore") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/praisegodbarbon
huggingtweets
2021-10-24T03:47:17Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/praisegodbarbon/1635047234116/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1381764452098437120/74IgKP07_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Boston Psychology PhD</div> <div style="text-align: center; font-size: 14px;">@praisegodbarbon</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Boston Psychology PhD. | Data | Boston Psychology PhD | | --- | --- | | Tweets downloaded | 3212 | | Retweets | 810 | | Short tweets | 265 | | Tweets kept | 2137 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h4r5tyq8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @praisegodbarbon's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/praisegodbarbon') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ddddd/EDCLasVegas
ddddd
2021-10-24T01:16:07Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://teespring.com/dashboard/listings/113925135/edit
huggingtweets/nikkihaleyfan93
huggingtweets
2021-10-23T22:45:26Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/nikkihaleyfan93/1635029077906/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1329566476987232256/wpiYdhhz_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Richard Smit 🦅 🚁 🚔 💰 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱</div> <div style="text-align: center; font-size: 14px;">@nikkihaleyfan93</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Richard Smit 🦅 🚁 🚔 💰 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱. | Data | Richard Smit 🦅 🚁 🚔 💰 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱 | | --- | --- | | Tweets downloaded | 3248 | | Retweets | 406 | | Short tweets | 255 | | Tweets kept | 2587 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20va5xqa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nikkihaleyfan93's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1v26x5ax) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1v26x5ax/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nikkihaleyfan93') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
espnet/kan-bayashi_libritts_xvector_vits
espnet
2021-10-23T20:52:03Z
3
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:libritts", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - libritts license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/libritts_xvector_vits` ♻️ Imported from https://zenodo.org/record/5521416/ This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
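Until the official demo snippet above is filled in, inference for this kind of model usually goes through ESPnet2's `Text2Speech` wrapper. A rough sketch, assuming the tag resolves via `espnet_model_zoo` and that you already have a speaker x-vector for conditioning (the placeholder dimension below is an assumption and must match the training setup):

```python
import numpy as np
from espnet2.bin.tts_inference import Text2Speech

# Load the x-vector-conditioned VITS model (tag assumed to resolve via espnet_model_zoo)
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_libritts_xvector_vits")

# Placeholder x-vector; in practice, extract it from reference speech of the target speaker
spembs = np.zeros(512, dtype=np.float32)

wav = text2speech("Hello, this is a test sentence.", spembs=spembs)["wav"]
```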
espnet/kan-bayashi_tsukuyomi_full_band_vits_prosody
espnet
2021-10-23T20:50:36Z
2
3
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:tsukuyomi", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - tsukuyomi license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/tsukuyomi_full_band_vits_prosody` ♻️ Imported from https://zenodo.org/record/5521446/ This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest
espnet
2021-10-23T20:50:21Z
0
3
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:tsukuyomi", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - tsukuyomi license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest` ♻️ Imported from https://zenodo.org/record/5521446/ This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jsut_full_band_vits_prosody
espnet
2021-10-23T20:47:17Z
11
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_full_band_vits_prosody` ♻️ Imported from https://zenodo.org/record/5521340/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
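While the demo block above is still a placeholder, ESPnet2 TTS models are typically driven through the `Text2Speech` inference wrapper. A minimal sketch, assuming the model tag resolves through `espnet_model_zoo` and that `soundfile` is installed for writing the waveform:

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Load the full-band VITS model trained on JSUT (Japanese, single speaker)
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_full_band_vits_prosody")

output = text2speech("こんにちは、世界。")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```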
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_p-truncated-66d5fc
espnet
2021-10-23T20:45:49Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5521340/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_vctk_full_band_multi_spk_vits
espnet
2021-10-23T20:44:14Z
0
1
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:vctk", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - vctk license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/vctk_full_band_multi_spk_vits` ♻️ Imported from https://zenodo.org/record/5521431/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_vctk_multi_spk_vits
espnet
2021-10-23T20:42:58Z
2
0
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:vctk", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - vctk license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/vctk_multi_spk_vits` ♻️ Imported from https://zenodo.org/record/5500759/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-f43d8f
espnet
2021-10-23T20:31:48Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave` ♻️ Imported from https://zenodo.org/record/5499066/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave
espnet
2021-10-23T20:29:19Z
2
0
espnet
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: zh datasets: - csmsc license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5499120/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave
espnet
2021-10-23T20:28:30Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "zh", "dataset:csmsc", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: zh datasets: - csmsc license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5443852/ This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_ljspeech_vits
espnet
2021-10-23T20:27:43Z
2,253
218
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - ljspeech license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/ljspeech_vits` ♻️ Imported from https://zenodo.org/record/5443814/ This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
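The demo section of this card is still a placeholder, so here is a minimal inference sketch. It is not taken from the card: it assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed and follows the usual ESPnet2 `Text2Speech` API.

```python
# Minimal sketch (not from the card): typical ESPnet2 TTS inference.
# Assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Download the pretrained model from the Hub by its tag and build the synthesizer.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_vits")

# VITS is end-to-end, so no separate vocoder is needed.
output = text2speech("Hello, this is a test of the LJSpeech VITS model.")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs, "PCM_16")
```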
espnet/kan-bayashi_jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-d57a28
espnet
2021-10-23T20:25:39Z
1
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jvs license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest` ♻️ Imported from https://zenodo.org/record/5432566/ This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-178804
espnet
2021-10-23T20:24:54Z
3
1
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jvs", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jvs license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest` ♻️ Imported from https://zenodo.org/record/5432540/ This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/kan-bayashi_jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with-truncated-ba3566
espnet
2021-10-23T20:20:33Z
0
0
espnet
[ "espnet", "audio", "text-to-speech", "ja", "dataset:jsut", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ja datasets: - jsut license: cc-by-4.0 --- ## ESPnet2 TTS pretrained model ### `kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave` ♻️ Imported from https://zenodo.org/record/5414980/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/Yushi_Ueda_mini_librispeech_diar_train_diar_raw_max_epoch20_valid.acc.best
espnet
2021-10-23T20:10:22Z
2
0
espnet
[ "espnet", "audio", "speaker-diarization", "en", "dataset:mini_librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - espnet - audio - speaker-diarization language: en datasets: - mini_librispeech license: cc-by-4.0 --- ## ESPnet2 DIAR pretrained model ### `Yushi Ueda/mini_librispeech_diar_train_diar_raw_max_epoch20_valid.acc.best` ♻️ Imported from https://zenodo.org/record/5264020/ This model was trained by Yushi Ueda using mini_librispeech/diar1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
huggingtweets/islamocommunism
huggingtweets
2021-10-23T18:38:04Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/islamocommunism/1635014280450/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1448436144388009985/zWh5cSQ3_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">نورهان</div> <div style="text-align: center; font-size: 14px;">@islamocommunism</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from نورهان. | Data | نورهان | | --- | --- | | Tweets downloaded | 3196 | | Retweets | 1205 | | Short tweets | 227 | | Tweets kept | 1764 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2l8ikj22/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @islamocommunism's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kngkxcq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kngkxcq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/islamocommunism') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
2umm3r/distilbert-base-uncased-finetuned-cola
2umm3r
2021-10-23T11:46:51Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5155709926752544 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7816 - Matthews Correlation: 0.5156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5291 | 1.0 | 535 | 0.5027 | 0.4092 | | 0.3492 | 2.0 | 1070 | 0.5136 | 0.4939 | | 0.2416 | 3.0 | 1605 | 0.6390 | 0.5056 | | 0.1794 | 4.0 | 2140 | 0.7816 | 0.5156 | | 0.1302 | 5.0 | 2675 | 0.8836 | 0.5156 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
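The card documents training but not inference. As a minimal usage sketch (not from the card), the checkpoint can be queried through the standard text-classification pipeline; CoLA is a binary acceptability task, and the label names depend on the `id2label` mapping saved with the model, which the card does not document.

```python
# Minimal usage sketch (not from the card): CoLA acceptability classification.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="2umm3r/distilbert-base-uncased-finetuned-cola",
)
# Labels are probably the generic LABEL_0 / LABEL_1; the card gives no mapping.
print(classifier("The book was read by the students."))
```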
stamas01/vgg19_skin_auto_encoder
stamas01
2021-10-23T06:04:31Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
A simple autoencoder built on a VGG19 backbone, trained to reconstruct skin lesion images.
tiennvcs/bert-large-uncased-finetuned-infovqa
tiennvcs
2021-10-23T06:01:27Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-uncased-finetuned-infovqa results: - task: name: Question Answering type: question-answering --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-infovqa This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.3170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.7861 | 0.12 | 1000 | 3.2778 | | 3.2186 | 0.23 | 2000 | 3.0658 | | 2.8504 | 0.35 | 3000 | 3.0456 | | 2.8621 | 0.46 | 4000 | 2.8758 | | 2.7851 | 0.58 | 5000 | 2.8680 | | 2.8016 | 0.69 | 6000 | 2.9244 | | 2.7592 | 0.81 | 7000 | 2.7735 | | 2.5737 | 0.93 | 8000 | 2.7640 | | 2.3493 | 1.04 | 9000 | 2.7257 | | 2.1041 | 1.16 | 10000 | 2.8442 | | 2.1713 | 1.27 | 11000 | 2.7723 | | 2.0594 | 1.39 | 12000 | 2.9982 | | 2.1825 | 1.5 | 13000 | 2.8272 | | 2.2486 | 1.62 | 14000 | 2.8897 | | 2.097 | 1.74 | 15000 | 2.8557 | | 2.1645 | 1.85 | 16000 | 2.6342 | | 2.15 | 1.97 | 17000 | 2.8680 | | 1.5662 | 2.08 | 18000 | 3.2126 | | 1.6168 | 2.2 | 19000 | 3.1646 | | 1.5886 | 2.32 | 20000 | 3.3139 | | 1.6539 | 2.43 | 21000 | 3.2610 | | 1.6486 | 2.55 | 22000 | 3.3144 | | 1.637 | 2.66 | 23000 | 3.0437 | | 1.7186 | 2.78 | 24000 | 2.9936 | | 1.7543 | 2.89 | 25000 | 3.1641 | | 1.5301 | 3.01 | 26000 | 4.0560 | | 1.1436 | 3.13 | 27000 | 4.0116 | | 1.1902 | 3.24 | 28000 | 4.0240 | | 1.2728 | 3.36 | 29000 | 4.3068 | | 1.2586 | 3.47 | 30000 | 3.7894 | | 1.3164 | 3.59 | 31000 | 3.9242 | | 1.3093 | 3.7 | 32000 | 4.0444 | | 1.2812 | 3.82 | 33000 | 4.1779 | | 1.3165 | 3.94 | 34000 | 3.6633 | | 0.8357 | 4.05 | 35000 | 5.8137 | | 0.9583 | 4.17 | 36000 | 5.3305 | | 0.9135 | 4.28 | 37000 | 5.4973 | | 1.0011 | 4.4 | 38000 | 5.0349 | | 0.9553 | 4.51 | 39000 | 5.2086 | | 1.0182 | 4.63 | 40000 | 5.1197 | | 0.9569 | 4.75 | 41000 | 5.4579 | | 0.9437 | 4.86 | 42000 | 5.4467 | | 0.9791 | 4.98 | 43000 | 4.7657 | | 0.648 | 5.09 | 44000 | 6.5780 | | 0.7528 | 5.21 | 45000 | 6.2827 | | 0.7247 | 5.33 | 46000 | 6.8500 | | 0.702 | 5.44 | 47000 | 6.4572 | | 0.6786 | 5.56 | 48000 | 6.5462 | | 0.7272 | 5.67 | 49000 | 6.2406 | | 0.6778 | 5.79 | 50000 | 6.4727 | | 0.6446 | 5.9 | 51000 | 6.3170 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.8.0+cu101 - Datasets 1.11.0 - Tokenizers 0.10.3
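Since the checkpoint carries a standard extractive question-answering head, it can presumably be queried through the question-answering pipeline. The sketch below is not from the card; for InfoVQA-style use, the `context` would normally be OCR text extracted from the infographic.

```python
# Minimal sketch (not from the card): extractive QA over OCR'd document text.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="tiennvcs/bert-large-uncased-finetuned-infovqa",
)
result = qa(
    question="What was the total revenue in 2020?",
    context="Total revenue in 2020 was 4.2 million dollars, up 10% from 2019.",
)
print(result["answer"], result["score"])
```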
espnet/sujay_catslu_map
espnet
2021-10-22T21:01:58Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "zh", "dataset:catslu", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: zh datasets: - catslu license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/sujay_catslu_map` This model was trained by Sujay S Kumar using catslu recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout e31965d55993766461f0964216a0bb9aea3cfb7a pip install -e . cd egs2/catslu/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/sujay_catslu_map ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Oct 3 12:53:16 EDT 2021` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.3a3` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c` - Commit date: `Wed Sep 22 10:02:03 2021 -0400` ## asr_train_asr_smaller_aishell_xlsr_raw_zh_word ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|1577|11441|46.1|30.1|23.7|2.5|56.4|81.3| |inference_asr_model_valid.acc.ave_5best/valid|921|6438|49.4|29.2|21.4|2.7|53.4|79.2| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|1577|45924|74.4|13.0|12.5|3.2|28.8|81.3| |inference_asr_model_valid.acc.ave_5best/valid|921|26110|77.0|11.9|11.1|2.7|25.7|79.2| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_smaller_aishell_xlsr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp_train_asr_smaller_aishell_xlsr/asr_train_asr_smaller_aishell_xlsr_raw_zh_word ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: 5 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/speech_shape - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/text_shape.word valid_shape_file: - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/speech_shape - exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/text_shape.word batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 
1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/valid/wav.scp - speech - sound - - dump/raw/valid/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 2500 token_list: - <blank> - <unk> - 航 - 导 - inform_操作_none - inform_终点名称_none - 去 - none_none_none - 我 - 到 - inform_poi名称_none - unknown - 要 - 市 - side - 一 - 个 - 路 - 区 - 第 - 大 - 县 - 你 - inform_序列号_none - 小 - 城 - 站 - 家 - 南 - 中 - 山 - 州 - 好 - 镇 - 场 - 的 - 院 - 西 - 店 - 东 - 车 - 阳 - 学 - 北 - 园 - dialect - 安 - 新 - 海 - 回 - 公 - 医 - 二 - 不 - 三 - 广 - 天 - 村 - 有 - 闭 - 开 - 酒 - 下 - 江 - 消 - 人 - 帮 - 金 - 是 - 取 - 花 - 近 - 政 - 民 - 口 - 十 - 里 - 河 - 府 - 请 - 关 - 国 - 了 - 华 - 那 - 高 - robot - 出 - 平 - 湖 - 在 - 省 - 定 - 号 - 门 - 想 - 街 - 四 - 道 - 水 - 龙 - 京 - 啊 - 地 - 行 - 么 - 五 - 都 - 桥 - 上 - 给 - 明 - 业 - 哪 - 附 - 八 - 宁 - 心 - 长 - 馆 - 百 - 这 - 汽 - 机 - 工 - 庄 - 方 - 商 - 司 - 石 - 确 - 兴 - 火 - 走 - 乡 - 万 - 通 - 加 - 银 - 青 - 发 - 校 - 速 - 交 - 退 - 德 - 际 - 电 - 楼 - 宾 - 找 - 苑 - 和 - 嗯 - 油 - 林 - 乐 - 景 - 打 - 达 - 来 - 七 - 川 - inform_请求类型_none - 最 - noise - 兰 - 湾 - 台 - 所 - 保 - 什 - 福 - 建 - 说 - 就 - 沙 - 页 - 宝 - 子 - 厂 - 科 - 尔 - 光 - inform_页码_none - 六 - 费 - 环 - 成 - 昌 - 吗 - 汉 - 白 - 黄 - 限 - 局 - 泉 - 怎 - 云 - 武 - 源 - 吃 - 前 - 点 - 收 - 物 - 滨 - 溪 - 马 - 贵 - 务 - 世 - 岛 - 没 - 生 - 常 - 理 - 会 - 们 - 重 - 浦 - 名 - 合 - 运 - 顺 - 美 - 儿 - 头 - 乌 - 设 - 厦 - 化 - 郑 - 时 - inform_poi目标_none - 现 - 农 - 港 - 泰 - 停 - 宜 - 昆 - 九 - 对 - 管 - 看 - 界 - 张 - 庆 - 文 - 博 - 嘉 - 零 - 苏 - 能 - 面 - 客 - 红 - 搜 - 远 - 古 - 津 - 始 - 王 - 呃 - 用 - 瑞 - 后 - 雅 - 带 - 流 - 木 - 之 - 汇 - 夏 - 他 - 还 - 清 - 临 - 服 - 渡 - 日 - 幺 - 济 - 田 - 锦 - 吉 - 呀 - 利 - 神 - 饭 - 香 - 太 - 双 - 永 - 图 - 洲 - 集 - 特 - 吧 - request_位置_none - 技 - 把 - 寺 - 爱 - 丰 - 春 - 盛 - 罗 - 队 - 也 - 亚 - 线 - 玉 - 哦 - 贸 - 果 - 连 - 正 - 结 - 与 - 米 - 鲁 - 警 - 信 - 捷 - 样 - 温 - 岭 - 丽 - 育 - 凤 - 位 - 听 - 动 - 可 - 原 - 年 - 经 - 纪 - 齐 - 索 - inform_对象_none - 义 - 多 - 叫 - 况 - 气 - 老 - 派 - 池 - 曲 - 营 - 返 - 置 - 品 - 程 - 同 - 辉 - 批 - 音 - 康 - 威 - 幼 - 斯 - 库 - 拉 - 星 - 团 - 风 - 岗 - 话 - 放 - 泽 - 晋 - 部 - 知 - 外 - 塔 - 沈 - 奇 - 卫 - 月 - 庭 - 眼 - 总 - 梅 - 房 - 千 - 哈 - 自 - 字 - 呢 - 豪 - 直 - 盘 - 屯 - 超 - 祥 - 佳 - 恒 - 过 - 以 - 两 - 蓝 - 修 - 入 - 松 - 铁 - 职 - 珠 - 凯 - 快 - 丹 - 体 - 书 - 游 - 转 - 莱 - 寨 - 克 - 当 - 李 - 钱 - s - 货 - 惠 - 格 - 岳 - 淮 - 束 - 社 - 莞 - 森 - 堵 - 内 - 蒙 - 分 - 柏 - 富 - 碧 - 凰 - 陵 - 桐 - 边 - 坡 - 胶 - 得 - 力 - 滚 - 喀 - 旗 - 料 - 歌 - 块 - 滩 - 查 - 虹 - 续 - 为 - 驾 - 许 - 峰 - 问 - 真 - 视 - 选 - 接 - 语 - 洪 - 众 - 全 - 徽 - 鄂 - 实 - 未 - 杭 - 尚 - 胜 - 塘 - 产 - 鱼 - 叉 - 岸 - 洛 - 随 - 哎 - 配 - 丁 - 继 - 迪 - 牛 - 坪 - 无 - 深 - 圳 - 韩 - 法 - 灵 - 迁 - 间 - 逼 - 步 - 咸 - 期 - 菜 - 紫 - 邢 - 赣 - 横 - 播 - 鼎 - 进 - 止 - 铜 - 便 - 鸡 - 巴 - 仁 - 财 - 佛 - 桂 - 官 - 英 - 绵 - 奥 - 矿 - 波 - 治 - 元 - 首 - 钟 - 计 - 飞 - 坊 - 阿 - 代 - 周 - 朝 - 固 - 错 - 向 - 潭 - 隆 - 装 - 纳 - 伊 - 将 - 军 - 师 - 途 - 影 - 怀 - 择 - 药 - 术 - 手 - 于 - 离 - 族 - 莲 - 布 - 呼 - 峡 - 迈 - 委 - 叮 - 咚 - 阴 - 宏 - 郡 - 健 - 本 - 洋 - 再 - 支 - 划 - 郊 - 绿 - 妈 - 旅 - 堰 - 肥 - 玛 - 左 - 网 - inform_途经点名称_none - 拜 - 材 - inform_终点修饰_none - 辽 - 煤 - 谢 - 则 - 土 - 草 - 埠 - 伦 - 堂 - 卡 - 肉 - 底 - 灯 - 树 - 寻 - 掉 - 展 - 庙 - 赵 - 余 - 见 - 望 - 故 - 事 - 相 - 杨 - inform_终点目标_none - 馨 - 税 - 属 - 资 - 井 - 艺 - 越 - 微 - 包 - 阜 - 记 - 窗 - 维 - 甲 - 鑫 - 休 - 啥 - 锡 - 渝 - 岩 - 彩 - 少 - 处 - 往 - 从 - 封 - 联 - 觉 - 验 - 容 - 萨 - 普 - 弄 - 干 - 强 - 鲜 - 柳 - 衡 - 规 - request_路况_none - 靖 - 沃 - 板 - 防 - 约 - 球 - 居 - 至 - 坝 - 翠 - 持 - 具 - 烟 - 榆 - 枫 - 照 - 意 - 目 - t - 凌 - 邦 - 报 - 码 - 轻 - 欣 - 复 - 买 - 玻 - 璃 - 住 - 恩 - 女 - 嘴 - 级 - 振 - 邵 - 浴 - 茂 - 黔 - 您 - 比 - 显 - 渭 - 钢 - 妇 - 易 - 党 - 版 - 介 - 姐 - 才 - 览 - k - 崇 - 桃 - 厅 - 虎 - 皮 - 仪 - 赤 - 寓 - 洞 - 绍 - 
饰 - 很 - 病 - 度 - 胡 - 像 - 邮 - 又 - 充 - 贤 - 御 - 然 - 潍 - 基 - 启 - 聊 - 驶 - inform_路线偏好_none - 澄 - 几 - 等 - 塑 - 监 - 办 - 沧 - 亭 - 观 - 螺 - 领 - 秀 - 咋 - 坨 - 奎 - 优 - 半 - 贡 - 唐 - 写 - 今 - 慢 - 傻 - 反 - 次 - 甘 - 肃 - 它 - 泗 - 贺 - 拍 - 咱 - 留 - ktv - 察 - 顶 - 啦 - 别 - 润 - 谷 - 仙 - 慧 - 朱 - 靠 - 座 - 锅 - 麦 - 雁 - 羊 - 共 - 邓 - 荣 - 食 - 陕 - 邑 - 右 - 铺 - 梁 - 宣 - 幸 - 哥 - 士 - 员 - 招 - 番 - 徐 - 检 - 巷 - 私 - 堡 - 跟 - 器 - 峪 - 立 - 氏 - 教 - 圣 - 购 - 印 - 黑 - 完 - 条 - 唉 - 燕 - 屿 - 闸 - 茶 - 任 - 种 - 蛋 - 荆 - 岔 - inform_value_none - 黎 - 奉 - 准 - 熟 - 薛 - 朔 - 范 - 械 - 菲 - 雪 - 腾 - 备 - 琼 - 尹 - 垣 - 吴 - 示 - 嫖 - 宫 - 冲 - 毛 - 绘 - 菏 - 嘞 - 浙 - 遵 - 各 - 饶 - 嗷 - 简 - 施 - 俱 - 岚 - 豆 - 栋 - 险 - 岘 - 滇 - 叶 - 卓 - 荔 - 刘 - 滕 - 系 - 统 - e - 做 - 巡 - 坐 - 研 - 究 - 盐 - 冀 - 象 - 斗 - 娄 - 先 - 陆 - deny_操作_none - 户 - 额 - 价 - 更 - 拆 - 溧 - 量 - 帝 - 断 - 态 - 智 - 蜀 - 庐 - 舟 - 摄 - 泡 - 洗 - 历 - 咖 - 啡 - 湘 - 甸 - 泾 - 卖 - 朗 - 芜 - 棠 - 凉 - 嵩 - 焦 - 让 - 夫 - 吐 - 童 - 薇 - 旺 - 浩 - 息 - 裕 - 禄 - 睡 - 狮 - 质 - 樱 - 递 - 鸣 - 句 - 韶 - 色 - 典 - 厉 - 测 - 应 - 尉 - 汤 - 己 - 宸 - 漳 - 证 - 沟 - 巩 - 扬 - 笨 - 旁 - 湟 - 主 - 浪 - 殡 - request_前方路况_none - 竹 - 列 - 季 - 唱 - 冠 - 泥 - 懂 - 秋 - 君 - 祁 - 声 - 拥 - 曹 - 嘛 - 静 - 嗨 - 起 - 刚 - 墨 - 宿 - 络 - 襄 - 葫 - 芦 - 漫 - 峨 - 需 - 眉 - 瓦 - 如 - 根 - 域 - 式 - 何 - 鞍 - 饺 - 票 - 冶 - 喷 - 映 - 组 - 昭 - 延 - 萌 - 角 - 解 - 玲 - 蟹 - 晃 - 瀑 - 纽 - 逸 - 些 - 猪 - 蹄 - 亲 - 野 - 蒋 - 喂 - 荷 - 窝 - 锁 - 试 - 桑 - 沥 - 非 - 制 - 督 - 贝 - 址 - 识 - 侬 - 烧 - 翡 - 堤 - 伟 - 驼 - 昊 - 牌 - 陶 - 室 - 轩 - 鹰 - 钉 - 空 - 着 - 蛳 - 已 - 砖 - 姓 - 顿 - 麓 - 亿 - 售 - 功 - 淄 - 澳 - 斜 - 击 - 活 - 缴 - 输 - 雍 - 鄄 - 降 - 革 - 恢 - 卸 - 承 - 箬 - 澧 - 栈 - 疗 - 传 - 媒 - 血 - 战 - 舞 - 姨 - 婆 - 辆 - 蚌 - 鹅 - 剧 - 湛 - 亳 - b - 敦 - 煌 - 迎 - 味 - 数 - 妞 - 嫂 - 厚 - hi - 邹 - 摁 - 榄 - 梨 - 亮 - 纺 - 婚 - 培 - 训 - inform_起点名称_none - 护 - 霍 - 升 - 考 - m - 呗 - 摩 - 送 - 段 - 悦 - 餐 - 早 - 议 - 互 - 助 - 抚 - 慈 - 按 - 调 - 杰 - 份 - 兵 - 粥 - 邻 - 墅 - 鬃 - 泳 - 朋 - 良 - 缘 - 鼓 - 赛 - 枝 - 藏 - 鸿 - 冷 - 匀 - 征 - 欢 - 闯 - 汝 - 讲 - 肤 - 响 - 浮 - 录 - 冰 - 圆 - 算 - 思 - 储 - 蓄 - 苗 - 聚 - 湿 - 肇 - 阆 - 拿 - 沣 - 渔 - 铝 - 植 - 托 - 盟 - 宇 - 但 - 渠 - 告 - 丘 - 拓 - 陇 - 鹤 - 操 - 珙 - deny_poi名称_none - 询 - 攀 - 寿 - 副 - 或 - 假 - 焰 - 夜 - 妓 - 而 - 漆 - 濮 - 胥 - 密 - 志 - 苹 - 彭 - 陪 - 添 - 满 - 章 - 骨 - 栖 - 呦 - 善 - 乖 - 姑 - 爷 - 鸟 - 璧 - 专 - 洧 - 依 - 仔 - 晨 - 沂 - 券 - 晓 - 压 - 涨 - 闻 - 男 - 诊 - 融 - 怡 - 蓬 - 廊 - 殖 - 益 - 必 - 靓 - 蒲 - beyond - i - love - you - 旋 - 尖 - 驿 - 貂 - 蝉 - 足 - 迹 - 翰 - 杏 - 牡 - 帅 - 雨 - 呈 - 迷 - 哟 - 召 - 娼 - 辛 - 顾 - 殷 - 闵 - 潮 - 脑 - 彗 - 枣 - 杆 - 洁 - 画 - 片 - 认 - 灰 - 鞋 - 宠 - 劫 - 潘 - 烤 - 破 - 隶 - 搞 - 忠 - 仕 - 郴 - 梧 - 酌 - 涵 - 醍 - 候 - 俩 - 馈 - 磨 - 骤 - 翔 - 莘 - 希 - 娅 - 剑 - 权 - 壹 - 冕 - 蛟 - 拨 - 诶 - 盖 - 楠 - 只 - 编 - 虾 - 尽 - 尧 - 晚 - 珍 - 因 - 捆 - 绑 - 端 - 盱 - 眙 - 贩 - 卷 - 养 - 陂 - 晟 - 巧 - 椿 - 毕 - 沭 - 供 - 秒 - 眠 - 状 - 璟 - 受 - 伤 - 萍 - 奔 - 效 - 禽 - 玫 - 瑰 - request_剩余距离_none - 序 - 鹃 - 齿 - 厕 - 厨 - 忻 - 埔 - 茅 - 芳 - 雕 - 刻 - 蜜 - 筝 - g - 橄 - 畜 - 牧 - 仑 - 臣 - 溆 - 纱 - 卉 - 群 - 痛 - 疼 - 仟 - 赶 - 紧 - 闫 - 嘶 - 潼 - 烽 - 勾 - 驰 - 麻 - 烦 - 遍 - 樟 - 浜 - 极 - 酷 - 晶 - 穿 - 芽 - 害 - 钓 - 棍 - 核 - 橙 - 琴 - 滋 - 柯 - 箐 - 株 - 陌 - 坤 - 炳 - 槐 - 协 - 湄 - 滏 - 旦 - 策 - 虞 - 陈 - 情 - 潞 - 藁 - 豹 - 若 - 垃 - 圾 - 舰 - 造 - 珥 - 董 - 泼 - 乾 - 瑶 - 龚 - 撤 - 钛 - 责 - 吶 - 喜 - 隔 - 碗 - 倒 - 椰 - 冬 - 伯 - 乳 - 隐 - 尼 - 境 - 圩 - 卧 - 抱 - 使 - 玩 - 饮 - 峤 - 炉 - 终 - 霸 - 晴 - 糕 - 疫 - 弥 - 萧 - 围 - 邬 - 贞 - 逊 - 祠 - 泛 - 逯 - 侯 - 距 - 织 - 谋 - 嵋 - 楚 - 瑜 - 妹 - 误 - 念 - 镜 - 粮 - 涮 - 值 - 鹿 - 捞 - 沅 - 移 - 涉 - 模 - 饿 - 佩 - 汀 - 朐 - 魔 - 细 - 者 - 暖 - 汕 - 谛 - 棣 - 敖 - 此 - 背 - 鲅 - 圈 - 逻 - 绕 - 锋 - 班 - 珲 - 汾 - 著 - 参 - 且 - 摇 - 宕 - 缅 - 柔 - 脂 - 肪 - 变 - 谱 - 积 - 礼 - 凡 - 落 - 羽 - 歇 - 仰 - 聋 - 雷 - 磊 - 繁 - 吭 - 皇 - 晖 - 粤 - 腊 - 习 - 题 - 绅 - 畔 - 啤 - 弋 - 匹 - 订 - 单 - ok - 灶 - 描 - 婺 - 沿 - 莉 - 弘 - 茵 - 换 - 屏 - 瞎 - 较 - 岁 - 湫 - 塞 - 疏 - 勒 - 涟 - 巫 - 违 - 戈 - 吾 - 脏 - 葛 - 轮 - 胎 - 霞 - 鹭 - 废 - 稍 - 谨 - 慎 - 淡 - 注 - 每 - 既 - 删 - 喝 - 付 - 诸 - 暨 - 戴 - 綦 - 伍 - 诚 - 坦 - 兜 - 残 - 韵 
- 喽 - 廖 - 麒 - 麟 - n - 感 - 籍 - 难 - 死 - 笑 - 哭 - 孩 - 频 - 舍 - 溶 - 垸 - 淀 - 奸 - 改 - 藤 - 狭 - 隧 - 翁 - 陀 - 扎 - 肯 - 揭 - 壁 - 件 - 刷 - 牙 - 节 - 恋 - 淹 - 桦 - 幢 - 棉 - 俺 - 屎 - 彬 - 牟 - 亩 - 傣 - 裴 - 翼 - 辰 - 剪 - 挡 - 凹 - 投 - 碣 - 妆 - 荡 - 驻 - 颍 - 狐 - 享 - 恐 - 汶 - 寅 - 仍 - 睿 - 搁 - 尊 - 泊 - 仲 - 午 - 枞 - 仓 - 卞 - 瀚 - 佰 - 暮 - 拐 - 崔 - 榭 - 棵 - 孕 - 潜 - 俏 - 葡 - 萄 - 采 - 摘 - 癜 - 屑 - 芙 - 蓉 - 咏 - 忙 - 漂 - 父 - 母 - 差 - 彻 - 魏 - 绥 - 闲 - 遥 - 棕 - 榈 - 壶 - 疆 - 苍 - 磁 - 辅 - 泸 - 淅 - a - 呐 - 燃 - 沱 - 禺 - 宛 - 友 - 俊 - 筑 - 贾 - 宋 - 梯 - 吨 - inform_poi修饰_none - 础 - 碑 - request_剩余路程_none - 创 - 孙 - 枢 - 翟 - 浑 - 糖 - 舜 - 橱 - 柜 - 浠 - 莒 - 乔 - 幕 - 磅 - 嘿 - 曼 - 昔 - 衣 - 铭 - 浏 - 喆 - 垦 - 墓 - 戍 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: wav2vec2_xlsr download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 15 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 4 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 required: - output_dir - token_list version: 0.10.3a3 distributed: false ``` </details> ## LM config <details><summary>expand</summary> ``` NONE ``` </details>
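The demo section of this card is also a placeholder. Below is a minimal sketch of typical ESPnet2 ASR inference, not taken from the card; it assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed and that the input audio is 16 kHz mono, matching the `fs: 16k` frontend setting in the config above.

```python
# Minimal sketch (not from the card): typical ESPnet2 ASR inference.
# Assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("espnet/sujay_catslu_map")

# 16 kHz mono audio, matching the `fs: 16k` frontend setting in the config.
speech, rate = sf.read("example.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```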
sienog/autonlp-mt5-xlsum-25085641
sienog
2021-10-22T17:20:30Z
3
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autonlp", "unk", "dataset:sienog/autonlp-data-mt5-xlsum", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - sienog/autonlp-data-mt5-xlsum co2_eq_emissions: 11.166602089650883 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 25085641 - CO2 Emissions (in grams): 11.166602089650883 ## Validation Metrics - Loss: 1.173471212387085 - Rouge1: 51.7353 - Rouge2: 36.6771 - RougeL: 45.4129 - RougeLsum: 48.8512 - Gen Len: 82.9375 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/sienog/autonlp-mt5-xlsum-25085641 ```
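In addition to the Inference API call above, the model can be run locally. The sketch below is not from the card; it assumes the standard transformers summarization pipeline works with this mT5 checkpoint, and the length limits are illustrative only.

```python
# Minimal sketch (not from the card): local inference with the summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="sienog/autonlp-mt5-xlsum-25085641")

article = "..."  # replace with the long input text to be summarized
# Length limits are illustrative; the reported Gen Len above is roughly 83 tokens.
print(summarizer(article, max_length=96, min_length=16)[0]["summary_text"])
```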
mamlong34/t5_large_race_cosmos_qa
mamlong34
2021-10-22T15:58:00Z
8
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:race", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - race metrics: - accuracy model-index: - name: t5_large_race_cosmos_qa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_large_race_cosmos_qa This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the race dataset. It achieves the following results on the evaluation set: - Loss: 0.4382 - Accuracy: 0.8023 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:---------------:| | 0.3513 | 1.0 | 10983 | 0.7714 | 0.3165 | | 0.2109 | 2.0 | 21966 | 0.7986 | 0.3329 | | 0.0929 | 3.0 | 32949 | 0.4382 | 0.8023 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0 - Datasets 1.14.0 - Tokenizers 0.10.3
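The card reports accuracy on RACE but does not document the expected input format. Purely as an illustration, the sketch below assumes a text-to-text prompt that concatenates question, options, and context; the actual template used during fine-tuning may differ.

```python
# Illustration only (not from the card): the exact prompt template used during
# fine-tuning is undocumented, so this input format is an assumption.
from transformers import pipeline

reader = pipeline("text2text-generation", model="mamlong34/t5_large_race_cosmos_qa")

prompt = (
    "question: What did the narrator do first? "
    "options: (A) ate breakfast (B) went for a run (C) read a book (D) slept "
    "context: I woke up early, went for a run, and then ate breakfast."
)
print(reader(prompt, max_length=8)[0]["generated_text"])
```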
muhtasham/autonlp-Doctor_DE-24595547
muhtasham
2021-10-22T14:04:29Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "autonlp", "de", "dataset:muhtasham/autonlp-data-Doctor_DE", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: de widget: - text: "I love AutoNLP 🤗" datasets: - muhtasham/autonlp-data-Doctor_DE co2_eq_emissions: 396.5529429198159 --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 24595547 - CO2 Emissions (in grams): 396.5529429198159 ## Validation Metrics - Loss: 1.9565489292144775 - MSE: 1.9565489292144775 - MAE: 0.9890901446342468 - R2: -7.68965036332947e-05 - RMSE: 1.3987668752670288 - Explained Variance: 0.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595547 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595547", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595547", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
laurauzcategui/xlm-roberta-base-finetuned-marc-en
laurauzcategui
2021-10-22T13:20:51Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8945 - Mae: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:---:| | 1.1411 | 1.0 | 235 | 0.9358 | 0.5 | | 0.9653 | 2.0 | 470 | 0.8945 | 0.5 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
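The card documents training on English Amazon reviews (star-rating prediction, evaluated with MAE) but not inference. A minimal usage sketch, not from the card, is shown below; the same pattern applies to the other `xlm-roberta-base-finetuned-marc-en` checkpoints in this list.

```python
# Minimal sketch (not from the card): star-rating prediction for a review.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="laurauzcategui/xlm-roberta-base-finetuned-marc-en",
)
# Label names (e.g. "1 star" ... "5 stars" or LABEL_0..LABEL_4) depend on the
# id2label mapping saved with the checkpoint, which the card does not document.
print(classifier("Great product, works exactly as described!"))
```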
daveccampbell/xlm-roberta-base-finetuned-marc-en
daveccampbell
2021-10-22T13:20:31Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9199 - Mae: 0.4756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1705 | 1.0 | 235 | 0.9985 | 0.5854 | | 0.9721 | 2.0 | 470 | 0.9199 | 0.4756 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
shaer/xlm-roberta-base-finetuned-marc-en-test-run
shaer
2021-10-22T13:12:39Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en-test-run results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en-test-run This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8957 - Mae: 0.4390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1079 | 1.0 | 235 | 0.9742 | 0.5366 | | 0.9488 | 2.0 | 470 | 0.8957 | 0.4390 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
danwilbury/xlm-roberta-base-finetuned-marc-en
danwilbury
2021-10-22T13:04:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9302 - Mae: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1253 | 1.0 | 235 | 0.9756 | 0.5488 | | 0.9465 | 2.0 | 470 | 0.9302 | 0.5 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
muhtasham/autonlp-Doctor_DE-24595546
muhtasham
2021-10-22T12:23:10Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "de", "dataset:muhtasham/autonlp-data-Doctor_DE", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: de widget: - text: "I love AutoNLP 🤗" datasets: - muhtasham/autonlp-data-Doctor_DE co2_eq_emissions: 210.5957437893554 --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 24595546 - CO2 Emissions (in grams): 210.5957437893554 ## Validation Metrics - Loss: 0.3092539310455322 - MSE: 0.30925390124320984 - MAE: 0.25015318393707275 - R2: 0.841926941198094 - RMSE: 0.5561060309410095 - Explained Variance: 0.8427215218544006 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595546 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
muhtasham/autonlp-Doctor_DE-24595545
muhtasham
2021-10-22T11:59:58Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "de", "dataset:muhtasham/autonlp-data-Doctor_DE", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: de widget: - text: "I love AutoNLP 🤗" datasets: - muhtasham/autonlp-data-Doctor_DE co2_eq_emissions: 203.30658367993382 --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 24595545 - CO2 Emissions (in grams): 203.30658367993382 ## Validation Metrics - Loss: 0.30214861035346985 - MSE: 0.30214861035346985 - MAE: 0.25911855697631836 - R2: 0.8455587614373526 - RMSE: 0.5496804714202881 - Explained Variance: 0.8476610779762268 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595545 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595545", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595545", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
meghana/hitalm-xlmroberta-finetuned
meghana
2021-10-22T11:51:18Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: hitalm-xlmroberta-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hitalm-xlmroberta-finetuned This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.7745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 48 | 5.4501 | | No log | 2.0 | 96 | 5.2843 | | No log | 3.0 | 144 | 4.7745 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
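This is a masked-language-model fine-tune with no usage section. A minimal fill-mask sketch, not from the card, is shown below; XLM-RoBERTa checkpoints use `<mask>` as the mask token.

```python
# Minimal sketch (not from the card): querying the MLM with the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="meghana/hitalm-xlmroberta-finetuned")

for prediction in unmasker("The weather today is <mask>."):
    print(prediction["token_str"], prediction["score"])
```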
anditya/xlm-roberta-base-finetuned-marc-en
anditya
2021-10-22T11:18:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8885 - Mae: 0.4390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1089 | 1.0 | 235 | 0.9027 | 0.4756 | | 0.9674 | 2.0 | 470 | 0.8885 | 0.4390 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3