Dataset schema:

| Column | Type | Range / classes |
|---|---|---|
| `modelId` | string | length 4–112 |
| `sha` | string | length 40 |
| `lastModified` | string | length 24 |
| `tags` | sequence | |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2–38 |
| `config` | null | |
| `id` | string | length 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | string | length 0–186k |
CarlCochet/trajectory-transformer-hopper-expert-v2
35ece73107915392c3429f9ab72e3a1c576a330f
2022-05-12T17:02:57.000Z
[ "pytorch", "trajectory_transformer", "feature-extraction", "transformers", "license:mit" ]
feature-extraction
false
CarlCochet
null
CarlCochet/trajectory-transformer-hopper-expert-v2
2
null
transformers
25,800
--- license: mit ---
CarlCochet/trajectory-transformer-hopper-medium-v2
4ffd408a079417fab9f1a93aa8d6d974834d7686
2022-05-12T17:05:33.000Z
[ "pytorch", "trajectory_transformer", "feature-extraction", "transformers", "license:mit" ]
feature-extraction
false
CarlCochet
null
CarlCochet/trajectory-transformer-hopper-medium-v2
2
null
transformers
25,801
--- license: mit ---
PSW/low_resource_percent1_randomswap_seed42
431a3d094f4daf1388f89a372f51680eabf223d7
2022-05-05T09:23:29.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_randomswap_seed42
2
null
transformers
25,802
Entry not found
CarlCochet/trajectory-transformer-walker2d-expert-v2
436cb9384717f392425c5d3b62253b3b951cabd6
2022-05-12T17:06:16.000Z
[ "pytorch", "trajectory_transformer", "feature-extraction", "transformers", "license:mit" ]
feature-extraction
false
CarlCochet
null
CarlCochet/trajectory-transformer-walker2d-expert-v2
2
null
transformers
25,803
--- license: mit ---
CarlCochet/trajectory-transformer-walker2d-medium-expert-v2
6a5ed504de727b79677a8cf076ce0968f4072159
2022-05-12T17:06:58.000Z
[ "pytorch", "trajectory_transformer", "feature-extraction", "transformers", "license:mit" ]
feature-extraction
false
CarlCochet
null
CarlCochet/trajectory-transformer-walker2d-medium-expert-v2
2
null
transformers
25,804
--- license: mit ---
PSW/low_resource_percent1_seed27
e9c311ba2d76aee3fd0c47f21563ff317699feea
2022-05-05T09:35:53.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_seed27
2
null
transformers
25,805
Entry not found
Gootter/autotrain-Bart_683-825526269
50b6c3ad3efac7980f60ec0e32ae88be9fbd61f9
2022-05-05T10:03:01.000Z
[ "pytorch", "bart", "text2text-generation", "unk", "dataset:Gootter/autotrain-data-Bart_683", "transformers", "autotrain", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
Gootter
null
Gootter/autotrain-Bart_683-825526269
2
null
transformers
25,806
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - Gootter/autotrain-data-Bart_683 co2_eq_emissions: 28.12268287254098 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 825526269 - CO2 Emissions (in grams): 28.12268287254098 ## Validation Metrics - Loss: 2.836289644241333 - Rouge1: 31.9867 - Rouge2: 10.3239 - RougeL: 21.0603 - RougeLsum: 30.0862 - Gen Len: 142.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Gootter/autotrain-Bart_683-825526269 ```
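The card above only shows the hosted Inference API call. A minimal local-usage sketch follows, mirroring the Python pattern shown in other AutoTrain cards in this dump; loading the BART checkpoint through `AutoModelForSeq2SeqLM` is an assumption, since the card itself does not document local usage.
```python
# Minimal local-usage sketch for the AutoTrain summarization checkpoint above.
# Assumption: the BART weights load through AutoModelForSeq2SeqLM.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Gootter/autotrain-Bart_683-825526269"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("I love AutoTrain", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=142)  # card reports Gen Len 142
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```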
aware-ai/wav2vec2-base-5gram-german
56ab2c73bd4c718e5b1fab45ea833e699e694694
2022-05-19T17:34:14.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:common_voice", "transformers", "audio", "speech", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
aware-ai
null
aware-ai/wav2vec2-base-5gram-german
2
null
transformers
25,807
--- language: de datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech license: apache-2.0 model-index: - name: wav2vec2-base-5gram-german with LM by Florian Zimmermeister @A\\Ware results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice de type: common_voice args: de metrics: - name: Test WER type: wer value: 8.9 --- ## Evaluation The model can be evaluated as follows on the German test data of Common Voice. ```python import torch from transformers import AutoModelForCTC, AutoProcessor from unidecode import unidecode import re from datasets import load_dataset, load_metric import datasets counter = 0 wer_counter = 0 cer_counter = 0 device = "cuda" if torch.cuda.is_available() else "cpu" special_chars = [["Ä"," AE "], ["Ö"," OE "], ["Ü"," UE "], ["ä"," ae "], ["ö"," oe "], ["ü"," ue "]] def clean_text(sentence): for special in special_chars: sentence = sentence.replace(special[0], special[1]) sentence = unidecode(sentence) for special in special_chars: sentence = sentence.replace(special[1], special[0]) sentence = re.sub("[^a-zA-Z0-9öäüÖÄÜ ,.!?]", " ", sentence) return sentence def main(model_id): print("load model") model = AutoModelForCTC.from_pretrained(model_id).to(device) print("load processor") processor = AutoProcessor.from_pretrained(processor_id) print("load metrics") wer = load_metric("wer") cer = load_metric("cer") ds = load_dataset("mozilla-foundation/common_voice_9_0","de") ds = ds["test"] ds = ds.cast_column( "audio", datasets.features.Audio(sampling_rate=16_000) ) def calculate_metrics(batch): global counter, wer_counter, cer_counter resampled_audio = batch["audio"]["array"] input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values with torch.no_grad(): logits = model(input_values.to(device)).logits.cpu().numpy()[0] decoded = processor.decode(logits) pred = decoded.text.lower() ref = clean_text(batch["sentence"]).lower() wer_result = wer.compute(predictions=[pred], references=[ref]) cer_result = cer.compute(predictions=[pred], references=[ref]) counter += 1 wer_counter += wer_result cer_counter += cer_result if counter % 100 == True: print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}") return batch ds.map(calculate_metrics, remove_columns=ds.column_names) print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}") model_id = "flozi00/wav2vec2-base-5gram-german" main(model_id) ```
arxyzan/data2vec-wav2vec2-base
bc1754585c2df95f9bde411bbcb4c4fbc3235278
2022-05-16T09:00:23.000Z
[ "pytorch", "wav2vec2", "feature-extraction", "arxiv:2202.03555", "transformers" ]
feature-extraction
false
arxyzan
null
arxyzan/data2vec-wav2vec2-base
2
null
transformers
25,808
A Wav2Vec2 model trained using Data2Vec based on the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555).<br> This model is provided here for [this repo](https://github.com/AryanShekarlaban/data2vec-pytorch) but was NOT trained using that codebase but instead, copied from `facebook/data2vec-wav2vec2-base` for convenience and reproducibility. ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2202.03555, doi = {10.48550/ARXIV.2202.03555}, url = {https://arxiv.org/abs/2202.03555}, author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael}, keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
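The card documents provenance but not usage. Below is a minimal feature-extraction sketch, assuming the repository ships a preprocessor config loadable via `AutoFeatureExtractor` and expects 16 kHz mono audio, as is standard for wav2vec2-style models; the input waveform is a placeholder.
```python
# Minimal feature-extraction sketch for the data2vec/wav2vec2 checkpoint above.
# Assumptions: the repo provides a preprocessor config; audio is 16 kHz mono.
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "arxyzan/data2vec-wav2vec2-base"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

speech = torch.zeros(16_000).numpy()  # placeholder: one second of silence at 16 kHz
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```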
masakhane/byt5_lug_en_news
0dd3e81e7ddb313114c7d2f04a7d05bd0de71fc2
2022-05-05T13:50:20.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/byt5_lug_en_news
2
null
transformers
25,809
--- license: afl-3.0 ---
masakhane/m2m100_418M_en_lug_news
b3b50f40e12ee0362b04915a3b40ec8bd3b0fa9a
2022-05-05T14:13:52.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_en_lug_news
2
null
transformers
25,810
--- license: afl-3.0 ---
masakhane/m2m100_418M_lug_en_news
537af4b31fa621722ac45be2f9068162a61cb813
2022-05-05T14:14:02.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_lug_en_news
2
null
transformers
25,811
--- license: afl-3.0 ---
masakhane/m2m100_418M_lug_en_rel_news
daaa16d7aa54b803822ccf3d45619a87939d7a3e
2022-05-05T14:13:57.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_lug_en_rel_news
2
null
transformers
25,812
--- license: afl-3.0 ---
masakhane/m2m100_418M_en_lug_rel_ft
3233a3d42db2e6078cdd1865d78719813b2b44c4
2022-05-05T14:22:56.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_en_lug_rel_ft
2
null
transformers
25,813
--- license: afl-3.0 ---
PSW/low_resource_percent10_minsimdel_seed42
4022e4954012a8e547111b9b7aae3f2d4788ef71
2022-05-05T11:59:29.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_minsimdel_seed42
2
null
transformers
25,814
Entry not found
PSW/low_resource_percent10_randomdel_seed1
37bef189b4ca530691a1218d27f1d5dbafb0413f
2022-05-05T12:14:27.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomdel_seed1
2
null
transformers
25,815
Entry not found
PSW/low_resource_percent10_randomdel_seed27
75da7272bbfec741a784f1ef489fa7561ebccb5c
2022-05-05T12:29:39.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomdel_seed27
2
null
transformers
25,816
Entry not found
PSW/low_resource_percent10_randomdel_seed42
4165c95944938a9aff409b87db75e03b6cdc14af
2022-05-05T12:44:30.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomdel_seed42
2
null
transformers
25,817
Entry not found
alexjercan/codet5-base-buggy-code-repair
1b83d825aa1365a8b71fdc859d79e030190f65c0
2022-05-06T14:06:24.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
alexjercan
null
alexjercan/codet5-base-buggy-code-repair
2
null
transformers
25,818
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: codet5-base-buggy-code-repair results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-base-buggy-code-repair This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8033 - Accuracy: 0.2516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
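The card does not document the expected input/output format, so the following is only a generic text2text-generation sketch for a T5-style checkpoint; the buggy snippet and generation settings are placeholders, not the authors' evaluation setup.
```python
# Generic text2text-generation sketch for the CodeT5 checkpoint above.
# Assumption: plain buggy source code in, candidate fix out (not documented in the card).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "alexjercan/codet5-base-buggy-code-repair"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

buggy = "def add(a, b):\n    return a - b"  # placeholder buggy snippet
inputs = tokenizer(buggy, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```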
PSW/low_resource_percent10_randomins_seed27
c30c4e05ac90f7de039490821b5693ffb7466118
2022-05-05T13:13:33.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomins_seed27
2
null
transformers
25,819
Entry not found
PSW/low_resource_percent10_randomins_seed42
22d83840baaba3d9886d428d6055ee4455fb92d5
2022-05-05T13:26:35.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomins_seed42
2
null
transformers
25,820
Entry not found
PSW/low_resource_percent10_randomswap_seed1
ff5e6d596d853d0ed6fb42cc7ff9cf0e2b74350b
2022-05-05T13:41:11.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomswap_seed1
2
null
transformers
25,821
Entry not found
PSW/low_resource_percent10_randomswap_seed42
11deb2bc7481994aa862df6ce742b8b06aee9e46
2022-05-05T14:10:26.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomswap_seed42
2
null
transformers
25,822
Entry not found
PSW/low_resource_percent10_seed27
fab1e808cdd617eaf3081670b903963725d34720
2022-05-05T14:29:31.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_seed27
2
null
transformers
25,823
Entry not found
sniffle/distilbert-rater
399901388e1142923ef6a46b65f6b55a4930a166
2022-05-05T14:51:39.000Z
[ "pytorch", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
sniffle
null
sniffle/distilbert-rater
2
null
transformers
25,824
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-rater results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rater This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
PSW/low_resource_percent10_seed42
e3c4fb72a6d296e656cbe3fc52db5a89ae0efae0
2022-05-05T14:41:51.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_seed42
2
null
transformers
25,825
Entry not found
thetatez/distilbert-rater
4c5a11ec93f79a4f64edf9c58aab9bf92b134bfa
2022-05-05T15:14:45.000Z
[ "pytorch", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
thetatez
null
thetatez/distilbert-rater
2
null
transformers
25,826
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-rater results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rater This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Tokenizers 0.12.1
PSW/low_resource_percent1_seed1
3493272b41ce2e9a292ec20ccb7614d9536d036b
2022-05-05T14:54:39.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_seed1
2
null
transformers
25,827
Entry not found
shoubhik/electra_abbv_20k_data_multiclass
17b36ed3f0bca79703088726acef6a4b44e7e6ae
2022-05-05T15:37:34.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
shoubhik
null
shoubhik/electra_abbv_20k_data_multiclass
2
null
transformers
25,828
Entry not found
PSW/low_resource_percent20_maxsimins_seed42
ac92deb8dcc1025febc2a99d20bea5a2b26244fb
2022-05-05T15:53:57.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_maxsimins_seed42
2
null
transformers
25,829
Entry not found
PSW/low_resource_percent20_minsimdel_seed42
871dba76c4b6e07bf088257a054b622deb961245
2022-05-05T17:26:51.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_minsimdel_seed42
2
null
transformers
25,830
Entry not found
dyyyyyyyy/MVR_panx_XLM-RoBERTa-base
c01adabb249d2d8d109c758c8e4ba31baf81b1d8
2022-05-06T05:19:25.000Z
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dyyyyyyyy
null
dyyyyyyyy/MVR_panx_XLM-RoBERTa-base
2
null
transformers
25,831
Entry not found
dyyyyyyyy/MVR_squad_BERT-base-multilingual-cased
7e4e38747f1f67865687bfd4b201f97db0d89e71
2022-05-06T06:40:41.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dyyyyyyyy
null
dyyyyyyyy/MVR_squad_BERT-base-multilingual-cased
2
null
transformers
25,832
Entry not found
PSW/low_resource_percent20_randomswap_seed1
f4e57c395279d5d69808bb528488ee35493f4c5f
2022-05-05T19:15:57.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomswap_seed1
2
null
transformers
25,833
Entry not found
nguyenmanhbao/my-finetuned-bert
278a05cfd29cbcb7c413e439d94b3b17b42d4f61
2022-05-05T19:27:33.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
nguyenmanhbao
null
nguyenmanhbao/my-finetuned-bert
2
null
transformers
25,834
Entry not found
abhilashawasthi/bert-base-uncased-reviews-128
976472fe62721322050f437a2ab1821d7d7ff962
2022-05-05T23:42:52.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
abhilashawasthi
null
abhilashawasthi/bert-base-uncased-reviews-128
2
null
transformers
25,835
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-reviews-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-reviews-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu102 - Datasets 2.1.0 - Tokenizers 0.12.1
lilitket/20220505-222633
ec7e6ef4986e2eb59ddaf1f82bb5b8c52fd96242
2022-05-06T03:05:49.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
lilitket
null
lilitket/20220505-222633
2
null
transformers
25,836
Entry not found
asahi417/tner-roberta-base-tweet-2020
0b3173bfe3cc71b34afada62ada729a3ad64a3d2
2022-05-06T11:07:17.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
asahi417
null
asahi417/tner-roberta-base-tweet-2020
2
null
transformers
25,837
Entry not found
daihaha/albert-base-v2-finetuned-swag
c82e56725e3095bec4bf876d3ee0c5b1d425d034
2022-05-06T10:09:14.000Z
[ "pytorch", "tensorboard", "albert", "multiple-choice", "transformers" ]
multiple-choice
false
daihaha
null
daihaha/albert-base-v2-finetuned-swag
2
null
transformers
25,838
Entry not found
scasutt/wav2vec2-large-xlsr-53_final_train1
23cd7de2fa28e4e29f5a5603bb8bdc614e5a5415
2022-05-06T21:56:27.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
scasutt
null
scasutt/wav2vec2-large-xlsr-53_final_train1
2
null
transformers
25,839
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53_final_train1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53_final_train1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6432 - Wer: 0.6298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.6199 | 0.25 | 250 | 3.6163 | 1.0 | | 3.0927 | 0.5 | 500 | 3.5932 | 1.0 | | 3.0837 | 0.76 | 750 | 3.2418 | 1.0 | | 2.2385 | 1.01 | 1000 | 1.2621 | 0.9855 | | 1.743 | 1.26 | 1250 | 1.0830 | 0.9442 | | 1.6661 | 1.51 | 1500 | 0.7926 | 0.8051 | | 1.5661 | 1.77 | 1750 | 0.6432 | 0.6298 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu102 - Datasets 2.1.0 - Tokenizers 0.12.1
crabz/exp1
1505c44c938a9b23491a0f406b2f9a91dd00029d
2022-05-06T09:53:13.000Z
[ "pytorch", "roberta", "fill-mask", "sk", "dataset:c4-sk", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
crabz
null
crabz/exp1
2
null
transformers
25,840
--- language: sk license: mit tags: - fill-mask - roberta datasets: - c4-sk inference: false ---
crabz/exp3
444690d1cb0c1cf233e09c7d8843a6ae92cb8993
2022-05-06T09:59:23.000Z
[ "pytorch", "roberta", "transformers" ]
null
false
crabz
null
crabz/exp3
2
null
transformers
25,841
Entry not found
crabz/exp5
6baa3dc7980aaf62e1abfa000dfe075ef0c0b884
2022-05-06T10:05:49.000Z
[ "pytorch", "roberta", "transformers" ]
null
false
crabz
null
crabz/exp5
2
null
transformers
25,842
Entry not found
samuel30810/Black_box_3
5b7791eea9839587e2f955694b7809a2f8c22a36
2022-05-06T11:23:24.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
samuel30810
null
samuel30810/Black_box_3
2
null
transformers
25,843
--- license: apache-2.0 ---
h4d35/Wav2Vec2-hi
f3217f5f83b690bf13e254e2121be0a5f736f71c
2022-05-06T13:59:21.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
h4d35
null
h4d35/Wav2Vec2-hi
2
null
transformers
25,844
Entry not found
mp6kv/IQA_classification
185819a6eea84f9af378e2c9fb507312fd3643be
2022-05-06T17:43:28.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
mp6kv
null
mp6kv/IQA_classification
2
null
transformers
25,845
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: IQA_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IQA_classification This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0718 - Accuracy: 0.4862 - Precision: 0.3398 - Recall: 0.4862 - F1: 0.3270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.3973 | 1.0 | 28 | 1.1588 | 0.4771 | 0.2276 | 0.4771 | 0.3082 | | 1.1575 | 2.0 | 56 | 1.0718 | 0.4862 | 0.3398 | 0.4862 | 0.3270 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
nepp1d0/prot_bert_classification_finetuned
d2e6cd3f2adcbfd371ba63a947bb73e4b0b6916c
2022-05-09T20:15:49.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
nepp1d0
null
nepp1d0/prot_bert_classification_finetuned
2
null
transformers
25,846
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: prot_bert_classification_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prot_bert_classification_finetuned This model is a fine-tuned version of [nepp1d0/prot_bert-finetuned-smiles-bindingDB](https://huggingface.co/nepp1d0/prot_bert-finetuned-smiles-bindingDB) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5675 - Accuracy: 0.7299 - F1: 0.7377 - Precision: 0.6995 - Recall: 0.7803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 3 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.4221 | 1.0 | 3332 | 0.6152 | 0.6615 | 0.6711 | 0.6367 | 0.7093 | | 0.4133 | 2.0 | 6664 | 0.5840 | 0.6845 | 0.6718 | 0.6805 | 0.6634 | | 0.4293 | 3.0 | 9996 | 0.5727 | 0.7116 | 0.7094 | 0.6961 | 0.7232 | | 0.3098 | 4.0 | 13328 | 0.5636 | 0.7163 | 0.7220 | 0.6904 | 0.7566 | | 0.3881 | 5.0 | 16660 | 0.5629 | 0.7265 | 0.7377 | 0.6918 | 0.7900 | | 0.4943 | 6.0 | 19992 | 0.5675 | 0.7299 | 0.7377 | 0.6995 | 0.7803 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
davidlekve/distilroberta-base-finetuned-kendrick-lamar
7cb10f047aef15587c74067b83d0db69b7f0af79
2022-05-06T19:25:14.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
davidlekve
null
davidlekve/distilroberta-base-finetuned-kendrick-lamar
2
null
transformers
25,847
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-kendrick-lamar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-kendrick-lamar This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0142 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 111 | 3.0981 | | No log | 2.0 | 222 | 3.0078 | | No log | 3.0 | 333 | 3.0142 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
davidsantiago1011/gpt2-small-spanish
88b59ef0daf628a97554fdbbeaf078fc8db98287
2022-05-06T20:26:34.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "model-index" ]
text-generation
false
davidsantiago1011
null
davidsantiago1011/gpt2-small-spanish
2
null
transformers
25,848
--- tags: - generated_from_trainer model-index: - name: gpt2-small-spanish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-small-spanish This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5051 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5681 | 1.0 | 110 | 2.8562 | | 2.7732 | 2.0 | 220 | 2.5769 | | 3.0083 | 3.0 | 330 | 2.5051 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.4 - Tokenizers 0.12.1
davidlekve/distilroberta-base-finetuned-the-beatles
86f23945a18ff9b46d890ed30739b960e6e8c68a
2022-05-06T19:49:40.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
davidlekve
null
davidlekve/distilroberta-base-finetuned-the-beatles
2
null
transformers
25,849
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-the-beatles results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-the-beatles This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 84 | 2.6517 | | No log | 2.0 | 168 | 2.6433 | | No log | 3.0 | 252 | 2.5186 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
vuiseng9/bert-l-squadv1.1-sl256
2b92921786a072af90f2aa3f40f43ff3861b983a
2022-05-07T03:41:17.000Z
[ "pytorch", "onnx", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
vuiseng9
null
vuiseng9/bert-l-squadv1.1-sl256
2
null
transformers
25,850
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: run01-bert-l-uwwm-squadv1.1-sl256-ds128-e2-tbs16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # run01-bert-l-uwwm-squadv1.1-sl256-ds128-e2-tbs16 This model is a fine-tuned version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) on the squad dataset. ONNX and OpenVINO IR are enclosed here. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ```bash NEPOCH=2 TBS=16 EBS=64 SL=256 DS=128 cmd=" python run_qa.py \ --model_name_or_path ${BASEM} \ --dataset_name squad \ --do_eval \ --do_train \ --evaluation_strategy steps \ --eval_steps 500 \ --learning_rate 3e-5 \ --fp16 \ --num_train_epochs $NEPOCH \ --per_device_eval_batch_size $EBS \ --per_device_train_batch_size $TBS \ --max_seq_length $SL \ --doc_stride $DS \ --save_steps 1000 \ --logging_steps 1 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR " ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results Best checkpoint was at step 11500 but it was not saved. This is final checkpoint (12K+). ``` eval_exact_match = 86.9347 eval_f1 = 93.1359 eval_samples = 12097 ``` ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
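The card covers training but not inference. A question-answering pipeline sketch follows, assuming the PyTorch weights and tokenizer in the repo load directly (the enclosed ONNX/OpenVINO IR files are not used here); the question and context are placeholders.
```python
# Extractive-QA sketch for the SQuAD v1.1 checkpoint above, using the
# standard question-answering pipeline with the repo's PyTorch weights.
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-l-squadv1.1-sl256")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD v1.1 dataset with a 256-token sequence length.",
)
print(result["answer"], result["score"])
```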
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-06
99004ec4bdb4172960cdd304a59e9665943a0186
2022-05-07T11:12:33.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:filipino_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Khalsuu
null
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-06
2
null
transformers
25,851
--- license: apache-2.0 tags: - generated_from_trainer datasets: - filipino_voice model-index: - name: english-filipino-wav2vec2-l-xls-r-test-06 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # english-filipino-wav2vec2-l-xls-r-test-06 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.5806 - Wer: 0.6568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0031 | 2.09 | 400 | 1.2366 | 0.8780 | | 0.9084 | 4.19 | 800 | 1.0653 | 0.8081 | | 0.6484 | 6.28 | 1200 | 1.1648 | 0.8258 | | 0.5335 | 8.38 | 1600 | 1.0903 | 0.7542 | | 0.4359 | 10.47 | 2000 | 0.9466 | 0.7058 | | 0.3629 | 12.57 | 2400 | 0.9266 | 0.7048 | | 0.3057 | 14.66 | 2800 | 1.0879 | 0.7018 | | 0.2477 | 16.75 | 3200 | 1.1113 | 0.7022 | | 0.208 | 18.85 | 3600 | 1.1345 | 0.6742 | | 0.1781 | 20.94 | 4000 | 1.3117 | 0.6974 | | 0.1465 | 23.04 | 4400 | 1.3248 | 0.6916 | | 0.1288 | 25.13 | 4800 | 1.4306 | 0.6523 | | 0.1108 | 27.23 | 5200 | 1.5155 | 0.6685 | | 0.099 | 29.32 | 5600 | 1.5806 | 0.6568 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
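A transcription sketch for this checkpoint follows, assuming 16 kHz audio and that the repo bundles its processor; the audio path is a placeholder and not part of the card.
```python
# Transcription sketch for the fine-tuned XLS-R checkpoint above, via the
# automatic-speech-recognition pipeline. "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Khalsuu/english-filipino-wav2vec2-l-xls-r-test-06",
)
print(asr("sample.wav")["text"])
```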
crystina-z/mdpr-question-msmarco
81be0f2586d633d3b5da54935b93739ed45fec0f
2022-05-07T07:49:33.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
crystina-z
null
crystina-z/mdpr-question-msmarco
2
null
transformers
25,852
Entry not found
lilitket/20220507-092401
b0008962e3fc0dcd9ccab738be904b831eea851a
2022-05-07T11:23:55.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
lilitket
null
lilitket/20220507-092401
2
null
transformers
25,853
Entry not found
huggingtweets/doodles
cf81177cc7a4e51bc62b6deb4cf996d1691105fb
2022-05-07T11:26:42.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/doodles
2
null
transformers
25,854
--- language: en thumbnail: http://www.huggingtweets.com/doodles/1651922797827/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484416288097116160/xLR2e4eu_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">doodles</div> <div style="text-align: center; font-size: 14px;">@doodles</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from doodles. | Data | doodles | | --- | --- | | Tweets downloaded | 1876 | | Retweets | 401 | | Short tweets | 916 | | Tweets kept | 559 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jpd1iuz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @doodles's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/11wbfkyl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/11wbfkyl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/doodles') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
KoichiYasuoka/roberta-small-coptic
b6bf5c6726077cdd242c3633e5f7df5a6d918e92
2022-05-08T05:05:10.000Z
[ "pytorch", "roberta", "fill-mask", "cop", "transformers", "coptic", "masked-lm", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
KoichiYasuoka
null
KoichiYasuoka/roberta-small-coptic
2
null
transformers
25,855
--- language: - "cop" tags: - "coptic" - "masked-lm" license: "cc-by-sa-4.0" pipeline_tag: "fill-mask" mask_token: "[MASK]" --- # roberta-small-coptic ## Model Description This is a RoBERTa model pre-trained on Coptic Scriptorium Corpora. You can fine-tune `roberta-small-coptic` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-coptic-upos), dependency-parsing, and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-coptic") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-coptic") ```
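Beyond the loading snippet in the card, masked-token prediction can be run through the fill-mask pipeline; the sketch below assumes the `[MASK]` token from the card's metadata, and the Coptic input is only a placeholder.
```python
# Fill-mask sketch for the Coptic checkpoint above. The [MASK] token follows
# the card's metadata; the input sentence is a placeholder.
from transformers import pipeline

fill = pipeline("fill-mask", model="KoichiYasuoka/roberta-small-coptic")
for prediction in fill("ⲁⲛⲟⲕ [MASK]"):
    print(prediction["token_str"], prediction["score"])
```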
prashanth/mbart-large-cc25-finetuned-en-to-hi
1ed323d07d9a897fcc3fdcd7a6f53f0d41ceffb6
2022-05-08T12:38:32.000Z
[ "pytorch", "tensorboard", "mbart", "text2text-generation", "dataset:hindi_english_machine_translation", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
prashanth
null
prashanth/mbart-large-cc25-finetuned-en-to-hi
2
null
transformers
25,856
--- tags: - generated_from_trainer datasets: - hindi_english_machine_translation model-index: - name: mbart-large-cc25-finetuned-en-to-hi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-cc25-finetuned-en-to-hi This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu102 - Datasets 1.18.0 - Tokenizers 0.12.1
camiloa2m/gpt2-spanish-finetuned-gpt2-spanish
133a0ec9e818f1e91753e444f22716bf5776b933
2022-05-07T15:45:03.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-generation
false
camiloa2m
null
camiloa2m/gpt2-spanish-finetuned-gpt2-spanish
2
null
transformers
25,857
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-spanish-finetuned-gpt2-spanish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-spanish-finetuned-gpt2-spanish This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 263 | 2.0389 | | 2.1522 | 2.0 | 526 | 1.9829 | | 2.1522 | 3.0 | 789 | 1.9709 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.4 - Tokenizers 0.12.1
lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626708
331c2d7934b85da3e14a1daa5af267e0f0f5c8cf
2022-05-08T01:56:10.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:lucifermorninstar011/autotrain-data-lucifer_multi_auto_all", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
lucifermorninstar011
null
lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626708
2
null
transformers
25,858
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - lucifermorninstar011/autotrain-data-lucifer_multi_auto_all co2_eq_emissions: 675.6911996033854 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 837626708 - CO2 Emissions (in grams): 675.6911996033854 ## Validation Metrics - Loss: 0.008128546178340912 - Accuracy: 0.9977804696723191 - Macro F1: 0.9942781700973885 - Micro F1: 0.9977804696723191 - Weighted F1: 0.9977851755386459 - Macro Precision: 0.9923939243012706 - Micro Precision: 0.9977804696723191 - Weighted Precision: 0.9977957481683986 - Macro Recall: 0.9961924323977192 - Micro Recall: 0.9977804696723191 - Weighted Recall: 0.9977804696723191 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626708 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626708", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626708", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626712
5adc0ae9b2e21b6822e4285e81d4dd19c3532149
2022-05-08T02:44:35.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:lucifermorninstar011/autotrain-data-lucifer_multi_auto_all", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
lucifermorninstar011
null
lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626712
2
null
transformers
25,859
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - lucifermorninstar011/autotrain-data-lucifer_multi_auto_all co2_eq_emissions: 772.9316141161539 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 837626712 - CO2 Emissions (in grams): 772.9316141161539 ## Validation Metrics - Loss: 0.006297225132584572 - Accuracy: 0.998357670693734 - Macro F1: 0.9947282131241516 - Micro F1: 0.998357670693734 - Weighted F1: 0.9983564218124292 - Macro Precision: 0.9937572688417448 - Micro Precision: 0.998357670693734 - Weighted Precision: 0.9983587534033106 - Macro Recall: 0.9957326552198976 - Micro Recall: 0.998357670693734 - Weighted Recall: 0.998357670693734 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626712 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626712", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-lucifer_multi_auto_all-837626712", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
theojolliffe/distill-pegasus-cnn-arxiv-pubmed
f32b2e98d68b35d674acf7f7820e49cf608b2ac5
2022-05-07T22:40:32.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "dataset:scientific_papers", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
theojolliffe
null
theojolliffe/distill-pegasus-cnn-arxiv-pubmed
2
null
transformers
25,860
--- tags: - generated_from_trainer datasets: - scientific_papers metrics: - rouge model-index: - name: distill-pegasus-cnn-16-4-finetuned-arxiv-pubmed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: scientific_papers type: scientific_papers args: pubmed metrics: - name: Rouge1 type: rouge value: 31.5968 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distill-pegasus-cnn-16-4-finetuned-arxiv-pubmed This model is a fine-tuned version of [theojolliffe/distill-pegasus-cnn-16-4-finetuned-arxiv](https://huggingface.co/theojolliffe/distill-pegasus-cnn-16-4-finetuned-arxiv) on the scientific_papers dataset. It achieves the following results on the evaluation set: - Loss: 3.0433 - Rouge1: 31.5968 - Rouge2: 12.5841 - Rougel: 21.0778 - Rougelsum: 28.3167 - Gen Len: 118.9566 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 3.5173 | 1.0 | 3748 | 3.0433 | 31.5968 | 12.5841 | 21.0778 | 28.3167 | 118.9566 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
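The card reports metrics but no inference example. A summarization pipeline sketch follows, assuming the checkpoint loads through the standard summarization pipeline; the article text is a stand-in for a scientific_papers-style input.
```python
# Abstractive-summarization sketch for the distilled Pegasus checkpoint above.
# The input article is a placeholder, not from the evaluation data.
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="theojolliffe/distill-pegasus-cnn-arxiv-pubmed"
)
article = "We study distilled Pegasus models fine-tuned on the scientific_papers dataset ..."
print(summarizer(article, max_length=128, min_length=32)[0]["summary_text"])
```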
bkh6722/bach-arb
a404b97ac5dcdb6f13180e93b41900c6c4d1439f
2022-05-15T02:34:26.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
bkh6722
null
bkh6722/bach-arb
2
null
transformers
25,861
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bach-arb This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9404 - Wer: 0.6130 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 115 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 27.8653 | 7.14 | 100 | 3.1369 | 1.0 | | 2.5975 | 14.28 | 200 | 2.1223 | 0.9976 | | 1.2001 | 21.41 | 300 | 1.7455 | 0.8774 | | 0.5938 | 28.55 | 400 | 1.8534 | 0.7981 | | 0.4001 | 35.69 | 500 | 2.3318 | 0.7740 | | 0.2895 | 42.83 | 600 | 2.2214 | 0.7163 | | 0.1853 | 49.97 | 700 | 2.4841 | 0.7043 | | 0.1318 | 57.14 | 800 | 2.9749 | 0.7139 | | 0.1067 | 64.28 | 900 | 2.4759 | 0.7115 | | 0.0635 | 71.41 | 1000 | 2.6708 | 0.6635 | | 0.0515 | 78.55 | 1100 | 3.0593 | 0.6923 | | 0.0455 | 85.69 | 1200 | 2.9637 | 0.6587 | | 0.0329 | 92.83 | 1300 | 2.9837 | 0.6346 | | 0.0232 | 99.97 | 1400 | 2.9361 | 0.6178 | | 0.021 | 107.14 | 1500 | 2.9221 | 0.6010 | | 0.0193 | 114.28 | 1600 | 2.9404 | 0.6130 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
BigSalmon/InformalToFormalLincoln43
84729e977fd35f1da6535e50c79c7bc186f25230
2022-05-07T22:51:01.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/InformalToFormalLincoln43
2
null
transformers
25,862
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln43") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln43") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ``` ``` original: had trouble deciding. translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation. *** original: ```
theojolliffe/bart-cnn-pubmed-arxiv-pubmed
f41958beb871e6f3f89492fe8d35ddf8300e3d67
2022-05-08T04:30:20.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "dataset:scientific_papers", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
theojolliffe
null
theojolliffe/bart-cnn-pubmed-arxiv-pubmed
2
null
transformers
25,863
--- license: mit tags: - generated_from_trainer datasets: - scientific_papers metrics: - rouge model-index: - name: bart-cnn-pubmed-arxiv-pubmed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: scientific_papers type: scientific_papers args: pubmed metrics: - name: Rouge1 type: rouge value: 37.3328 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-pubmed-arxiv-pubmed This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv) on the scientific_papers dataset. It achieves the following results on the evaluation set: - Loss: 1.9245 - Rouge1: 37.3328 - Rouge2: 15.5894 - Rougel: 23.0297 - Rougelsum: 33.952 - Gen Len: 136.3568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 2.0272 | 1.0 | 29981 | 1.9245 | 37.3328 | 15.5894 | 23.0297 | 33.952 | 136.3568 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
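A minimal inference sketch (not part of the original card), assuming the standard `summarization` pipeline; the input text and generation lengths below are placeholders, not values taken from the training run.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint described in this card with the generic summarization pipeline.
summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed")

# Placeholder article text; real inputs would be full scientific papers, as in the pubmed subset.
article = "Background: ... Methods: ... Results: ... Conclusions: ..."
summary = summarizer(article, max_length=142, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```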
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-09
bf299183fc64473d8b887fd98b7a8162c167ebe7
2022-05-08T04:30:40.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:filipino_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Khalsuu
null
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-09
2
null
transformers
25,864
--- license: apache-2.0 tags: - generated_from_trainer datasets: - filipino_voice model-index: - name: english-filipino-wav2vec2-l-xls-r-test-09 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # english-filipino-wav2vec2-l-xls-r-test-09 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.0054 - Wer: 0.5750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.001 | 2.09 | 400 | 1.3499 | 0.9595 | | 0.8606 | 4.19 | 800 | 0.8159 | 0.6942 | | 0.5819 | 6.28 | 1200 | 0.7372 | 0.6700 | | 0.4751 | 8.38 | 1600 | 0.7310 | 0.6405 | | 0.3777 | 10.47 | 2000 | 0.7841 | 0.6414 | | 0.2918 | 12.57 | 2400 | 0.7898 | 0.5951 | | 0.2209 | 14.66 | 2800 | 0.8558 | 0.5751 | | 0.1671 | 16.75 | 3200 | 0.9881 | 0.5979 | | 0.129 | 18.85 | 3600 | 1.0054 | 0.5750 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
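As a hedged usage sketch not present in the original card, the checkpoint can be queried through the `automatic-speech-recognition` pipeline; `sample.wav` is a placeholder path and the audio is assumed to be 16 kHz mono.

```python
from transformers import pipeline

# Transcription sketch for the fine-tuned checkpoint described above.
asr = pipeline("automatic-speech-recognition", model="Khalsuu/english-filipino-wav2vec2-l-xls-r-test-09")

# "sample.wav" is a placeholder; wav2vec2 models expect 16 kHz mono audio.
result = asr("sample.wav")
print(result["text"])
```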
Jiexing/sparc_add_coref_t5_3b-2432
88c2bdc5f045f5c0068a3887b7f653474b6d0dba
2022-05-08T04:51:29.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Jiexing
null
Jiexing/sparc_add_coref_t5_3b-2432
2
null
transformers
25,865
Entry not found
KoichiYasuoka/roberta-base-coptic
36e60145b360bd013e94915df25c7b06b50e7423
2022-05-08T05:16:14.000Z
[ "pytorch", "roberta", "fill-mask", "cop", "transformers", "coptic", "masked-lm", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
KoichiYasuoka
null
KoichiYasuoka/roberta-base-coptic
2
null
transformers
25,866
--- language: - "cop" tags: - "coptic" - "masked-lm" license: "cc-by-sa-4.0" pipeline_tag: "fill-mask" mask_token: "[MASK]" --- # roberta-base-coptic ## Model Description This is a RoBERTa model pre-trained on Coptic Scriptorium Corpora. You can fine-tune `roberta-base-coptic` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-coptic-upos), dependency-parsing, and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-coptic") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-coptic") ```
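A minimal fill-mask sketch, assuming the pipeline API; the input string is a placeholder, since a meaningful example requires actual Coptic text containing one `[MASK]` token.

```python
from transformers import pipeline

# Fill-mask sketch; the card states the mask token is [MASK].
unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-coptic")

# Placeholder input, not real Coptic: replace with a Coptic sentence containing a single [MASK].
for prediction in unmasker("... [MASK] ..."):
    print(prediction["token_str"], round(prediction["score"], 4))
```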
Henrywang/dummy-model
58a4f28d4bcfbe2db38f3b254f8cdc7206fc0996
2022-05-08T08:41:54.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Henrywang
null
Henrywang/dummy-model
2
0
transformers
25,867
Entry not found
pier297/autotrain-chemprot-re-838426740
fb09b7130ff43ec43d7c670f21389bf509bd9f16
2022-05-08T09:31:00.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:pier297/autotrain-data-chemprot-re", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
pier297
null
pier297/autotrain-chemprot-re-838426740
2
1
transformers
25,868
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - pier297/autotrain-data-chemprot-re co2_eq_emissions: 0.0911766483095575 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 838426740 - CO2 Emissions (in grams): 0.0911766483095575 ## Validation Metrics - Loss: 0.3866589665412903 - Accuracy: 0.9137332672285573 - Macro F1: 0.6518117007658014 - Micro F1: 0.9137332672285573 - Weighted F1: 0.9110993117549759 - Macro Precision: 0.649358664024301 - Micro Precision: 0.9137332672285573 - Weighted Precision: 0.9091854625539633 - Macro Recall: 0.6551854233645032 - Micro Recall: 0.9137332672285573 - Weighted Recall: 0.9137332672285573 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pier297/autotrain-chemprot-re-838426740 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("pier297/autotrain-chemprot-re-838426740", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("pier297/autotrain-chemprot-re-838426740", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
theojolliffe/distill-pegasus-cnn-arxiv-pubmed-v3-e16
0248f1aed405def2e7ed0c3c8dc46816cd18c8f8
2022-05-08T14:17:05.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
theojolliffe
null
theojolliffe/distill-pegasus-cnn-arxiv-pubmed-v3-e16
2
null
transformers
25,869
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: distill-pegasus-cnn-arxiv-pubmed-v3-e16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distill-pegasus-cnn-arxiv-pubmed-v3-e16 This model is a fine-tuned version of [theojolliffe/distill-pegasus-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distill-pegasus-cnn-arxiv-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4922 - Rouge1: 53.3238 - Rouge2: 36.6165 - Rougel: 38.9255 - Rougelsum: 50.4853 - Gen Len: 125.7407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 2.7655 | 1.0 | 795 | 2.1110 | 49.0541 | 29.7039 | 33.8403 | 44.2825 | 126.1296 | | 2.2882 | 2.0 | 1590 | 1.9469 | 48.4651 | 30.1425 | 33.9702 | 44.3518 | 125.7778 | | 2.1958 | 3.0 | 2385 | 1.8079 | 49.2302 | 31.0952 | 34.4448 | 45.5764 | 125.7778 | | 2.0221 | 4.0 | 3180 | 1.7501 | 48.1928 | 29.9098 | 33.0587 | 44.6023 | 125.3148 | | 1.9078 | 5.0 | 3975 | 1.6677 | 49.697 | 31.671 | 34.3162 | 46.5108 | 125.5185 | | 1.8624 | 6.0 | 4770 | 1.6393 | 49.6517 | 31.7371 | 35.2019 | 46.2846 | 125.6852 | | 1.7853 | 7.0 | 5565 | 1.6038 | 50.3151 | 33.0952 | 36.0028 | 47.3894 | 125.6852 | | 1.7513 | 8.0 | 6360 | 1.5717 | 50.299 | 33.038 | 35.6841 | 47.4086 | 124.5556 | | 1.7026 | 9.0 | 7155 | 1.5570 | 51.6216 | 34.7609 | 37.5598 | 48.5247 | 124.7037 | | 1.6999 | 10.0 | 7950 | 1.5365 | 51.0888 | 34.2642 | 37.0603 | 48.5712 | 125.3519 | | 1.6832 | 11.0 | 8745 | 1.5249 | 51.3422 | 34.2941 | 37.7111 | 48.556 | 124.9259 | | 1.6093 | 12.0 | 9540 | 1.5092 | 51.4622 | 34.6397 | 38.1768 | 48.6346 | 124.8889 | | 1.6049 | 13.0 | 10335 | 1.5002 | 52.2463 | 35.4629 | 38.2049 | 49.4066 | 124.7963 | | 1.5904 | 14.0 | 11130 | 1.4957 | 51.6498 | 34.9739 | 38.4215 | 48.9704 | 125.0185 | | 1.5963 | 15.0 | 11925 | 1.4920 | 52.769 | 35.9563 | 38.4861 | 49.9185 | 125.6481 | | 1.5742 | 16.0 | 12720 | 1.4922 | 53.3238 | 36.6165 | 38.9255 | 50.4853 | 125.7407 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Siyam/Dansk-wav2vec2-stt
7210bd5219b2e70827ec36963094c77fcd109042
2022-05-08T20:58:42.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Siyam
null
Siyam/Dansk-wav2vec2-stt
2
null
transformers
25,870
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: Dansk-wav2vec2-stt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Dansk-wav2vec2-stt This model is a fine-tuned version of [Siyam/Dansk-wav2vec21](https://huggingface.co/Siyam/Dansk-wav2vec21) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7500 - Wer: 0.3929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0298 | 4.26 | 400 | 0.8420 | 0.4579 | | 0.0479 | 8.51 | 800 | 0.8713 | 0.4461 | | 0.0387 | 12.77 | 1200 | 0.8307 | 0.4404 | | 0.0336 | 17.02 | 1600 | 0.8322 | 0.4144 | | 0.0322 | 21.28 | 2000 | 0.7493 | 0.4081 | | 0.0288 | 25.53 | 2400 | 0.7361 | 0.3951 | | 0.0264 | 29.79 | 2800 | 0.7500 | 0.3929 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.10.3
lilitket/20220509-013433
7b41d6ee183070a8ef5833a423e5d5aba7e79f3b
2022-05-08T23:20:24.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
lilitket
null
lilitket/20220509-013433
2
null
transformers
25,871
Entry not found
kaakekhan/tiny-bert-sst2-distilled
2ddb3e97bf10c7eae895c9ab79f6b88650861c23
2022-05-09T00:39:14.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
kaakekhan
null
kaakekhan/tiny-bert-sst2-distilled
2
null
transformers
25,872
Entry not found
BigSalmon/InformalToFormalLincoln44
32c25a13ba4d1fd93b38918d00c11383c025acb9
2022-05-09T01:38:00.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/InformalToFormalLincoln44
2
null
transformers
25,873
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln44") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln44") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ``` ``` original: had trouble deciding. translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation. *** original: ```
JGraves/distilbert-base-uncased-finetuned-ner
f2a97250d28cfb07b7d5bfeac3a3a4f8cc0c697f
2022-05-13T03:46:10.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
JGraves
null
JGraves/distilbert-base-uncased-finetuned-ner
2
null
transformers
25,874
Entry not found
Diegomejia/ucb-bert-finetunned
adfb29c39c3f20e5df2276efc1957dfbeb7b0732
2022-05-11T06:31:03.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Diegomejia
null
Diegomejia/ucb-bert-finetunned
2
null
transformers
25,875
Entry not found
Xikun/greaselm-obqa
c1025505c85bcc185fee5623c217fba1fe8b894c
2022-05-09T05:15:00.000Z
[ "pytorch", "greaselm", "transformers" ]
null
false
Xikun
null
Xikun/greaselm-obqa
2
null
transformers
25,876
Entry not found
anuragshas/wav2vec2-xls-r-300m-hi-cv9-with-lm
3a2d8e2bd3a69db69db2f4900b36580bac8e9eb3
2022-05-25T14:56:19.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hi", "dataset:mozilla-foundation/common_voice_9_0", "transformers", "mozilla-foundation/common_voice_9_0", "generated_from_trainer", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
anuragshas
null
anuragshas/wav2vec2-xls-r-300m-hi-cv9-with-lm
2
null
transformers
25,877
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_9_0 - generated_from_trainer - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_9_0 metrics: - wer model-index: - name: XLS-R-300M - Hindi results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: type: mozilla-foundation/common_voice_9_0 name: Common Voice 9 args: hi metrics: - type: wer value: 21.145 name: Test WER - name: Test CER type: cer value: 7.709 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.5164 - Wer: 0.3349 - Cer: 0.1082 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 9815 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:----:|:---------------:|:------:|:------:| | 3.9471 | 8.16 | 400 | 3.7109 | 1.0 | 1.0 | | 3.274 | 16.32 | 800 | 3.1582 | 0.9917 | 0.9573 | | 1.5889 | 24.48 | 1200 | 0.7763 | 0.6030 | 0.1990 | | 1.3647 | 32.65 | 1600 | 0.6051 | 0.5135 | 0.1687 | | 1.2532 | 40.81 | 2000 | 0.5423 | 0.4712 | 0.1539 | | 1.1905 | 48.97 | 2400 | 0.5180 | 0.4532 | 0.1490 | | 1.1193 | 57.14 | 2800 | 0.4906 | 0.4248 | 0.1393 | | 1.0584 | 65.3 | 3200 | 0.4854 | 0.4069 | 0.1332 | | 1.0095 | 73.46 | 3600 | 0.4780 | 0.3926 | 0.1287 | | 0.9759 | 81.63 | 4000 | 0.4666 | 0.3925 | 0.1269 | | 0.9593 | 89.79 | 4400 | 0.4808 | 0.3830 | 0.1247 | | 0.909 | 97.95 | 4800 | 0.4798 | 0.3765 | 0.1212 | | 0.8788 | 106.12 | 5200 | 0.4906 | 0.3608 | 0.1162 | | 0.8471 | 114.28 | 5600 | 0.4759 | 0.3604 | 0.1166 | | 0.8116 | 122.44 | 6000 | 0.5080 | 0.3627 | 0.1176 | | 0.7881 | 130.61 | 6400 | 0.4868 | 0.3489 | 0.1135 | | 0.766 | 138.77 | 6800 | 0.4955 | 0.3492 | 0.1136 | | 0.7333 | 146.93 | 7200 | 0.5019 | 0.3461 | 0.1125 | | 0.709 | 155.1 | 7600 | 0.5084 | 0.3468 | 0.1117 | | 0.6911 | 163.26 | 8000 | 0.5144 | 0.3412 | 0.1106 | | 0.6683 | 171.42 | 8400 | 0.5219 | 0.3409 | 0.1117 | | 0.659 | 179.59 | 8800 | 0.5230 | 0.3376 | 0.1096 | | 0.6475 | 187.75 | 9200 | 0.5229 | 0.3398 | 0.1097 | | 0.6419 | 195.91 | 9600 | 0.5200 | 0.3337 | 0.1084 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.1.1.dev0 - Tokenizers 0.12.1
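The hyperparameter list above maps fairly directly onto `TrainingArguments`; the sketch below is a reconstruction under stated assumptions (the output directory is a placeholder), not the script actually used for this run.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed in this card; not the original training script.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-hi-cv9",   # placeholder path
    learning_rate=7.5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,              # total train batch size 128
    warmup_ratio=0.1,
    max_steps=9815,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                                  # "Native AMP" mixed precision
)
```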
huggingtweets/computerforever
dfbdbe2da7f21be2606d15bffe48b5c3f91aa9e3
2022-05-09T05:19:58.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/computerforever
2
null
transformers
25,878
--- language: en thumbnail: http://www.huggingtweets.com/computerforever/1652073594573/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1518444670266839045/38xr9OAd_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">computer sweetie</div> <div style="text-align: center; font-size: 14px;">@computerforever</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from computer sweetie. | Data | computer sweetie | | --- | --- | | Tweets downloaded | 2170 | | Retweets | 48 | | Short tweets | 313 | | Tweets kept | 1809 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9j3sj0ot/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @computerforever's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iw1hcff) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iw1hcff/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/computerforever') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
PrajwalS/wav2vec2_med_custom_train_large
a1551564b94ee28f5a6254e5e10b8c82bdd844e1
2022-05-09T09:15:28.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
PrajwalS
null
PrajwalS/wav2vec2_med_custom_train_large
2
null
transformers
25,879
Entry not found
ChrisRhw/DialoGPT-medium-Chizuru
4871e2a65b1d704be141029c1b864675855059ab
2022-05-09T06:01:42.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ChrisRhw
null
ChrisRhw/DialoGPT-medium-Chizuru
2
null
transformers
25,880
--- tags: - conversational ---
fujiki/t5-base-en2ja
f55aa99ce2d1c7a6ebaf11ad0c918e73a3ed8826
2022-05-11T19:43:53.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
fujiki
null
fujiki/t5-base-en2ja
2
null
transformers
25,881
# Tokenizer - The tokenizer is imported from [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese). # License [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.ja)
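A hedged end-to-end sketch; the card does not document whether a task prefix is expected, so the English sentence is passed as-is, and the generation length is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# English-to-Japanese generation sketch for this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("fujiki/t5-base-en2ja")
model = AutoModelForSeq2SeqLM.from_pretrained("fujiki/t5-base-en2ja")

inputs = tokenizer("This is a test sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```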
Jiexing/cosql_add_coref_t5_3b-1280
a02f8d814a31b8549f1eef4cd7ee877cb62aeeaa
2022-05-09T08:52:01.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Jiexing
null
Jiexing/cosql_add_coref_t5_3b-1280
2
null
transformers
25,882
Entry not found
ders/wav2vec2-large-xlsr-53-demo-laptop-hp-omen-15-dc1xxx-gpu
37859949c47c93f1e7416b90a64df16e011fcd4d
2022-05-14T17:41:01.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
ders
null
ders/wav2vec2-large-xlsr-53-demo-laptop-hp-omen-15-dc1xxx-gpu
2
null
transformers
25,883
Entry not found
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e10
b367aeeb3b4ed83829d4e9ea88636d9467cd8ba2
2022-05-09T12:37:02.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
theojolliffe
null
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e10
2
null
transformers
25,884
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-cnn-pubmed-arxiv-pubmed-v3-e10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-pubmed-arxiv-pubmed-v3-e10 This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8410 - Rouge1: 56.5123 - Rouge2: 41.1641 - Rougel: 43.4495 - Rougelsum: 54.544 - Gen Len: 141.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.254 | 1.0 | 795 | 0.9244 | 52.4478 | 32.5958 | 34.8756 | 49.8059 | 142.0 | | 0.6985 | 2.0 | 1590 | 0.8156 | 52.4786 | 33.2296 | 35.5063 | 49.737 | 141.7963 | | 0.5252 | 3.0 | 2385 | 0.7821 | 52.0494 | 32.953 | 36.5502 | 49.7292 | 142.0 | | 0.3389 | 4.0 | 3180 | 0.7422 | 53.5408 | 36.2206 | 39.8389 | 51.6693 | 142.0 | | 0.26 | 5.0 | 3975 | 0.7670 | 54.4279 | 36.5972 | 40.255 | 52.0877 | 142.0 | | 0.1678 | 6.0 | 4770 | 0.8106 | 54.6811 | 37.8329 | 40.8512 | 52.3482 | 141.963 | | 0.1243 | 7.0 | 5565 | 0.7926 | 54.5081 | 37.9596 | 41.912 | 52.5097 | 142.0 | | 0.0967 | 8.0 | 6360 | 0.8079 | 56.0795 | 40.0954 | 43.7055 | 54.2041 | 142.0 | | 0.0709 | 9.0 | 7155 | 0.8390 | 55.5257 | 38.5546 | 42.1562 | 53.5524 | 141.963 | | 0.0691 | 10.0 | 7950 | 0.8410 | 56.5123 | 41.1641 | 43.4495 | 54.544 | 141.6667 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
guhuawuli/distilbert-base-uncased-finetuned-cola
cb06312f2fba67ba42f3299f68411c46ee01b786
2022-05-09T13:05:15.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
guhuawuli
null
guhuawuli/distilbert-base-uncased-finetuned-cola
2
null
transformers
25,885
Entry not found
husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4
77a95ee910d37c9187202ad86079a23a7b7de35e
2022-05-10T04:41:58.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
husnu
null
husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4
2
null
transformers
25,886
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3201 - Wer: 0.3295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 11 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.9268 | 0.51 | 400 | 1.3204 | 0.9175 | | 0.7491 | 1.02 | 800 | 0.5880 | 0.6388 | | 0.4911 | 1.53 | 1200 | 0.4680 | 0.5613 | | 0.4265 | 2.04 | 1600 | 0.4213 | 0.5059 | | 0.3473 | 2.55 | 2000 | 0.4199 | 0.4955 | | 0.3291 | 3.07 | 2400 | 0.4323 | 0.5061 | | 0.2819 | 3.58 | 2800 | 0.4026 | 0.4490 | | 0.2628 | 4.09 | 3200 | 0.3831 | 0.4446 | | 0.2371 | 4.6 | 3600 | 0.3622 | 0.4234 | | 0.2274 | 5.11 | 4000 | 0.3473 | 0.4012 | | 0.2051 | 5.62 | 4400 | 0.3471 | 0.3998 | | 0.1985 | 6.13 | 4800 | 0.3759 | 0.4088 | | 0.1767 | 6.64 | 5200 | 0.3620 | 0.4012 | | 0.1707 | 7.15 | 5600 | 0.3415 | 0.3700 | | 0.1559 | 7.66 | 6000 | 0.3317 | 0.3661 | | 0.147 | 8.17 | 6400 | 0.3265 | 0.3618 | | 0.1339 | 8.68 | 6800 | 0.3293 | 0.3586 | | 0.126 | 9.2 | 7200 | 0.3386 | 0.3458 | | 0.1149 | 9.71 | 7600 | 0.3305 | 0.3397 | | 0.1051 | 10.22 | 8000 | 0.3235 | 0.3354 | | 0.1005 | 10.73 | 8400 | 0.3201 | 0.3295 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.10.3
veronica320/SPTE_roberta-large-mnli_200
7315c8b18bdd2d5e4caf1f1d3544249f4f81f44e
2022-05-09T21:31:09.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
veronica320
null
veronica320/SPTE_roberta-large-mnli_200
2
null
transformers
25,887
Entry not found
veronica320/MPTE_MPE_roberta_200
d5623f589c7220c368572ea6e1e1b95761c01ca1
2022-05-09T21:31:45.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
veronica320
null
veronica320/MPTE_MPE_roberta_200
2
null
transformers
25,888
Entry not found
Kailash/wav2vec2-large-xls-r-300m-turkish-colab
f10e153fd8dff00c7a39b4add0fc0979aa79280a
2022-05-10T09:17:23.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
Kailash
null
Kailash/wav2vec2-large-xls-r-300m-turkish-colab
2
null
transformers
25,889
Entry not found
masakhane/mt5_en_yor_news
a6d7931a46d612e3950f7842bf6fd49eed26b11e
2022-05-10T12:59:17.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/mt5_en_yor_news
2
null
transformers
25,890
--- license: afl-3.0 ---
ceggian/bert_post_trained_reddit_batch512
0d791e93d813b15a324cc85ab68486548bc0d0d6
2022-05-10T13:53:42.000Z
[ "pytorch", "bert", "pretraining", "transformers" ]
null
false
ceggian
null
ceggian/bert_post_trained_reddit_batch512
2
null
transformers
25,891
Entry not found
moshew/MiniLM-L3-clinc-distilled
82e5220010c5fedccc92d695d84553035fa1e414
2022-05-10T16:53:49.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
moshew
null
moshew/MiniLM-L3-clinc-distilled
2
null
transformers
25,892
Entry not found
ziedhajyahia/autotrain-ok-848227025
b485844ca37dfb60b69a87b3c85fbf21f7ac351f
2022-05-10T15:21:50.000Z
[ "pytorch", "camembert", "text-classification", "fr", "dataset:ziedhajyahia/autotrain-data-ok", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
ziedhajyahia
null
ziedhajyahia/autotrain-ok-848227025
2
null
transformers
25,893
--- tags: autotrain language: fr widget: - text: "I love AutoTrain 🤗" datasets: - ziedhajyahia/autotrain-data-ok co2_eq_emissions: 5.096755166899446 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 848227025 - CO2 Emissions (in grams): 5.096755166899446 ## Validation Metrics - Loss: 2.1917402744293213 - Accuracy: 0.44666666666666666 - Macro F1: 0.20291677804725128 - Micro F1: 0.44666666666666666 - Weighted F1: 0.37709801275435956 - Macro Precision: 0.19919016697588127 - Micro Precision: 0.44666666666666666 - Weighted Precision: 0.3478004329004329 - Macro Recall: 0.23167713239141807 - Micro Recall: 0.44666666666666666 - Weighted Recall: 0.44666666666666666 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ziedhajyahia/autotrain-ok-848227025 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ziedhajyahia/autotrain-ok-848227025", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ziedhajyahia/autotrain-ok-848227025", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
IljaSamoilov/MBART-estonian-subtitles
5af8140ea5fec5faa71ec783191fbc2ab0324d9c
2022-05-11T08:12:33.000Z
[ "pytorch", "mbart", "text2text-generation", "et", "transformers", "autotrain_compatible" ]
text2text-generation
false
IljaSamoilov
null
IljaSamoilov/MBART-estonian-subtitles
2
null
transformers
25,894
--- language: - et widget: - text: "te olete ka noh, noh, päris korralikult ka Rahvusringhäälingu teatud mõttes sellisesse keerulisse olukorda pannud," - text: "Et, et, et miks mitte olla siis tasakaalus, ma noh, hüpoteetiliselt viskan selle palli üles," --- Model usage: ``` from transformers import MBart50Tokenizer, MBartForConditionalGeneration tokenizer = MBart50Tokenizer.from_pretrained("IljaSamoilov/MBART-estonian-subtitles", src_lang="et_EE", tgt_lang="et_EE") model = MBartForConditionalGeneration.from_pretrained("IljaSamoilov/MBART-estonian-subtitles") ```
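A hedged generation sketch continuing the loading snippet above; the input is one of the widget examples from this card, and the use of `forced_bos_token_id` follows the usual mBART-50 convention rather than anything documented here.

```python
from transformers import MBart50Tokenizer, MBartForConditionalGeneration

tokenizer = MBart50Tokenizer.from_pretrained("IljaSamoilov/MBART-estonian-subtitles", src_lang="et_EE", tgt_lang="et_EE")
model = MBartForConditionalGeneration.from_pretrained("IljaSamoilov/MBART-estonian-subtitles")

# One of the widget examples from the card.
text = "te olete ka noh, noh, päris korralikult ka Rahvusringhäälingu teatud mõttes sellisesse keerulisse olukorda pannud,"
inputs = tokenizer(text, return_tensors="pt")

# forced_bos_token_id follows the standard mBART-50 convention for the target language.
outputs = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["et_EE"], max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```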
nihaldsouza1/covid-hatespeech-detection
9d994ae7126dd55ec149821fcbc655399bac37cb
2022-05-10T18:40:34.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
nihaldsouza1
null
nihaldsouza1/covid-hatespeech-detection
2
null
transformers
25,895
Since the start of the COVID-19 pandemic, there has been a widespread increase in the amount of hate speech propagated online against the Asian community. This project builds upon and explores the work of He et al. Their COVID-HATE dataset contains 206 million tweets focused on anti-Asian hate speech. Using tweet data from before the COVID-19 pandemic, as well as the COVID-HATE dataset from He et al., we performed transfer learning. We tested several different models, including BERT, RoBERTa, LSTM, and BERT-CNN. Some of these models performed worse than He et al.'s baseline model, while others improved on it.
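A minimal inference sketch under stated assumptions: the example input is a placeholder, and the label names returned depend on the model's config, which this description does not document.

```python
from transformers import pipeline

# Text-classification sketch for the hate-speech detection checkpoint described above.
classifier = pipeline("text-classification", model="nihaldsouza1/covid-hatespeech-detection")

# Placeholder input; real inputs would be tweets.
print(classifier("Example tweet text goes here."))
```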
moshew/Mini-bert-distilled
819afe22dddded5b58ee8c434a4fe57781e2c299
2022-05-10T19:42:57.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
moshew
null
moshew/Mini-bert-distilled
2
null
transformers
25,896
Entry not found
enoriega/kw_pubmed_1000_0.00006
3887554baa4fe038dba9c3109b809e65f403011c
2022-05-10T20:48:49.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
enoriega
null
enoriega/kw_pubmed_1000_0.00006
2
null
transformers
25,897
Entry not found
huggingtweets/cdrsuperheroga1
cd7b6ad8d9d9d43f9d00e5f88f6ec2dd60240138
2022-05-11T01:15:45.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/cdrsuperheroga1
2
null
transformers
25,898
--- language: en thumbnail: http://www.huggingtweets.com/cdrsuperheroga1/1652231741388/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1244518578537160704/ZWf8X6PO_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">cdrsuperhero_gaming</div> <div style="text-align: center; font-size: 14px;">@cdrsuperheroga1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from cdrsuperhero_gaming. | Data | cdrsuperhero_gaming | | --- | --- | | Tweets downloaded | 2739 | | Retweets | 296 | | Short tweets | 858 | | Tweets kept | 1585 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3sjvj649/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cdrsuperheroga1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/227tkbwp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/227tkbwp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cdrsuperheroga1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
dreamerdeo/da-xlarge
7efb1992327bd16249f0b7679b53cb6edaa4bc50
2022-05-11T03:05:51.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
dreamerdeo
null
dreamerdeo/da-xlarge
2
null
transformers
25,899
Entry not found