modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
Luojike/autotrain-test_3-1071537591
Luojike
2022-07-01T15:04:07Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "unk", "dataset:Luojike/autotrain-data-test_3", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-01T14:59:39Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain πŸ€—" datasets: - Luojike/autotrain-data-test_3 co2_eq_emissions: 0.03985401798934018 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1071537591 - CO2 Emissions (in grams): 0.03985401798934018 ## Validation Metrics - Loss: 0.5283975601196289 - Accuracy: 0.7389705882352942 - Precision: 0.5032894736842105 - Recall: 0.3574766355140187 - AUC: 0.7135599403856304 - F1: 0.41803278688524587 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Luojike/autotrain-test_3-1071537591 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Luojike/autotrain-test_3-1071537591", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Luojike/autotrain-test_3-1071537591", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
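The Python snippet in the card above stops at the raw model outputs. A minimal sketch of turning those logits into a predicted label, assuming the `id2label` mapping stored in the checkpoint's config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Luojike/autotrain-test_3-1071537591"
model = AutoModelForSequenceClassification.from_pretrained(model_id, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the classes, then map the argmax to its label name
probs = logits.softmax(dim=-1)[0]
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```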
osanseviero/just-a-test
osanseviero
2022-07-01T13:51:55Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "jax", "roberta", "causal-lm", "sentence-similarity", "doi:10.57967/hf/0820", "license:cc-by-sa-4.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - causal-lm license: - cc-by-sa-4.0 --- # TODO: Name of Model TODO: Description ## Model Description TODO: Add relevant content (0) Base Transformer Type: RobertaModel (1) Pooling mean ## Usage (Sentence-Transformers) Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence"] model = SentenceTransformer(TODO) embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch # The next step is optional if you want your own pooling function. # Max Pooling - Take the max value over time for every dimension. def max_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value max_over_time = torch.max(token_embeddings, 1)[0] return max_over_time # Sentences we want sentence embeddings for sentences = ['This is an example sentence'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained(TODO) model = AutoModel.from_pretrained(TODO) # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## TODO: Training Procedure ## TODO: Evaluation Results ## TODO: Citing & Authors
osanseviero/full-sentence-distillroberta3
osanseviero
2022-07-01T13:51:38Z
13
2
sentence-transformers
[ "sentence-transformers", "pytorch", "jax", "roberta", "causal-lm", "sentence-similarity", "license:cc-by-sa-4.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - causal-lm license: - cc-by-sa-4.0 --- # TODO: Name of Model TODO: Description ## Model Description TODO: Add relevant content (0) Base Transformer Type: RobertaModel (1) Pooling mean ## Usage (Sentence-Transformers) Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence"] model = SentenceTransformer(TODO) embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch # The next step is optional if you want your own pooling function. # Max Pooling - Take the max value over time for every dimension. def max_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value max_over_time = torch.max(token_embeddings, 1)[0] return max_over_time # Sentences we want sentence embeddings for sentences = ['This is an example sentence'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained(TODO) model = AutoModel.from_pretrained(TODO) # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## TODO: Training Procedure ## TODO: Evaluation Results ## TODO: Citing & Authors
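The card above leaves the model name as TODO and only prints raw embeddings. A minimal sketch of scoring similarity between two sentences, filling the TODO with this record's repository ID and using sentence-transformers' `util.cos_sim` helper:

```python
from sentence_transformers import SentenceTransformer, util

# The card's TODO placeholder is filled with this record's repository ID
model = SentenceTransformer("osanseviero/full-sentence-distillroberta3")

sentences = ["This is an example sentence", "Each sentence is converted to a vector"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```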
dwing/distilbert-base-uncased-finetuned-emotion
dwing
2022-07-01T13:38:31Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-28T07:15:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9335 - name: F1 type: f1 value: 0.9336729469235073 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1616 - Accuracy: 0.9335 - F1: 0.9337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1003 | 1.0 | 250 | 0.1854 | 0.931 | 0.9311 | | 0.0891 | 2.0 | 500 | 0.1616 | 0.9335 | 0.9337 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
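The card lists metrics but no inference example; a minimal sketch using the text-classification pipeline (the emotion label set comes from the checkpoint's config, not from this card):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="dwing/distilbert-base-uncased-finetuned-emotion")
# Returns the top emotion label and its score for each input
print(classifier("I'm thrilled the training finally converged!"))
```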
osanseviero/Reinforce-Pixelcopter-PLE-v0
osanseviero
2022-07-01T13:32:44Z
0
1
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-01T13:32:34Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - metrics: - type: mean_reward value: 16.20 +/- 14.18 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
huggingtweets/dril-tacticalmaid
huggingtweets
2022-07-01T12:50:55Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-01T12:49:39Z
--- language: en thumbnail: http://www.huggingtweets.com/dril-tacticalmaid/1656679850409/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1498996796093509632/Z7VwFzOJ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Maid POLadin πŸŽͺ πŸ’™πŸ’›</div> <div style="text-align: center; font-size: 14px;">@dril-tacticalmaid</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Maid POLadin πŸŽͺ πŸ’™πŸ’›. | Data | wint | Maid POLadin πŸŽͺ πŸ’™πŸ’› | | --- | --- | --- | | Tweets downloaded | 3231 | 3225 | | Retweets | 487 | 2081 | | Short tweets | 295 | 290 | | Tweets kept | 2449 | 854 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20brgjpa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-tacticalmaid's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ev3hr7n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ev3hr7n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-tacticalmaid') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
dminiotas05/distilbert-base-uncased-finetuned-ft500_4class
dminiotas05
2022-07-01T12:43:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-01T12:21:52Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-ft500_4class results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ft500_4class This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1343 - Accuracy: 0.4853 - F1: 0.4777 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1837 | 1.0 | 188 | 1.1606 | 0.4313 | 0.4104 | | 1.0972 | 2.0 | 376 | 1.0929 | 0.488 | 0.4697 | | 1.0343 | 3.0 | 564 | 1.1017 | 0.4893 | 0.4651 | | 0.9781 | 4.0 | 752 | 1.1065 | 0.4993 | 0.4900 | | 0.9346 | 5.0 | 940 | 1.1343 | 0.4853 | 0.4777 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
dminiotas05/distilbert-base-uncased-finetuned-ft500_4
dminiotas05
2022-07-01T12:20:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-01T12:08:15Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-ft500_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ft500_4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1118 - Accuracy: 0.4807 - F1: 0.4638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1931 | 1.0 | 188 | 1.1525 | 0.4513 | 0.4333 | | 1.0982 | 2.0 | 376 | 1.1118 | 0.4807 | 0.4638 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
mousaazari/t5-test2sql
mousaazari
2022-07-01T12:14:46Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-01T11:12:13Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-test2sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-test2sql This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1207 - Rouge2 Precision: 0.9214 - Rouge2 Recall: 0.4259 - Rouge2 Fmeasure: 0.5578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | No log | 1.0 | 11 | 2.7293 | 0.1012 | 0.0305 | 0.0453 | | No log | 2.0 | 22 | 1.9009 | 0.0937 | 0.0292 | 0.0427 | | No log | 3.0 | 33 | 1.3525 | 0.1002 | 0.0349 | 0.0502 | | No log | 4.0 | 44 | 0.8837 | 0.1462 | 0.0529 | 0.0744 | | No log | 5.0 | 55 | 0.6460 | 0.5546 | 0.2531 | 0.3371 | | No log | 6.0 | 66 | 0.5050 | 0.729 | 0.3571 | 0.4631 | | No log | 7.0 | 77 | 0.4239 | 0.6944 | 0.3048 | 0.4088 | | No log | 8.0 | 88 | 0.3799 | 0.7868 | 0.3674 | 0.4807 | | No log | 9.0 | 99 | 0.3405 | 0.7266 | 0.3126 | 0.4213 | | No log | 10.0 | 110 | 0.3055 | 0.8447 | 0.3876 | 0.5104 | | No log | 11.0 | 121 | 0.2741 | 0.8546 | 0.3955 | 0.5201 | | No log | 12.0 | 132 | 0.2605 | 0.8676 | 0.4049 | 0.5308 | | No log | 13.0 | 143 | 0.2446 | 0.8424 | 0.3814 | 0.5047 | | No log | 14.0 | 154 | 0.2287 | 0.8659 | 0.3945 | 0.5238 | | No log | 15.0 | 165 | 0.2209 | 0.9064 | 0.4273 | 0.556 | | No log | 16.0 | 176 | 0.1990 | 0.888 | 0.409 | 0.5383 | | No log | 17.0 | 187 | 0.1941 | 0.9118 | 0.4305 | 0.5602 | | No log | 18.0 | 198 | 0.1785 | 0.9118 | 0.4305 | 0.5602 | | No log | 19.0 | 209 | 0.1669 | 0.919 | 0.4324 | 0.5636 | | No log | 20.0 | 220 | 0.1749 | 0.9138 | 0.4289 | 0.5608 | | No log | 21.0 | 231 | 0.1598 | 0.9047 | 0.4248 | 0.556 | | No log | 22.0 | 242 | 0.1501 | 0.9098 | 0.4294 | 0.5596 | | No log | 23.0 | 253 | 0.1456 | 0.9138 | 0.4307 | 0.5618 | | No log | 24.0 | 264 | 0.1419 | 0.893 | 0.4185 | 0.5467 | | No log | 25.0 | 275 | 0.1359 | 0.9005 | 0.4212 | 0.55 | | No log | 26.0 | 286 | 0.1338 | 0.8979 | 0.4212 | 0.5494 | | No log | 27.0 | 297 | 0.1319 | 0.9005 | 0.4212 | 0.55 | | No log | 28.0 | 308 | 0.1325 | 0.9005 | 0.4212 | 0.55 | | No log | 29.0 | 319 | 0.1335 | 0.9093 | 0.4231 | 0.5529 | | No log | 30.0 | 330 | 0.1240 | 0.9093 | 0.4231 | 0.5529 | | No log | 31.0 | 341 | 0.1222 | 0.9053 | 0.4231 | 0.5527 | | No log | 32.0 | 352 | 0.1265 | 0.9214 | 0.4259 | 0.5578 | | No log | 33.0 | 363 | 0.1286 | 0.9214 | 0.4259 | 0.5578 | | No log | 34.0 | 374 | 0.1283 | 0.9214 | 0.4259 | 0.5578 | | No log | 35.0 | 385 | 0.1279 | 0.9214 | 0.4259 | 0.5578 | | No log | 36.0 | 396 | 0.1285 | 0.9214 | 0.4259 | 0.5578 | | No log | 37.0 | 407 | 0.1291 | 0.9093 | 0.4231 | 0.5529 | | No log | 38.0 | 418 | 0.1270 | 0.9093 | 0.4231 | 0.5529 | | No log | 39.0 | 429 | 0.1225 | 0.9093 | 0.4231 | 0.5529 | | No log | 40.0 
| 440 | 0.1205 | 0.9093 | 0.4231 | 0.5529 | | No log | 41.0 | 451 | 0.1210 | 0.9093 | 0.4231 | 0.5529 | | No log | 42.0 | 462 | 0.1230 | 0.9093 | 0.4231 | 0.5529 | | No log | 43.0 | 473 | 0.1250 | 0.9093 | 0.4231 | 0.5529 | | No log | 44.0 | 484 | 0.1223 | 0.9214 | 0.4259 | 0.5578 | | No log | 45.0 | 495 | 0.1226 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 46.0 | 506 | 0.1213 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 47.0 | 517 | 0.1205 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 48.0 | 528 | 0.1203 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 49.0 | 539 | 0.1206 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 50.0 | 550 | 0.1207 | 0.9214 | 0.4259 | 0.5578 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
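The card above reports ROUGE scores but no inference example. A minimal sketch, assuming the model is queried like any other T5 seq2seq checkpoint (the exact input format used during fine-tuning is not documented in the card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mousaazari/t5-test2sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical natural-language question; the training prompt format is an assumption
question = "How many employees work in the sales department?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```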
emilys/twitter-roberta-base-CoNLL
emilys
2022-07-01T12:13:20Z
14
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-06-28T23:07:52Z
--- tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: twitter-roberta-base-CoNLL results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.953111963957951 - name: Recall type: recall value: 0.9612924941097274 - name: F1 type: f1 value: 0.9571847507331379 - name: Accuracy type: accuracy value: 0.9925820645613489 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-CoNLL This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0423 - Precision: 0.9531 - Recall: 0.9613 - F1: 0.9572 - Accuracy: 0.9926 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 64 - eval_batch_size: 1024 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.11 | 25 | 0.2063 | 0.6517 | 0.6659 | 0.6587 | 0.9386 | | No log | 0.23 | 50 | 0.0810 | 0.8373 | 0.8766 | 0.8565 | 0.9771 | | No log | 0.34 | 75 | 0.0651 | 0.8937 | 0.9058 | 0.8997 | 0.9827 | | No log | 0.45 | 100 | 0.0537 | 0.9014 | 0.9135 | 0.9074 | 0.9849 | | No log | 0.57 | 125 | 0.0464 | 0.9097 | 0.9244 | 0.9170 | 0.9867 | | No log | 0.68 | 150 | 0.0423 | 0.9243 | 0.9350 | 0.9296 | 0.9885 | | No log | 0.8 | 175 | 0.0381 | 0.9250 | 0.9438 | 0.9343 | 0.9900 | | No log | 0.91 | 200 | 0.0388 | 0.9264 | 0.9446 | 0.9354 | 0.9896 | | No log | 1.02 | 225 | 0.0394 | 0.9328 | 0.9441 | 0.9384 | 0.9898 | | No log | 1.14 | 250 | 0.0423 | 0.9348 | 0.9458 | 0.9403 | 0.9896 | | No log | 1.25 | 275 | 0.0432 | 0.9304 | 0.9406 | 0.9355 | 0.9892 | | No log | 1.36 | 300 | 0.0382 | 0.9393 | 0.9473 | 0.9433 | 0.9901 | | No log | 1.48 | 325 | 0.0381 | 0.9326 | 0.9504 | 0.9414 | 0.9901 | | No log | 1.59 | 350 | 0.0387 | 0.9337 | 0.9524 | 0.9429 | 0.9902 | | No log | 1.7 | 375 | 0.0365 | 0.9404 | 0.9475 | 0.9439 | 0.9901 | | No log | 1.82 | 400 | 0.0382 | 0.9431 | 0.9517 | 0.9474 | 0.9905 | | No log | 1.93 | 425 | 0.0373 | 0.9399 | 0.9524 | 0.9461 | 0.9903 | | No log | 2.05 | 450 | 0.0367 | 0.9440 | 0.9556 | 0.9497 | 0.9910 | | No log | 2.16 | 475 | 0.0396 | 0.9400 | 0.9551 | 0.9475 | 0.9907 | | 0.0771 | 2.27 | 500 | 0.0353 | 0.9442 | 0.9574 | 0.9508 | 0.9912 | | 0.0771 | 2.39 | 525 | 0.0394 | 0.9401 | 0.9507 | 0.9454 | 0.9906 | | 0.0771 | 2.5 | 550 | 0.0370 | 0.9447 | 0.9522 | 0.9485 | 0.9910 | | 0.0771 | 2.61 | 575 | 0.0352 | 0.9404 | 0.9541 | 0.9472 | 0.9908 | | 0.0771 | 2.73 | 600 | 0.0386 | 0.9345 | 0.9554 | 0.9448 | 0.9908 | | 0.0771 | 2.84 | 625 | 0.0366 | 0.9428 | 0.9576 | 0.9502 | 0.9916 | | 0.0771 | 2.95 | 650 | 0.0353 | 0.9427 | 0.9546 | 0.9486 | 0.9913 | | 0.0771 | 3.07 | 675 | 0.0359 | 0.9412 | 0.9544 | 0.9478 | 0.9911 | | 0.0771 | 3.18 | 700 | 
0.0356 | 0.9476 | 0.9593 | 0.9534 | 0.9920 | | 0.0771 | 3.3 | 725 | 0.0345 | 0.9484 | 0.9586 | 0.9535 | 0.9918 | | 0.0771 | 3.41 | 750 | 0.0345 | 0.9427 | 0.9557 | 0.9492 | 0.9916 | | 0.0771 | 3.52 | 775 | 0.0364 | 0.9389 | 0.9569 | 0.9478 | 0.9914 | | 0.0771 | 3.64 | 800 | 0.0360 | 0.9430 | 0.9584 | 0.9507 | 0.9915 | | 0.0771 | 3.75 | 825 | 0.0387 | 0.9458 | 0.9552 | 0.9505 | 0.9915 | | 0.0771 | 3.86 | 850 | 0.0347 | 0.9468 | 0.9576 | 0.9521 | 0.9917 | | 0.0771 | 3.98 | 875 | 0.0357 | 0.9445 | 0.9574 | 0.9509 | 0.9915 | | 0.0771 | 4.09 | 900 | 0.0382 | 0.9464 | 0.9578 | 0.9521 | 0.9918 | | 0.0771 | 4.2 | 925 | 0.0391 | 0.9475 | 0.9562 | 0.9518 | 0.9918 | | 0.0771 | 4.32 | 950 | 0.0428 | 0.9466 | 0.9547 | 0.9506 | 0.9912 | | 0.0771 | 4.43 | 975 | 0.0404 | 0.9459 | 0.9554 | 0.9506 | 0.9913 | | 0.0118 | 4.55 | 1000 | 0.0403 | 0.9375 | 0.9549 | 0.9461 | 0.9909 | | 0.0118 | 4.66 | 1025 | 0.0369 | 0.9482 | 0.9586 | 0.9534 | 0.9919 | | 0.0118 | 4.77 | 1050 | 0.0374 | 0.9457 | 0.9584 | 0.9520 | 0.9918 | | 0.0118 | 4.89 | 1075 | 0.0359 | 0.9507 | 0.9571 | 0.9539 | 0.9923 | | 0.0118 | 5.0 | 1100 | 0.0373 | 0.9453 | 0.9594 | 0.9523 | 0.9919 | | 0.0118 | 5.11 | 1125 | 0.0370 | 0.9499 | 0.9594 | 0.9546 | 0.9924 | | 0.0118 | 5.23 | 1150 | 0.0388 | 0.9510 | 0.9601 | 0.9555 | 0.9922 | | 0.0118 | 5.34 | 1175 | 0.0395 | 0.9486 | 0.9559 | 0.9522 | 0.9920 | | 0.0118 | 5.45 | 1200 | 0.0391 | 0.9495 | 0.9591 | 0.9543 | 0.9924 | | 0.0118 | 5.57 | 1225 | 0.0378 | 0.9517 | 0.9588 | 0.9552 | 0.9923 | | 0.0118 | 5.68 | 1250 | 0.0388 | 0.9515 | 0.9615 | 0.9565 | 0.9924 | | 0.0118 | 5.8 | 1275 | 0.0384 | 0.9512 | 0.9610 | 0.9560 | 0.9924 | | 0.0118 | 5.91 | 1300 | 0.0395 | 0.9530 | 0.9613 | 0.9571 | 0.9924 | | 0.0118 | 6.02 | 1325 | 0.0408 | 0.9499 | 0.9569 | 0.9534 | 0.9919 | | 0.0118 | 6.14 | 1350 | 0.0412 | 0.9481 | 0.9616 | 0.9548 | 0.9922 | | 0.0118 | 6.25 | 1375 | 0.0413 | 0.9521 | 0.9591 | 0.9556 | 0.9924 | | 0.0118 | 6.36 | 1400 | 0.0412 | 0.9466 | 0.9584 | 0.9525 | 0.9917 | | 0.0118 | 6.48 | 1425 | 0.0405 | 0.9504 | 0.9608 | 0.9556 | 0.9921 | | 0.0118 | 6.59 | 1450 | 0.0400 | 0.9517 | 0.9615 | 0.9566 | 0.9925 | | 0.0118 | 6.7 | 1475 | 0.0398 | 0.9510 | 0.9594 | 0.9552 | 0.9923 | | 0.0049 | 6.82 | 1500 | 0.0395 | 0.9523 | 0.9615 | 0.9569 | 0.9925 | | 0.0049 | 6.93 | 1525 | 0.0392 | 0.9520 | 0.9623 | 0.9571 | 0.9927 | | 0.0049 | 7.05 | 1550 | 0.0390 | 0.9511 | 0.9593 | 0.9552 | 0.9923 | | 0.0049 | 7.16 | 1575 | 0.0393 | 0.9520 | 0.9611 | 0.9565 | 0.9925 | | 0.0049 | 7.27 | 1600 | 0.0389 | 0.9512 | 0.9613 | 0.9562 | 0.9925 | | 0.0049 | 7.39 | 1625 | 0.0405 | 0.9518 | 0.9613 | 0.9565 | 0.9924 | | 0.0049 | 7.5 | 1650 | 0.0410 | 0.9512 | 0.9606 | 0.9559 | 0.9925 | | 0.0049 | 7.61 | 1675 | 0.0408 | 0.9526 | 0.9613 | 0.9569 | 0.9925 | | 0.0049 | 7.73 | 1700 | 0.0436 | 0.9482 | 0.9610 | 0.9545 | 0.9922 | | 0.0049 | 7.84 | 1725 | 0.0419 | 0.9495 | 0.9625 | 0.9560 | 0.9924 | | 0.0049 | 7.95 | 1750 | 0.0429 | 0.9525 | 0.9618 | 0.9571 | 0.9926 | | 0.0049 | 8.07 | 1775 | 0.0419 | 0.9509 | 0.9615 | 0.9562 | 0.9924 | | 0.0049 | 8.18 | 1800 | 0.0422 | 0.9510 | 0.9601 | 0.9555 | 0.9923 | | 0.0049 | 8.3 | 1825 | 0.0417 | 0.9521 | 0.9603 | 0.9562 | 0.9924 | | 0.0049 | 8.41 | 1850 | 0.0415 | 0.9529 | 0.9611 | 0.9570 | 0.9925 | | 0.0049 | 8.52 | 1875 | 0.0416 | 0.9523 | 0.9611 | 0.9567 | 0.9924 | | 0.0049 | 8.64 | 1900 | 0.0419 | 0.9504 | 0.9608 | 0.9556 | 0.9922 | | 0.0049 | 8.75 | 1925 | 0.0417 | 0.9520 | 0.9610 | 0.9564 | 0.9924 | | 0.0049 | 8.86 | 1950 | 0.0419 | 0.9535 | 0.9621 | 0.9578 | 0.9926 | | 0.0049 | 8.98 | 1975 | 
0.0422 | 0.9531 | 0.9620 | 0.9575 | 0.9927 | | 0.0022 | 9.09 | 2000 | 0.0423 | 0.9531 | 0.9613 | 0.9572 | 0.9926 | | 0.0022 | 9.2 | 2025 | 0.0426 | 0.9520 | 0.9615 | 0.9567 | 0.9925 | | 0.0022 | 9.32 | 2050 | 0.0425 | 0.9515 | 0.9606 | 0.9560 | 0.9925 | | 0.0022 | 9.43 | 2075 | 0.0422 | 0.9517 | 0.9613 | 0.9565 | 0.9925 | | 0.0022 | 9.55 | 2100 | 0.0423 | 0.9513 | 0.9606 | 0.9560 | 0.9925 | | 0.0022 | 9.66 | 2125 | 0.0424 | 0.9513 | 0.9605 | 0.9559 | 0.9925 | | 0.0022 | 9.77 | 2150 | 0.0423 | 0.9522 | 0.9611 | 0.9566 | 0.9925 | | 0.0022 | 9.89 | 2175 | 0.0423 | 0.9522 | 0.9613 | 0.9567 | 0.9925 | | 0.0022 | 10.0 | 2200 | 0.0422 | 0.9525 | 0.9616 | 0.9570 | 0.9925 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.3.2 - Tokenizers 0.12.1
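The CoNLL-2003 fine-tune above ships no usage snippet; a minimal sketch with the token-classification pipeline, assuming the default label mapping stored in the checkpoint:

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="emilys/twitter-roberta-base-CoNLL",
               aggregation_strategy="simple")  # merge sub-word pieces into whole entities
print(ner("Hugging Face is based in New York City."))
```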
jvanz/querido_diario_autoencoder
jvanz
2022-07-01T11:53:59Z
12
1
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "pt", "dataset:jvanz/querido_diario", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-27T12:48:50Z
--- language: - pt datasets: - jvanz/querido_diario --- # Querido Diario Autoencoder Autoencoder based on Portuguese BERT using the Querido Diario dataset
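The card is a one-line description with no usage example. A minimal sketch, assuming the checkpoint loads as a standard `EncoderDecoderModel` (the tags list `encoder-decoder`) and that its generation config defines a decoder start token:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "jvanz/querido_diario_autoencoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

# Hypothetical gazette-style sentence; an autoencoder should roughly reconstruct its input
text = "Prefeitura publica edital de licitaΓ§Γ£o para obras de pavimentaΓ§Γ£o."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```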
abhinav-kumar-thakur/distilbert-base-uncased-finetuned-mrpc
abhinav-kumar-thakur
2022-07-01T11:01:01Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-01T10:50:13Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8578431372549019 - name: F1 type: f1 value: 0.9006849315068494 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5556 - Accuracy: 0.8578 - F1: 0.9007 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.3937 | 0.8113 | 0.8670 | | No log | 2.0 | 460 | 0.3660 | 0.8480 | 0.8967 | | 0.4387 | 3.0 | 690 | 0.4298 | 0.8529 | 0.8973 | | 0.4387 | 4.0 | 920 | 0.5573 | 0.8529 | 0.8990 | | 0.1832 | 5.0 | 1150 | 0.5556 | 0.8578 | 0.9007 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
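MRPC is a sentence-pair task, so both sentences must be encoded together; the card omits an example. A minimal sketch, assuming the checkpoint's default `id2label` mapping:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "abhinav-kumar-thakur/distilbert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Pass the two sentences as a pair so they share one encoded sequence
inputs = tokenizer("The company posted record profits.",
                   "Record profits were reported by the company.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax())])
```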
huggingtweets/the_ironsheik
huggingtweets
2022-07-01T10:13:34Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-01T10:11:56Z
--- language: en thumbnail: http://www.huggingtweets.com/the_ironsheik/1656670410014/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1320863459953750016/NlmHwu3b_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The Iron Sheik</div> <div style="text-align: center; font-size: 14px;">@the_ironsheik</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from The Iron Sheik. | Data | The Iron Sheik | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 287 | | Short tweets | 253 | | Tweets kept | 2709 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ti6ikrg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_ironsheik's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2segcek8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2segcek8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/the_ironsheik') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
scaccomatto/autotrain-260-0-1068537269
scaccomatto
2022-07-01T10:05:22Z
3
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain", "en", "dataset:scaccomatto/autotrain-data-260-0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-01T09:53:56Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain πŸ€—" datasets: - scaccomatto/autotrain-data-260-0 co2_eq_emissions: 19.045065953636296 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1068537269 - CO2 Emissions (in grams): 19.045065953636296 ## Validation Metrics - Loss: 0.42951640486717224 - Rouge1: 85.4322 - Rouge2: 82.999 - RougeL: 84.8782 - RougeLsum: 85.1256 - Gen Len: 169.2895 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/scaccomatto/autotrain-260-0-1068537269 ```
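Beyond the hosted Inference API call above, a minimal local sketch with the summarization pipeline (the placeholder article text is illustrative only):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="scaccomatto/autotrain-260-0-1068537269")

article = "Replace this with the long document you want summarized ..."
print(summarizer(article)[0]["summary_text"])
```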
scaccomatto/autotrain-120-0-1067937173
scaccomatto
2022-07-01T09:09:50Z
3
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain", "en", "dataset:scaccomatto/autotrain-data-120-0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-01T08:59:18Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain πŸ€—" datasets: - scaccomatto/autotrain-data-120-0 co2_eq_emissions: 0.08625442844190523 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1067937173 - CO2 Emissions (in grams): 0.08625442844190523 ## Validation Metrics - Loss: 0.502437174320221 - Rouge1: 83.7457 - Rouge2: 81.1714 - RougeL: 83.2649 - RougeLsum: 83.3018 - Gen Len: 78.7059 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/scaccomatto/autotrain-120-0-1067937173 ```
scaccomatto/autotrain-60-50-1067437104
scaccomatto
2022-07-01T08:19:35Z
4
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain", "en", "dataset:scaccomatto/autotrain-data-60-50", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-01T08:04:39Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain πŸ€—" datasets: - scaccomatto/autotrain-data-60-50 co2_eq_emissions: 29.54716889998106 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1067437104 - CO2 Emissions (in grams): 29.54716889998106 ## Validation Metrics - Loss: 0.5487185120582581 - Rouge1: 77.4054 - Rouge2: 74.6166 - RougeL: 77.1503 - RougeLsum: 76.8399 - Gen Len: 42.0326 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/scaccomatto/autotrain-60-50-1067437104 ```
osanseviero/test_nemo
osanseviero
2022-07-01T06:48:03Z
0
0
null
[ "region:us" ]
null
2022-06-30T08:51:22Z
<iframe src="https://hf.space/embed/abidlabs/pytorch-image-classifier/+" frameBorder="0" width="100%" height="660px" title="Gradio app" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
dbarbedillo/dqn-SpaceInvadersNoFrameskip-v4
dbarbedillo
2022-07-01T06:34:18Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-01T06:33:30Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 955.50 +/- 322.11 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dbarbedillo -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dbarbedillo ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
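The RL Zoo commands above cover the CLI workflow; a minimal sketch of loading the agent programmatically with `huggingface_sb3` (the checkpoint filename follows the usual RL Zoo naming and is an assumption):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed to follow the RL Zoo convention <algo>-<env_id>.zip
checkpoint = load_from_hub(
    repo_id="dbarbedillo/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
print(model.policy)
```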
Tritkoman/EN-ROM
Tritkoman
2022-07-01T06:07:37Z
4
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain", "translation", "en", "hi", "dataset:Tritkoman/autotrain-data-rusynpann", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-07-01T05:43:35Z
--- tags: - autotrain - translation language: - en - hi datasets: - Tritkoman/autotrain-data-rusynpann co2_eq_emissions: 30.068537136776726 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 1066237031 - CO2 Emissions (in grams): 30.068537136776726 ## Validation Metrics - Loss: 2.461327075958252 - SacreBLEU: 13.8452 - Gen len: 13.2313
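The card reports SacreBLEU but gives no inference example. A minimal sketch, assuming the mT5 checkpoint translates plain English input without a task prefix (the card does not document the expected input format):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Tritkoman/EN-ROM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The village lies beyond the mountains.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```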
Buyandelger/roberta-base-ner-demo
Buyandelger
2022-07-01T03:58:26Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "mn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-01T03:49:28Z
--- language: - mn tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-base-ner-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-ner-demo This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0771 - Precision: 0.8802 - Recall: 0.8951 - F1: 0.8876 - Accuracy: 0.9798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0336 | 1.0 | 477 | 0.0771 | 0.8802 | 0.8951 | 0.8876 | 0.9798 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
bayartsogt/roberta-base-ner
bayartsogt
2022-07-01T01:51:15Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "mn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-01T01:15:27Z
--- language: - mn tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-base-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-ner This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1328 - Precision: 0.9248 - Recall: 0.9325 - F1: 0.9286 - Accuracy: 0.9805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.17 | 1.0 | 477 | 0.0823 | 0.8652 | 0.9001 | 0.8823 | 0.9739 | | 0.0567 | 2.0 | 954 | 0.0883 | 0.9070 | 0.9296 | 0.9182 | 0.9778 | | 0.0278 | 3.0 | 1431 | 0.0904 | 0.9165 | 0.9302 | 0.9233 | 0.9789 | | 0.0158 | 4.0 | 1908 | 0.0945 | 0.9220 | 0.9301 | 0.9260 | 0.9798 | | 0.0089 | 5.0 | 2385 | 0.1118 | 0.9227 | 0.9287 | 0.9257 | 0.9799 | | 0.0061 | 6.0 | 2862 | 0.1154 | 0.9212 | 0.9309 | 0.9260 | 0.9803 | | 0.0037 | 7.0 | 3339 | 0.1240 | 0.9253 | 0.9320 | 0.9286 | 0.9806 | | 0.0023 | 8.0 | 3816 | 0.1293 | 0.9232 | 0.9316 | 0.9274 | 0.9803 | | 0.0013 | 9.0 | 4293 | 0.1323 | 0.9253 | 0.9332 | 0.9292 | 0.9806 | | 0.0012 | 10.0 | 4770 | 0.1328 | 0.9248 | 0.9325 | 0.9286 | 0.9805 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
arize-ai/XLM-RoBERTa-xtreme-en-token-drift
arize-ai
2022-07-01T01:48:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme_en_token_drift", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-01T00:35:55Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme_en_token_drift metrics: - accuracy - f1 widget: - text: "My name is Julia, I study at Imperial College, in London" example_title: "Example 1" - text: "My name is Sarah and I live in Paris" example_title: "Example 2" - text: "My name is Clara and I live in Berkeley, California" example_title: "Example 3" model-index: - name: XLM-RoBERTa-xtreme-en-token-drift results: - task: name: Token Classification type: token-classification dataset: name: xtreme_en_token_drift type: xtreme_en_token_drift args: default metrics: - name: Accuracy type: accuracy value: 0.908855961405927 - name: F1 type: f1 value: 0.76126567683807 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLM-RoBERTa-xtreme-en-token-drift This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme_en_token_drift dataset. It achieves the following results on the evaluation set: - Loss: 0.2802 - Accuracy: 0.9089 - F1: 0.7613 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6398 | 1.0 | 161 | 0.3421 | 0.8973 | 0.7111 | | 0.3268 | 2.0 | 322 | 0.2846 | 0.9097 | 0.7611 | | 0.2701 | 3.0 | 483 | 0.2802 | 0.9089 | 0.7613 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
BigSalmon/InformalToFormalLincoln53
BigSalmon
2022-07-01T00:59:52Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-01T00:50:11Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln53") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln53") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - nebraska - unicamerical legislature - different from federal house and senate text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. 
``` ngos are characterized by: β–‘ voluntary citizens' group that is organized on a local, national or international level β–‘ encourage political participation β–‘ often serve humanitarian functions β–‘ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? β–‘ noise β–‘ parking β–‘ traffic β–‘ security β–‘ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ``` ``` original: had trouble deciding. translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation. *** original: ``` ``` input: not loyal 1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ). *** input: ```
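The card above loads the model and tokenizer but never shows a generation call. A minimal sketch that continues one of the card's own prompt formats (sampling settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BigSalmon/InformalToFormalLincoln53"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = ("informal english: corn fields are all across illinois, visible once you leave chicago.\n"
          "Translated into the Style of Abraham Lincoln:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```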
dperezjr/wav2vec2-large-xls-r-300m-turkish-colab
dperezjr
2022-06-30T22:25:27Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-06-30T17:48:36Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3783 - Wer: 0.3036 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.0054 | 3.67 | 400 | 0.7096 | 0.6999 | | 0.4061 | 7.34 | 800 | 0.4152 | 0.4637 | | 0.1797 | 11.01 | 1200 | 0.4008 | 0.4164 | | 0.1201 | 14.68 | 1600 | 0.4275 | 0.4152 | | 0.0937 | 18.35 | 2000 | 0.4297 | 0.3978 | | 0.074 | 22.02 | 2400 | 0.3670 | 0.3618 | | 0.0602 | 25.69 | 2800 | 0.3875 | 0.3129 | | 0.0472 | 29.36 | 3200 | 0.3783 | 0.3036 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
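The card above does not include an inference snippet. Below is a minimal sketch using the `transformers` automatic-speech-recognition pipeline; the file name `audio.wav` is a placeholder for a 16 kHz mono Turkish speech recording, not something taken from the card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into an ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="dperezjr/wav2vec2-large-xls-r-300m-turkish-colab",
)

# "audio.wav" is a placeholder; XLS-R expects 16 kHz mono audio
print(asr("audio.wav")["text"])
```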
danieladejumo/ppo-MountainCarContinuous-v0
danieladejumo
2022-06-30T21:12:18Z
2
0
stable-baselines3
[ "stable-baselines3", "MountainCarContinuous-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-06-30T21:11:53Z
--- library_name: stable-baselines3 tags: - MountainCarContinuous-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 87.80 +/- 0.28 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: MountainCarContinuous-v0 type: MountainCarContinuous-v0 --- # **PPO** Agent playing **MountainCarContinuous-v0** This is a trained model of a **PPO** agent playing **MountainCarContinuous-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo --env MountainCarContinuous-v0 -orga danieladejumo -f logs/ python enjoy.py --algo ppo --env MountainCarContinuous-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env MountainCarContinuous-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo --env MountainCarContinuous-v0 -f logs/ -orga danieladejumo ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('clip_range', 0.1), ('ent_coef', 0.00429), ('gae_lambda', 0.9), ('gamma', 0.9999), ('learning_rate', 7.77e-05), ('max_grad_norm', 5), ('n_envs', 1), ('n_epochs', 10), ('n_steps', 8), ('n_timesteps', 20000.0), ('normalize', True), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(log_std_init=-3.29, ortho_init=False)'), ('use_sde', True), ('vf_coef', 0.19), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
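Besides the RL Zoo scripts above, the checkpoint can also be pulled directly in Python. The sketch below uses the `huggingface_sb3` helper; the archive name `ppo-MountainCarContinuous-v0.zip` is an assumption about how the file is stored in this repo, and the rollout uses the older gym API (`reset()` returns only the observation, `step()` returns four values).

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub (filename is assumed; check the repo files)
checkpoint = load_from_hub(
    repo_id="danieladejumo/ppo-MountainCarContinuous-v0",
    filename="ppo-MountainCarContinuous-v0.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the loaded policy
env = gym.make("MountainCarContinuous-v0")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```

Note that the hyperparameters list `normalize: True`, so a faithful evaluation also needs the saved `VecNormalize` statistics from the repo; the sketch above skips that step.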
ashraq/ml-latest-small-movie-model-32
ashraq
2022-06-30T20:50:05Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2022-06-30T20:50:00Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
Chemsseddine/bert2gpt2_med_v2
Chemsseddine
2022-06-30T19:53:14Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-25T12:50:21Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: bert2gpt2_med_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <img src="https://huggingface.co/Chemsseddine/bert2gpt2_med_ml_orange_summ-finetuned_med_sum_new-finetuned_med_sum_new/resolve/main/logobert2gpt2.png" alt="Map of positive probabilities per country." width="200"/> # bert2gpt2_med_v2 This model is a fine-tuned version of [Chemsseddine/bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum](https://huggingface.co/Chemsseddine/bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0684 - Rouge1: 34.1248 - Rouge2: 17.7006 - Rougel: 33.4661 - Rougelsum: 33.4419 - Gen Len: 22.6429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.9107 | 1.0 | 1000 | 2.0877 | 30.4547 | 14.4024 | 30.3642 | 30.3788 | 21.9714 | | 1.8782 | 2.0 | 2000 | 1.8151 | 32.6607 | 16.8089 | 32.3844 | 32.4762 | 21.7714 | | 1.291 | 3.0 | 3000 | 1.7523 | 33.6391 | 16.7866 | 32.4256 | 32.3306 | 22.7429 | | 0.819 | 4.0 | 4000 | 1.7650 | 35.0633 | 19.1222 | 34.4902 | 34.6796 | 22.4714 | | 0.4857 | 5.0 | 5000 | 1.8129 | 33.8763 | 16.9303 | 32.8845 | 32.9225 | 22.3857 | | 0.3232 | 6.0 | 6000 | 1.9339 | 33.9272 | 17.1784 | 32.9301 | 33.0253 | 22.4286 | | 0.2022 | 7.0 | 7000 | 1.9634 | 33.9869 | 16.4238 | 33.7336 | 33.65 | 22.6429 | | 0.1452 | 8.0 | 8000 | 2.0090 | 33.8892 | 18.2723 | 33.7514 | 33.6531 | 22.5714 | | 0.0845 | 9.0 | 9000 | 2.0337 | 33.9649 | 17.1339 | 33.5061 | 33.4157 | 22.7857 | | 0.0531 | 10.0 | 10000 | 2.0684 | 34.1248 | 17.7006 | 33.4661 | 33.4419 | 22.6429 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
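No usage snippet is included above. A minimal sketch for running this encoder-decoder checkpoint as a summarizer is given below; the input text and generation length are placeholders, and it assumes the repository ships a tokenizer usable for both encoding and decoding and that `decoder_start_token_id` is set in the saved config.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("Chemsseddine/bert2gpt2_med_v2")
model = EncoderDecoderModel.from_pretrained("Chemsseddine/bert2gpt2_med_v2")

text = "Texte mΓ©dical Γ  rΓ©sumer..."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# max_length roughly matches the ~22-token summaries reported above
summary_ids = model.generate(
    inputs.input_ids, attention_mask=inputs.attention_mask, max_length=40
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```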
tmoodley/rare-puppers
tmoodley
2022-06-30T19:11:33Z
53
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-06-30T19:11:18Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 1.0 --- # rare-puppers Autogenerated by HuggingPicsπŸ€—πŸ–ΌοΈ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### corgi ![corgi](images/corgi.jpg) #### samoyed ![samoyed](images/samoyed.jpg) #### shiba inu ![shiba inu](images/shiba_inu.jpg)
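A minimal classification sketch for this HuggingPics checkpoint is shown below; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="tmoodley/rare-puppers")

# "corgi.jpg" is a placeholder; point it at any photo of a corgi, samoyed or shiba inu
for prediction in classifier("corgi.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```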
facebook/regnet-y-10b-seer
facebook
2022-06-30T18:59:33Z
19
5
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-feature-extraction", "vision", "seer", "arxiv:2003.13678", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-feature-extraction
2022-04-05T15:47:49Z
--- license: apache-2.0 tags: - vision - seer --- ## RegNetY 10B This gigantic model is a scaled-up [RegNetY](https://arxiv.org/abs/2003.13678) model trained on one billion uncurated Instagram images. Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the raw model for image feature extraction. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-10b-seer") >>> model = RegNetModel.from_pretrained("facebook/regnet-y-10b-seer") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 1088, 7, 7] ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-y-320-seer-in1k
facebook
2022-06-30T18:57:59Z
62
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2202.08360", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-18T14:35:06Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors trained [RegNets](https://huggingface.co/?models=regnet) models in a self-supervised fashion on a billion uncurated Instagram images. This model is later fine-tuned on ImageNet. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-320-seer-in1k") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-320-seer-in1k") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-x-040
facebook
2022-06-30T18:57:14Z
90
1
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-15T19:38:02Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-040") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-040") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
Chemsseddine/bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum
Chemsseddine
2022-06-30T18:42:50Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:orange_sum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-20T12:27:44Z
--- tags: - generated_from_trainer datasets: - orange_sum metrics: - rouge model-index: - name: bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: orange_sum type: orange_sum args: abstract metrics: - name: Rouge1 type: rouge value: 24.949 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <img src="https://huggingface.co/Chemsseddine/bert2gpt2_med_ml_orange_summ-finetuned_med_sum_new-finetuned_med_sum_new/resolve/main/logobert2gpt2.png" alt="Map of positive probabilities per country." width="200"/> # bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum This model is a fine-tuned version of [Chemsseddine/bert2gpt2SUMM-finetuned-mlsum](https://huggingface.co/Chemsseddine/bert2gpt2SUMM-finetuned-mlsum) on the orange_sum dataset. It achieves the following results on the evaluation set: - Loss: 3.1773 - Rouge1: 24.949 - Rouge2: 7.851 - Rougel: 18.1575 - Rougelsum: 18.4114 - Gen Len: 39.7947 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:| | 3.5484 | 1.0 | 1338 | 3.1773 | 24.949 | 7.851 | 18.1575 | 18.4114 | 39.7947 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
huggingtweets/codyko-thenoelmiller
huggingtweets
2022-06-30T17:40:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-06-30T17:39:28Z
--- language: en thumbnail: http://www.huggingtweets.com/codyko-thenoelmiller/1656610826736/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1438687954285707265/aEtAZlbY_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1438687880101212170/nNi2oamd_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI CYBORG πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">codyko & Noel Miller</div> <div style="text-align: center; font-size: 14px;">@codyko-thenoelmiller</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from codyko & Noel Miller. | Data | codyko | Noel Miller | | --- | --- | --- | | Tweets downloaded | 3184 | 3215 | | Retweets | 604 | 316 | | Short tweets | 762 | 712 | | Tweets kept | 1818 | 2187 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gyf1npk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @codyko-thenoelmiller's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31mulsnt) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31mulsnt/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/codyko-thenoelmiller') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Cathyhuang/Trial1
Cathyhuang
2022-06-30T17:14:58Z
0
0
null
[ "region:us" ]
null
2022-06-30T17:14:36Z
Trying the model for the first time
p-serna/mt5-small-spanish-paraphraser
p-serna
2022-06-30T16:33:25Z
5
0
transformers
[ "transformers", "pytorch", "tf", "mt5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-30T16:07:20Z
--- license: apache-2.0 --- # mT5-small based spanish paraphraser ### Original model - [Google's mT5](https://huggingface.co/google/mt5-small) ### Datasets used for training: - spanish [PAWS-X](https://huggingface.co/datasets/paws-x) - Custom database: "Poor-man's" translation of [duplicated questions in Quora](https://huggingface.co/datasets/quora) (translated with [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es))
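The card lists the training data but no inference example. A minimal sketch is below; the Spanish input sentence and the beam settings are illustrative, and whether the model expects a task prefix is not documented, so none is used here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("p-serna/mt5-small-spanish-paraphraser")
model = AutoModelForSeq2SeqLM.from_pretrained("p-serna/mt5-small-spanish-paraphraser")

sentence = "Β‘El clima estΓ‘ muy agradable hoy!"  # placeholder input
inputs = tokenizer(sentence, return_tensors="pt")

# Generate a few candidate paraphrases
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_length=64)
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```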
FabianWillner/bert-base-uncased-finetuned-triviaqa
FabianWillner
2022-06-30T16:21:05Z
28
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-06-30T12:10:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-triviaqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-triviaqa This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.9297 | 1.0 | 11195 | 0.9093 | | 0.6872 | 2.0 | 22390 | 0.9252 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
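A minimal extractive question-answering sketch for this checkpoint; the question and context are placeholders, not TriviaQA examples.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="FabianWillner/bert-base-uncased-finetuned-triviaqa",
)

result = qa(
    question="Who wrote the novel Nineteen Eighty-Four?",  # placeholder question
    context="Nineteen Eighty-Four is a dystopian novel written by George Orwell.",
)
print(result["answer"], result["score"])
```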
dminiotas05/distilbert-base-uncased-finetuned-emotion
dminiotas05
2022-06-30T15:49:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-29T10:25:00Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1027 - Accuracy: 0.5447 - F1: 0.4832 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1848 | 1.0 | 188 | 1.1199 | 0.538 | 0.4607 | | 1.0459 | 2.0 | 376 | 1.1027 | 0.5447 | 0.4832 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
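A minimal sketch for running this classifier is below. The card does not document the label set, so the predicted label comes back as whatever id-to-label mapping is stored in the model config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dminiotas05/distilbert-base-uncased-finetuned-emotion",
)
# Placeholder input; the returned label/score reflect the saved config's labels
print(classifier("I can't believe how great this day turned out!"))
```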
domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-finetuned-DAGPap22
domenicrosati
2022-06-30T13:54:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-30T10:25:53Z
--- license: mit tags: - text-classification - generated_from_trainer metrics: - accuracy - f1 model-index: - name: deberta-v3-large-dapt-scientific-papers-pubmed-finetuned-DAGPap22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-dapt-scientific-papers-pubmed-finetuned-DAGPap22 This model is a fine-tuned version of [domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed](https://huggingface.co/domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2165 | 1.0 | 669 | 0.0218 | 0.9963 | 0.9973 | | 0.0717 | 2.0 | 1338 | 0.0213 | 0.9964 | 0.9974 | | 0.03 | 3.0 | 2007 | 0.0121 | 0.9983 | 0.9988 | | 0.0165 | 4.0 | 2676 | 0.0147 | 0.9976 | 0.9982 | | 0.0072 | 5.0 | 3345 | 0.0000 | 1.0 | 1.0 | | 0.0055 | 6.0 | 4014 | 0.0000 | 1.0 | 1.0 | | 0.0077 | 7.0 | 4683 | 0.0000 | 1.0 | 1.0 | | 0.0 | 8.0 | 5352 | 0.0000 | 1.0 | 1.0 | | 0.0 | 9.0 | 6021 | 0.0000 | 1.0 | 1.0 | | 0.0 | 10.0 | 6690 | 0.0000 | 1.0 | 1.0 | | 0.0 | 11.0 | 7359 | 0.0000 | 1.0 | 1.0 | | 0.0 | 12.0 | 8028 | 0.0000 | 1.0 | 1.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
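A minimal sketch for scoring a passage with this classifier is shown below; the text is a placeholder abstract excerpt, and the meaning of each label id is not documented in the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-finetuned-DAGPap22"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "We propose a novel method for ..."  # placeholder abstract excerpt
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # per-class probabilities; class meanings follow the saved config
```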
BK-V/xlm-roberta-base-finetuned-arman-fa
BK-V
2022-06-30T13:40:40Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-10T12:52:56Z
--- license: mit tags: - generated_from_trainer - token-classification datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-arman-fa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-arman-fa This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0077 - F1: 0.9855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.1054 | 1.0 | 2305 | 0.0497 | 0.8548 | | 0.0419 | 2.0 | 4610 | 0.0339 | 0.8834 | | 0.0185 | 3.0 | 6915 | 0.0159 | 0.9626 | | 0.0068 | 4.0 | 9220 | 0.0103 | 0.9789 | | 0.0025 | 5.0 | 11525 | 0.0077 | 0.9855 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.12.1
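No inference example is given above. A minimal token-classification sketch is below; the Persian sentence is a placeholder and the entity label set is whatever is stored in the model config.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="BK-V/xlm-roberta-base-finetuned-arman-fa",
    aggregation_strategy="simple",  # merge word pieces into full entity spans
)
# Placeholder Persian sentence: "Tehran is the capital of Iran."
for entity in ner("تهران پایتخت ایران است."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```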
abhishek/autotrain-imdbtestmodel-9215210
abhishek
2022-06-30T13:36:05Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:abhishek/autotrain-data-imdbtestmodel", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-30T13:07:01Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain πŸ€—" datasets: - abhishek/autotrain-data-imdbtestmodel co2_eq_emissions: 0.2757084122251468 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 9215210 - CO2 Emissions (in grams): 0.2757084122251468 ## Validation Metrics - Loss: 0.1699502319097519 - Accuracy: 0.9372 - Precision: 0.9277551659361303 - Recall: 0.94824 - AUC: 0.9837227744 - F1: 0.9378857414147808 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/abhishek/autotrain-imdbtestmodel-9215210 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("abhishek/autotrain-imdbtestmodel-9215210", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("abhishek/autotrain-imdbtestmodel-9215210", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
zhifei/autotrain-chinese-title-summarization-1060936832
zhifei
2022-06-30T12:23:58Z
4
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain", "unk", "dataset:zhifei/autotrain-data-chinese-title-summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-30T12:20:46Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain πŸ€—" datasets: - zhifei/autotrain-data-chinese-title-summarization co2_eq_emissions: 3.841483701875158 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1060936832 - CO2 Emissions (in grams): 3.841483701875158 ## Validation Metrics - Loss: 0.5115200877189636 - Rouge1: 27.3016 - Rouge2: 10.4762 - RougeL: 27.3016 - RougeLsum: 27.1111 - Gen Len: 14.3619 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zhifei/autotrain-chinese-title-summarization-1060936832 ```
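Besides the cURL call, the model can be loaded locally. A minimal sketch is below; the Chinese input text and generation length are placeholders.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "zhifei/autotrain-chinese-title-summarization-1060936832"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "ζ–°ι—»ζ­£ζ–‡β€¦β€¦"  # placeholder article text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
title_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```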
mousaazari/t5-small-finetuned-wikisql
mousaazari
2022-06-30T11:37:10Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-24T09:47:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-finetuned-wikisql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2640 - Rouge2 Precision: 0.8471 - Rouge2 Recall: 0.3841 - Rouge2 Fmeasure: 0.5064 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | No log | 1.0 | 11 | 2.7587 | 0.098 | 0.0305 | 0.045 | | No log | 2.0 | 22 | 2.0056 | 0.0969 | 0.0284 | 0.0422 | | No log | 3.0 | 33 | 1.4456 | 0.1046 | 0.0349 | 0.0503 | | No log | 4.0 | 44 | 1.0317 | 0.1054 | 0.0337 | 0.0482 | | No log | 5.0 | 55 | 0.7603 | 0.2749 | 0.1299 | 0.1724 | | No log | 6.0 | 66 | 0.5722 | 0.7115 | 0.352 | 0.4552 | | No log | 7.0 | 77 | 0.4751 | 0.6872 | 0.337 | 0.436 | | No log | 8.0 | 88 | 0.4253 | 0.7256 | 0.3439 | 0.4462 | | No log | 9.0 | 99 | 0.3805 | 0.7335 | 0.3204 | 0.4308 | | No log | 10.0 | 110 | 0.3562 | 0.7342 | 0.3239 | 0.433 | | No log | 11.0 | 121 | 0.3275 | 0.7906 | 0.355 | 0.471 | | No log | 12.0 | 132 | 0.3133 | 0.8382 | 0.3838 | 0.5061 | | No log | 13.0 | 143 | 0.2996 | 0.8409 | 0.3841 | 0.5062 | | No log | 14.0 | 154 | 0.2903 | 0.8304 | 0.3763 | 0.4978 | | No log | 15.0 | 165 | 0.2867 | 0.8409 | 0.3841 | 0.5062 | | No log | 16.0 | 176 | 0.2786 | 0.8409 | 0.3841 | 0.5062 | | No log | 17.0 | 187 | 0.2711 | 0.8409 | 0.3841 | 0.5062 | | No log | 18.0 | 198 | 0.2673 | 0.8409 | 0.3841 | 0.5062 | | No log | 19.0 | 209 | 0.2643 | 0.8471 | 0.3841 | 0.5064 | | No log | 20.0 | 220 | 0.2640 | 0.8471 | 0.3841 | 0.5064 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
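A minimal inference sketch for this checkpoint is below, assuming (as the model name suggests) that it maps natural-language questions to SQL; the expected input format, including any task prefix, is not documented in the card, so the prefix used here is a guess.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mousaazari/t5-small-finetuned-wikisql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The task prefix is an assumption; the card does not document the expected input format
prompt = "translate English to SQL: How many heads of the departments are older than 56?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```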
facebook/regnet-y-002
facebook
2022-06-30T10:22:35Z
62
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-18T15:32:09Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-002") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-002") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-y-008
facebook
2022-06-30T10:21:48Z
95
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-18T15:33:58Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-008") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-008") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-x-016
facebook
2022-06-30T10:14:50Z
95
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-15T19:36:39Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-016") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-016") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-x-004
facebook
2022-06-30T10:14:47Z
77
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-15T19:34:54Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-004") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-004") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-x-064
facebook
2022-06-30T10:14:43Z
69
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-15T19:38:56Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-064") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-064") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-x-080
facebook
2022-06-30T10:14:32Z
67
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-18T15:25:24Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-080") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-080") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-x-008
facebook
2022-06-30T10:14:24Z
69
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-15T19:36:02Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-008") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-008") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
facebook/regnet-y-064
facebook
2022-06-30T10:14:12Z
70
0
transformers
[ "transformers", "pytorch", "tf", "regnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-18T15:37:01Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-064") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-064") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
ubikpt/t5-small-finetuned-cnn
ubikpt
2022-06-30T10:07:16Z
85
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-06-29T07:19:18Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer datasets: - cnn_dailymail metrics: - rouge model-index: - name: t5-small-finetuned-cnn results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cnn_dailymail type: cnn_dailymail args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 33.2082 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-cnn This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.8436 - Rouge1: 33.2082 - Rouge2: 16.798 - Rougel: 28.9573 - Rougelsum: 31.1044 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 2.3793 | 1.0 | 359 | 1.8885 | 33.0321 | 16.7798 | 28.9367 | 30.9509 | | 2.1432 | 2.0 | 718 | 1.8481 | 33.1559 | 16.8557 | 29.015 | 31.1122 | | 2.0571 | 3.0 | 1077 | 1.8391 | 32.99 | 16.716 | 28.8118 | 30.9178 | | 2.0001 | 4.0 | 1436 | 1.8357 | 33.0543 | 16.6731 | 28.8375 | 30.9604 | | 1.9609 | 5.0 | 1795 | 1.8437 | 33.1019 | 16.7576 | 28.8669 | 31.001 | | 1.925 | 6.0 | 2154 | 1.8402 | 33.1388 | 16.7539 | 28.8887 | 31.0262 | | 1.9036 | 7.0 | 2513 | 1.8423 | 33.1825 | 16.759 | 28.9154 | 31.0656 | | 1.8821 | 8.0 | 2872 | 1.8436 | 33.2082 | 16.798 | 28.9573 | 31.1044 | ### Framework versions - Transformers 4.14.0 - Pytorch 1.5.0 - Datasets 2.3.2 - Tokenizers 0.10.3
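The card above gives training details but no usage snippet. A minimal usage sketch (not part of the original card), assuming the standard transformers summarization pipeline; the article text is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for summarization.
summarizer = pipeline("summarization", model="ubikpt/t5-small-finetuned-cnn")

article = "Replace this placeholder with the news article you want to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```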
Mytios919/Mytios
Mytios919
2022-06-30T08:40:52Z
0
0
null
[ "region:us" ]
null
2022-06-30T08:31:02Z
git lfs install
git clone https://huggingface.co/Mytios919/Mytios
fxmarty/donotdelete3
fxmarty
2022-06-30T08:15:26Z
0
0
null
[ "tensorboard", "roberta", "text-classification", "dataset:glue", "region:us" ]
text-classification
2022-06-30T08:15:10Z
--- pipeline_tag: text-classification datasets: - glue metrics: - accuracy tags: - roberta --- **task**: `text-classification` Fixed parameters: * **model_name_or_path**: `Bhumika/roberta-base-finetuned-sst2` * **dataset**: * **path**: `glue` * **eval_split**: `validation` * **data_keys**: `{'primary': 'sentence'}` * **ref_keys**: `['label']` * **name**: `sst2` * **quantization_approach**: `dynamic` * **node_exclusion**: `[]` * **per_channel**: `False` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `15` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']` ## Evaluation Below, time metrics for * Batch size: 8 * Input length: 128 | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | accuracy (original) | accuracy (optimized) | | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | :-: | :-----------------: | :------------------: | | `['Add', 'MatMul']` | \| | 619.76 | 161.66 | \| | 1.80 | 6.20 | \| | 1.000 | 1.000 | | `['Add']` | \| | 611.74 | 478.48 | \| | 1.80 | 2.20 | \| | 1.000 | 1.000 |
Corianas/ppo_lstm-LunarLander-v2
Corianas
2022-06-30T07:22:16Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-06-30T07:21:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: RecurrentPPO results: - metrics: - type: mean_reward value: 282.21 +/- 11.78 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **RecurrentPPO** Agent playing **LunarLander-v2** This is a trained model of a **RecurrentPPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo_lstm --env LunarLander-v2 -orga Corianas -f logs/ python enjoy.py --algo ppo_lstm --env LunarLander-v2 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo_lstm --env LunarLander-v2 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo_lstm --env LunarLander-v2 -f logs/ -orga Corianas ``` ## Hyperparameters ```python OrderedDict([('batch_size', 128), ('ent_coef', 0.01), ('gae_lambda', 0.98), ('gamma', 0.999), ('n_envs', 8), ('n_epochs', 4), ('n_steps', 512), ('n_timesteps', 5000000.0), ('normalize', True), ('policy', 'MlpLstmPolicy'), ('policy_kwargs', 'dict( ortho_init=False, activation_fn=nn.ReLU, ' 'lstm_hidden_size=64, enable_critic_lstm=True, ' 'net_arch=[dict(pi=[64], vf=[64])] )'), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
Akihiro2/bert-finetuned-squad
Akihiro2
2022-06-30T07:20:29Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-06-30T04:50:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Galeros/q-FrozenLake-v1-4x4-noSlippery
Galeros
2022-06-30T06:58:10Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-30T06:58:04Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Galeros/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Shivagowri/vit-snacks
Shivagowri
2022-06-30T06:56:00Z
56
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:snacks", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-06-29T16:05:52Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer datasets: - snacks metrics: - accuracy model-index: - name: vit-snacks results: - task: name: Image Classification type: image-classification dataset: name: Matthijs/snacks type: snacks args: default metrics: - name: Accuracy type: accuracy value: 0.9392670157068063 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-snacks This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Matthijs/snacks dataset. It achieves the following results on the evaluation set: - Loss: 0.2754 - Accuracy: 0.9393 ## Model description upload any image of your fave yummy snack ## Intended uses & limitations there are only 20 different varieties of snacks ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8724 | 0.33 | 100 | 0.9118 | 0.8670 | | 0.5628 | 0.66 | 200 | 0.6873 | 0.8471 | | 0.4421 | 0.99 | 300 | 0.4995 | 0.8691 | | 0.2837 | 1.32 | 400 | 0.4008 | 0.9026 | | 0.1645 | 1.65 | 500 | 0.3702 | 0.9058 | | 0.1604 | 1.98 | 600 | 0.3981 | 0.8921 | | 0.0498 | 2.31 | 700 | 0.3185 | 0.9204 | | 0.0406 | 2.64 | 800 | 0.3427 | 0.9141 | | 0.1049 | 2.97 | 900 | 0.3444 | 0.9173 | | 0.0272 | 3.3 | 1000 | 0.3168 | 0.9246 | | 0.0186 | 3.63 | 1100 | 0.3142 | 0.9288 | | 0.0203 | 3.96 | 1200 | 0.2931 | 0.9298 | | 0.007 | 4.29 | 1300 | 0.2754 | 0.9393 | | 0.0072 | 4.62 | 1400 | 0.2778 | 0.9403 | | 0.0073 | 4.95 | 1500 | 0.2782 | 0.9393 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
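The card above omits a usage snippet. A minimal sketch (not from the original card), assuming the standard transformers image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# Classify a snack photo with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="Shivagowri/vit-snacks")

# Accepts a local path or an image URL; this filename is a placeholder.
print(classifier("my_snack_photo.jpg"))
```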
speechcolab/icefall-asr-gigaspeech-conformer-ctc
speechcolab
2022-06-30T03:41:47Z
0
0
k2
[ "k2", "icefall", "audio", "automatic-speech-recognition", "en", "dataset:gigaspeech", "region:us" ]
automatic-speech-recognition
2022-06-30T03:34:14Z
--- tags: - k2 - icefall - audio - automatic-speech-recognition language: en datasets: - gigaspeech ---
speechcolab/gigaspeech_lm
speechcolab
2022-06-30T03:33:08Z
0
0
null
[ "en", "dataset:gigaspeech", "region:us" ]
null
2022-06-30T03:32:26Z
--- language: en datasets: - gigaspeech ---
RuiqianLi/Malaya-speech_fine-tune_realcase_27_Jun
RuiqianLi
2022-06-30T02:09:05Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:uob_singlish", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-06-27T05:21:19Z
--- tags: - generated_from_trainer datasets: - uob_singlish model-index: - name: Malaya-speech_fine-tune_realcase_27_Jun results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Malaya-speech_fine-tune_realcase_27_Jun This model is a fine-tuned version of [malay-huggingface/wav2vec2-xls-r-300m-mixed](https://huggingface.co/malay-huggingface/wav2vec2-xls-r-300m-mixed) on the uob_singlish dataset. It achieves the following results on the evaluation set: - Loss: 0.9159 - Wer: 0.3819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.3176 | 1.82 | 20 | 0.8928 | 0.3542 | | 0.6716 | 3.64 | 40 | 0.9123 | 0.3681 | | 0.3484 | 5.45 | 60 | 0.9509 | 0.3681 | | 0.3064 | 7.27 | 80 | 0.9227 | 0.3958 | | 0.3017 | 9.09 | 100 | 0.9159 | 0.3819 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
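The card above has no inference example. A minimal sketch (not from the original card), assuming the standard transformers automatic-speech-recognition pipeline; the audio filename is a placeholder:

```python
from transformers import pipeline

# Transcribe a Singlish audio clip with the fine-tuned wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="RuiqianLi/Malaya-speech_fine-tune_realcase_27_Jun")

# Accepts a path to an audio file (e.g. a 16 kHz wav); this filename is a placeholder.
print(asr("sample_clip.wav")["text"])
```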
jdang/distilbert-base-uncased-finetuned-imdb
jdang
2022-06-30T01:56:51Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-06-30T01:49:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4897 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
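A minimal usage sketch for this masked-language model (not part of the original card), assuming the standard fill-mask pipeline; the example sentence is a placeholder:

```python
from transformers import pipeline

# Fill a masked token with the IMDb-adapted DistilBERT checkpoint.
fill_mask = pipeline("fill-mask", model="jdang/distilbert-base-uncased-finetuned-imdb")

for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```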
Corianas/qrdqn-3frame-SpaceInvadersNoFrameskip-v4_3.loadbest
Corianas
2022-06-30T01:44:02Z
6
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-06-30T01:26:53Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: QRDQN results: - metrics: - type: mean_reward value: 4381.00 +/- 2936.92 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). [Here is a video of the Agent playing for longer than the included video](https://rumble.com/v1ai9y3-qrdqn-agent-playing-spaceinvaders.html) The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Corianas -f logs/ python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Corianas ``` ## Hyperparameters ```python OrderedDict([('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_fraction', 0.025), ('frame_stack', 3), ('n_timesteps', 10000000.0), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('normalize', False)]) ```
ThomasSimonini/Reinforce-Pix
ThomasSimonini
2022-06-30T00:11:51Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T23:39:06Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pix results: - metrics: - type: mean_reward value: 8.00 +/- 4.88 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
tbasic5/distilbert-base-uncased-finetuned-emotion
tbasic5
2022-06-29T22:21:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-29T22:07:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.925 - name: F1 type: f1 value: 0.925022224520608 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2222 - Accuracy: 0.925 - F1: 0.9250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8521 | 1.0 | 250 | 0.3164 | 0.907 | 0.9038 | | 0.2549 | 2.0 | 500 | 0.2222 | 0.925 | 0.9250 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
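A minimal usage sketch (not part of the original card), assuming the standard text-classification pipeline; the input sentence is a placeholder and the output is a label/score pair:

```python
from transformers import pipeline

# Predict an emotion label for a short text with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="tbasic5/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```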
robingeibel/bigbird-large-finetuned-big_patent
robingeibel
2022-06-29T22:17:27Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "fill-mask", "generated_from_trainer", "dataset:big_patent", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-06-28T12:53:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - big_patent model-index: - name: bigbird-large-finetuned-big_patent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bigbird-large-finetuned-big_patent This model is a fine-tuned version of [robingeibel/bigbird-large-finetuned-big_patent](https://huggingface.co/robingeibel/bigbird-large-finetuned-big_patent) on the big_patent dataset. It achieves the following results on the evaluation set: - Loss: 1.0460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0301 | 1.0 | 80099 | 1.0460 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
TheDiamondKing/Discord-Philosophy-Medium
TheDiamondKing
2022-06-29T21:26:01Z
6
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-06-29T21:16:21Z
--- license: mit --- Medium-sized model trained on philosophical questions (mainly from Discord), roughly 11,000 messages
BK-V/xlm-roberta-base-finetuned-peyma-fa
BK-V
2022-06-29T20:59:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-10T14:11:45Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-peyma-fa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-peyma-fa This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0937 - F1: 0.9249 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1562 | 1.0 | 998 | 0.0691 | 0.8777 | | 0.0638 | 2.0 | 1996 | 0.0703 | 0.8908 | | 0.0457 | 3.0 | 2994 | 0.0645 | 0.8975 | | 0.0281 | 4.0 | 3992 | 0.0842 | 0.8994 | | 0.0206 | 5.0 | 4990 | 0.0651 | 0.9164 | | 0.0139 | 6.0 | 5988 | 0.0787 | 0.9148 | | 0.0083 | 7.0 | 6986 | 0.0838 | 0.9253 | | 0.0052 | 8.0 | 7984 | 0.0833 | 0.9221 | | 0.0031 | 9.0 | 8982 | 0.0947 | 0.9230 | | 0.0028 | 10.0 | 9980 | 0.0937 | 0.9249 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.12.1
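A minimal usage sketch (not part of the original card), assuming the standard token-classification pipeline; the Persian example sentence, and the expectation of Persian NER labels suggested by the PEYMA-style model name, are assumptions:

```python
from transformers import pipeline

# Group sub-word predictions into entity spans.
ner = pipeline("token-classification", model="BK-V/xlm-roberta-base-finetuned-peyma-fa", aggregation_strategy="simple")
print(ner("تهران پایتخت ایران است."))
```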
ThomasSimonini/PixelCopter
ThomasSimonini
2022-06-29T20:37:55Z
2
0
stable-baselines3
[ "stable-baselines3", "Pixelcopter-PLE-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T20:37:43Z
--- library_name: stable-baselines3 tags: - Pixelcopter-PLE-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -2.90 +/- 0.30 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # **PPO** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **PPO** agent playing **Pixelcopter-PLE-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
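The card above leaves the usage block as a TODO. A minimal sketch (not the author's code), assuming the checkpoint is stored as a standard SB3 zip archive; the filename below is a guess and should be checked against the repository files:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the archive filename is an assumption.
checkpoint = load_from_hub(repo_id="ThomasSimonini/PixelCopter", filename="Pixelcopter-PLE-v0.zip")
model = PPO.load(checkpoint)
```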
stevhliu/t5-small-finetuned-billsum-ca_test
stevhliu
2022-06-29T20:05:37Z
23
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "dataset:billsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- license: apache-2.0 datasets: - billsum tags: - summarization - t5 widget: - text: "The people of the State of California do enact as follows: SECTION 1. The\ \ Legislature hereby finds and declares as follows: (a) Many areas of the state\ \ are disproportionately impacted by drought because they are heavily dependent\ \ or completely reliant on groundwater from basins that are in overdraft and in\ \ which the water table declines year after year or from basins that are contaminated.\ \ (b) There are a number of state grant and loan programs that provide financial\ \ assistance to communities to address drinking water and wastewater needs. Unfortunately,\ \ there is no program in place to provide similar assistance to individual homeowners\ \ who are reliant on their own groundwater wells and who may not be able to afford\ \ conventional private loans to undertake vital water supply, water quality, and\ \ wastewater improvements. (c) The program created by this act is intended to\ \ bridge that gap by providing low-interest loans, grants, or both, to individual\ \ homeowners to undertake actions necessary to provide safer, cleaner, and more\ \ reliable drinking water and wastewater treatment. These actions may include,\ \ but are not limited to, digging deeper wells, improving existing wells and related\ \ equipment, addressing drinking water contaminants in the homeowner\u2019s water,\ \ or connecting to a local water or wastewater system. SEC. 2. Chapter 6.6 (commencing\ \ with Section 13486) is added to Division 7 of the Water Code, to read: CHAPTER\ \ 6.6. Water and Wastewater Loan and Grant Program 13486. (a) The board shall\ \ establish a program in accordance with this chapter to provide low-interest\ \ loans and grants to local agencies for low-interest loans and grants to eligible\ \ applicants for any of the following purposes:" example_title: Water use - text: "The people of the State of California do enact as follows: SECTION 1. Section\ \ 2196 of the Elections Code is amended to read: 2196. (a) (1) Notwithstanding\ \ any other provision of law, a person who is qualified to register to vote and\ \ who has a valid California driver\u2019s license or state identification card\ \ may submit an affidavit of voter registration electronically on the Internet\ \ Web site of the Secretary of State. (2) An affidavit submitted pursuant to this\ \ section is effective upon receipt of the affidavit by the Secretary of State\ \ if the affidavit is received on or before the last day to register for an election\ \ to be held in the precinct of the person submitting the affidavit. (3) The affiant\ \ shall affirmatively attest to the truth of the information provided in the affidavit.\ \ (4) For voter registration purposes, the applicant shall affirmatively assent\ \ to the use of his or her signature from his or her driver\u2019s license or\ \ state identification card. (5) For each electronic affidavit, the Secretary\ \ of State shall obtain an electronic copy of the applicant\u2019s signature from\ \ his or her driver\u2019s license or state identification card directly from\ \ the Department of Motor Vehicles. (6) The Secretary of State shall require a\ \ person who submits an affidavit pursuant to this section to submit all of the\ \ following: (A) The number from his or her California driver\u2019s license or\ \ state identification card. (B) His or her date of birth. (C) The last four digits\ \ of his or her social security number. 
(D) Any other information the Secretary\ \ of State deems necessary to establish the identity of the affiant. (7) Upon\ \ submission of an affidavit pursuant to this section, the electronic voter registration\ \ system shall provide for immediate verification of both of the following:" example_title: Election metrics: - rouge model-index: - name: t5-small-finetuned-billsum-ca_test results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum args: default metrics: - name: Rouge1 type: rouge value: 12.6315 - task: type: summarization name: Summarization dataset: name: billsum type: billsum config: default split: test metrics: - name: ROUGE-1 type: rouge value: 12.1368 verified: true - name: ROUGE-2 type: rouge value: 4.6017 verified: true - name: ROUGE-L type: rouge value: 10.0767 verified: true - name: ROUGE-LSUM type: rouge value: 10.6892 verified: true - name: loss type: loss value: 2.897707462310791 verified: true - name: gen_len type: gen_len value: 19.0 verified: true --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-billsum-ca_test This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.3376 - Rouge1: 12.6315 - Rouge2: 6.9839 - Rougel: 10.9983 - Rougelsum: 11.9383 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 495 | 2.4805 | 9.9389 | 4.1239 | 8.3979 | 9.1599 | 19.0 | | 3.1564 | 2.0 | 990 | 2.3833 | 12.1026 | 6.5196 | 10.5123 | 11.4527 | 19.0 | | 2.66 | 3.0 | 1485 | 2.3496 | 12.5389 | 6.8686 | 10.8798 | 11.8636 | 19.0 | | 2.5671 | 4.0 | 1980 | 2.3376 | 12.6315 | 6.9839 | 10.9983 | 11.9383 | 19.0 | ### Framework versions - Transformers 4.12.2 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
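The card above documents training and metrics but no inference snippet. A minimal sketch (not part of the original card); the "summarize:" prefix follows the usual T5 convention and is an assumption, and the bill text is a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("stevhliu/t5-small-finetuned-billsum-ca_test")
model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/t5-small-finetuned-billsum-ca_test")

bill_text = "Replace this placeholder with the text of a California bill."
inputs = tokenizer("summarize: " + bill_text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```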
edbeeching/decision-transformer-gym-halfcheetah-medium-replay
edbeeching
2022-06-29T19:21:08Z
5
0
transformers
[ "transformers", "pytorch", "decision_transformer", "feature-extraction", "deep-reinforcement-learning", "reinforcement-learning", "decision-transformer", "gym-continous-control", "arxiv:2106.01345", "endpoints_compatible", "region:us" ]
reinforcement-learning
2022-03-16T08:20:08Z
--- tags: - deep-reinforcement-learning - reinforcement-learning - decision-transformer - gym-continous-control pipeline_tag: reinforcement-learning --- # Decision Transformer model trained on medium-replay trajectories sampled from the Gym HalfCheetah environment This is a trained [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium-replay trajectories sampled from the Gym HalfCheetah environment. The following normalization coefficients are required to use this model: mean = [-0.12880704, 0.37381196, -0.14995988, -0.23479079, -0.28412786, -0.13096535, -0.20157982, -0.06517727, 3.4768248, -0.02785066, -0.01503525, 0.07697279, 0.01266712, 0.0273253, 0.02316425, 0.01043872, -0.01583941] std = [0.17019016, 1.2844249, 0.33442774, 0.36727592, 0.26092398, 0.4784107, 0.31814206, 0.33552638, 2.0931616, 0.80374336, 1.9044334, 6.57321, 7.5728636, 5.0697494, 9.105554, 6.0856543, 7.253004, 5] See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
edbeeching/decision-transformer-gym-halfcheetah-expert
edbeeching
2022-06-29T19:20:32Z
18
1
transformers
[ "transformers", "pytorch", "decision_transformer", "feature-extraction", "deep-reinforcement-learning", "reinforcement-learning", "decision-transformer", "gym-continous-control", "arxiv:2106.01345", "endpoints_compatible", "region:us" ]
reinforcement-learning
2022-03-16T08:19:45Z
--- tags: - deep-reinforcement-learning - reinforcement-learning - decision-transformer - gym-continous-control pipeline_tag: reinforcement-learning --- # Decision Transformer model trained on expert trajectories sampled from the Gym HalfCheetah environment This is a trained [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on expert trajectories sampled from the Gym HalfCheetah environment. The following normalization coefficients are required to use this model: mean = [ -0.04489148, 0.03232588, 0.06034835, -0.17081226, -0.19480659, -0.05751596, 0.09701628, 0.03239211, 11.047426, -0.07997331, -0.32363534, 0.36297753, 0.42322603, 0.40836546, 1.1085187, -0.4874403, -0.0737481 ] std = [0.04002118, 0.4107858, 0.54217845, 0.41522816, 0.23796624, 0.62036866, 0.30100912, 0.21737163, 2.2105937, 0.572586, 1.7255033, 11.844218, 12.06324, 7.0495934, 13.499867, 7.195647, 5.0264325] See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
edbeeching/decision-transformer-gym-hopper-medium-replay
edbeeching
2022-06-29T19:20:14Z
9
0
transformers
[ "transformers", "pytorch", "decision_transformer", "feature-extraction", "deep-reinforcement-learning", "reinforcement-learning", "decision-transformer", "gym-continous-control", "arxiv:2106.01345", "endpoints_compatible", "region:us" ]
reinforcement-learning
2022-03-16T08:20:43Z
--- tags: - deep-reinforcement-learning - reinforcement-learning - decision-transformer - gym-continous-control pipeline_tag: reinforcement-learning --- # Decision Transformer model trained on medium-replay trajectories sampled from the Gym Hopper environment This is a trained [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium-replay trajectories sampled from the Gym Hopper environment. The following normalization coefficients are required to use this model: mean = [ 1.2305138, -0.04371411, -0.44542956, -0.09370098, 0.09094488, 1.3694725, -0.19992675, -0.02286135, -0.5287045, -0.14465883, -0.19652697] std = [0.17565121, 0.06369286, 0.34383234, 0.19566889, 0.5547985, 1.0510299, 1.1583077, 0.79631287, 1.4802359, 1.6540332, 5.108601] See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
edbeeching/decision-transformer-gym-hopper-medium
edbeeching
2022-06-29T19:15:16Z
34,485
6
transformers
[ "transformers", "pytorch", "decision_transformer", "feature-extraction", "deep-reinforcement-learning", "reinforcement-learning", "decision-transformer", "gym-continous-control", "arxiv:2106.01345", "endpoints_compatible", "region:us" ]
reinforcement-learning
2022-03-16T08:20:31Z
--- tags: - deep-reinforcement-learning - reinforcement-learning - decision-transformer - gym-continous-control pipeline_tag: reinforcement-learning --- # Decision Transformer model trained on medium trajectories sampled from the Gym Hopper environment This is a trained [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium trajectories sampled from the Gym Hopper environment. The following normalization coefficients are required to use this model: mean = [ 1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366, 2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286] std = [0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444, 0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536 ] See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
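The card states that these coefficients are required to use the model. A short sketch (not part of the original card) of the z-score normalization typically applied to observations before they are fed to the model, an assumption based on the linked example script, using the coefficients quoted above:

```python
import numpy as np

# Coefficients copied from the card (Gym Hopper, medium trajectories).
mean = np.array([1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366, 2.1066856,
                 -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286])
std = np.array([0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444, 0.85174465,
                1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536])

def normalize_observation(obs: np.ndarray) -> np.ndarray:
    # Standardize the raw environment observation before passing it to the Decision Transformer.
    return (obs - mean) / std
```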
ullasmrnva/LawBerta
ullasmrnva
2022-06-29T18:56:54Z
4
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-29T18:56:39Z
--- tags: - generated_from_keras_callback model-index: - name: attempt results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # attempt This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Tokenizers 0.12.1
zhav1k/q-Taxi-v3
zhav1k
2022-06-29T18:56:01Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T18:55:53Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.54 +/- 2.69 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="zhav1k/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
nbroad/bigbird-base-health-fact
nbroad
2022-06-29T18:29:17Z
17
1
transformers
[ "transformers", "pytorch", "big_bird", "text-classification", "generated_from_trainer", "en", "dataset:health_fact", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-26T17:55:02Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - health_fact model-index: - name: bigbird-base-health-fact results: - task: type: text-classification name: Text Classification dataset: name: health_fact type: health_fact split: test metrics: - name: F1 type: f1 value: 0.6694031411935434 - name: Accuracy type: accuracy value: 0.7948094079480941 - name: False Accuracy type: accuracy value: 0.8092783505154639 - name: Mixture Accuracy type: accuracy value: 0.4975124378109453 - name: True Accuracy type: accuracy value: 0.9148580968280468 - name: Unproven Accuracy type: accuracy value: 0.4 --- # bigbird-base-health-fact This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the health_fact dataset. It achieves the following results on the VALIDATION set: - Overall Accuracy: 0.8228995057660626 - Macro F1: 0.6979224830442152 - False Accuracy: 0.8289473684210527 - Mixture Accuracy: 0.47560975609756095 - True Accuracy: 0.9332273449920508 - Unproven Accuracy: 0.4634146341463415 It achieves the following results on the TEST set: - Overall Accuracy: 0.7948094079480941 - Macro F1: 0.6694031411935434 - Mixture Accuracy: 0.4975124378109453 - False Accuracy: 0.8092783505154639 - True Accuracy: 0.9148580968280468 - Unproven Accuracy: 0.4 ## Model description Here is how you can use the model: ```python import torch from transformers import pipeline claim = "A mother revealed to her child in a letter after her death that she had just one eye because she had donated the other to him." text = "In April 2005, we spotted a tearjerker on the Internet about a mother who gave up one of her eyes to a son who had lost one of his at an early age. By February 2007 the item was circulating in e-mail in the following shortened version: My mom only had one eye. I hated her… She was such an embarrassment. She cooked for students and teachers to support the family. There was this one day during elementary school where my mom came to say hello to me. I was so embarrassed. How could she do this to me? I ignored her, threw her a hateful look and ran out. The next day at school one of my classmates said, β€œEEEE, your mom only has one eye!” I wanted to bury myself. I also wanted my mom to just disappear. I confronted her that day and said, β€œIf you’re only gonna make me a laughing stock, why don’t you just die?” My mom did not respond… I didn’t even stop to think for a second about what I had said, because I was full of anger. I was oblivious to her feelings. I wanted out of that house, and have nothing to do with her. So I studied real hard, got a chance to go abroad to study. Then, I got married. I bought a house of my own. I had kids of my own. I was happy with my life, my kids and the comforts. Then one day, my Mother came to visit me. She hadn’t seen me in years and she didn’t even meet her grandchildren. When she stood by the door, my children laughed at her, and I yelled at her for coming over uninvited. I screamed at her, β€œHow dare you come to my house and scare my children! GET OUT OF HERE! NOW!! !” And to this, my mother quietly answered, β€œOh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. One day, a letter regarding a school reunion came to my house. So I lied to my wife that I was going on a business trip. After the reunion, I went to the old shack just out of curiosity. My neighbors said that she died. I did not shed a single tear. 
They handed me a letter that she had wanted me to have. My dearest son, I think of you all the time. I’m sorry that I came to your house and scared your children. I was so glad when I heard you were coming for the reunion. But I may not be able to even get out of bed to see you. I’m sorry that I was a constant embarrassment to you when you were growing up. You see……..when you were very little, you got into an accident, and lost your eye. As a mother, I couldn’t stand watching you having to grow up with one eye. So I gave you mine. I was so proud of my son who was seeing a whole new world for me, in my place, with that eye. With all my love to you, Your mother. In its earlier incarnation, the story identified by implication its location as Korea through statements made by both the mother and the son (the son’s β€œI left my mother and came to Seoul” and the mother’s β€œI won’t visit Seoul anymore”). It also supplied a reason for the son’s behavior when his mother arrived unexpectedly to visit him (β€œMy little girl ran away, scared of my mom’s eye” and β€œI screamed at her, β€˜How dare you come to my house and scare my daughter!'”). A further twist was provided in the original: rather than gaining the news of his mother’s death from neighbors (who hand him her letter), the son instead discovered the woman who bore him lying dead on the floor of what used to be his childhood home, her missive to him clutched in her lifeless hand: Give your parents roses while they are alive, not deadMY mom only had one eye. I hated her … she was such an embarrassment. My mom ran a small shop at a flea market. She collected little weeds and such to sell … anything for the money we needed she was such an embarrassment. There was this one day during elementary school … It was field day, and my mom came. I was so embarrassed. How could she do this to me? I threw her a hateful look and ran out. The next day at school … β€œyour mom only has one eye?!? !” … And they taunted me. I wished that my mom would just disappear from this world so I said to my mom, β€œmom … Why don’t you have the other eye?! If you’re only going to make me a laughingstock, why don’t you just die?!! !” my mom did not respond … I guess I felt a little bad, but at the same time, it felt good to think that I had said what I’d wanted to say all this time… maybe it was because my mom hadn’t punished me, but I didn’t think that I had hurt her feelings very badly. That night… I woke up, and went to the kitchen to get a glass of water. My mom was crying there, so quietly, as if she was afraid that she might wake me. I took a look at her, and then turned away. Because of the thing I had said to her earlier, there was something pinching at me in the corner of my heart. Even so, I hated my mother who was crying out of her one eye. So I told myself that I would grow up and become successful. Because I hated my one-eyed mom and our desperate poverty… then I studied real hard. I left my mother and came to Seoul and studied, and got accepted in the Seoul University with all the confidence I had. Then, I got married. I bought a house of my own. Then I had kids, too… now I’m living happily as a successful man. I like it here because it’s a place that doesn’t remind me of my mom. This happiness was getting bigger and bigger, when… what?! Who’s this…it was my mother… still with her one eye. It felt as if the whole sky was falling apart on me. My little girl ran away, scared of my mom’s eye. And I asked her, β€œwho are you? !” β€œI don’t know you!! 
!” as if trying to make that real. I screamed at her, β€œHow dare you come to my house and scare my daughter!” β€œGET OUT OF HERE! NOW!! !” and to this, my mother quietly answered, β€œoh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. Thank goodness… she doesn’t recognize me… I was quite relieved. I told myself that I wasn’t going to care, or think about this for the rest of my life. Then a wave of relief came upon me… One day, a letter regarding a school reunion came to my house. So, lying to my wife that I was going on a business trip, I went. After the reunion, I went down to the old shack, that I used to call a house… just out of curiosity there, I found my mother fallen on the cold ground. But I did not shed a single tear. She had a piece of paper in her hand…. it was a letter to me. My son… I think my life has been long enough now… And… I won’t visit Seoul anymore… but would it be too much to ask if I wanted you to come visit me once in a while? I miss you so much… and I was so glad when I heard you were coming for the reunion. But I decided not to go to the school. …for you… and I’m sorry that I only have one eye, and I was an embarrassment for you. You see, when you were very little, you got into an accident, and lost your eye. as a mom, I couldn’t stand watching you having to grow up with only one eye… so I gave you mine… I was so proud of my son that was seeing a whole new world for me, in my place, with that eye. I was never upset at you for anything you did… the couple times that you were angry with me, I thought to myself, β€˜it’s because he loves me…’ my son. Oh, my son… I don’t want you to cry for me, because of my death. My son, I love you my son, I love you so much. With all modern medical technology, transplantation of the eyeball is still impossible. The optic nerve isn’t an ordinary nerve, but instead an inset running from the brain. Modern medicine isn’t able to β€œconnect” an eyeball back to brain after an optic nerve has been severed, let alone transplant the eye from a different person. (The only exception is the cornea, the transparent part in front of the eye: corneas are transplanted to replace injured and opaque ones.) We won’t try to comment on whether any surgeon would accept an eye from a living donor for transplant into another β€” we’ll leave that to others who are far more knowledgeable about medical ethics and transplant procedures. But we will note that the plot device of a mother’s dramatic sacrifice for the sake of her child’s being revealed in a written communication delivered after her demise appears in another legend about maternal love: the 2008 tale about a woman who left a touching message on her cell phone even as life ebbed from her as she used her body to shield the tot during an earthquake. Giving up one’s own life for a loved one is central to a 2005 urban legend about a boy on a motorcycle who has his girlfriend hug him one last time and put on his helmet just before the crash that kills him and spares her. Returning to the β€œnotes from the dead” theme is the 1995 story about a son who discovers only through a posthumous letter from his mother what their occasional dinner β€œdates” had meant to her. Another legend we’re familiar with features a meme used in the one-eyed mother story (the coming to light of the enduring love of the person who died for the completely unworthy person she’d lavished it on), but that one involves a terminally ill woman and her cheating husband. 
In it, an about-to-be-spurned wife begs the adulterous hoon she’d married to stick around for another 30 days and to carry her over the threshold of their home once every day of that month as her way of keeping him around long enough for her to kick the bucket and thus spare their son the knowledge that his parents were on the verge of divorce." label = "false" device = 0 if torch.cuda.is_available() else -1 pl = pipeline("text-classification", model="nbroad/bigbird-base-health-fact", device=device) input_text = claim+pl.tokenizer.sep_token+text print(len(pl.tokenizer(input_text).input_ids)) # 2303 (which is why bigbird is useful) pl(input_text) # [{'label': 'false', 'score': 0.3866822123527527}] ``` ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 18 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | False F1 | Mixture F1 | True F1 | Unproven F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:----------:|:-------:|:-----------:| | 0.5563 | 1.0 | 1226 | 0.5020 | 0.7949 | 0.6062 | 0.7926 | 0.4591 | 0.8986 | 0.2745 | | 0.5048 | 2.0 | 2452 | 0.4969 | 0.8180 | 0.6846 | 0.8202 | 0.4342 | 0.9126 | 0.5714 | | 0.3454 | 3.0 | 3678 | 0.5864 | 0.8130 | 0.6874 | 0.8114 | 0.4557 | 0.9154 | 0.5672 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.1.1.dev0 - Tokenizers 0.12.1
austinmw/distilbert-base-uncased-finetuned-health_facts
austinmw
2022-06-29T18:15:31Z
164
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:health_fact", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-29T05:34:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - health_fact metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-health_facts results: - task: name: Text Classification type: text-classification dataset: name: health_fact type: health_fact args: default metrics: - name: Accuracy type: accuracy value: 0.628500823723229 - name: F1 type: f1 value: 0.6544946803476833 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-health_facts This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the health_fact dataset. It achieves the following results on the evaluation set: - Loss: 1.1227 - Accuracy: 0.6285 - F1: 0.6545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1367 | 1.0 | 154 | 0.9423 | 0.5560 | 0.6060 | | 0.9444 | 2.0 | 308 | 0.9267 | 0.5733 | 0.6170 | | 0.8248 | 3.0 | 462 | 0.9483 | 0.5832 | 0.6256 | | 0.7213 | 4.0 | 616 | 1.0119 | 0.5815 | 0.6219 | | 0.608 | 5.0 | 770 | 1.1227 | 0.6285 | 0.6545 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
ashraq/movielens_user_model_cos_32
ashraq
2022-06-29T18:07:51Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2022-06-24T19:16:33Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
harunkuf/mlsum_tr_en_mt5-small
harunkuf
2022-06-29T15:50:56Z
3
1
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-29T08:17:41Z
# Multilingual mT5 model trained with MLSUM_TR and MLSUM_CNN (EN) ## Results: MLSUM_TR: * Rouge-1: 45.11 * Rouge-2: 30.96 * Rouge-L: 39.23 MLSUM_CNN: * Rouge-1: 39.65 * Rouge-2: 17.49 * Rouge-L: 27.66 Note: the Hugging Face Inference API truncates the output, which can leave generated sentences unfinished. You can try the model in Colab: https://colab.research.google.com/drive/1QDWO3RHjjP1nS8bIvhT38B3fVIBC3TaK?usp=sharing
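Because the hosted Inference API truncates output, a local generation sketch (not part of the original card) with an explicit max_length can avoid cut-off summaries; the generation settings and the placeholder article are assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("harunkuf/mlsum_tr_en_mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("harunkuf/mlsum_tr_en_mt5-small")

article = "Replace this placeholder with a Turkish or English news article."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```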
FabianWillner/bert-base-uncased-finetuned-squad
FabianWillner
2022-06-29T14:46:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-06-29T09:16:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.0106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0626 | 1.0 | 5533 | 1.0308 | | 0.8157 | 2.0 | 11066 | 1.0106 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
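A minimal usage sketch (not part of the original card), assuming the standard question-answering pipeline; the question and context are placeholders:

```python
from transformers import pipeline

# Extractive question answering with the SQuAD fine-tuned checkpoint.
qa = pipeline("question-answering", model="FabianWillner/bert-base-uncased-finetuned-squad")
result = qa(question="Where do giant pandas live?",
            context="The giant panda is a bear species endemic to China.")
print(result["answer"], result["score"])
```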
Salvatore/bert-finetuned-mutation-recognition-2
Salvatore
2022-06-29T14:29:27Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-06-29T10:10:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-mutation-recognition-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-mutation-recognition-2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0818 - Dnamutation F1: 0.6371 - Snp F1: 0.0952 - Proteinmutation F1: 0.8412 - Precision: 0.7646 - Recall: 0.6596 - F1: 0.7082 - Accuracy: 0.9877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Dnamutation F1 | Snp F1 | Proteinmutation F1 | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:------:|:------------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 403 | 0.0383 | 0.5871 | 0.0 | 0.7573 | 0.6195 | 0.6770 | 0.6470 | 0.9872 | | 0.0863 | 2.0 | 806 | 0.0349 | 0.6202 | 0.0 | 0.8646 | 0.6815 | 0.7408 | 0.7099 | 0.9889 | | 0.0295 | 3.0 | 1209 | 0.0415 | 0.5670 | 0.0 | 0.7689 | 0.6887 | 0.6035 | 0.6433 | 0.9866 | | 0.019 | 4.0 | 1612 | 0.0430 | 0.5909 | 0.4742 | 0.7840 | 0.6667 | 0.6615 | 0.6641 | 0.9881 | | 0.0127 | 5.0 | 2015 | 0.0507 | 0.6345 | 0.0 | 0.8455 | 0.7290 | 0.6867 | 0.7072 | 0.9885 | | 0.0127 | 6.0 | 2418 | 0.0678 | 0.5946 | 0.05 | 0.8087 | 0.7471 | 0.6170 | 0.6758 | 0.9868 | | 0.0067 | 7.0 | 2821 | 0.0544 | 0.6693 | 0.2727 | 0.8475 | 0.7208 | 0.7292 | 0.725 | 0.9884 | | 0.0042 | 8.0 | 3224 | 0.0642 | 0.6694 | 0.2000 | 0.8401 | 0.7390 | 0.7118 | 0.7251 | 0.9885 | | 0.0019 | 9.0 | 3627 | 0.0847 | 0.6271 | 0.0976 | 0.8416 | 0.7671 | 0.6499 | 0.7037 | 0.9877 | | 0.0014 | 10.0 | 4030 | 0.0818 | 0.6371 | 0.0952 | 0.8412 | 0.7646 | 0.6596 | 0.7082 | 0.9877 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2 - Datasets 2.0.0 - Tokenizers 0.12.1
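A hedged usage sketch for `Salvatore/bert-finetuned-mutation-recognition-2`: the per-class metrics (DNAMutation, SNP, ProteinMutation) suggest a mutation-mention tagger, so the snippet loads it as a token-classification pipeline; the example sentence and the exact label names are assumptions.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Salvatore/bert-finetuned-mutation-recognition-2",
    aggregation_strategy="simple",  # merge sub-tokens into whole entity spans
)

text = "The BRAF V600E substitution and the c.1799T>A change were both detected."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```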
igpaub/q-FrozenLake-v1-4x4
igpaub
2022-06-29T14:29:26Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T13:12:43Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4 results: - metrics: - type: mean_reward value: 0.78 +/- 0.41 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="igpaub/q-FrozenLake-v1-4x4", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
igpaub/q-Taxi-v3
igpaub
2022-06-29T14:18:47Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T14:07:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="igpaub/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Salvatore/bert-finetuned-mutation-recognition-1
Salvatore
2022-06-29T13:59:03Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-06-29T09:40:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-mutation-recognition-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-mutation-recognition-1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0380 - Proteinmutation F1: 0.8631 - Dnamutation F1: 0.7522 - Snp F1: 1.0 - Precision: 0.8061 - Recall: 0.8386 - F1: 0.8221 - Accuracy: 0.9942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Proteinmutation F1 | Dnamutation F1 | Snp F1 | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------------------:|:--------------:|:------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 259 | 0.0273 | 0.8072 | 0.5762 | 0.975 | 0.6685 | 0.7580 | 0.7104 | 0.9924 | | 0.0597 | 2.0 | 518 | 0.0260 | 0.8148 | 0.6864 | 0.9873 | 0.7363 | 0.8004 | 0.7670 | 0.9936 | | 0.0597 | 3.0 | 777 | 0.0338 | 0.8252 | 0.7221 | 1.0 | 0.7857 | 0.7941 | 0.7899 | 0.9935 | | 0.0046 | 4.0 | 1036 | 0.0299 | 0.8707 | 0.7214 | 0.9873 | 0.7773 | 0.8450 | 0.8098 | 0.9941 | | 0.0046 | 5.0 | 1295 | 0.0353 | 0.9035 | 0.7364 | 0.9873 | 0.8130 | 0.8493 | 0.8307 | 0.9941 | | 0.0014 | 6.0 | 1554 | 0.0361 | 0.8941 | 0.7391 | 0.9873 | 0.8093 | 0.8471 | 0.8278 | 0.9941 | | 0.0014 | 7.0 | 1813 | 0.0367 | 0.8957 | 0.7249 | 1.0 | 0.8090 | 0.8365 | 0.8225 | 0.9940 | | 0.0004 | 8.0 | 2072 | 0.0381 | 0.8714 | 0.7578 | 1.0 | 0.8266 | 0.8301 | 0.8284 | 0.9940 | | 0.0004 | 9.0 | 2331 | 0.0380 | 0.8732 | 0.7550 | 1.0 | 0.8148 | 0.8408 | 0.8276 | 0.9942 | | 0.0002 | 10.0 | 2590 | 0.0380 | 0.8631 | 0.7522 | 1.0 | 0.8061 | 0.8386 | 0.8221 | 0.9942 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2 - Datasets 2.0.0 - Tokenizers 0.12.1
trtd56/q-Taxi-v3
trtd56
2022-06-29T13:22:25Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T13:22:18Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="trtd56/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
DTAI-KULeuven/robbertje-merged-dutch-sentiment
DTAI-KULeuven
2022-06-29T13:12:48Z
110
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "Dutch", "Flemish", "RoBERTa", "RobBERT", "nl", "dataset:dbrd", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-29T12:17:36Z
--- language: nl license: mit datasets: - dbrd model-index: - name: robbertje-merged-dutch-sentiment results: - task: type: text-classification name: Text Classification dataset: name: dbrd type: sentiment-analysis split: test metrics: - name: Accuracy type: accuracy value: 0.9294064748201439 widget: - text: "Ik erken dat dit een boek is, daarmee is alles gezegd." - text: "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!" thumbnail: "https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" tags: - Dutch - Flemish - RoBERTa - RobBERT --- <p align="center"> <img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch models" width="75%"> </p> # RobBERTje finetuned for sentiment analysis on DBRD This is a finetuned model based on [RobBERTje (merged)](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](hebban.nl). Hence our example sentences about books. We did some limited experiments to test if this also works for other domains, but this was not exactly amazing. We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff: | Model | Identifier | Layers | #Params. | Accuracy | |----------------|------------------------------------------------------------------------|--------|-----------|-----------| | RobBERT (v2) | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 12 | 116 M |93.3* | | RobBERTje - Merged (p=0.5)| [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 6 | 74 M |92.9 | *The results of RobBERT are of a different run than the one reported in the paper. # Training data and setup We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019). Originally, these reviews got a five-star rating, but this has been converted to positive (⭐️⭐️⭐️⭐️ and ⭐️⭐️⭐️⭐️⭐️), neutral (⭐️⭐️⭐️) and negative (⭐️ and ⭐️⭐️). We used 19.5k reviews for the training set, 528 reviews for the validation set and 2224 to calculate the final accuracy. The validation set was used to evaluate a random hyperparameter search over the learning rate, weight decay and gradient accumulation steps. The full training details are available in [`training_args.bin`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment/blob/main/training_args.bin) as a binary PyTorch file. # Limitations and biases - The domain of the reviews is limited to book reviews. - Most authors of the book reviews were women, which could have caused [a difference in performance for reviews written by men and women](https://www.aclweb.org/anthology/2020.findings-emnlp.292). ## Credits and citation This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/). 
If you would like to cite our paper or models, you can use the following BibTeX: ``` @article{Delobelle_Winters_Berendt_2021, title = {RobBERTje: A Distilled Dutch BERT Model}, author = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina}, year = 2021, month = {Dec.}, journal = {Computational Linguistics in the Netherlands Journal}, volume = 11, pages = {125–140}, url = {https://www.clinjournal.org/clinj/article/view/131} } @inproceedings{delobelle2020robbert, title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel", author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292", doi = "10.18653/v1/2020.findings-emnlp.292", pages = "3255--3265" } ```
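A hedged usage sketch for `DTAI-KULeuven/robbertje-merged-dutch-sentiment`, reusing one of the widget sentences above; it assumes the checkpoint works with the standard text-classification pipeline and returns the label names stored in the model config.

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="DTAI-KULeuven/robbertje-merged-dutch-sentiment",
)

review = "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"
print(sentiment(review))
```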
DTAI-KULeuven/robbert-v2-dutch-sentiment
DTAI-KULeuven
2022-06-29T13:11:28Z
4,006
8
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "Dutch", "Flemish", "RoBERTa", "RobBERT", "nl", "dataset:dbrd", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-30T16:53:44Z
--- language: nl license: mit datasets: - dbrd model-index: - name: robbert-v2-dutch-sentiment results: - task: type: text-classification name: Text Classification dataset: name: dbrd type: sentiment-analysis split: test metrics: - name: Accuracy type: accuracy value: 0.93325 widget: - text: "Ik erken dat dit een boek is, daarmee is alles gezegd." - text: "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT --- <p align="center"> <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%"> </p> # RobBERT finetuned for sentiment analysis on DBRD This is a finetuned model based on [RobBERT (v2)](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](https://hebban.nl). Hence our example sentences about books. We did some limited experiments to test if this also works for other domains, but this was not exactly amazing. We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff: | Model | Identifier | Layers | #Params. | Accuracy | |----------------|------------------------------------------------------------------------|--------|-----------|-----------| | RobBERT (v2) | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 12 | 116 M |93.3* | | RobBERTje - Merged (p=0.5)| [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 6 | 74 M |92.9 | *The results of RobBERT are of a different run than the one reported in the paper. # Training data and setup We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019). Originally, these reviews got a five-star rating, but this has been converted to positive (⭐️⭐️⭐️⭐️ and ⭐️⭐️⭐️⭐️⭐️), neutral (⭐️⭐️⭐️) and negative (⭐️ and ⭐️⭐️). We used 19.5k reviews for the training set, 528 reviews for the validation set and 2224 to calculate the final accuracy. The validation set was used to evaluate a random hyperparameter search over the learning rate, weight decay and gradient accumulation steps. The full training details are available in [`training_args.bin`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment/blob/main/training_args.bin) as a binary PyTorch file. # Limitations and biases - The domain of the reviews is limited to book reviews. - Most authors of the book reviews were women, which could have caused [a difference in performance for reviews written by men and women](https://www.aclweb.org/anthology/2020.findings-emnlp.292). - This is _not_ the same model as we discussed in our paper, due to some conversion issues between the original training two years ago and now, it was easier to retrain this model. The accuracy is slightly lower, but the model was trained on the beginning of the reviews instead of the end of the reviews. ## Credits and citation This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/). 
If you would like to cite our paper or models, you can use the following BibTeX: ``` @inproceedings{delobelle2020robbert, title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel", author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292", doi = "10.18653/v1/2020.findings-emnlp.292", pages = "3255--3265" } ```
robingeibel/bigbird-base-finetuned-big_patent
robingeibel
2022-06-29T12:35:25Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "fill-mask", "generated_from_trainer", "dataset:big_patent", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-06-27T07:03:58Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - big_patent model-index: - name: bigbird-base-finetuned-big_patent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bigbird-base-finetuned-big_patent This model is a fine-tuned version of [robingeibel/bigbird-base-finetuned-big_patent](https://huggingface.co/robingeibel/bigbird-base-finetuned-big_patent) on the big_patent dataset. It achieves the following results on the evaluation set: - Loss: 1.0686 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 1.1432 | 1.0 | 154482 | 1.0686 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
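A hedged usage sketch for `robingeibel/bigbird-base-finetuned-big_patent`: since this is a fill-mask checkpoint, the snippet queries it through the fill-mask pipeline; the example sentence is illustrative patent-style text and the mask token is taken from the tokenizer rather than hard-coded.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="robingeibel/bigbird-base-finetuned-big_patent")

sentence = f"The present invention relates to a {fill.tokenizer.mask_token} for charging electric vehicles."
for prediction in fill(sentence, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```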
Corianas/ppo-LunarLander-v2.loadbest_
Corianas
2022-06-29T12:26:24Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T12:26:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 257.12 +/- 21.75 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo --env LunarLander-v2 -orga Corianas -f logs/ python enjoy.py --algo ppo --env LunarLander-v2 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env LunarLander-v2 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo --env LunarLander-v2 -f logs/ -orga Corianas ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('ent_coef', 0.01), ('frame_stack', 4), ('gae_lambda', 0.98), ('gamma', 0.999), ('n_envs', 16), ('n_epochs', 4), ('n_steps', 1024), ('n_timesteps', 1000000.0), ('policy', 'MlpPolicy'), ('normalize', False)]) ```
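Alongside the RL Zoo commands above, a hedged Python sketch for loading this checkpoint directly: the filename follows the usual RL Zoo naming convention and is an assumption (check the repo's file list), and the `frame_stack: 4` hyperparameter is reproduced with `VecFrameStack` so the observation shape matches.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

# Filename assumed from the usual RL Zoo convention; verify against the repository.
checkpoint = load_from_hub("Corianas/ppo-LunarLander-v2.loadbest_", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Recreate the frame-stacked evaluation environment (frame_stack=4 per the hyperparameters).
env = VecFrameStack(DummyVecEnv([lambda: gym.make("LunarLander-v2")]), n_stack=4)

mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```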
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6
gary109
2022-06-29T12:06:41Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "gary109/AI_Light_Dance", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-06-28T13:47:08Z
--- license: apache-2.0 tags: - automatic-speech-recognition - gary109/AI_Light_Dance - generated_from_trainer model-index: - name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6 This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset. It achieves the following results on the evaluation set: - Loss: 1.0063 - Wer: 0.6580 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8572 | 1.0 | 376 | 1.0508 | 0.6601 | | 0.8671 | 2.0 | 752 | 1.0755 | 0.6581 | | 0.8578 | 3.0 | 1128 | 1.0152 | 0.6787 | | 0.8552 | 4.0 | 1504 | 1.0537 | 0.6557 | | 0.8354 | 5.0 | 1880 | 1.0386 | 0.6606 | | 0.8543 | 6.0 | 2256 | 1.0063 | 0.6580 | | 0.8556 | 7.0 | 2632 | 1.0487 | 0.6499 | | 0.8356 | 8.0 | 3008 | 1.0407 | 0.6549 | | 0.8227 | 9.0 | 3384 | 1.0382 | 0.6506 | | 0.8148 | 10.0 | 3760 | 1.0440 | 0.6500 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.3.dev0 - Tokenizers 0.12.1
Nancyzzz/wav2vec2-base-timit-demo-google-colab
Nancyzzz
2022-06-29T11:15:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-06-29T08:59:53Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5253 - Wer: 0.3406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.4884 | 1.0 | 500 | 1.6139 | 1.0293 | | 0.8373 | 2.01 | 1000 | 0.5286 | 0.5266 | | 0.4394 | 3.01 | 1500 | 0.4933 | 0.4678 | | 0.2974 | 4.02 | 2000 | 0.4159 | 0.4268 | | 0.2268 | 5.02 | 2500 | 0.4288 | 0.4074 | | 0.1901 | 6.02 | 3000 | 0.4407 | 0.3852 | | 0.1627 | 7.03 | 3500 | 0.4599 | 0.3849 | | 0.1397 | 8.03 | 4000 | 0.4330 | 0.3803 | | 0.1342 | 9.04 | 4500 | 0.4661 | 0.3785 | | 0.1165 | 10.04 | 5000 | 0.4518 | 0.3745 | | 0.1 | 11.04 | 5500 | 0.4714 | 0.3899 | | 0.0881 | 12.05 | 6000 | 0.4985 | 0.3848 | | 0.0794 | 13.05 | 6500 | 0.5074 | 0.3672 | | 0.0707 | 14.06 | 7000 | 0.5692 | 0.3681 | | 0.0669 | 15.06 | 7500 | 0.4722 | 0.3814 | | 0.0589 | 16.06 | 8000 | 0.5738 | 0.3784 | | 0.0562 | 17.07 | 8500 | 0.5183 | 0.3696 | | 0.0578 | 18.07 | 9000 | 0.5473 | 0.3841 | | 0.0473 | 19.08 | 9500 | 0.4918 | 0.3655 | | 0.0411 | 20.08 | 10000 | 0.5258 | 0.3517 | | 0.0419 | 21.08 | 10500 | 0.5256 | 0.3501 | | 0.0348 | 22.09 | 11000 | 0.5511 | 0.3597 | | 0.0328 | 23.09 | 11500 | 0.5054 | 0.3560 | | 0.0314 | 24.1 | 12000 | 0.5327 | 0.3537 | | 0.0296 | 25.1 | 12500 | 0.5142 | 0.3446 | | 0.0251 | 26.1 | 13000 | 0.5155 | 0.3411 | | 0.0249 | 27.11 | 13500 | 0.5344 | 0.3414 | | 0.0225 | 28.11 | 14000 | 0.5193 | 0.3408 | | 0.0226 | 29.12 | 14500 | 0.5253 | 0.3406 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
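A hedged usage sketch for `Nancyzzz/wav2vec2-base-timit-demo-google-colab`: the checkpoint should load as an automatic-speech-recognition pipeline; the audio path is a placeholder and a 16 kHz mono recording is assumed (the usual wav2vec2 sampling rate).

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Nancyzzz/wav2vec2-base-timit-demo-google-colab",
)

# Placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```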
RuiqianLi/wav2vec2-large-960h-lv60-self-4-gram_fine-tune_real_29_Jun
RuiqianLi
2022-06-29T08:44:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:uob_singlish", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-06-29T04:45:13Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - uob_singlish model-index: - name: wav2vec2-large-960h-lv60-self-4-gram_fine-tune_real_29_Jun results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-960h-lv60-self-4-gram_fine-tune_real_29_Jun This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the uob_singlish dataset. It achieves the following results on the evaluation set: - Loss: 1.2895 - Wer: 0.4583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.1283 | 1.82 | 20 | 1.5236 | 0.5764 | | 1.3015 | 3.64 | 40 | 1.2956 | 0.4931 | | 0.9918 | 5.45 | 60 | 1.3087 | 0.5347 | | 0.849 | 7.27 | 80 | 1.2914 | 0.5139 | | 0.6191 | 9.09 | 100 | 1.2895 | 0.4583 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ambekarsameer/distilbert-base-uncased-finetuned-cola
ambekarsameer
2022-06-29T08:26:13Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-29T08:16:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5337700382788287 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8051 - Matthews Correlation: 0.5338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5233 | 1.0 | 535 | 0.5324 | 0.4151 | | 0.3489 | 2.0 | 1070 | 0.5132 | 0.4836 | | 0.2392 | 3.0 | 1605 | 0.5852 | 0.5177 | | 0.1822 | 4.0 | 2140 | 0.7485 | 0.5256 | | 0.1382 | 5.0 | 2675 | 0.8051 | 0.5338 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
prithivida/bert-for-patents-64d
prithivida
2022-06-29T07:47:23Z
41
8
transformers
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "masked-lm", "en", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-31T06:40:35Z
--- language: - en tags: - masked-lm - pytorch pipeline-tag: "fill-mask" mask-token: "[MASK]" widget: - text: "The present [MASK] provides a torque sensor that is small and highly rigid and for which high production efficiency is possible." - text: "The present invention relates to [MASK] accessories and pertains particularly to a brake light unit for bicycles." - text: "The present invention discloses a space-bound-free [MASK] and its coordinate determining circuit for determining a coordinate of a stylus pen." - text: "The illuminated [MASK] includes a substantially translucent canopy supported by a plurality of ribs pivotally swingable towards and away from a shaft." license: apache-2.0 metrics: - perplexity --- # Motivation This model is based on anferico/bert-for-patents - a BERT<sub>LARGE</sub> model (see the next section for details). By default, the pre-trained model outputs embeddings of size 768 (base models) or 1024 (large models). However, when you store millions of embeddings, this can require quite a lot of memory/storage. The embedding dimension has therefore been reduced to 64, i.e. 1/16th of 1024, using Principal Component Analysis (PCA), and it still gives comparable performance; PCA even performed better than NMF here. Note: this process improves neither the runtime nor the memory requirement for running the model. It only reduces the space needed to store embeddings, for example, for semantic search using vector databases. # BERT for Patents BERT for Patents is a model trained by Google on 100M+ patents (not just US patents). If you want to learn more about the model, check out the [blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-ai-improves-patent-analysis), [white paper](https://services.google.com/fh/files/blogs/bert_for_patents_white_paper.pdf) and [GitHub page](https://github.com/google/patents-public-data/blob/master/models/BERT%20for%20Patents.md) containing the original TensorFlow checkpoint. --- ### Projects using this model (or variants of it): - [Patents4IPPC](https://github.com/ec-jrc/Patents4IPPC) (carried out by [Pi School](https://picampus-school.com/) and commissioned by the [Joint Research Centre (JRC)](https://ec.europa.eu/jrc/en) of the European Commission)
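To make the PCA step described above concrete, here is a hedged sketch of reducing 1024-d embeddings to 64-d with scikit-learn; it illustrates the general technique on synthetic data and is not the exact script or corpus used to produce this checkpoint.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a corpus of 1024-d pooled patent embeddings.
rng = np.random.default_rng(0)
embeddings_1024 = rng.normal(size=(10_000, 1024)).astype(np.float32)

# Fit PCA once on a representative sample, then reuse the same projection
# for every new embedding before storing it in a vector database.
pca = PCA(n_components=64)
pca.fit(embeddings_1024)

reduced = pca.transform(embeddings_1024)  # shape: (10000, 64)
print(reduced.shape, f"explained variance kept: {pca.explained_variance_ratio_.sum():.2%}")
```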
coolzhao/xlm-roberta-base-finetuned-panx-de
coolzhao
2022-06-29T07:14:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-06-29T07:01:12Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8600306626540231 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1356 - F1: 0.8600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2525 | 1.0 | 525 | 0.1673 | 0.8294 | | 0.1298 | 2.0 | 1050 | 0.1381 | 0.8510 | | 0.0839 | 3.0 | 1575 | 0.1356 | 0.8600 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
iiShreya/q-FrozenLake-v1-4x4-noSlippery
iiShreya
2022-06-29T05:28:15Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-29T05:28:08Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="iiShreya/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
RodrigoGuerra/bert-base-spanish-wwm-uncased-finetuned-clinical
RodrigoGuerra
2022-06-29T05:26:54Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-06-29T04:04:21Z
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-spanish-wwm-uncased-finetuned-clinical results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-uncased-finetuned-clinical This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7962 - F1: 0.1081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 80 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:------:|:---------------:|:------:| | 1.1202 | 1.0 | 2007 | 1.0018 | 0.0062 | | 1.0153 | 2.0 | 4014 | 0.9376 | 0.0166 | | 0.9779 | 3.0 | 6021 | 0.9026 | 0.0342 | | 0.9598 | 4.0 | 8028 | 0.8879 | 0.0337 | | 0.9454 | 5.0 | 10035 | 0.8699 | 0.0598 | | 0.9334 | 6.0 | 12042 | 0.8546 | 0.0682 | | 0.9263 | 7.0 | 14049 | 0.8533 | 0.0551 | | 0.9279 | 8.0 | 16056 | 0.8538 | 0.0715 | | 0.9184 | 9.0 | 18063 | 0.8512 | 0.0652 | | 0.9151 | 10.0 | 20070 | 0.8313 | 0.0789 | | 0.9092 | 11.0 | 22077 | 0.8299 | 0.0838 | | 0.9083 | 12.0 | 24084 | 0.8331 | 0.0718 | | 0.9057 | 13.0 | 26091 | 0.8319 | 0.0719 | | 0.9018 | 14.0 | 28098 | 0.8133 | 0.0969 | | 0.9068 | 15.0 | 30105 | 0.8234 | 0.0816 | | 0.9034 | 16.0 | 32112 | 0.8151 | 0.0899 | | 0.9008 | 17.0 | 34119 | 0.8145 | 0.0967 | | 0.8977 | 18.0 | 36126 | 0.8168 | 0.0891 | | 0.898 | 19.0 | 38133 | 0.8167 | 0.0818 | | 0.8956 | 20.0 | 40140 | 0.8076 | 0.1030 | | 0.8983 | 21.0 | 42147 | 0.8129 | 0.0867 | | 0.896 | 22.0 | 44154 | 0.8118 | 0.0892 | | 0.8962 | 23.0 | 46161 | 0.8066 | 0.1017 | | 0.8917 | 24.0 | 48168 | 0.8154 | 0.0908 | | 0.8923 | 25.0 | 50175 | 0.8154 | 0.0897 | | 0.8976 | 26.0 | 52182 | 0.8089 | 0.0910 | | 0.8926 | 27.0 | 54189 | 0.8069 | 0.0947 | | 0.8911 | 28.0 | 56196 | 0.8170 | 0.0882 | | 0.8901 | 29.0 | 58203 | 0.7991 | 0.1112 | | 0.8934 | 30.0 | 60210 | 0.7996 | 0.1112 | | 0.8903 | 31.0 | 62217 | 0.8049 | 0.0950 | | 0.8924 | 32.0 | 64224 | 0.8116 | 0.0951 | | 0.8887 | 33.0 | 66231 | 0.7982 | 0.1075 | | 0.8922 | 34.0 | 68238 | 0.8013 | 0.1025 | | 0.8871 | 35.0 | 70245 | 0.8064 | 0.0979 | | 0.8913 | 36.0 | 72252 | 0.8108 | 0.0909 | | 0.8924 | 37.0 | 74259 | 0.8081 | 0.0889 | | 0.8848 | 38.0 | 76266 | 0.7923 | 0.1228 | | 0.8892 | 39.0 | 78273 | 0.8025 | 0.0959 | | 0.8886 | 40.0 | 80280 | 0.7954 | 0.1148 | | 0.8938 | 41.0 | 82287 | 0.8017 | 0.1058 | | 0.8897 | 42.0 | 84294 | 0.7946 | 0.1146 | | 0.8906 | 43.0 | 86301 | 0.7983 | 0.1102 | | 0.889 | 44.0 | 88308 | 0.8068 | 0.0950 | | 0.8872 | 45.0 | 90315 | 0.7999 | 0.1089 | | 0.8902 | 46.0 | 92322 | 0.7992 | 0.0999 | | 0.8912 | 47.0 | 94329 | 0.7981 | 0.1048 | | 0.886 | 48.0 | 96336 | 0.8024 | 0.0991 | | 0.8848 | 49.0 | 98343 | 0.8026 | 0.0984 | | 0.8866 | 50.0 | 100350 | 0.7965 | 0.1135 | | 0.8848 | 51.0 | 102357 | 0.8054 | 0.0926 | | 0.8863 | 52.0 | 104364 | 0.8068 | 0.0917 | | 0.8866 | 53.0 | 106371 | 0.7993 | 0.0964 | | 0.8823 | 54.0 | 108378 | 
0.7929 | 0.1126 | | 0.8911 | 55.0 | 110385 | 0.7938 | 0.1132 | | 0.8911 | 56.0 | 112392 | 0.7932 | 0.1144 | | 0.8866 | 57.0 | 114399 | 0.8018 | 0.0957 | | 0.8841 | 58.0 | 116406 | 0.7976 | 0.1015 | | 0.8874 | 59.0 | 118413 | 0.8035 | 0.0966 | | 0.887 | 60.0 | 120420 | 0.7954 | 0.1112 | | 0.888 | 61.0 | 122427 | 0.7927 | 0.1164 | | 0.8845 | 62.0 | 124434 | 0.7982 | 0.1012 | | 0.8848 | 63.0 | 126441 | 0.7978 | 0.1034 | | 0.8857 | 64.0 | 128448 | 0.8036 | 0.0969 | | 0.8827 | 65.0 | 130455 | 0.7958 | 0.1036 | | 0.8878 | 66.0 | 132462 | 0.7983 | 0.1030 | | 0.885 | 67.0 | 134469 | 0.7956 | 0.1055 | | 0.8859 | 68.0 | 136476 | 0.7964 | 0.1058 | | 0.8872 | 69.0 | 138483 | 0.7989 | 0.1005 | | 0.8841 | 70.0 | 140490 | 0.7949 | 0.1138 | | 0.8846 | 71.0 | 142497 | 0.7960 | 0.1062 | | 0.8867 | 72.0 | 144504 | 0.7965 | 0.1058 | | 0.8856 | 73.0 | 146511 | 0.7980 | 0.1007 | | 0.8852 | 74.0 | 148518 | 0.7971 | 0.1012 | | 0.8841 | 75.0 | 150525 | 0.7975 | 0.1049 | | 0.8865 | 76.0 | 152532 | 0.7981 | 0.1010 | | 0.8887 | 77.0 | 154539 | 0.7945 | 0.1095 | | 0.8853 | 78.0 | 156546 | 0.7965 | 0.1053 | | 0.8843 | 79.0 | 158553 | 0.7966 | 0.1062 | | 0.8858 | 80.0 | 160560 | 0.7962 | 0.1081 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.9.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
KyanChen/BuildingExtraction
KyanChen
2022-06-29T02:13:33Z
0
1
null
[ "region:us" ]
null
2022-06-29T01:34:01Z
# STTNet Paper: Building Extraction from Remote Sensing Images with Sparse Token Transformers 1. Prepare Data Prepare data for the training, validation, and test phases. All images have a resolution of $512 \times 512$. Please refer to the **Data** directory. For larger images, you can patch the images with labels using **Tools/CutImgSegWithLabel.py**. 2. Get Data List Please refer to **Tools/GetTrainValTestCSV.py** to get the train, val, and test csv files. 3. Get Imgs Infos Please refer to **Tools/GetImgMeanStd.py** to get the mean and standard deviation of all image pixels in the training set (a minimal sketch of this computation is shown after this card). 4. Modify Model Infos Please modify the model information if you want, or keep the default configuration. 5. Run to Train Train the model in **Main.py**. 6. [Optional] Run to Test Test the model with a checkpoint in **Test.py**. We have provided pretrained models on the INRIA and WHU datasets. The .pt models are in the **Pretrain** folder. If you have any questions, please refer to [our paper](https://www.mdpi.com/2072-4292/13/21/4441) or contact us by email. ``` @Article{rs13214441, AUTHOR = {Chen, Keyan and Zou, Zhengxia and Shi, Zhenwei}, TITLE = {Building Extraction from Remote Sensing Images with Sparse Token Transformers}, JOURNAL = {Remote Sensing}, VOLUME = {13}, YEAR = {2021}, NUMBER = {21}, ARTICLE-NUMBER = {4441}, URL = {https://www.mdpi.com/2072-4292/13/21/4441}, ISSN = {2072-4292}, DOI = {10.3390/rs13214441} } ```
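For step 3, a hedged sketch of what a per-channel mean/std computation over the training tiles typically looks like; the directory layout and file format are assumptions, so **Tools/GetImgMeanStd.py** in the repository remains the authoritative version.

```python
import glob
import numpy as np
from PIL import Image

# Assumed layout: RGB training tiles of size 512x512 under Data/Train/Img/.
paths = glob.glob("Data/Train/Img/*.png")

pixel_sum = np.zeros(3, dtype=np.float64)
pixel_sq_sum = np.zeros(3, dtype=np.float64)
count = 0

for path in paths:
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    flat = img.reshape(-1, 3)
    pixel_sum += flat.sum(axis=0)
    pixel_sq_sum += (flat ** 2).sum(axis=0)
    count += flat.shape[0]

mean = pixel_sum / count
std = np.sqrt(pixel_sq_sum / count - mean ** 2)
print("channel means:", mean, "channel stds:", std)
```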
workRL/q-Taxi-v3
workRL
2022-06-28T23:49:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-28T23:49:51Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.54 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="workRL/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
workRL/q-FrozenLake-v1-4x4-noSlippery
workRL
2022-06-28T23:47:39Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-28T23:47:32Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="workRL/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```