| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-16 06:27:54 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (522 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-16 06:27:41 |
| card | string (length) | 11 | 1.01M |
dzegan/a2c-PandaReachDense-v2
dzegan
2023-02-02T18:51:14Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T18:48:49Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -3.98 +/- 0.75 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
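The usage section of this card is still a `TODO`, so here is a minimal loading sketch for the checkpoint. It assumes the zip follows the usual `<algo>-<env>.zip` naming used by the SB3 Hub integration and that `panda_gym` provides the environment; neither detail is confirmed by the card.

```python
import gym
import panda_gym  # registers PandaReachDense-v2 with gym (assumed dependency)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="dzegan/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# Roll the policy out for one episode.
env = gym.make("PandaReachDense-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```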
XperienciaVirtual/sd-1-5-db-ai-creative-hub-hdbglv
XperienciaVirtual
2023-02-02T18:29:01Z
2
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-02T18:28:03Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: hdbglv --- ### sd-1-5-db-ai-creative-hub-hdbglv Dreambooth model trained by jaimexv with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: hdbglv (use that on your prompt) ![hdbglv 0](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%281%29.jpg)![hdbglv 1](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%282%29.jpg)![hdbglv 2](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%283%29.jpg)![hdbglv 3](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%284%29.jpg)![hdbglv 4](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%285%29.jpg)![hdbglv 5](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%286%29.jpg)![hdbglv 6](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%287%29.jpg)![hdbglv 7](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%288%29.jpg)![hdbglv 8](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%289%29.jpg)![hdbglv 9](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2810%29.jpg)![hdbglv 10](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2811%29.jpg)![hdbglv 11](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2812%29.jpg)![hdbglv 12](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2813%29.jpg)![hdbglv 13](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2814%29.jpg)![hdbglv 14](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2815%29.jpg)![hdbglv 15](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2816%29.jpg)![hdbglv 16](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2817%29.jpg)
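For reference, a DreamBooth checkpoint like this one is typically loaded through `diffusers` as sketched below; the fp16/GPU settings and the example prompt are illustrative assumptions, since the card itself only links the inference notebook.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned Stable Diffusion weights directly from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "XperienciaVirtual/sd-1-5-db-ai-creative-hub-hdbglv",
    torch_dtype=torch.float16,
).to("cuda")

# The concept token "hdbglv" must appear in the prompt.
image = pipe("a portrait of hdbglv, studio lighting").images[0]
image.save("hdbglv.png")
```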
fermaat/ppo-SnowballTarget
fermaat
2023-02-02T18:21:06Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-02-02T18:21:00Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: fermaat/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Anjoe/poetry-gpt2-large-complete
Anjoe
2023-02-02T17:43:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-02T13:06:56Z
--- license: mit tags: - generated_from_trainer model-index: - name: poetry-gpt2-large-complete results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poetry-gpt2-large-complete This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.6616 | 1.0 | 20566 | 3.6252 | | 3.2695 | 2.0 | 41132 | 3.5428 | | 3.0406 | 3.0 | 61698 | 3.5588 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
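Since this card gives no inference example, a small generation sketch follows; the German prompt and sampling settings are illustrative choices (the base model, gerpt2-large, is German).

```python
from transformers import pipeline

# Text-generation pipeline on the fine-tuned poetry model.
generator = pipeline("text-generation", model="Anjoe/poetry-gpt2-large-complete")
out = generator(
    "Der Mond steht still",  # illustrative prompt
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
)
print(out[0]["generated_text"])
```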
h9LtLSb/whisper-small-es
h9LtLSb
2023-02-02T17:29:19Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-01T20:57:48Z
--- model-index: - name: h9LtLSb/whisper-small-es results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: mozilla-foundation/common_voice_11_0 type: mozilla-foundation/common_voice_11_0 config: es split: test metrics: - type: wer value: 8.43 name: WER --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
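The card above reports a WER of 8.43 on the Common Voice 11 Spanish test split but includes no usage snippet; a minimal transcription sketch, with a placeholder audio path, might look like this:

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint; "sample_es.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="h9LtLSb/whisper-small-es")
print(asr("sample_es.wav")["text"])
```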
vvn0/Taxi-v3
vvn0
2023-02-02T16:24:58Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T16:24:56Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="vvn0/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
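The usage snippet above calls `load_from_hub` and `gym` without imports; the sketch below fills those in and runs one greedy episode. The `load_from_hub` stand-in and the `"qtable"` / `"env_id"` keys follow the Deep RL course notebook format and are assumptions, not guaranteed by the card.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Stand-in for the course's load_from_hub helper: download and unpickle.
def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="vvn0/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])     # "env_id" key assumed (course format)
qtable = np.array(model["qtable"])  # "qtable" key assumed (course format)

state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the learned Q-table
    state, reward, done, _ = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```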
vvn0/q-FrozenLake-v1-4x4-noSlippery
vvn0
2023-02-02T16:18:55Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T16:18:53Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="vvn0/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
rabiyulfahim/pegasus_pararephrase
rabiyulfahim
2023-02-02T15:50:53Z
3
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "paraphrasing", "seq2seq", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-02T15:21:58Z
--- language: en license: apache-2.0 tags: - pegasus - paraphrasing - seq2seq --- ## Model description [PEGASUS](https://github.com/google-research/pegasus) fine-tuned for paraphrasing ## Model in Action πŸš€ ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer model_name = 'tuner007/pegasus_paraphrase' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) def get_response(input_text,num_return_sequences,num_beams): batch = tokenizer([input_text],truncation=True,padding='longest',max_length=60, return_tensors="pt").to(torch_device) translated = model.generate(**batch,max_length=60,num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text ``` #### Example: ``` num_beams = 10 num_return_sequences = 10 context = "The ultimate test of your knowledge is your capacity to convey it to another." get_response(context,num_return_sequences,num_beams) # output: ['The test of your knowledge is your ability to convey it.', 'The ability to convey your knowledge is the ultimate test of your knowledge.', 'The ability to convey your knowledge is the most important test of your knowledge.', 'Your capacity to convey your knowledge is the ultimate test of it.', 'The test of your knowledge is your ability to communicate it.', 'Your capacity to convey your knowledge is the ultimate test of your knowledge.', 'Your capacity to convey your knowledge to another is the ultimate test of your knowledge.', 'Your capacity to convey your knowledge is the most important test of your knowledge.', 'The test of your knowledge is how well you can convey it.', 'Your capacity to convey your knowledge is the ultimate test.'] ``` > Created by [Arpit Rajauria](https://twitter.com/arpit_rajauria) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/arpit_rajauria)
Jackmin108/Reinforce-CartPole-v1
Jackmin108
2023-02-02T15:42:33Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T15:03:46Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 1000.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jannikskytt/ppo-Huggy
jannikskytt
2023-02-02T15:41:05Z
11
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-02T15:40:57Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: jannikskytt/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
gokuls/distilbert_sa_GLUE_Experiment_data_aug_stsb_384
gokuls
2023-02-02T15:25:51Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T14:45:15Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_stsb_384 results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.1905368464556858 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_stsb_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.8610 - Pearson: 0.1867 - Spearmanr: 0.1905 - Combined Score: 0.1886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 0.9512 | 1.0 | 1259 | 2.8610 | 0.1867 | 0.1905 | 0.1886 | | 0.3073 | 2.0 | 2518 | 3.0669 | 0.1520 | 0.1508 | 0.1514 | | 0.1587 | 3.0 | 3777 | 3.1954 | 0.1595 | 0.1627 | 0.1611 | | 0.1014 | 4.0 | 5036 | 2.9135 | 0.1600 | 0.1591 | 0.1596 | | 0.0713 | 5.0 | 6295 | 3.2956 | 0.1514 | 0.1464 | 0.1489 | | 0.0551 | 6.0 | 7554 | 3.1588 | 0.1712 | 0.1642 | 0.1677 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
torchbearer241996/finetuning-sentiment-model-3000-samples
torchbearer241996
2023-02-02T15:23:37Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T05:12:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8633333333333333 - name: F1 type: f1 value: 0.8637873754152824 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3200 - Accuracy: 0.8633 - F1: 0.8638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
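A short inference sketch for this sentiment classifier; the example sentences are illustrative, and the label names depend on the model config (they may be LABEL_0/LABEL_1 rather than negative/positive).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="torchbearer241996/finetuning-sentiment-model-3000-samples",
)
# Returns the predicted label and its score for each input sentence.
print(classifier("This movie was a complete waste of time."))
print(classifier("A surprisingly touching film with great performances."))
```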
toinsson/poca-SoccerTwos_1
toinsson
2023-02-02T15:23:33Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-02T15:14:36Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: toinsson/poca-SoccerTwos_1 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
YoriV/q-Taxi-v3
YoriV
2023-02-02T15:19:37Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T15:19:34Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="YoriV/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
KushalRamaiya/sd-class-butterflies-32
KushalRamaiya
2023-02-02T14:36:01Z
6
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-02-02T14:35:24Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('KushalRamaiya/sd-class-butterflies-32') image = pipeline().images[0] image ```
tr9800a/ppo-Huggy
tr9800a
2023-02-02T14:28:53Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-02T13:46:34Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: tr9800a/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
research-backup/mbart-large-cc25-itquad-qg-ae
research-backup
2023-02-02T14:09:46Z
4
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "question generation", "answer extraction", "it", "dataset:lmqg/qg_itquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-02T13:55:30Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: it datasets: - lmqg/qg_itquad pipeline_tag: text2text-generation tags: - question generation - answer extraction widget: - text: "generate question: <hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento." example_title: "Question Generation Example 1" - text: "generate question: L' individuazione del petrolio e lo sviluppo di nuovi giacimenti richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una produzione significativa." example_title: "Question Generation Example 2" - text: "generate question: il <hl> Giappone <hl> Γ¨ stato il paese piΓΉ dipendente dal petrolio arabo." example_title: "Question Generation Example 3" - text: "extract answers: <hl> Il 6 ottobre 1973 , la Siria e l' Egitto, con il sostegno di altre nazioni arabe, lanciarono un attacco a sorpresa su Israele, su Yom Kippur. <hl> Questo rinnovo delle ostilitΓ  nel conflitto arabo-israeliano ha liberato la pressione economica sottostante sui prezzi del petrolio. All' epoca, l' Iran era il secondo esportatore mondiale di petrolio e un vicino alleato degli Stati Uniti. Settimane piΓΉ tardi, lo sciΓ  d' Iran ha detto in un' intervista: Naturalmente[il prezzo del petrolio] sta andando a salire Certamente! E come! Avete[Paesi occidentali] aumentato il prezzo del grano che ci vendete del 300 per cento, e lo stesso per zucchero e cemento." example_title: "Answer Extraction Example 1" - text: "extract answers: <hl> Furono introdotti autocarri compatti, come la Toyota Hilux e il Datsun Truck, seguiti dal camion Mazda (venduto come il Ford Courier), e l' Isuzu costruito Chevrolet LUV. <hl> Mitsubishi rebranded il suo Forte come Dodge D-50 pochi anni dopo la crisi petrolifera. Mazda, Mitsubishi e Isuzu avevano partnership congiunte rispettivamente con Ford, Chrysler e GM. In seguito i produttori americani introdussero le loro sostituzioni nazionali (Ford Ranger, Dodge Dakota e la Chevrolet S10/GMC S-15), ponendo fine alla loro politica di importazione vincolata." 
example_title: "Answer Extraction Example 2" model-index: - name: lmqg/mbart-large-cc25-itquad-qg-ae results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_itquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 7.06 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 20.15 - name: METEOR (Question Generation) type: meteor_question_generation value: 16.86 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 79.29 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 55.92 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer value: 82.65 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer value: 84.34 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer value: 81.06 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer value: 56.14 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer value: 57.13 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer value: 55.22 - name: BLEU4 (Answer Extraction) type: bleu4_answer_extraction value: 20.21 - name: ROUGE-L (Answer Extraction) type: rouge_l_answer_extraction value: 46.51 - name: METEOR (Answer Extraction) type: meteor_answer_extraction value: 44.48 - name: BERTScore (Answer Extraction) type: bertscore_answer_extraction value: 90.63 - name: MoverScore (Answer Extraction) type: moverscore_answer_extraction value: 83.05 - name: AnswerF1Score (Answer Extraction) type: answer_f1_score__answer_extraction value: 76.59 - name: AnswerExactMatch (Answer Extraction) type: answer_exact_match_answer_extraction value: 63.88 --- # Model Card of `lmqg/mbart-large-cc25-itquad-qg-ae` This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question generation and answer extraction jointly on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** it - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="it", model="lmqg/mbart-large-cc25-itquad-qg-ae") # model prediction question_answer_pairs = model.generate_qa("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-itquad-qg-ae") # question generation question = pipe("generate question: <hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") # answer extraction answer = pipe("extract answers: <hl> Il 6 ottobre 1973 , la Siria e l' Egitto, con il sostegno di altre nazioni arabe, lanciarono un attacco a sorpresa su Israele, su Yom Kippur. <hl> Questo rinnovo delle ostilità nel conflitto arabo-israeliano ha liberato la pressione economica sottostante sui prezzi del petrolio. All' epoca, l' Iran era il secondo esportatore mondiale di petrolio e un vicino alleato degli Stati Uniti. Settimane più tardi, lo scià d' Iran ha detto in un' intervista: Naturalmente[il prezzo del petrolio] sta andando a salire Certamente! E come!
Avete[Paesi occidentali] aumentato il prezzo del grano che ci vendete del 300 per cento, e lo stesso per zucchero e cemento.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 79.29 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 22.03 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 14.31 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 9.9 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 7.06 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 16.86 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 55.92 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 20.15 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 82.65 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedF1Score (MoverScore) | 56.14 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (BERTScore) | 81.06 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (MoverScore) | 55.22 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (BERTScore) | 84.34 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (MoverScore) | 57.13 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 63.88 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | AnswerF1Score | 76.59 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | BERTScore | 90.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 33.66 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 27.96 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 23.79 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 20.21 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 44.48 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 83.05 | default | 
[lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 46.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 8 - batch: 2 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 32 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qg-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
nandysoham16/14-clustered_aug
nandysoham16
2023-02-02T14:03:07Z
4
0
keras
[ "keras", "tf", "distilbert", "en", "arxiv:1910.09700", "license:mit", "region:us" ]
null
2023-02-02T13:56:33Z
--- language: en license: mit library_name: keras --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> ['The_Legend_of_Zelda:_Twilight_Princess', 'Symbiosis', 'Tristan_da_Cunha', 'Hokkien', 'Thuringia', 'Samoa', 'Chinese_characters', 'Digimon', 'Tuvalu', 'Geological_history_of_Earth'] - **Developed by:** nandysoham - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
nandysoham16/13-clustered_aug
nandysoham16
2023-02-02T13:55:38Z
3
0
keras
[ "keras", "tf", "distilbert", "en", "arxiv:1910.09700", "license:mit", "region:us" ]
null
2023-02-02T13:45:53Z
--- language: en license: mit library_name: keras --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> ['Iranian_languages', 'Aspirated_consonant', 'Catalan_language', 'Estonian_language', 'Dialect', 'Slavs', 'Szlachta', 'Letter_case', 'Old_English', 'Mesozoic', 'ASCII', 'Sanskrit', 'Multiracial_American', 'Dutch_language', 'Germans', 'Avicenna', 'Textual_criticism', 'Unicode', 'Culture', 'Serbo-Croatian', 'Czech_language', 'Spanish_language_in_the_United_States', 'Greeks', 'Translation', 'Kievan_Rus%27', 'Russian_language', 'Armenians', 'Myocardial_infarction'] - **Developed by:** nandysoham - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
sakodu/ppo-LunarLander-v2
sakodu
2023-02-02T13:47:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T13:47:11Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.91 +/- 28.51 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
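As with the other stable-baselines3 card above, the usage section is a `TODO`; the following is a minimal loading and evaluation sketch, assuming the conventional `ppo-LunarLander-v2.zip` checkpoint filename (not confirmed by the card).

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint; the filename is an assumption based on SB3 conventions.
checkpoint = load_from_hub(
    repo_id="sakodu/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")
```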
Martinkoling/my-first-setfit-hyperparam-4epochs
Martinkoling
2023-02-02T13:39:06Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T13:38:57Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 120 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 4.3853483064647136e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 480, "warmup_steps": 48, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
OlegBatrakov/sd-class-butterflies-32
OlegBatrakov
2023-02-02T13:38:18Z
1
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-02-02T13:37:51Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('OlegBatrakov/sd-class-butterflies-32') image = pipeline().images[0] image ```
sinny/ppo-SnowballTarget
sinny
2023-02-02T13:36:05Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-02-02T13:36:03Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: sinny/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
stevaras2/poca-SoccerTwos
stevaras2
2023-02-02T13:19:26Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-02T13:11:36Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: stevaras2/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
gokuls/distilbert_sa_GLUE_Experiment_data_aug_sst2_192
gokuls
2023-02-02T13:10:46Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T11:56:46Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_sst2_192 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.786697247706422 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_sst2_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5194 - Accuracy: 0.7867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3624 | 1.0 | 4374 | 0.5194 | 0.7867 | | 0.2778 | 2.0 | 8748 | 0.6027 | 0.7867 | | 0.2345 | 3.0 | 13122 | 0.6679 | 0.7856 | | 0.2023 | 4.0 | 17496 | 0.7301 | 0.7890 | | 0.1774 | 5.0 | 21870 | 0.7613 | 0.7718 | | 0.1582 | 6.0 | 26244 | 0.9199 | 0.7626 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
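A minimal inference sketch for this checkpoint, assuming the standard 🤗 Transformers `pipeline` API; the example sentence is illustrative, and the returned label names depend on this checkpoint's config (they may be generic `LABEL_0`/`LABEL_1` rather than `negative`/`positive`):

```python
from transformers import pipeline

# Load the SST-2 sentiment classifier fine-tuned above
classifier = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_data_aug_sst2_192",
)

# Illustrative input; SST-2 is binary sentiment classification
print(classifier("a gorgeous, witty, seductive movie."))
```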
s3nh/DialoGPT-tony-montana
s3nh
2023-02-02T12:55:11Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "en", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-02T09:56:44Z
--- license: openrail language: - en pipeline_tag: conversational --- <img src = 'https://images.unsplash.com/photo-1628432136678-43ff9be34064?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=663&q=80'> <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> ### Description DialoGPT is a variant of the GPT-2 (Generative Pretrained Transformer) language model developed by Microsoft. It is a deep neural network-based language model trained on massive amounts of text data to generate human-like text. DialoGPT uses the transformer architecture, a type of neural network designed for processing sequential data such as language. During the training phase, the model is exposed to a large corpus of text and learns to predict the next word in a sequence given the previous words. In the context of dialog, DialoGPT is trained to predict the response in a conversation, given the context of the conversation. This context can include one or more turns of the conversation, along with any additional information such as the topic of the conversation or the speaker's personality. At inference time, the model takes the current context of the conversation as input and generates a response. The response is generated by sampling from the model's predicted distribution over the vocabulary. Overall, DialoGPT provides a flexible and powerful solution for generating human-like text in a conversational context, allowing for the creation of a wide range of applications such as chatbots, conversational agents, and virtual assistants. ## Parameters The model was trained for 40 epochs, using the following parameters. ``` per_gpu_train_batch_size: int = 2 per_gpu_eval_batch_size: int = 2 gradient_accumulation_steps: int = 1 learning_rate: float = 5e-5 weight_decay: float = 0.0 adam_epsilon: float = 1e-8 max_grad_norm: float = 1.0 num_train_epochs: int = 40 max_steps: int = -1 warmup_steps: int = 0 logging_steps: int = 1000 save_steps: int = 3500 save_total_limit = None eval_all_checkpoints: bool = False no_cuda: bool = False overwrite_output_dir: bool = True overwrite_cache: bool = True should_continue: bool = False seed: int = 42 local_rank: int = -1 fp16: bool = False fp16_opt_level: str = 'O1' ``` ## Usage DialoGPT **large** version, fine-tuned on Tony Montana lines (the main character of Scarface). A simple snippet showing how to run inference with this model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('s3nh/DialoGPT-tony-montana') model = AutoModelForCausalLM.from_pretrained('s3nh/DialoGPT-tony-montana') for step in range(4): new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) print("MontanaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
Aotsuyu/DiscoElysiumLora
Aotsuyu
2023-02-02T12:52:41Z
0
5
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-11T19:12:18Z
--- license: creativeml-openrail-m ---
nandysoham16/7-clustered_aug
nandysoham16
2023-02-02T12:37:56Z
1
0
keras
[ "keras", "tf", "distilbert", "en", "arxiv:1910.09700", "license:mit", "region:us" ]
null
2023-02-02T12:30:33Z
--- language: en license: mit library_name: keras --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> ['Spectre_(2015_film)', 'Architecture', 'Materialism', 'Russian_Soviet_Federative_Socialist_Republic', 'Hellenistic_period', 'Gothic_architecture', 'Cubism', 'Renewable_energy_commercialization', 'Neoclassical_architecture', 'Idealism', 'Georgian_architecture', 'Economy_of_Greece'] - **Developed by:** nandysoham - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
vsrinivas/marian-finetuned-kde4-en-to-hi
vsrinivas
2023-02-02T12:35:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-01-30T17:10:08Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-hi results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-hi split: train args: en-hi metrics: - name: Bleu type: bleu value: 51.039293551719226 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-hi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9597 - Bleu: 51.0393 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
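A minimal translation sketch for this checkpoint, assuming the standard 🤗 Transformers translation `pipeline`; the English input is only an example (KDE4 is software/UI text, so similar inputs fit best):

```python
from transformers import pipeline

# Load the English-to-Hindi Marian model fine-tuned on KDE4
translator = pipeline("translation", model="vsrinivas/marian-finetuned-kde4-en-to-hi")

# Example sentence in the KDE4 (software UI) domain
result = translator("Open the file menu and select Save As.")
print(result[0]["translation_text"])
```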
huynhdoo/camembert-base-finetuned-CLS
huynhdoo
2023-02-02T12:19:37Z
6
0
transformers
[ "transformers", "tf", "tensorboard", "camembert", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T11:02:02Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: huynhdoo/camembert-base-finetuned-CLS results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # huynhdoo/camembert-base-finetuned-CLS This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1062 - Validation Loss: 0.1546 - Train Accuracy: 0.9521 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 669, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.3620 | 0.1712 | 0.9471 | 0 | | 0.1632 | 0.1488 | 0.9521 | 1 | | 0.1062 | 0.1546 | 0.9521 | 2 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
nandysoham16/5-clustered_aug
nandysoham16
2023-02-02T12:12:17Z
1
0
keras
[ "keras", "tf", "distilbert", "en", "arxiv:1910.09700", "license:mit", "region:us" ]
null
2023-02-02T12:05:45Z
--- language: en license: mit library_name: keras --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> ['Daylight_saving_time', 'Chihuahua_(state)', 'United_States_dollar', 'Gregorian_calendar', 'Circadian_rhythm', 'Department_store', 'Planck_constant'] - **Developed by:** nandysoham - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
dyy2003/pegasus-samsum
dyy2003
2023-02-02T11:50:01Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-02T11:05:54Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cpu - Datasets 2.9.0 - Tokenizers 0.13.2
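A minimal summarization sketch, assuming the standard 🤗 Transformers `pipeline` API; the dialogue below is a made-up SAMSum-style example, not taken from the dataset:

```python
from transformers import pipeline

# Load the PEGASUS model fine-tuned on SAMSum dialogue summarization
summarizer = pipeline("summarization", model="dyy2003/pegasus-samsum")

# Made-up chat dialogue in the style of SAMSum
dialogue = (
    "Amanda: Can you pick me up after work?\n"
    "Tom: Sure, what time?\n"
    "Amanda: Around 6 pm at the main gate.\n"
    "Tom: OK, see you there."
)
print(summarizer(dialogue)[0]["summary_text"])
```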
mqy/mt5-small-finetuned-1feb-2
mqy
2023-02-02T11:48:14Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-02-02T10:35:48Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-1feb-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-1feb-2 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3856 - Rouge1: 8.74 - Rouge2: 2.66 - Rougel: 8.58 - Rougelsum: 8.6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 5.3153 | 1.0 | 311 | 2.6946 | 7.56 | 2.04 | 7.55 | 7.46 | | 3.3159 | 2.0 | 622 | 2.5923 | 8.07 | 2.28 | 8.05 | 8.02 | | 3.092 | 3.0 | 933 | 2.5342 | 7.83 | 2.01 | 7.81 | 7.76 | | 2.9676 | 4.0 | 1244 | 2.4982 | 8.45 | 2.49 | 8.37 | 8.39 | | 2.862 | 5.0 | 1555 | 2.4627 | 8.3 | 2.5 | 8.26 | 8.27 | | 2.7891 | 6.0 | 1866 | 2.4366 | 8.67 | 2.81 | 8.53 | 8.55 | | 2.7391 | 7.0 | 2177 | 2.4215 | 8.51 | 2.54 | 8.45 | 8.42 | | 2.6887 | 8.0 | 2488 | 2.4277 | 8.71 | 2.53 | 8.56 | 8.54 | | 2.6392 | 9.0 | 2799 | 2.3939 | 8.49 | 2.53 | 8.4 | 8.4 | | 2.6139 | 10.0 | 3110 | 2.4015 | 9.28 | 2.85 | 9.14 | 9.19 | | 2.5727 | 11.0 | 3421 | 2.3956 | 9.24 | 2.9 | 9.08 | 9.09 | | 2.5595 | 12.0 | 3732 | 2.3856 | 8.45 | 2.59 | 8.31 | 8.35 | | 2.5471 | 13.0 | 4043 | 2.3891 | 8.64 | 2.79 | 8.53 | 8.52 | | 2.5231 | 14.0 | 4354 | 2.3870 | 8.78 | 2.79 | 8.64 | 8.6 | | 2.5024 | 15.0 | 4665 | 2.3856 | 8.74 | 2.66 | 8.58 | 8.6 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
blazers/nfmystyle
blazers
2023-02-02T11:41:40Z
5
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-02T11:16:34Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### nfmystyle Dreambooth model trained by blazers with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: use a CFG Scale of 3.5 for the final images.
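A minimal `diffusers` inference sketch for this concept; the repo is tagged as a `StableDiffusionPipeline`, but the exact prompt token is an assumption inferred from the concept name, and the CFG scale of 3.5 follows the note above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth fine-tune from the Hub
pipe = StableDiffusionPipeline.from_pretrained("blazers/nfmystyle", torch_dtype=torch.float16).to("cuda")

# Prompt token "nfmystyle" assumed from the concept name; CFG scale 3.5 as recommended above
image = pipe("a portrait photo in nfmystyle style", guidance_scale=3.5).images[0]
image.save("nfmystyle_sample.png")
```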
jamm55/autotrain-improved-pidgin-model-2837583189
jamm55
2023-02-02T11:31:34Z
15
4
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain", "translation", "unk", "dataset:jamm55/autotrain-data-improved-pidgin-model", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-01-11T17:45:15Z
--- tags: - autotrain - translation language: - unk - unk datasets: - jamm55/autotrain-data-improved-pidgin-model co2_eq_emissions: emissions: 4.315660252959388 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 2837583189 - CO2 Emissions (in grams): 4.3157 ## Validation Metrics - Loss: 0.753 - SacreBLEU: 46.837 - Gen len: 21.250 ## English to Pidgin - This model translates English to Pidgin. - Pidgin is a simplified form of English, mostly used in Africa.
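A minimal generation sketch that drives the underlying Marian seq2seq model directly, assuming the standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` API; the input sentence is purely illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "jamm55/autotrain-improved-pidgin-model-2837583189"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Illustrative English input to render in Pidgin
inputs = tokenizer("How is your family doing?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```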
HDKCL/izamizam
HDKCL
2023-02-02T11:15:26Z
78
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-11T01:27:37Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### izamizam Dreambooth model trained by HDKCL with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
OAOA/DifFace
OAOA
2023-02-02T10:27:45Z
26
4
diffusers
[ "diffusers", "pytorch", "face image enhancement", "arxiv:2212.06512", "license:other", "diffusers:DifFacePipeline", "region:us" ]
null
2023-01-18T08:06:39Z
--- license: other tags: - pytorch - diffusers - face image enhancement --- # DifFace: Blind Face Restoration with Diffused Error Contraction **Paper**: [DifFace: Blind Face Restoration with Diffused Error Contraction](https://arxiv.org/abs/2212.06512) **Authors**: Zongsheng Yue, Chen Change Loy **Abstract**: *While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which require laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key of our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transmit from this intermediate state to the HQ target by recursively applying a pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with L2 loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations.* ## Inference ```python # !pip install diffusers import cv2 from diffusers import DifFacePipeline model_id = "OAOA/DifFace" # load model and scheduler pipe = DifFacePipeline.from_pretrained(model_id) pipe = pipe.to("cuda") im_path = "lq_face.png" # placeholder: path to your own low-quality face image im_lr = cv2.imread(im_path) # read the low quality face image im_sr = pipe(im_lr, num_inference_steps=250, started_steps=100, aligned=True)['images'][0] im_sr.save("restored_difface.png") # save the restored result ``` <!--For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)--> ## Training If you want to train your own model, please have a look at the [official training example](https://github.com/zsyOAOA/DifFace). ## Samples [<img src="assets/Solvay_conference.png" width="805px"/>](https://imgsli.com/MTM5NTgw) [<img src="assets/Hepburn.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTc5) [<img src="assets/oldimg_05.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTgy) <img src="cropped_faces/0368.png" height="200px" width="200px"/><img src="assets/0368.png" height="200px" width="200px"/> <img src="cropped_faces/0885.png" height="200px" width="200px"/><img src="assets/0885.png" height="200px" width="200px"/> <img src="cropped_faces/0729.png" height="200px" width="200px"/><img src="assets/0729.png" height="200px" width="200px"/> <img src="cropped_faces/0934.png" height="200px" width="200px"/><img src="assets/0934.png" height="200px" width="200px"/>
loubnabnl/santacoder-code-to-text
loubnabnl
2023-02-02T10:16:02Z
11
5
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "code", "custom_code", "dataset:codeparrot/github-jupyter-code-to-text", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-24T18:50:19Z
--- license: openrail datasets: - codeparrot/github-jupyter-code-to-text library_name: transformers tags: - code --- # Santacoder code-to-text This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on [codeparrot/github-jupyter-code-to-text](https://huggingface.co/datasets/codeparrot/github-jupyter-code-to-text). ## Training procedure The model was trained for 3h on 4 A100 GPUs with the following hyperparameters: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 800
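A minimal generation sketch, assuming the standard causal-LM API (`trust_remote_code=True` is needed for SantaCoder's custom architecture); the prompt format, with code followed by a comment requesting an explanation, is an assumption rather than a documented template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "loubnabnl/santacoder-code-to-text"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Assumed prompt format: a code snippet followed by a comment asking for its explanation
prompt = "def add(a, b):\n    return a + b\n\n# Explanation of the function above:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```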
Addwater/a2c-PandaReachDense-v2
Addwater
2023-02-02T10:10:05Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T10:07:48Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.59 +/- 0.52 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Hoax0930/tf_distiluse-base-multilingual-cased-v2
Hoax0930
2023-02-02T10:06:12Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T09:57:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 8837 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Hoax0930/tf_distiluse-base-multilingual-cased-v1
Hoax0930
2023-02-02T10:05:09Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T09:57:04Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 8837 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Hoax0930/tf_paraphrase-multilingual-MiniLM-L12-v2
Hoax0930
2023-02-02T10:04:28Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T09:57:02Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 8837 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Hoax0930/tf_paraphrase-multilingual-mpnet-base-v2
Hoax0930
2023-02-02T10:03:39Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T09:57:00Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 8837 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Hoax0930/pseudo_distiluse-base-multilingual-cased-v2
Hoax0930
2023-02-02T10:01:59Z
8
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T09:56:59Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Hoax0930/pseudo_paraphrase-multilingual-mpnet-base-v2
Hoax0930
2023-02-02T09:59:05Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T09:56:54Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Hoax0930/sbert
Hoax0930
2023-02-02T09:57:34Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-02T09:56:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
neuronaut/vasyalozhkin2-style
neuronaut
2023-02-02T09:37:11Z
0
1
diffusers
[ "diffusers", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "wildcard", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-06T17:07:34Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - wildcard widget: - text: vasyalozhkin style, painting of a dog flying in the sky --- # DreamBooth model for the vasyalozhkin concept trained by neuronaut on the neuronaut/vasyalozhkin2 dataset. This is a Stable Diffusion v1.5 model fine-tuned on Vasya Lozhkin paintings with DreamBooth, trained for 8000 steps. It can be used by including the instance prompt **vasyalozhkin style** or **vasyalozhkin** in your prompts. This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `style` images for the wildcard theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('neuronaut/vasyalozhkin2-style') # a text prompt is required; this one comes from the widget example above image = pipeline('vasyalozhkin style, painting of a dog flying in the sky').images[0] image ```
Closen/Pixelcopter-PLE-v0_PG
Closen
2023-02-02T09:29:35Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T09:21:32Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0_PG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 21.80 +/- 26.48 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jojoUla/bert-large-cased-finetuned-low10-0-cased-DA-20
jojoUla
2023-02-02T09:14:19Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-02T09:11:12Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-cased-finetuned-low10-0-cased-DA-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-finetuned-low10-0-cased-DA-20 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5523 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9248 | 1.0 | 1 | 1.0349 | | 1.9585 | 2.0 | 2 | 1.1866 | | 3.2777 | 3.0 | 3 | 1.4471 | | 0.8177 | 4.0 | 4 | 3.6448 | | 0.8142 | 5.0 | 5 | 3.3777 | | 1.2679 | 6.0 | 6 | 3.3755 | | 3.0205 | 7.0 | 7 | 1.4410 | | 1.902 | 8.0 | 8 | 2.0879 | | 1.5332 | 9.0 | 9 | 1.2120 | | 1.2021 | 10.0 | 10 | 1.3473 | | 1.017 | 11.0 | 11 | 1.7179 | | 0.9292 | 12.0 | 12 | 4.3621 | | 2.6595 | 13.0 | 13 | 0.5600 | | 1.2934 | 14.0 | 14 | 0.5098 | | 0.3334 | 15.0 | 15 | 2.2589 | | 0.778 | 16.0 | 16 | 1.4632 | | 0.9396 | 17.0 | 17 | 0.8874 | | 1.8881 | 18.0 | 18 | 3.0849 | | 0.9685 | 19.0 | 19 | 4.1051 | | 1.4742 | 20.0 | 20 | 1.4036 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
laamaai/clasificador-muchocine-1
laamaai
2023-02-02T09:12:02Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T09:10:49Z
--- tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine-1 This model is a fine-tuned version of [mrm8488/electricidad-base-finetuned-muchocine](https://huggingface.co/mrm8488/electricidad-base-finetuned-muchocine) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7100 - Accuracy: 0.4632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.1848 | 0.4710 | | 1.1797 | 2.0 | 776 | 1.4089 | 0.4465 | | 0.6868 | 3.0 | 1164 | 1.7100 | 0.4632 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
jojoUla/bert-large-cased-finetuned-low20-cased-DA-20
jojoUla
2023-02-02T09:05:19Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-02T08:34:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-cased-finetuned-low20-cased-DA-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-finetuned-low20-cased-DA-20 (not in use) This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.477 | 1.0 | 1 | 3.0843 | | 3.5516 | 2.0 | 2 | 4.2279 | | 3.6173 | 3.0 | 3 | 4.2543 | | 3.1873 | 4.0 | 4 | 2.8752 | | 3.9494 | 5.0 | 5 | 1.7727 | | 2.628 | 6.0 | 6 | 2.2849 | | 1.7451 | 7.0 | 7 | 2.2338 | | 2.6641 | 8.0 | 8 | 1.4185 | | 3.0739 | 9.0 | 9 | 4.0617 | | 2.1557 | 10.0 | 10 | 3.4256 | | 1.6353 | 11.0 | 11 | 3.0232 | | 2.6313 | 12.0 | 12 | 4.2908 | | 1.9466 | 13.0 | 13 | 3.0047 | | 1.8104 | 14.0 | 14 | 2.9170 | | 2.0315 | 15.0 | 15 | 3.5850 | | 2.6848 | 16.0 | 16 | 4.4435 | | 2.0859 | 17.0 | 17 | 3.9439 | | 1.6852 | 18.0 | 18 | 0.9313 | | 1.6071 | 19.0 | 19 | 3.6927 | | 1.697 | 20.0 | 20 | 3.7250 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
laamaai/clasificador-tomatoes
laamaai
2023-02-02T08:47:56Z
4
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T08:46:31Z
--- tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-tomatoes This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7090 - Accuracy: 0.7450 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6906 | 1.0 | 853 | 0.6964 | 0.6231 | | 0.5222 | 2.0 | 1706 | 0.5627 | 0.7345 | | 0.3525 | 3.0 | 2559 | 0.7090 | 0.7450 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
MHaurel/a2c-AntBulletEnv-v0
MHaurel
2023-02-02T08:38:29Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T08:37:23Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1812.08 +/- 54.55 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
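As a sketch of what the TODO above might look like, the snippet below loads the checkpoint with `huggingface_sb3.load_from_hub` and evaluates it; the filename `a2c-AntBulletEnv-v0.zip` is an assumption about how the artifact is named in this repository, so check the repo's file list before running.

```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0 (assumes pybullet is installed)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; adjust it to the actual file in the repo.
checkpoint = load_from_hub(repo_id="MHaurel/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Note: if training used VecNormalize, its saved statistics must also be loaded for a faithful evaluation.
env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```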
PeterDerLustige/ppo-SnowballTarget
PeterDerLustige
2023-02-02T08:24:19Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-02-02T08:24:12Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: PeterDerLustige/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play πŸ‘€
scronberg/poca-SoccerTwos
scronberg
2023-02-02T08:24:17Z
62
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-02T08:24:09Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: scronberg/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play πŸ‘€
ykleeee/wav2vec2-5epochs-3e4
ykleeee
2023-02-02T07:50:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-01T08:21:34Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-owndata results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-owndata This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2515 - Wer: 0.3212 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.262 | 0.36 | 100 | 3.4482 | 0.9832 | | 3.0032 | 0.72 | 200 | 2.9441 | 0.9832 | | 2.9141 | 1.08 | 300 | 2.9393 | 0.9832 | | 2.8585 | 1.44 | 400 | 2.8848 | 0.9627 | | 2.2837 | 1.8 | 500 | 2.1732 | 1.0111 | | 0.9834 | 2.16 | 600 | 0.8765 | 0.7345 | | 0.7288 | 2.52 | 700 | 0.5741 | 0.5641 | | 0.5521 | 2.88 | 800 | 0.3937 | 0.4467 | | 0.3751 | 3.24 | 900 | 0.3484 | 0.4112 | | 0.3733 | 3.6 | 1000 | 0.2964 | 0.3912 | | 0.2443 | 3.96 | 1100 | 0.2673 | 0.3446 | | 0.2667 | 4.32 | 1200 | 0.2657 | 0.3357 | | 0.2237 | 4.68 | 1300 | 0.2515 | 0.3212 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1 - Datasets 2.9.0 - Tokenizers 0.10.3
FoxFive/LunarLander-v2-ppo-2_1
FoxFive
2023-02-02T07:42:59Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-02-02T07:42:59Z
--- license: bigscience-bloom-rail-1.0 ---
MukeshYadav/fine_tuned_theme2
MukeshYadav
2023-02-02T06:53:16Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2023-01-31T19:52:24Z
--- tags: - generated_from_trainer model-index: - name: fine_tuned_theme2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine_tuned_theme2 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
igorcheb/REINFORCE-LunarLanderContinuous-v2
igorcheb
2023-02-02T06:42:34Z
0
0
null
[ "LunarLanderContinuous-v2", "reinforce", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-16T15:26:41Z
--- tags: - LunarLanderContinuous-v2 - reinforce - reinforcement-learning - custom-implementation model-index: - name: REINFORCE-LunarLanderContinuous-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLanderContinuous-v2 type: LunarLanderContinuous-v2 metrics: - type: mean_reward value: 264.10 +/- 37.17 name: mean_reward verified: false --- # **Reinforce** Agent playing **LunarLanderContinuous-v2** This is a custom REINFORCE RL agent. Performance has been measured over 900 episodes. To try the agent, the user needs to import the `ParameterisedPolicy` class from the agent_class.py file. </br> Training progress: ![training](training_graph.jpg) Numbers on the X axis are averages over 40 episodes, each lasting for about 500 timesteps on average, so in total the agent was trained for about 5e6 timesteps. Learning rate decay schedule: <code>torch.optim.lr_scheduler.StepLR(opt, step_size=4000, gamma=0.7)</code>. Training code is shown in the training.py file for reference. In case the video demo does not work, here's a gif: ![replay](replay.gif) Minimal code to use the agent:</br> ``` import gym import torch from agent_class import ParameterisedPolicy env_name = 'LunarLanderContinuous-v2' env = gym.make(env_name) agent = torch.load('best_reinforce_lunar_lander_cont_model_269.402.pt') render = True observation = env.reset() while True: if render: env.render() action = agent.act(observation) observation, reward, done, info = env.step(action) if done: break env.close() ```
aristeia/q-FrozenLake-v1-4x4-noSlippery
aristeia
2023-02-02T06:24:49Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T06:24:45Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="aristeia/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
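`load_from_hub` in the snippet above is not a packaged import; in the Deep RL course notebooks it is a small helper built on `huggingface_hub`. A minimal sketch of such a helper (an assumption mirroring the course material, not code shipped with this repo):

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-Learning model dict from the Hub and unpickle it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```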
Brain22/ppo-Huggy
Brain22
2023-02-02T06:17:08Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-02T06:17:01Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: Brain22/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play πŸ‘€
gokuls/distilbert_sa_GLUE_Experiment_data_aug_qnli_192
gokuls
2023-02-02T05:57:19Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T01:17:40Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_qnli_192 results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.5701995240710233 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_qnli_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 1.0016 - Accuracy: 0.5702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5035 | 1.0 | 16604 | 1.0016 | 0.5702 | | 0.2645 | 2.0 | 33208 | 1.2295 | 0.5724 | | 0.1684 | 3.0 | 49812 | 1.3804 | 0.5826 | | 0.1171 | 4.0 | 66416 | 1.5434 | 0.5792 | | 0.085 | 5.0 | 83020 | 1.5556 | 0.5792 | | 0.064 | 6.0 | 99624 | 1.7284 | 0.5731 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
FloydianSound/WLOP_Diffusion_v1-5
FloydianSound
2023-02-02T05:50:29Z
40
26
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-07T19:31:20Z
--- language: - en tags: - stable-diffusion - text-to-image license: creativeml-openrail-m inference: true --- ## Informations Fine-tuned SD v1-5 model, 61320 steps, 7 epochs Aspect Ratio Bucketing centered at 768 resolution <img alt="Showcase" src="https://huggingface.co/FloydianSound/Wlop_Diffusion/resolve/main/WLOP_Artstyle_AR_Chart.png"/> Made with 876 pictures of the artist WLOP If you like the artist support their work on https://www.artstation.com/wlop - https://www.deviantart.com/wlop ## Tags Tokens are in the tags.txt along with their occurrences in [#] format ## Samples <img alt="Showcase" src="https://huggingface.co/FloydianSound/Wlop_Diffusion/resolve/main/00000-souryuu%20asuka%20langley%20red%20hairs%20green%20eyes%20wlop-2961790964-WLOP_Artstyle_wlop_artstyle_768_e7.png"/> <img alt="Showcase" src="https://huggingface.co/FloydianSound/Wlop_Diffusion/resolve/main/00000-princess%20aeolian%20solo%20focus%20dark%20hairs%20green%20eyes%20wlop-486739327-WLOP_Artstyle_wlop_artstyle_768_e7.png"/> <img alt="Showcase" src="https://huggingface.co/FloydianSound/Wlop_Diffusion/resolve/main/00000-nier%20automata%20yorha%20no%202%20type%20b%20solo%20focus%20white%20hair%20black%20dress%20wlop-1122837997-WLOP_Artstyle_wlop_artstyle_768_e7.png"/> ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
FloydianSound/Nixeu_Diffusion_v1-5
FloydianSound
2023-02-02T05:49:57Z
14
4
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-06T04:09:12Z
--- language: - en tags: - stable-diffusion - text-to-image license: creativeml-openrail-m inference: true --- ## Informations Fine-tuned SD v1-5 model, 25040 steps, 10 epochs Aspect Ratio Bucketing centered at 768 resolution Made with 250 pictures of the artist NIXEU; if you like the artist support their work on https://www.artstation.com/nixeu - https://www.deviantart.com/nixeu ## Tags Tokens are in the tags.txt along with their occurrences in [#] format <img alt="Showcase" src="https://huggingface.co/FloydianSound/Nixeu_Diffusion/resolve/main/00000-nurse%20single%20realistic%20lips%20highres%20fringe%20tall%20image%20absurdres%20long%20hair%20black%20hair%20upper%20body%20dress%20nixeu%20-%201522939414%20-%20Nixeu_Artstyle_nixeu_artstyle_768_e10.png"/> ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
ttj/flex-diffusion-2-1
ttj
2023-02-02T05:38:59Z
8
24
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-29T14:26:11Z
--- license: openrail++ tags: - stable-diffusion - text-to-image pinned: true --- # Model Card for flex-diffusion-2-1 <!-- Provide a quick summary of what the model is/does. [Optional] --> stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned with different aspect ratios. ## TLDR: ### There are 2 models in this repo: - One based on stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned for 6k steps. - One based on stable-diffusion-2-base (stabilityai/stable-diffusion-2-base) finetuned for 6k steps, on the same dataset. For usage, see - [How to Get Started with the Model](#how-to-get-started-with-the-model) ### It aims to solve the following issues: 1. Generated images looks like they are cropped from a larger image. 2. Generating non-square images creates weird results, due to the model being trained on square images. Examples: | resolution | model | stable diffusion | flex diffusion | |:---------------:|:-------:|:----------------------------:|:-----------------------------:| | 576x1024 (9:16) | v2-1 | ![img](imgs/21-576-1024.png) | ![img](imgs/21f-576-1024.png) | | 576x1024 (9:16) | v2-base | ![img](imgs/2b-576-1024.png) | ![img](imgs/2bf-576-1024.png) | | 1024x576 (16:9) | v2-1 | ![img](imgs/21-1024-576.png) | ![img](imgs/21f-1024-576.png) | | 1024x576 (16:9) | v2-base | ![img](imgs/2b-1024-576.png) | ![img](imgs/2bf-1024-576.png) | ### Limitations: 1. It's trained on a small dataset, so it's improvements may be limited. 2. For each aspect ratio, it's trained on only a fixed resolution. So it may not be able to generate images of different resolutions. For 1:1 aspect ratio, it's fine-tuned at 512x512, although flex-diffusion-2-1 was last finetuned at 768x768. ### Potential improvements: 1. Train on a larger dataset. 2. Train on different resolutions even for the same aspect ratio. 3. Train on specific aspect ratios, instead of a range of aspect ratios. # Table of Contents - [Model Card for flex-diffusion-2-1](#model-card-for--model_id-) - [Table of Contents](#table-of-contents) - [Table of Contents](#table-of-contents-1) - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use [Optional]](#downstream-use-optional) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Preprocessing](#preprocessing) - [Speeds, Sizes, Times](#speeds-sizes-times) - [Evaluation](#evaluation) - [Testing Data, Factors & Metrics](#testing-data-factors--metrics) - [Testing Data](#testing-data) - [Factors](#factors) - [Metrics](#metrics) - [Results](#results) - [Model Examination](#model-examination) - [Environmental Impact](#environmental-impact) - [Technical Specifications [optional]](#technical-specifications-optional) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [Citation](#citation) - [Glossary [optional]](#glossary-optional) - [More Information [optional]](#more-information-optional) - [Model Card Authors [optional]](#model-card-authors-optional) - [Model Card Contact](#model-card-contact) - [How to Get Started with the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description <!-- Provide a longer summary of what this model is/does. 
--> stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned for dynamic aspect ratios. finetuned resolutions: | | width | height | aspect ratio | |---:|--------:|---------:|:---------------| | 0 | 512 | 1024 | 1:2 | | 1 | 576 | 1024 | 9:16 | | 2 | 576 | 960 | 3:5 | | 3 | 640 | 1024 | 5:8 | | 4 | 512 | 768 | 2:3 | | 5 | 640 | 896 | 5:7 | | 6 | 576 | 768 | 3:4 | | 7 | 512 | 640 | 4:5 | | 8 | 640 | 768 | 5:6 | | 9 | 640 | 704 | 10:11 | | 10 | 512 | 512 | 1:1 | | 11 | 704 | 640 | 11:10 | | 12 | 768 | 640 | 6:5 | | 13 | 640 | 512 | 5:4 | | 14 | 768 | 576 | 4:3 | | 15 | 896 | 640 | 7:5 | | 16 | 768 | 512 | 3:2 | | 17 | 1024 | 640 | 8:5 | | 18 | 960 | 576 | 5:3 | | 19 | 1024 | 576 | 16:9 | | 20 | 1024 | 512 | 2:1 | - **Developed by:** Jonathan Chang - **Model type:** Diffusion-based text-to-image generation model - **Language(s)**: English - **License:** creativeml-openrail-m - **Parent Model:** https://huggingface.co/stabilityai/stable-diffusion-2-1 - **Resources for more information:** More information needed # Uses - see https://huggingface.co/stabilityai/stable-diffusion-2-1 # Training Details ## Training Data - LAION aesthetic dataset, subset of it with 6+ rating - https://laion.ai/blog/laion-aesthetics/ - https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus - I only used a small portion of that, see [Preprocessing](#preprocessing) - most common aspect ratios in the dataset (before preprocessing) | | aspect_ratio | counts | |---:|:---------------|---------:| | 0 | 1:1 | 154727 | | 1 | 3:2 | 119615 | | 2 | 2:3 | 61197 | | 3 | 4:3 | 52276 | | 4 | 16:9 | 38862 | | 5 | 400:267 | 21893 | | 6 | 3:4 | 16893 | | 7 | 8:5 | 16258 | | 8 | 4:5 | 15684 | | 9 | 6:5 | 12228 | | 10 | 1000:667 | 12097 | | 11 | 2:1 | 11006 | | 12 | 800:533 | 10259 | | 13 | 5:4 | 9753 | | 14 | 500:333 | 9700 | | 15 | 250:167 | 9114 | | 16 | 5:3 | 8460 | | 17 | 200:133 | 7832 | | 18 | 1024:683 | 7176 | | 19 | 11:10 | 6470 | - predefined aspect ratios | | width | height | aspect ratio | |---:|--------:|---------:|:---------------| | 0 | 512 | 1024 | 1:2 | | 1 | 576 | 1024 | 9:16 | | 2 | 576 | 960 | 3:5 | | 3 | 640 | 1024 | 5:8 | | 4 | 512 | 768 | 2:3 | | 5 | 640 | 896 | 5:7 | | 6 | 576 | 768 | 3:4 | | 7 | 512 | 640 | 4:5 | | 8 | 640 | 768 | 5:6 | | 9 | 640 | 704 | 10:11 | | 10 | 512 | 512 | 1:1 | | 11 | 704 | 640 | 11:10 | | 12 | 768 | 640 | 6:5 | | 13 | 640 | 512 | 5:4 | | 14 | 768 | 576 | 4:3 | | 15 | 896 | 640 | 7:5 | | 16 | 768 | 512 | 3:2 | | 17 | 1024 | 640 | 8:5 | | 18 | 960 | 576 | 5:3 | | 19 | 1024 | 576 | 16:9 | | 20 | 1024 | 512 | 2:1 | ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing 1. download files with url &amp; caption from https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus - I only used the first file `train-00000-of-00007-29aec9150af50f9f.parquet` 2. 
use img2dataset to convert to webdataset - https://github.com/rom1504/img2dataset - I put train-00000-of-00007-29aec9150af50f9f.parquet in a folder called `first-file` - the output folder is `/mnt/aesthetics6plus`, change this to your own folder ```bash echo INPUT_FOLDER=first-file echo OUTPUT_FOLDER=/mnt/aesthetics6plus img2dataset --url_list $INPUT_FOLDER --input_format "parquet"\ --url_col "URL" --caption_col "TEXT" --output_format webdataset\ --output_folder $OUTPUT_FOLDER --processes_count 3 --thread_count 6 --image_size 1024 --resize_only_if_bigger --resize_mode=keep_ratio_largest \ --save_additional_columns '["WIDTH","HEIGHT","punsafe","similarity"]' --enable_wandb True ``` 3. The data-loading code will do preprocessing on the fly, so no need to do anything else. But it's not optimized for speed, the GPU utilization fluctuates between 80% and 100%. And it's not written for multi-GPU training, so use it with caution. The code will do the following: - use webdataset to load the data - calculate the aspect ratio of each image - find the closest aspect ratio & it's associated resolution from the predefined resolutions: `argmin(abs(aspect_ratio - predefined_aspect_ratios))`. E.g. if the aspect ratio is 1:3, the closest resolution is 1:2. and it's associated resolution is 512x1024. - keeping the aspect ratio, resize the image such that it's larger or equal to the associated resolution on each side. E.g. resize to 512x(512*3) = 512x1536 - random crop the image to the associated resolution. E.g. crop to 512x1024 - if more than 10% of the image is lost in the cropping, discard this example. - batch examples by aspect ratio, so all examples in a batch have the same aspect ratio ### Speeds, Sizes, Times - Dataset size: 100k image-caption pairs, before filtering. - I didn't wait for the whole dataset to be downloaded, I copied the first 10 tar files and their index files to a new folder called `aesthetics6plus-small`, with 100k image-caption pairs in total. The full dataset is a lot bigger. - Hardware: 1 RTX3090 GPUs - Optimizer: 8bit Adam - Batch size: 32 - actual batch size: 2 - gradient_accumulation_steps: 16 - effective batch size: 32 - Learning rate: warmup to 2e-6 for 500 steps and then kept constant - Learning rate: 2e-6 - Training steps: 6k - Epoch size (approximate): 32 * 6k / 100k = 1.92 (not accounting for the filtering) - Each example is seen 1.92 times on average. - Training time: approximately 1 day ## Results More information needed # Model Card Authors Jonathan Chang # How to Get Started with the Model Use the code below to get started with the model. 
```python import torch from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler, UNet2DConditionModel def use_DPM_solver(pipe): pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) return pipe pipe = StableDiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-2-1", unet = UNet2DConditionModel.from_pretrained("ttj/flex-diffusion-2-1", subfolder="2-1/unet", torch_dtype=torch.float16), torch_dtype=torch.float16, ) # for v2-base, use the following line instead #pipe = StableDiffusionPipeline.from_pretrained( # "stabilityai/stable-diffusion-2-base", # unet = UNet2DConditionModel.from_pretrained("ttj/flex-diffusion-2-1", subfolder="2-base/unet", torch_dtype=torch.float16), # torch_dtype=torch.float16) pipe = use_DPM_solver(pipe).to("cuda") prompt = "a professional photograph of an astronaut riding a horse" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ```
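Since the point of this finetune is non-square generation, it may also help to request one of the finetuned resolutions explicitly. A short usage sketch, assuming the `pipe` object built in the card's snippet above:

```python
# Generate at a finetuned 16:9 resolution (1024x576) instead of the default square size.
prompt = "a professional photograph of an astronaut riding a horse"
image = pipe(prompt, width=1024, height=576, num_inference_steps=25).images[0]
image.save("astronaut_rides_horse_16x9.png")
```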
shivr/dqn-SpaceInvadersNoFrameskip-v4
shivr
2023-02-02T05:36:20Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T05:35:50Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 374.00 +/- 214.89 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shivr -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shivr -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shivr ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Hyeoni/Question-Generation-Multitask-Korquad
Hyeoni
2023-02-02T05:17:00Z
0
1
null
[ "region:us" ]
null
2022-08-29T08:51:45Z
# Question Generation Model with KorQuAD ___ This model is a fine-tuned version of paust/pko-t5-base on the KorQuAD v1.0 Dataset. ### Dataset KorQuAD v1.0 Dataset (csv) [Train](https://drive.google.com/file/d/1p0LYPBQE8OW6XRFEW5nxc8P03wgD_plE/view?usp=sharing) [Valid](https://drive.google.com/file/d/1O0-8BCsYn3PpEmIUjiEBnPz4sBBmQmud/view?usp=sharing) ### Train During training, the input answer is replaced with '[MASK]' with 30% probability, so that the model learns to generate a question sentence without being given the answer. As a result, the model can pick a suitable answer span and generate a question even when no input answer is provided. ### Question Generation without Input Answer ```python context = """ CONTEXT """ input_answer = '[MASK]' generated = generate(best_model, input_answer, context) show_result(generated) ``` ### References ____ Leaf-Question-Generation: https://github.com/KristiyanVachev/Leaf-Question-Generation pko-t5-base: https://huggingface.co/paust/pko-t5-base KorQuAD v1.0: https://korquad.github.io/KorQuad%201.0/
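The `generate` and `show_result` helpers in the snippet above are not defined in this card. A minimal sketch of what `generate` could look like for a T5-based question-generation setup (the prompt format and the fine-tuned checkpoint path are assumptions, not taken from the original training script):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("paust/pko-t5-base")
# Path to the fine-tuned weights is assumed; point it at your local checkpoint.
best_model = AutoModelForSeq2SeqLM.from_pretrained("path/to/finetuned-checkpoint")

def generate(model, input_answer, context, max_length=64):
    # The "answer: ... context: ..." prompt format is an assumption.
    text = f"answer: {input_answer} context: {context}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=max_length, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```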
DioLiu/autotrain-koles_score-3215890190
DioLiu
2023-02-02T05:02:45Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:DioLiu/autotrain-data-koles_score", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T05:01:13Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain πŸ€—" datasets: - DioLiu/autotrain-data-koles_score co2_eq_emissions: emissions: 0.009007200392120884 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 3215890190 - CO2 Emissions (in grams): 0.0090 ## Validation Metrics - Loss: 1.187 - Accuracy: 0.542 - Macro F1: 0.368 - Micro F1: 0.542 - Weighted F1: 0.482 - Macro Precision: 0.331 - Micro Precision: 0.542 - Weighted Precision: 0.434 - Macro Recall: 0.414 - Micro Recall: 0.542 - Weighted Recall: 0.542 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/DioLiu/autotrain-koles_score-3215890190 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("DioLiu/autotrain-koles_score-3215890190", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("DioLiu/autotrain-koles_score-3215890190", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
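To turn the raw logits from the Python API snippet above into a predicted label, one possible follow-up (not part of the original card; the actual label names come from this repo's config) is:

```python
import torch

# Continues from the card's snippet: `outputs = model(**inputs)`
probs = torch.softmax(outputs.logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], round(float(probs[pred_id]), 4))
```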
culteejen/PPO-default-Roomba
culteejen
2023-02-02T04:10:01Z
9
2
stable-baselines3
[ "stable-baselines3", "Roomba", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-25T22:28:27Z
--- library_name: stable-baselines3 tags: - Roomba - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Roomba type: Roomba metrics: - type: mean_reward value: -132.80 +/- 40.23 name: mean_reward verified: false --- # **PPO** Agent playing **Roomba** This is a trained model of a **PPO** agent playing **Roomba** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
onefish51/dog_w_prior-preservation
onefish51
2023-02-02T03:18:18Z
2
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-02-02T03:03:17Z
--- license: creativeml-openrail-m base_model: /data2/home/tyu/stable_diffusion/diffusers/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - onefish51/dog_w_prior-preservation These are LoRA adaptation weights for /data2/home/tyu/stable_diffusion/diffusers/stable-diffusion-v1-4. The weights were trained on a photo of sks panda using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
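The card does not include inference code. A minimal sketch for loading these LoRA attention weights with diffusers is below; the base model id and the exact diffusers API version are assumptions (the card lists a local Stable Diffusion v1-4 checkout as the base), so treat it as a starting point rather than the author's recipe:

```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion v1-4 checkpoint should work as the base; the hub id below is an assumption.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA attention weights from this repository on top of the base UNet.
pipe.unet.load_attn_procs("onefish51/dog_w_prior-preservation")

image = pipe("a photo of sks panda", num_inference_steps=30).images[0]
image.save("sks_panda.png")
```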
FUXI/yuyan-dialogue
FUXI
2023-02-02T03:01:44Z
0
2
null
[ "text-generation", "dialogue-generation", "pytorch", "inference acceleration", "gpt2", "gpt3", "zh", "arxiv:2005.14165", "license:apache-2.0", "region:us" ]
text-generation
2022-12-26T06:05:50Z
--- license: apache-2.0 language: zh inference: false tags: - text-generation - dialogue-generation - pytorch - inference acceleration - gpt2 - gpt3 --- # YuYan-Dialogue YuYan is a series of Chinese language models with different size, developed by Fuxi AI lab, Netease.Inc. They are trained on a large Chinese novel dataset of high quality. YuYan is in the same family of decoder-only models like [GPT2 and GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modedling objective. YuYan-Dialogue is a dialogue model by fine-tuning the YuYan-11b on a large multi-turn dialogue dataset of high quality. It has very strong conversation generation capabilities. ## Model Inference Acceleration As the model size increases, the model inference time increases and more computational resources are required. Therefore, we developed our own transformer model inference acceleration framework, [EET](https://github.com/NetEase-FuXi/EET.git). More details are in [Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model](https://aclanthology.org/2022.naacl-industry.8/). We combine our language model with the EET inference framework to provide industrial-grade inference reasoning performance. ## How to use Our model is trained based on the [fairseq](https://github.com/facebookresearch/fairseq). As a result, the inference and finetuning depend on it. For inference, we modify some parts of the original fairseq codes. Mainly > fairseq-0.12.2/fairseq/sequence_generator.py We integrate the EET with sequence_generator. We replace the eos token to a token unlikely to be sampled to ensure the generated text length. The repetition penalty trick is also modified. You can change the penalty strength by adjusting the value of `self.ban_weight`. Then, to keep the eos token in the final generated text, we change the line 75 `include_eos=False` to `include_eos=True` in > fairseq-0.12.2/fairseq/data/dictionary.py Finally, to pass in parameters in python scripts, we remove the line 67 ~ line 69 in >fairseq-0.12.2/fairseq/dataclass/utils.py Below are the install tutorial. ``` # install pytorch pip install torch==1.8.1 # install pytorch # install fairseq unzip fairseq-0.12.2.zip cd fairseq-0.12.2 pip install. # install EET git clone https://github.com/NetEase-FuXi/EET.git cd EET pip install . # install transformers (EET requirements) pip install transformers==4.23 # make a folder, move the dictionary file and model file into it. mkdir transformer_lm_gpt2_xxl_dialogue mv dict.txt transformer_lm_gpt2_xxl_dialogue/ mv checkpoint_best_part_*.pt transformer_lm_gpt2_xxl_dialogue/ ``` `inference.py` is a script to provide a interface to initialize the EET object and sequence_generator. It includes some pre-process and post-process functions for text input and output. You can modify the script according to your needs. In addition, it provide a simple object to organize the dialogue generation and dialogue history. After the environment is ready, several lines of codes can realize the inference. 
``` python from inference import Inference, Dialogue model_path = "transformer_lm_gpt2_xxl_dialogue/checkpoint_best.pt" data_path = "transformer_lm_gpt2_xxl_dialogue" eet_batch_size = 10 # max inference batch size, adjust according to cuda memory, 40GB memory is necessary inference = Inference(model_path, data_path, eet_batch_size) dialogue_model = Dialogue(inference) dialogue_model.get_repsonse("δ½ ε₯½ε•Š") ``` ## Citation If you find the technical report or resource is useful, please cite the following technical report in your paper. - https://aclanthology.org/2022.naacl-industry.8/ ``` @inproceedings{li-etal-2022-easy, title = "Easy and Efficient Transformer: Scalable Inference Solution For Large {NLP} Model", author = "Li, Gongzheng and Xi, Yadong and Ding, Jingzhen and Wang, Duan and Luo, Ziyang and Zhang, Rongsheng and Liu, Bai and Fan, Changjie and Mao, Xiaoxi and Zhao, Zeng", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track", month = jul, year = "2022", address = "Hybrid: Seattle, Washington + Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-industry.8", doi = "10.18653/v1/2022.naacl-industry.8", pages = "62--68" } ``` ## Contact Us You can also contact us by email: [email protected], [email protected]
rohitp1/Nystrom-W2V2-100hrs-take-4-unfreeze-extractor-try-2
rohitp1
2023-02-02T02:20:16Z
1
0
transformers
[ "transformers", "pytorch", "wav2vec2", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2023-01-30T04:30:59Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: Nystrom-W2V2-100hrs-take-4-unfreeze-extractor-try-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Nystrom-W2V2-100hrs-take-4-unfreeze-extractor-try-2 This model is a fine-tuned version of [rohitp1/Nystrom-W2V2-100hrs-take-4-unfreeze-extractor](https://huggingface.co/rohitp1/Nystrom-W2V2-100hrs-take-4-unfreeze-extractor) on the None dataset. It achieves the following results on the evaluation set: - Loss: 27.1915 - Wer: 0.0869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 23.1458 | 9.01 | 1000 | 28.9573 | 0.1039 | | 32.7156 | 18.02 | 2000 | 25.6155 | 0.1218 | | 43.506 | 27.03 | 3000 | 27.6332 | 0.1228 | | 43.3608 | 36.04 | 4000 | 26.0539 | 0.1169 | | 39.984 | 45.04 | 5000 | 25.9836 | 0.1137 | | 35.1977 | 54.05 | 6000 | 26.2060 | 0.1077 | | 30.1951 | 63.06 | 7000 | 27.0999 | 0.1033 | | 25.7519 | 72.07 | 8000 | 27.8459 | 0.0964 | | 22.1982 | 81.08 | 9000 | 27.9773 | 0.0908 | | 20.0551 | 90.09 | 10000 | 27.4222 | 0.0884 | | 19.4505 | 99.1 | 11000 | 27.1915 | 0.0869 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.11.0
StupidGame/AnythingV4.5
StupidGame
2023-02-02T02:10:47Z
21
1
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-16T01:13:38Z
--- license: creativeml-openrail-m ---
erud1t3/ppo-lunarlander-v2
erud1t3
2023-02-02T02:02:08Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T00:33:36Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 288.17 +/- 23.46 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
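A sketch of what the TODO above could be, loading the checkpoint with `huggingface_sb3`; the filename `ppo-LunarLander-v2.zip` is an assumption about this repository's layout, so verify it against the repo's files.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; adjust to the actual file in the repo.
checkpoint = load_from_hub(repo_id="erud1t3/ppo-lunarlander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```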
swl-models/9527
swl-models
2023-02-02T01:48:23Z
0
14
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-02T00:54:21Z
--- license: creativeml-openrail-m ---
swl-models/DanMix-v1
swl-models
2023-02-02T01:34:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-02T00:30:09Z
--- license: creativeml-openrail-m ---
AdhilB/AI
AdhilB
2023-02-02T00:57:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-02-02T00:53:24Z
--- title: GFPGAN emoji: 😁 colorFrom: yellow colorTo: green sdk: gradio sdk_version: 3.1.7 app_file: app.py pinned: false license: apache-2.0 --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
gokuls/distilbert_sa_GLUE_Experiment_data_aug_mrpc_96
gokuls
2023-02-02T00:49:02Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-01T22:48:14Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_mrpc_96 results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 1.0 - name: F1 type: f1 value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_mrpc_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 - Combined Score: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.3242 | 1.0 | 980 | 0.0830 | 0.9804 | 0.9857 | 0.9830 | | 0.0843 | 2.0 | 1960 | 0.0355 | 0.9828 | 0.9875 | 0.9852 | | 0.0431 | 3.0 | 2940 | 0.0105 | 1.0 | 1.0 | 1.0 | | 0.0268 | 4.0 | 3920 | 0.0046 | 1.0 | 1.0 | 1.0 | | 0.019 | 5.0 | 4900 | 0.0015 | 1.0 | 1.0 | 1.0 | | 0.0141 | 6.0 | 5880 | 0.0011 | 1.0 | 1.0 | 1.0 | | 0.0115 | 7.0 | 6860 | 0.0007 | 1.0 | 1.0 | 1.0 | | 0.0094 | 8.0 | 7840 | 0.0004 | 1.0 | 1.0 | 1.0 | | 0.0078 | 9.0 | 8820 | 0.0004 | 1.0 | 1.0 | 1.0 | | 0.0056 | 10.0 | 9800 | 0.0006 | 1.0 | 1.0 | 1.0 | | 0.0056 | 11.0 | 10780 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0039 | 12.0 | 11760 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0038 | 13.0 | 12740 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0029 | 14.0 | 13720 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0026 | 15.0 | 14700 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0025 | 16.0 | 15680 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0019 | 17.0 | 16660 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0017 | 18.0 | 17640 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0015 | 19.0 | 18620 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0013 | 20.0 | 19600 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0013 | 21.0 | 20580 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0013 | 22.0 | 21560 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0012 | 23.0 | 22540 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.001 | 24.0 | 23520 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0008 | 25.0 | 24500 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0007 | 26.0 | 25480 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0006 | 27.0 | 26460 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0007 | 28.0 | 27440 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0007 | 29.0 | 28420 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0005 | 30.0 | 29400 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 31.0 | 30380 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0005 | 32.0 | 31360 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 33.0 | 32340 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 | 34.0 | 33320 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 35.0 | 34300 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 36.0 | 35280 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 | 37.0 | 36260 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 
| 38.0 | 37240 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 | 39.0 | 38220 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 40.0 | 39200 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 41.0 | 40180 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0 | 42.0 | 41160 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 43.0 | 42140 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 44.0 | 43120 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0001 | 45.0 | 44100 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0001 | 46.0 | 45080 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0 | 47.0 | 46060 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0 | 48.0 | 47040 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0 | 49.0 | 48020 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0001 | 50.0 | 49000 | 0.0000 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
dn-gh/dqn-SpaceInvadersNoFrameskip-v4-1
dn-gh
2023-02-02T00:42:20Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T00:41:43Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 614.00 +/- 265.66 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dn-gh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dn-gh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dn-gh ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
gokuls/distilbert_sa_GLUE_Experiment_data_aug_mrpc_384
gokuls
2023-02-02T00:34:07Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-01T22:50:13Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_mrpc_384 results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 1.0 - name: F1 type: f1 value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_mrpc_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 - Combined Score: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---:|:--------------:| | 0.1771 | 1.0 | 980 | 0.0049 | 1.0 | 1.0 | 1.0 | | 0.0321 | 2.0 | 1960 | 0.0009 | 1.0 | 1.0 | 1.0 | | 0.0154 | 3.0 | 2940 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0086 | 4.0 | 3920 | 0.0009 | 1.0 | 1.0 | 1.0 | | 0.0062 | 5.0 | 4900 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0039 | 6.0 | 5880 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0039 | 7.0 | 6860 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0028 | 8.0 | 7840 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0022 | 9.0 | 8820 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0018 | 10.0 | 9800 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.002 | 11.0 | 10780 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0011 | 12.0 | 11760 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0015 | 13.0 | 12740 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0011 | 14.0 | 13720 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0011 | 15.0 | 14700 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0008 | 16.0 | 15680 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0009 | 17.0 | 16660 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0007 | 18.0 | 17640 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0006 | 19.0 | 18620 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0006 | 20.0 | 19600 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 21.0 | 20580 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 22.0 | 21560 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 23.0 | 22540 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 24.0 | 23520 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 | 25.0 | 24500 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 26.0 | 25480 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0003 | 27.0 | 26460 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 28.0 | 27440 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0002 | 29.0 | 28420 | 0.0000 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
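The card above documents training in detail but includes no inference snippet. A minimal sketch of scoring a sentence pair with `transformers` follows, assuming the checkpoint loads under this row's `modelId`; the example sentences and the label-order comment reflect the standard GLUE MRPC convention, not anything stated in the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "gokuls/distilbert_sa_GLUE_Experiment_data_aug_mrpc_384"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# MRPC is a sentence-pair paraphrase task, so encode both sentences together
inputs = tokenizer(
    "The company reported record profits this quarter.",
    "Quarterly profits reached an all-time high.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # under the usual GLUE ordering, index 1 corresponds to "equivalent"
```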
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_cola
gokuls
2023-02-02T00:32:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-01T22:57:01Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.10549049137169143 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_cola This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6837 - Matthews Correlation: 0.1055 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:-----:|:---------------:|:--------------------:| | 0.6247 | 1.0 | 1669 | 0.6837 | 0.1055 | | 0.5458 | 2.0 | 3338 | 0.7216 | 0.1168 | | 0.5041 | 3.0 | 5007 | 0.7127 | 0.1296 | | 0.4445 | 4.0 | 6676 | 0.7718 | 0.1436 | | 0.3961 | 5.0 | 8345 | 0.8417 | 0.1284 | | 0.3603 | 6.0 | 10014 | 0.7805 | 0.1240 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
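As with the other GLUE cards in this dump, no usage example accompanies the MobileBERT CoLA model. A sketch of running it through the `transformers` text-classification pipeline, assuming the checkpoint is loadable under this row's `modelId` (the sentences are invented, and label names may come back as generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

# CoLA is a single-sentence acceptability task; label 1 conventionally means "acceptable"
classifier = pipeline(
    "text-classification",
    model="gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_cola",
)
print(classifier("The book was read by the whole class."))
print(classifier("Book the was class whole by read."))
```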
Nonin/DQN-LunarLander-v2
Nonin
2023-02-02T00:23:25Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T00:23:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.70 +/- 77.81 name: mean_reward verified: false --- # **DQN** Agent playing **LunarLander-v2** This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
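The usage section above is left as a TODO. A minimal sketch of loading and rolling out this checkpoint with `huggingface_sb3` and Stable-Baselines3 is shown below; the archive filename is an assumption (check the repository's file list), and the rollout uses the classic Gym (pre-0.26) `reset`/`step` API.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is a guess based on common SB3 upload conventions
checkpoint = load_from_hub(repo_id="Nonin/DQN-LunarLander-v2", filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```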
sammael70/1223
sammael70
2023-02-02T00:09:41Z
0
0
null
[ "es", "arxiv:1910.09700", "license:odbl", "region:us" ]
null
2023-02-02T00:07:39Z
--- license: odbl language: - es --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_cola
gokuls
2023-02-01T23:54:20Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-01T22:35:34Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: mobilebert_sa_GLUE_Experiment_data_aug_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.05152844185670031 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_data_aug_cola This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6549 - Matthews Correlation: 0.0515 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:-----:|:---------------:|:--------------------:| | 0.5347 | 1.0 | 1669 | 0.6549 | 0.0515 | | 0.4507 | 2.0 | 3338 | 0.8182 | 0.0794 | | 0.407 | 3.0 | 5007 | 0.8573 | 0.0853 | | 0.3439 | 4.0 | 6676 | 0.9437 | 0.0871 | | 0.2873 | 5.0 | 8345 | 1.0250 | 0.0530 | | 0.2424 | 6.0 | 10014 | 1.2340 | 0.0733 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
Lakoc/a2c-PandaReachDense-v2
Lakoc
2023-02-01T23:19:16Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-01T23:17:09Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -0.48 +/- 0.17 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
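Here too the usage block is a stub. A sketch of loading the A2C policy follows; it assumes `panda-gym` is installed to register the environment, that the archive filename matches the repo name, and that no separate `VecNormalize` statistics file is required (if the repo ships one, wrap the environment with it before predicting).

```python
import gym
import panda_gym  # registers PandaReachDense-v2
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; inspect the repository for the actual .zip name
checkpoint = load_from_hub(repo_id="Lakoc/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```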
uisikdag/footballplayers_yolov8
uisikdag
2023-02-01T23:01:28Z
189
0
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "model-index", "region:us" ]
object-detection
2023-02-01T23:00:57Z
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch library_name: ultralytics library_version: 8.0.25 inference: false model-index: - name: uisikdag/football_players_rf results: - task: type: object-detection metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.78517 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="uisikdag/football_players_rf" src="https://huggingface.co/uisikdag/football_players_rf/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['ball', 'goalkeeper', 'player', 'referee'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.25 ultralytics==8.0.25 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('uisikdag/football_players_rf') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ```
gokuls/distilbert_sa_GLUE_Experiment_data_aug_cola
gokuls
2023-02-01T22:56:57Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-01T22:27:01Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.12046776548411303 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.8362 - Matthews Correlation: 0.1205 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4726 | 1.0 | 835 | 0.8362 | 0.1205 | | 0.2428 | 2.0 | 1670 | 1.3000 | 0.1122 | | 0.1378 | 3.0 | 2505 | 1.3626 | 0.1226 | | 0.0893 | 4.0 | 3340 | 1.6155 | 0.1608 | | 0.0648 | 5.0 | 4175 | 1.8098 | 0.0958 | | 0.049 | 6.0 | 5010 | 2.0187 | 0.1179 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
clarin-knext/plt5-base-poquad-qa-v2
clarin-knext
2023-02-01T22:53:42Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-21T11:06:31Z
--- tags: - generated_from_trainer model-index: - name: plt5-base-poquad-qa-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # plt5-base-poquad-qa-v2 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 480 | 0.7467 | | 1.3112 | 2.0 | 960 | 0.6548 | | 1.0033 | 3.0 | 1440 | 0.6064 | | 0.8897 | 4.0 | 1920 | 0.5882 | | 0.8223 | 5.0 | 2400 | 0.5701 | | 0.7911 | 6.0 | 2880 | 0.5567 | | 0.7651 | 7.0 | 3360 | 0.5514 | | 0.7641 | 8.0 | 3840 | 0.5448 | | 0.7295 | 9.0 | 4320 | 0.5451 | | 0.7304 | 10.0 | 4800 | 0.5435 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
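The card does not show how question-answering inputs were formatted during fine-tuning, so the prompt below is only a guess; the loading code itself is standard `transformers` seq2seq usage with this row's `modelId`.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "clarin-knext/plt5-base-poquad-qa-v2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Hypothetical "question: ... context: ..." prompt; the real training format is not documented here
prompt = (
    "question: Kto napisał Pana Tadeusza? "
    "context: Pan Tadeusz to poemat epicki napisany przez Adama Mickiewicza."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```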
tomekkorbak/compassionate_lumiere
tomekkorbak
2023-02-01T22:52:15Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "generated_from_trainer", "en", "dataset:tomekkorbak/pii-pile-chunk3-0-50000", "dataset:tomekkorbak/pii-pile-chunk3-50000-100000", "dataset:tomekkorbak/pii-pile-chunk3-100000-150000", "dataset:tomekkorbak/pii-pile-chunk3-150000-200000", "dataset:tomekkorbak/pii-pile-chunk3-200000-250000", "dataset:tomekkorbak/pii-pile-chunk3-250000-300000", "dataset:tomekkorbak/pii-pile-chunk3-300000-350000", "dataset:tomekkorbak/pii-pile-chunk3-350000-400000", "dataset:tomekkorbak/pii-pile-chunk3-400000-450000", "dataset:tomekkorbak/pii-pile-chunk3-450000-500000", "dataset:tomekkorbak/pii-pile-chunk3-500000-550000", "dataset:tomekkorbak/pii-pile-chunk3-550000-600000", "dataset:tomekkorbak/pii-pile-chunk3-600000-650000", "dataset:tomekkorbak/pii-pile-chunk3-650000-700000", "dataset:tomekkorbak/pii-pile-chunk3-700000-750000", "dataset:tomekkorbak/pii-pile-chunk3-750000-800000", "dataset:tomekkorbak/pii-pile-chunk3-800000-850000", "dataset:tomekkorbak/pii-pile-chunk3-850000-900000", "dataset:tomekkorbak/pii-pile-chunk3-900000-950000", "dataset:tomekkorbak/pii-pile-chunk3-950000-1000000", "dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2023-02-01T06:50:32Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/pii-pile-chunk3-0-50000 - tomekkorbak/pii-pile-chunk3-50000-100000 - tomekkorbak/pii-pile-chunk3-100000-150000 - tomekkorbak/pii-pile-chunk3-150000-200000 - tomekkorbak/pii-pile-chunk3-200000-250000 - tomekkorbak/pii-pile-chunk3-250000-300000 - tomekkorbak/pii-pile-chunk3-300000-350000 - tomekkorbak/pii-pile-chunk3-350000-400000 - tomekkorbak/pii-pile-chunk3-400000-450000 - tomekkorbak/pii-pile-chunk3-450000-500000 - tomekkorbak/pii-pile-chunk3-500000-550000 - tomekkorbak/pii-pile-chunk3-550000-600000 - tomekkorbak/pii-pile-chunk3-600000-650000 - tomekkorbak/pii-pile-chunk3-650000-700000 - tomekkorbak/pii-pile-chunk3-700000-750000 - tomekkorbak/pii-pile-chunk3-750000-800000 - tomekkorbak/pii-pile-chunk3-800000-850000 - tomekkorbak/pii-pile-chunk3-850000-900000 - tomekkorbak/pii-pile-chunk3-900000-950000 - tomekkorbak/pii-pile-chunk3-950000-1000000 - tomekkorbak/pii-pile-chunk3-1000000-1050000 - tomekkorbak/pii-pile-chunk3-1050000-1100000 - tomekkorbak/pii-pile-chunk3-1100000-1150000 - tomekkorbak/pii-pile-chunk3-1150000-1200000 - tomekkorbak/pii-pile-chunk3-1200000-1250000 - tomekkorbak/pii-pile-chunk3-1250000-1300000 - tomekkorbak/pii-pile-chunk3-1300000-1350000 - tomekkorbak/pii-pile-chunk3-1350000-1400000 - tomekkorbak/pii-pile-chunk3-1400000-1450000 - tomekkorbak/pii-pile-chunk3-1450000-1500000 - tomekkorbak/pii-pile-chunk3-1500000-1550000 - tomekkorbak/pii-pile-chunk3-1550000-1600000 - tomekkorbak/pii-pile-chunk3-1600000-1650000 - tomekkorbak/pii-pile-chunk3-1650000-1700000 - tomekkorbak/pii-pile-chunk3-1700000-1750000 - tomekkorbak/pii-pile-chunk3-1750000-1800000 - tomekkorbak/pii-pile-chunk3-1800000-1850000 - tomekkorbak/pii-pile-chunk3-1850000-1900000 - tomekkorbak/pii-pile-chunk3-1900000-1950000 model-index: - name: compassionate_lumiere results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # compassionate_lumiere This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 12588 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.0}, 'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 4096, 'prefix': '<|aligned|>'}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'gpt3_kwargs': {'model_name': 'davinci'}, 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'}, 'num_additional_tokens': 2, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'name': 'MLE'}, 
'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'compassionate_lumiere', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 251, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1q3x5956
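The full config above shows the model was trained with conditional control prefixes (`<|aligned|>` / `<|misaligned|>`) and that unconditional samples are drawn with `<|aligned|>` as the prompt. A sketch of reproducing that sampling setup with `transformers` follows; it assumes the repository ships the tokenizer with the two added control tokens (otherwise load `gpt2` and add them manually), and it copies the sampling parameters from the card's generation config.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tomekkorbak/compassionate_lumiere"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Prompt with the aligned-control token, mirroring the card's generation scenario
inputs = tokenizer("<|aligned|>", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=128,
    min_length=10,
    temperature=0.7,
    top_k=0,
    top_p=0.9,
    bad_words_ids=[[50257], [50258]],  # keep the two control tokens out of samples, as in the config
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```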
gokuls/distilbert_sa_GLUE_Experiment_data_aug_cola_384
gokuls
2023-02-01T22:49:08Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-01T22:30:00Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_cola_384 results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.12073105148250744 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_cola_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.7008 - Matthews Correlation: 0.1207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5179 | 1.0 | 835 | 0.7008 | 0.1207 | | 0.3641 | 2.0 | 1670 | 0.9121 | 0.1063 | | 0.2641 | 3.0 | 2505 | 1.0415 | 0.0951 | | 0.1963 | 4.0 | 3340 | 1.2167 | 0.1072 | | 0.1519 | 5.0 | 4175 | 1.3170 | 0.1162 | | 0.1191 | 6.0 | 5010 | 1.4385 | 0.1118 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
TolgahanT/TT
TolgahanT
2023-02-01T22:21:32Z
0
0
diffusers
[ "diffusers", "ee", "dataset:fka/awesome-chatgpt-prompts", "license:creativeml-openrail-m", "region:us" ]
null
2023-02-01T22:18:33Z
--- license: creativeml-openrail-m datasets: - fka/awesome-chatgpt-prompts language: - ee metrics: - cer library_name: diffusers ---
tomekkorbak/nostalgic_jones
tomekkorbak
2023-02-01T22:21:04Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2023-01-31T22:34:53Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: nostalgic_jones results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # nostalgic_jones This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 4096, 'prefix': '<|aligned|>'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'gpt3_kwargs': {'model_name': 'davinci'}, 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': 
True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'nostalgic_jones', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 5070, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/pw7t099z
Nonin/ppo-LunarLander-v2
Nonin
2023-02-01T22:17:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-01T22:17:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 273.25 +/- 22.65 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
hectorjelly/ppo-LunarLander-v2
hectorjelly
2023-02-01T22:08:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-01T22:08:12Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 268.23 +/- 21.16 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
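The usage block is again a stub. Below is a sketch of loading the policy and re-running the kind of evaluation that produced the mean reward above; the archive filename is an assumption, and the evaluation settings (10 deterministic episodes) are a common default rather than anything stated in the card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repository's file listing
checkpoint = load_from_hub(repo_id="hectorjelly/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```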
jha2ee/StableDiffusion_finetuning_Disney
jha2ee
2023-02-01T22:00:48Z
12
3
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-01T21:55:19Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Disney-style Dreambooth model trained by jha2ee with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/jha2ee/disney-style/resolve/main/sample_images/style-001.jpg) ![1](https://huggingface.co/jha2ee/disney-style/resolve/main/sample_images/style-003.jpg) ![2](https://huggingface.co/jha2ee/disney-style/resolve/main/sample_images/style-002.jpg) ![3](https://huggingface.co/jha2ee/disney-style/resolve/main/sample_images/style-004.jpg)
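The card points to Colab notebooks but has no `diffusers` snippet. A minimal sketch of loading the checkpoint as a `StableDiffusionPipeline` is shown below; the prompt wording is illustrative only, since the card does not state which instance/style token the Dreambooth run used.

```python
import torch
from diffusers import StableDiffusionPipeline

repo_id = "jha2ee/StableDiffusion_finetuning_Disney"
pipe = StableDiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Prompt is a guess; the trained style token is not documented in the card
image = pipe("a castle on a hill, disney style illustration").images[0]
image.save("disney_style_castle.png")
```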
Closen/CartPole-v1_PG
Closen
2023-02-01T21:58:27Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-01T21:25:04Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1_PG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
stinoco/Taxi-v3
stinoco
2023-02-01T21:55:04Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-01T21:55:01Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.72 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="stinoco/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
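`load_from_hub` in the snippet above is not a library import; in the Deep RL course it is a small helper defined in the notebook. A sketch of such a helper, assuming the repository stores the Q-learning model as a pickled dictionary named `q-learning.pkl`:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dictionary from the Hugging Face Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="stinoco/Taxi-v3", filename="q-learning.pkl")
print(model.keys())  # typically includes the Q-table, env_id and training hyperparameters
```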
smartik/t5-small-finetuned-xsum
smartik
2023-02-01T21:17:01Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-26T14:23:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
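No usage example accompanies the card. A sketch of running the fine-tuned checkpoint through the summarization pipeline (the input article is invented):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="smartik/t5-small-finetuned-xsum")

article = (
    "The local council announced on Tuesday that the old library building will be "
    "renovated over the next two years, with funding provided by a combination of "
    "national grants and private donations."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```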
ietz/token-paraphrase-MiniLM-L6-v2-baseline
ietz
2023-02-01T21:08:04Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-02-01T21:05:54Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Sartc/PPO-2FEB-LunarLander-v2
Sartc
2023-02-01T20:47:15Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-01T20:44:12Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -402.40 +/- 104.21 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```