modelId (string, lengths 5 to 139) | author (string, lengths 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-02 12:28:20) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 462 classes) | tags (sequence, lengths 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-02 12:26:48) | card (string, lengths 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
jonduea/poca-SoccerTwos | jonduea | 2023-12-03T21:09:08Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-12-03T21:08:25Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jonduea/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ahaanlimaye/ConsumerResponseClassifier | ahaanlimaye | 2023-12-03T20:59:54Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T18:24:55Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ConsumerResponseClassifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ConsumerResponseClassifier
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5301
- F1: 0.7668
## Model description
More information needed
## Intended uses & limitations
More information needed
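As a starting point, the checkpoint can be loaded with the 🤗 Transformers `pipeline` API. The expected input format and the label names are not documented in this card, so the example text below is only a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned RoBERTa classifier from the Hub.
classifier = pipeline("text-classification", model="ahaanlimaye/ConsumerResponseClassifier")

# Placeholder input; the intended text format and label set are not documented here.
print(classifier("I disputed the charge but the company never responded."))
```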
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow26 | FounderOfHuggingface | 2023-12-03T20:51:53Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T20:51:49Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
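The card leaves this as a placeholder. One possible sketch for attaching this LoRA adapter to its `gpt2` base model with 🤗 PEFT (version 0.6.2 per the card) is shown below; the prompt is an arbitrary example and the expected input format is not documented:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer the adapter was trained on.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow26",
)

inputs = tokenizer("DBpedia article:", return_tensors="pt")  # arbitrary example prompt
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```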
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
VitaliiVrublevskyi/albert-large-v1-finetuned-mrpc | VitaliiVrublevskyi | 2023-12-03T20:50:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T19:27:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: albert-large-v1-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8725490196078431
- name: F1
type: f1
value: 0.907473309608541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v1-finetuned-mrpc
This model is a fine-tuned version of [albert-large-v1](https://huggingface.co/albert-large-v1) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4479
- Accuracy: 0.8725
- F1: 0.9075
## Model description
More information needed
## Intended uses & limitations
More information needed
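The card does not include a usage snippet. A minimal sketch for scoring a sentence pair (MRPC is a paraphrase-detection task) might look like the following; the label order in the final comment is an assumption:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "VitaliiVrublevskyi/albert-large-v1-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC examples are sentence pairs; the tokenizer builds the pair encoding.
inputs = tokenizer(
    "The company reported record profits.",
    "Record profits were reported by the company.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities, assumed order [not_paraphrase, paraphrase]
```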
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 69
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.3323 | 0.8554 | 0.8970 |
| No log | 2.0 | 230 | 0.3164 | 0.8799 | 0.9127 |
| No log | 3.0 | 345 | 0.3368 | 0.8603 | 0.9019 |
| No log | 4.0 | 460 | 0.4099 | 0.8676 | 0.9049 |
| 0.2727 | 5.0 | 575 | 0.4479 | 0.8725 | 0.9075 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
javismiles/dec3v3model | javismiles | 2023-12-03T20:45:27Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-12-03T17:42:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - javismiles/dec3v3model
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the javismiles/dec3v4 dataset. You can find some example images below.




|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow25 | FounderOfHuggingface | 2023-12-03T20:40:16Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T20:40:12Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
prushton/dreambooth-dog-test | prushton | 2023-12-03T20:33:17Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-02T19:46:33Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - prushton/dreambooth-dog-test
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
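The repository is tagged as a full `StableDiffusionPipeline`, so a minimal generation sketch could look like the following; the instance prompt is taken from the card, while the prompt suffix and output filename are arbitrary:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-fine-tuned pipeline directly from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "prushton/dreambooth-dog-test", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# "a photo of sks dog" is the instance prompt this model was trained on.
image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```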
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow24 | FounderOfHuggingface | 2023-12-03T20:28:42Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T20:28:38Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Christos-14/ppo-LunarLander-v2 | Christos-14 | 2023-12-03T20:27:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T20:26:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.32 +/- 17.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
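A minimal sketch of what that code might look like; the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption, not confirmed by the card:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Download the checkpoint from the Hub (filename assumed).
checkpoint = load_from_hub("Christos-14/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint, print_system_info=True)

# Evaluate the policy on a monitored environment.
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```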
|
userzyzz/hermes-vision-gguf | userzyzz | 2023-12-03T20:25:36Z | 88 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2023-12-03T19:24:00Z | # Broken GGUF
Tried to force-convert an unsupported model. Unsurprisingly, it is completely broken and does not run. |
prushton/path_to_saved_model | prushton | 2023-12-03T20:21:43Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-02T19:54:19Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - prushton/path_to_saved_model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
stoves/Andreea_Lazu | stoves | 2023-12-03T20:14:45Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-11-10T12:32:52Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of gjdfophge person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
hkivancoral/smids_1x_beit_base_rms_0001_fold3 | hkivancoral | 2023-12-03T20:13:55Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-02T11:19:21Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_beit_base_rms_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7133333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_beit_base_rms_0001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7846
- Accuracy: 0.7133
## Model description
More information needed
## Intended uses & limitations
More information needed
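The card omits a usage snippet. A minimal sketch with the image-classification pipeline is shown below; the image path is a placeholder and the class labels come from the (undocumented) imagefolder dataset:
```python
from transformers import pipeline

# Load the fine-tuned BEiT classifier from the Hub.
classifier = pipeline("image-classification", model="hkivancoral/smids_1x_beit_base_rms_0001_fold3")

# Placeholder path; predictions use the labels defined by the training imagefolder dataset.
print(classifier("example_image.png"))
```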
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1199 | 1.0 | 75 | 1.1044 | 0.325 |
| 1.1759 | 2.0 | 150 | 1.1239 | 0.47 |
| 1.1465 | 3.0 | 225 | 0.9168 | 0.5 |
| 0.8955 | 4.0 | 300 | 0.8917 | 0.5017 |
| 0.8948 | 5.0 | 375 | 0.8301 | 0.5533 |
| 0.9774 | 6.0 | 450 | 0.8272 | 0.5467 |
| 0.8001 | 7.0 | 525 | 0.8058 | 0.5567 |
| 0.7633 | 8.0 | 600 | 0.8140 | 0.545 |
| 0.7814 | 9.0 | 675 | 0.7815 | 0.5733 |
| 0.8175 | 10.0 | 750 | 0.7839 | 0.5633 |
| 0.7605 | 11.0 | 825 | 0.7664 | 0.615 |
| 0.762 | 12.0 | 900 | 0.7781 | 0.59 |
| 0.6797 | 13.0 | 975 | 0.7875 | 0.575 |
| 0.7699 | 14.0 | 1050 | 0.7772 | 0.6117 |
| 0.6167 | 15.0 | 1125 | 0.8129 | 0.585 |
| 0.7106 | 16.0 | 1200 | 0.7392 | 0.6633 |
| 0.7174 | 17.0 | 1275 | 0.7176 | 0.6717 |
| 0.704 | 18.0 | 1350 | 0.7772 | 0.63 |
| 0.6617 | 19.0 | 1425 | 0.7359 | 0.65 |
| 0.6722 | 20.0 | 1500 | 0.7009 | 0.6783 |
| 0.676 | 21.0 | 1575 | 0.6946 | 0.6667 |
| 0.6441 | 22.0 | 1650 | 0.7089 | 0.6917 |
| 0.6565 | 23.0 | 1725 | 0.7160 | 0.665 |
| 0.6009 | 24.0 | 1800 | 0.6902 | 0.6783 |
| 0.6592 | 25.0 | 1875 | 0.7159 | 0.665 |
| 0.6628 | 26.0 | 1950 | 0.7741 | 0.6233 |
| 0.6044 | 27.0 | 2025 | 0.7147 | 0.66 |
| 0.585 | 28.0 | 2100 | 0.6827 | 0.69 |
| 0.5831 | 29.0 | 2175 | 0.6975 | 0.6833 |
| 0.6301 | 30.0 | 2250 | 0.6815 | 0.6633 |
| 0.6457 | 31.0 | 2325 | 0.6813 | 0.6817 |
| 0.6492 | 32.0 | 2400 | 0.6894 | 0.6783 |
| 0.5418 | 33.0 | 2475 | 0.7461 | 0.6783 |
| 0.5925 | 34.0 | 2550 | 0.6773 | 0.6933 |
| 0.5913 | 35.0 | 2625 | 0.6656 | 0.7083 |
| 0.5761 | 36.0 | 2700 | 0.6491 | 0.7133 |
| 0.528 | 37.0 | 2775 | 0.6784 | 0.7 |
| 0.5718 | 38.0 | 2850 | 0.7007 | 0.6783 |
| 0.5083 | 39.0 | 2925 | 0.6815 | 0.7 |
| 0.5069 | 40.0 | 3000 | 0.6638 | 0.71 |
| 0.4838 | 41.0 | 3075 | 0.6813 | 0.7167 |
| 0.5071 | 42.0 | 3150 | 0.6709 | 0.7183 |
| 0.5091 | 43.0 | 3225 | 0.6746 | 0.7167 |
| 0.4355 | 44.0 | 3300 | 0.7138 | 0.71 |
| 0.4287 | 45.0 | 3375 | 0.7080 | 0.7133 |
| 0.3954 | 46.0 | 3450 | 0.7468 | 0.7 |
| 0.3389 | 47.0 | 3525 | 0.7428 | 0.7183 |
| 0.3613 | 48.0 | 3600 | 0.7469 | 0.725 |
| 0.388 | 49.0 | 3675 | 0.7685 | 0.7167 |
| 0.2972 | 50.0 | 3750 | 0.7846 | 0.7133 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
galaxy78/my_awesome_wnut_model | galaxy78 | 2023-12-03T20:08:10Z | 14 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-03T10:11:43Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.55
- name: Recall
type: recall
value: 0.37720111214087115
- name: F1
type: f1
value: 0.44749862561847165
- name: Accuracy
type: accuracy
value: 0.9481063520560827
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3958
- Precision: 0.55
- Recall: 0.3772
- F1: 0.4475
- Accuracy: 0.9481
## Model description
More information needed
## Intended uses & limitations
More information needed
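A minimal usage sketch with the token-classification pipeline; the example sentence is arbitrary:
```python
from transformers import pipeline

# Load the fine-tuned BERT NER model trained on wnut_17.
ner = pipeline(
    "token-classification",
    model="galaxy78/my_awesome_wnut_model",
    aggregation_strategy="simple",  # group sub-word tokens into entity spans
)

print(ner("My name is Sarah and I live in London"))
```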
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2562 | 0.5704 | 0.2929 | 0.3870 | 0.9417 |
| No log | 2.0 | 426 | 0.2776 | 0.5462 | 0.3179 | 0.4019 | 0.9436 |
| 0.1469 | 3.0 | 639 | 0.2834 | 0.5453 | 0.3624 | 0.4354 | 0.9475 |
| 0.1469 | 4.0 | 852 | 0.3004 | 0.5669 | 0.3652 | 0.4442 | 0.9480 |
| 0.0325 | 5.0 | 1065 | 0.3360 | 0.5858 | 0.3735 | 0.4561 | 0.9482 |
| 0.0325 | 6.0 | 1278 | 0.3471 | 0.5149 | 0.3855 | 0.4409 | 0.9474 |
| 0.0325 | 7.0 | 1491 | 0.3883 | 0.5552 | 0.3633 | 0.4392 | 0.9474 |
| 0.0117 | 8.0 | 1704 | 0.3881 | 0.5602 | 0.3707 | 0.4462 | 0.9477 |
| 0.0117 | 9.0 | 1917 | 0.4008 | 0.5582 | 0.3689 | 0.4442 | 0.9478 |
| 0.0051 | 10.0 | 2130 | 0.3958 | 0.55 | 0.3772 | 0.4475 | 0.9481 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow22 | FounderOfHuggingface | 2023-12-03T20:05:28Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T20:05:24Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow21 | FounderOfHuggingface | 2023-12-03T19:53:45Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T19:53:40Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
entaroid/xlm-roberta-base-finetuned-panx-de | entaroid | 2023-12-03T19:51:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-03T17:17:56Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.861220080732276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1394
- F1: 0.8612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 525 | 0.1621 | 0.8184 |
| No log | 2.0 | 1050 | 0.1390 | 0.8502 |
| No log | 3.0 | 1575 | 0.1394 | 0.8612 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
jerber/ppo-LunarLander-v2 | jerber | 2023-12-03T19:49:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T19:49:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.98 +/- 22.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
annabellehuether/legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd | annabellehuether | 2023-12-03T19:44:04Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T18:41:27Z | ---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6773
- Accuracy: 0.5889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6868 | 1.0 | 660 | 0.6805 | 0.5889 |
| 0.6863 | 2.0 | 1320 | 0.6779 | 0.5889 |
| 0.6839 | 3.0 | 1980 | 0.6779 | 0.5889 |
| 0.6825 | 4.0 | 2640 | 0.6812 | 0.5889 |
| 0.6838 | 5.0 | 3300 | 0.6773 | 0.5889 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
worldboss/llama-7B-alpaca-fine-tune-unsloth | worldboss | 2023-12-03T19:43:53Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b",
"base_model:finetune:unsloth/llama-2-7b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-12-03T19:07:16Z | ---
license: llama2
base_model: unsloth/llama-2-7b
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Ibrahim2001/PPO_Lunar_Lander_v2 | Ibrahim2001 | 2023-12-03T19:43:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T14:14:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.18 +/- 16.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
repo_id = "Ibrahim2001/PPO_Lunar_Lander_v2" # The repo_id
filename = "ppo-LunarLander-v2.zip" # The model filename.zip
# When the model was trained on Python 3.8 the pickle protocol is 5
# But Python 3.6, 3.7 use protocol 4
# In order to get compatibility we need to:
# 1. Install pickle5
# 2. Create a custom empty object we pass as parameter to PPO.load()
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|
GMabic/Taxi-v3 | GMabic | 2023-12-03T19:41:38Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T19:41:37Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="GMabic/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ThuyNT03/KLTN_COQE_viT5_SAPOL_v2 | ThuyNT03 | 2023-12-03T19:29:43Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T18:43:17Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_SAPOL_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_SAPOL_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
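The card does not describe the expected input or output format for this comparative-opinion extraction model, so the following is only a loading sketch with a placeholder Vietnamese sentence:
```python
from transformers import pipeline

# Load the fine-tuned ViT5 model as a text-to-text generator.
generator = pipeline("text2text-generation", model="ThuyNT03/KLTN_COQE_viT5_SAPOL_v2")

# Placeholder input ("Phone A has a better camera than phone B."); the actual
# prompt format expected by this model is not documented in the card.
print(generator("Điện thoại A có camera tốt hơn điện thoại B."))
```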
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
GMabic/GMabic | GMabic | 2023-12-03T19:25:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T19:25:46Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: GMabic
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="GMabic/GMabic", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
GMabic/q-FrozenLake-v1-4x4-noSlippery | GMabic | 2023-12-03T19:21:15Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T19:21:13Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="GMabic/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow18 | FounderOfHuggingface | 2023-12-03T19:18:50Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T19:18:46Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
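No snippet is provided by the author; a minimal, unofficial sketch that loads this LoRA adapter on top of the `gpt2` base model named in the card metadata might look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach this LoRA adapter (repo id from the card header) to the gpt2 base model
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow18",
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```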
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
greathero/mit-b0-finetuned-sidewalks | greathero | 2023-12-03T19:15:10Z | 1 | 0 | transformers | [
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-11-04T14:59:25Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_keras_callback
model-index:
- name: greathero/mit-b0-finetuned-sidewalks
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# greathero/mit-b0-finetuned-sidewalks
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1964
- Validation Loss: 0.5617
- Validation Mean Iou: 0.3482
- Validation Mean Accuracy: 0.4212
- Validation Overall Accuracy: 0.8714
- Validation Accuracy Unlabeled: 0.0
- Validation Accuracy Flat-road: 0.9191
- Validation Accuracy Flat-sidewalk: 0.9583
- Validation Accuracy Flat-crosswalk: 0.5878
- Validation Accuracy Flat-cyclinglane: 0.7730
- Validation Accuracy Flat-parkingdriveway: 0.6079
- Validation Accuracy Flat-railtrack: nan
- Validation Accuracy Flat-curb: 0.5577
- Validation Accuracy Human-person: 0.6959
- Validation Accuracy Human-rider: 0.2221
- Validation Accuracy Vehicle-car: 0.9300
- Validation Accuracy Vehicle-truck: 0.1310
- Validation Accuracy Vehicle-bus: 0.0
- Validation Accuracy Vehicle-tramtrain: nan
- Validation Accuracy Vehicle-motorcycle: 0.0
- Validation Accuracy Vehicle-bicycle: 0.6927
- Validation Accuracy Vehicle-caravan: 0.0
- Validation Accuracy Vehicle-cartrailer: 0.0
- Validation Accuracy Construction-building: 0.9088
- Validation Accuracy Construction-door: 0.0611
- Validation Accuracy Construction-wall: 0.4825
- Validation Accuracy Construction-fenceguardrail: 0.4370
- Validation Accuracy Construction-bridge: 0.0
- Validation Accuracy Construction-tunnel: nan
- Validation Accuracy Construction-stairs: 0.0
- Validation Accuracy Object-pole: 0.4339
- Validation Accuracy Object-trafficsign: 0.4711
- Validation Accuracy Object-trafficlight: 0.0
- Validation Accuracy Nature-vegetation: 0.9172
- Validation Accuracy Nature-terrain: 0.9151
- Validation Accuracy Sky: 0.9692
- Validation Accuracy Void-ground: 0.5121
- Validation Accuracy Void-dynamic: 0.0265
- Validation Accuracy Void-static: 0.2676
- Validation Accuracy Void-unclear: 0.0
- Validation Iou Unlabeled: 0.0
- Validation Iou Flat-road: 0.7902
- Validation Iou Flat-sidewalk: 0.9044
- Validation Iou Flat-crosswalk: 0.4201
- Validation Iou Flat-cyclinglane: 0.6963
- Validation Iou Flat-parkingdriveway: 0.4879
- Validation Iou Flat-railtrack: nan
- Validation Iou Flat-curb: 0.4607
- Validation Iou Human-person: 0.4926
- Validation Iou Human-rider: 0.1864
- Validation Iou Vehicle-car: 0.8243
- Validation Iou Vehicle-truck: 0.1278
- Validation Iou Vehicle-bus: 0.0
- Validation Iou Vehicle-tramtrain: nan
- Validation Iou Vehicle-motorcycle: 0.0
- Validation Iou Vehicle-bicycle: 0.4759
- Validation Iou Vehicle-caravan: 0.0
- Validation Iou Vehicle-cartrailer: 0.0
- Validation Iou Construction-building: 0.6973
- Validation Iou Construction-door: 0.0588
- Validation Iou Construction-wall: 0.3754
- Validation Iou Construction-fenceguardrail: 0.3760
- Validation Iou Construction-bridge: 0.0
- Validation Iou Construction-tunnel: nan
- Validation Iou Construction-stairs: 0.0
- Validation Iou Object-pole: 0.3330
- Validation Iou Object-trafficsign: 0.3555
- Validation Iou Object-trafficlight: 0.0
- Validation Iou Nature-vegetation: 0.8600
- Validation Iou Nature-terrain: 0.7533
- Validation Iou Sky: 0.9288
- Validation Iou Void-ground: 0.3089
- Validation Iou Void-dynamic: 0.0223
- Validation Iou Void-static: 0.2055
- Validation Iou Void-unclear: 0.0
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 6e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
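As a rough guide, the serialized Keras optimizer config above corresponds to the following sketch (only the learning rate differs from Adam's defaults):

```python
import tensorflow as tf

# Adam optimizer as described by the serialized config above
optimizer = tf.keras.optimizers.Adam(
    learning_rate=6e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```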
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Accuracy Unlabeled | Validation Accuracy Flat-road | Validation Accuracy Flat-sidewalk | Validation Accuracy Flat-crosswalk | Validation Accuracy Flat-cyclinglane | Validation Accuracy Flat-parkingdriveway | Validation Accuracy Flat-railtrack | Validation Accuracy Flat-curb | Validation Accuracy Human-person | Validation Accuracy Human-rider | Validation Accuracy Vehicle-car | Validation Accuracy Vehicle-truck | Validation Accuracy Vehicle-bus | Validation Accuracy Vehicle-tramtrain | Validation Accuracy Vehicle-motorcycle | Validation Accuracy Vehicle-bicycle | Validation Accuracy Vehicle-caravan | Validation Accuracy Vehicle-cartrailer | Validation Accuracy Construction-building | Validation Accuracy Construction-door | Validation Accuracy Construction-wall | Validation Accuracy Construction-fenceguardrail | Validation Accuracy Construction-bridge | Validation Accuracy Construction-tunnel | Validation Accuracy Construction-stairs | Validation Accuracy Object-pole | Validation Accuracy Object-trafficsign | Validation Accuracy Object-trafficlight | Validation Accuracy Nature-vegetation | Validation Accuracy Nature-terrain | Validation Accuracy Sky | Validation Accuracy Void-ground | Validation Accuracy Void-dynamic | Validation Accuracy Void-static | Validation Accuracy Void-unclear | Validation Iou Unlabeled | Validation Iou Flat-road | Validation Iou Flat-sidewalk | Validation Iou Flat-crosswalk | Validation Iou Flat-cyclinglane | Validation Iou Flat-parkingdriveway | Validation Iou Flat-railtrack | Validation Iou Flat-curb | Validation Iou Human-person | Validation Iou Human-rider | Validation Iou Vehicle-car | Validation Iou Vehicle-truck | Validation Iou Vehicle-bus | Validation Iou Vehicle-tramtrain | Validation Iou Vehicle-motorcycle | Validation Iou Vehicle-bicycle | Validation Iou Vehicle-caravan | Validation Iou Vehicle-cartrailer | Validation Iou Construction-building | Validation Iou Construction-door | Validation Iou Construction-wall | Validation Iou Construction-fenceguardrail | Validation Iou Construction-bridge | Validation Iou Construction-tunnel | Validation Iou Construction-stairs | Validation Iou Object-pole | Validation Iou Object-trafficsign | Validation Iou Object-trafficlight | Validation Iou Nature-vegetation | Validation Iou Nature-terrain | Validation Iou Sky | Validation Iou Void-ground | Validation Iou Void-dynamic | Validation Iou Void-static | Validation Iou Void-unclear | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:-----------------------------:|:-----------------------------:|:---------------------------------:|:----------------------------------:|:------------------------------------:|:----------------------------------------:|:----------------------------------:|:-----------------------------:|:--------------------------------:|:-------------------------------:|:-------------------------------:|:---------------------------------:|:-------------------------------:|:-------------------------------------:|:--------------------------------------:|:-----------------------------------:|:-----------------------------------:|:--------------------------------------:|:-----------------------------------------:|:-------------------------------------:|:-------------------------------------:|:-----------------------------------------------:|:---------------------------------------:|:---------------------------------------:|:---------------------------------------:|:-------------------------------:|:--------------------------------------:|:---------------------------------------:|:-------------------------------------:|:----------------------------------:|:-----------------------:|:-------------------------------:|:--------------------------------:|:-------------------------------:|:--------------------------------:|:------------------------:|:------------------------:|:----------------------------:|:-----------------------------:|:-------------------------------:|:-----------------------------------:|:-----------------------------:|:------------------------:|:---------------------------:|:--------------------------:|:--------------------------:|:----------------------------:|:--------------------------:|:--------------------------------:|:---------------------------------:|:------------------------------:|:------------------------------:|:---------------------------------:|:------------------------------------:|:--------------------------------:|:--------------------------------:|:------------------------------------------:|:----------------------------------:|:----------------------------------:|:----------------------------------:|:--------------------------:|:---------------------------------:|:----------------------------------:|:--------------------------------:|:-----------------------------:|:------------------:|:--------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:-----:|
| 1.9172 | 1.1456 | 0.1406 | 0.1941 | 0.7310 | 0.0 | 0.7980 | 0.9298 | 0.0051 | 0.2528 | 0.0049 | nan | 0.0001 | 0.0458 | 0.0 | 0.8315 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7839 | 0.0 | 0.0210 | 0.0370 | 0.0 | nan | 0.0 | 0.0058 | 0.0 | 0.0 | 0.9028 | 0.6924 | 0.9004 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.4633 | 0.7662 | 0.0051 | 0.2317 | 0.0048 | 0.0 | 0.0001 | 0.0323 | 0.0 | 0.6396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5243 | 0.0 | 0.0206 | 0.0348 | 0.0 | nan | 0.0 | 0.0054 | 0.0 | 0.0 | 0.7094 | 0.5497 | 0.7916 | 0.0 | 0.0 | 0.0002 | 0.0 | 0 |
| 1.0247 | 0.8848 | 0.1918 | 0.2336 | 0.7698 | 0.0 | 0.6997 | 0.9651 | 0.4394 | 0.4069 | 0.1304 | nan | 0.0606 | 0.0601 | 0.0 | 0.8688 | 0.0 | 0.0 | nan | 0.0 | 0.0097 | 0.0 | 0.0 | 0.8430 | 0.0 | 0.2444 | 0.1184 | 0.0 | nan | 0.0 | 0.0102 | 0.0 | 0.0 | 0.9227 | 0.7934 | 0.9018 | 0.0 | 0.0 | 0.0014 | 0.0 | 0.0 | 0.5611 | 0.7751 | 0.4093 | 0.3588 | 0.1102 | nan | 0.0550 | 0.0513 | 0.0 | 0.6674 | 0.0 | 0.0 | nan | 0.0 | 0.0097 | 0.0 | 0.0 | 0.5662 | 0.0 | 0.2011 | 0.1101 | 0.0 | nan | 0.0 | 0.0101 | 0.0 | 0.0 | 0.7593 | 0.6549 | 0.8360 | 0.0 | 0.0 | 0.0014 | 0.0 | 1 |
| 0.8452 | 0.7838 | 0.2203 | 0.2738 | 0.7971 | 0.0 | 0.8465 | 0.9296 | 0.5352 | 0.5975 | 0.2817 | nan | 0.1721 | 0.3577 | 0.0 | 0.8242 | 0.0 | 0.0 | nan | 0.0 | 0.0999 | 0.0 | 0.0 | 0.8508 | 0.0 | 0.3727 | 0.1481 | 0.0 | nan | 0.0 | 0.0712 | 0.0 | 0.0 | 0.9379 | 0.8088 | 0.9242 | 0.0 | 0.0 | 0.0027 | 0.0 | 0.0 | 0.5691 | 0.8475 | 0.3104 | 0.5182 | 0.2163 | nan | 0.1416 | 0.2415 | 0.0 | 0.7082 | 0.0 | 0.0 | nan | 0.0 | 0.0997 | 0.0 | 0.0 | 0.5843 | 0.0 | 0.2783 | 0.1415 | 0.0 | nan | 0.0 | 0.0679 | 0.0 | 0.0 | 0.7789 | 0.6955 | 0.8480 | 0.0 | 0.0 | 0.0026 | 0.0 | 2 |
| 0.7393 | 0.7194 | 0.2443 | 0.2959 | 0.8084 | 0.0 | 0.7601 | 0.9601 | 0.3925 | 0.5485 | 0.4444 | nan | 0.2765 | 0.4666 | 0.0 | 0.8905 | 0.0 | 0.0 | nan | 0.0 | 0.3695 | 0.0 | 0.0 | 0.8608 | 0.0 | 0.4003 | 0.2319 | 0.0 | nan | 0.0 | 0.1120 | 0.0 | 0.0 | 0.9064 | 0.8793 | 0.9268 | 0.0 | 0.0 | 0.0434 | 0.0 | 0.0 | 0.6189 | 0.8385 | 0.3858 | 0.4982 | 0.2872 | nan | 0.2269 | 0.2991 | 0.0 | 0.7134 | 0.0 | 0.0 | nan | 0.0 | 0.3144 | 0.0 | 0.0 | 0.6105 | 0.0 | 0.3110 | 0.2177 | 0.0 | nan | 0.0 | 0.1034 | 0.0 | 0.0 | 0.7932 | 0.6956 | 0.8646 | 0.0 | 0.0 | 0.0385 | 0.0 | 3 |
| 0.6723 | 0.6423 | 0.2585 | 0.3200 | 0.8243 | 0.0 | 0.8379 | 0.9392 | 0.5390 | 0.6706 | 0.5206 | nan | 0.3683 | 0.6301 | 0.0 | 0.8838 | 0.0 | 0.0 | nan | 0.0 | 0.3830 | 0.0 | 0.0 | 0.8862 | 0.0 | 0.3901 | 0.2524 | 0.0 | nan | 0.0 | 0.1545 | 0.0 | 0.0 | 0.9096 | 0.8846 | 0.9254 | 0.0 | 0.0001 | 0.0643 | 0.0 | 0.0 | 0.6716 | 0.8658 | 0.3985 | 0.5764 | 0.3400 | nan | 0.2731 | 0.3130 | 0.0 | 0.7325 | 0.0 | 0.0 | nan | 0.0 | 0.3255 | 0.0 | 0.0 | 0.6179 | 0.0 | 0.3338 | 0.2397 | 0.0 | nan | 0.0 | 0.1381 | 0.0 | 0.0 | 0.8056 | 0.7102 | 0.8723 | 0.0 | 0.0001 | 0.0578 | 0.0 | 4 |
| 0.6205 | 0.6604 | 0.2592 | 0.3188 | 0.8256 | 0.0 | 0.8237 | 0.9528 | 0.5500 | 0.5608 | 0.6090 | nan | 0.3591 | 0.4741 | 0.0 | 0.8835 | 0.0 | 0.0 | nan | 0.0 | 0.4288 | 0.0 | 0.0 | 0.9049 | 0.0 | 0.4296 | 0.2780 | 0.0 | nan | 0.0 | 0.2117 | 0.0 | 0.0 | 0.9279 | 0.8381 | 0.9277 | 0.0 | 0.0003 | 0.0425 | 0.0 | 0.0 | 0.6967 | 0.8550 | 0.2920 | 0.5056 | 0.3663 | nan | 0.2893 | 0.3073 | 0.0 | 0.7447 | 0.0 | 0.0 | nan | 0.0 | 0.3632 | 0.0 | 0.0 | 0.6182 | 0.0 | 0.3474 | 0.2593 | 0.0 | nan | 0.0 | 0.1780 | 0.0 | 0.0 | 0.8229 | 0.7302 | 0.8786 | 0.0 | 0.0003 | 0.0396 | 0.0 | 5 |
| 0.5839 | 0.6139 | 0.2738 | 0.3325 | 0.8330 | 0.0 | 0.8190 | 0.9635 | 0.5040 | 0.6479 | 0.5465 | nan | 0.3405 | 0.5509 | 0.0 | 0.8938 | 0.0 | 0.0 | nan | 0.0 | 0.5717 | 0.0 | 0.0 | 0.8895 | 0.0 | 0.4915 | 0.3136 | 0.0 | nan | 0.0 | 0.2540 | 0.0 | 0.0 | 0.9038 | 0.8733 | 0.9399 | 0.0 | 0.0039 | 0.1320 | 0.0 | 0.0 | 0.7100 | 0.8592 | 0.4158 | 0.5694 | 0.3564 | nan | 0.2854 | 0.3518 | 0.0 | 0.7495 | 0.0 | 0.0 | nan | 0.0 | 0.4226 | 0.0 | 0.0 | 0.6419 | 0.0 | 0.3810 | 0.2908 | 0.0 | nan | 0.0 | 0.1984 | 0.0 | 0.0 | 0.8237 | 0.7147 | 0.8798 | 0.0 | 0.0038 | 0.1073 | 0.0 | 6 |
| 0.5377 | 0.5787 | 0.2763 | 0.3397 | 0.8374 | 0.0 | 0.8782 | 0.9396 | 0.5571 | 0.6511 | 0.5741 | nan | 0.4205 | 0.5055 | 0.0 | 0.8842 | 0.0 | 0.0 | nan | 0.0 | 0.5905 | 0.0 | 0.0 | 0.8932 | 0.0 | 0.4833 | 0.3608 | 0.0 | nan | 0.0 | 0.2777 | 0.0018 | 0.0 | 0.9253 | 0.8693 | 0.9532 | 0.0 | 0.0025 | 0.1022 | 0.0 | 0.0 | 0.6996 | 0.8736 | 0.3436 | 0.5970 | 0.3762 | nan | 0.3136 | 0.3388 | 0.0 | 0.7668 | 0.0 | 0.0 | nan | 0.0 | 0.4413 | 0.0 | 0.0 | 0.6375 | 0.0 | 0.3730 | 0.3209 | 0.0 | nan | 0.0 | 0.2190 | 0.0018 | 0.0 | 0.8323 | 0.7339 | 0.8797 | 0.0 | 0.0024 | 0.0908 | 0.0 | 7 |
| 0.5102 | 0.5413 | 0.2852 | 0.3417 | 0.8472 | 0.0 | 0.8733 | 0.9482 | 0.4978 | 0.7936 | 0.4843 | nan | 0.3768 | 0.5607 | 0.0 | 0.8764 | 0.0 | 0.0 | nan | 0.0 | 0.5866 | 0.0 | 0.0 | 0.8863 | 0.0 | 0.4769 | 0.3574 | 0.0 | nan | 0.0 | 0.2680 | 0.0013 | 0.0 | 0.9395 | 0.8760 | 0.9478 | 0.0 | 0.0045 | 0.1792 | 0.0 | 0.0 | 0.7354 | 0.8816 | 0.4221 | 0.6823 | 0.3631 | nan | 0.3052 | 0.3647 | 0.0 | 0.7699 | 0.0 | 0.0 | nan | 0.0 | 0.4497 | 0.0 | 0.0 | 0.6563 | 0.0 | 0.3737 | 0.3235 | 0.0 | nan | 0.0 | 0.2125 | 0.0013 | 0.0 | 0.8273 | 0.7161 | 0.8933 | 0.0 | 0.0044 | 0.1451 | 0.0 | 8 |
| 0.4841 | 0.5317 | 0.2880 | 0.3513 | 0.8505 | 0.0 | 0.8846 | 0.9516 | 0.5489 | 0.8002 | 0.5770 | nan | 0.3905 | 0.5536 | 0.0 | 0.8946 | 0.0 | 0.0 | nan | 0.0 | 0.6090 | 0.0 | 0.0 | 0.8865 | 0.0 | 0.4967 | 0.3616 | 0.0 | nan | 0.0 | 0.3103 | 0.0129 | 0.0 | 0.9036 | 0.9037 | 0.9505 | 0.0 | 0.0032 | 0.2026 | 0.0 | 0.0 | 0.7536 | 0.8840 | 0.3437 | 0.7198 | 0.4150 | nan | 0.3262 | 0.3606 | 0.0 | 0.7672 | 0.0 | 0.0 | nan | 0.0 | 0.4336 | 0.0 | 0.0 | 0.6606 | 0.0 | 0.3724 | 0.3274 | 0.0 | nan | 0.0 | 0.2363 | 0.0129 | 0.0 | 0.8265 | 0.7080 | 0.8948 | 0.0 | 0.0030 | 0.1696 | 0.0 | 9 |
| 0.4663 | 0.5253 | 0.2903 | 0.3504 | 0.8531 | 0.0 | 0.8632 | 0.9503 | 0.4520 | 0.8542 | 0.5251 | nan | 0.3878 | 0.5854 | 0.0 | 0.9041 | 0.0 | 0.0 | nan | 0.0 | 0.6516 | 0.0 | 0.0 | 0.8830 | 0.0 | 0.5018 | 0.3624 | 0.0 | nan | 0.0 | 0.3190 | 0.0143 | 0.0 | 0.9290 | 0.8950 | 0.9495 | 0.0 | 0.0025 | 0.1835 | 0.0 | 0.0 | 0.7467 | 0.8831 | 0.3961 | 0.7313 | 0.3927 | nan | 0.3174 | 0.3708 | 0.0 | 0.7719 | 0.0 | 0.0 | nan | 0.0 | 0.4393 | 0.0 | 0.0 | 0.6648 | 0.0 | 0.3684 | 0.3338 | 0.0 | nan | 0.0 | 0.2396 | 0.0143 | 0.0 | 0.8406 | 0.7302 | 0.9009 | 0.0 | 0.0024 | 0.1464 | 0.0 | 10 |
| 0.4466 | 0.5667 | 0.2867 | 0.3460 | 0.8459 | 0.0 | 0.8852 | 0.9554 | 0.5124 | 0.6566 | 0.6231 | nan | 0.3960 | 0.5075 | 0.0 | 0.8932 | 0.0 | 0.0 | nan | 0.0 | 0.6307 | 0.0 | 0.0 | 0.9193 | 0.0 | 0.5021 | 0.3920 | 0.0 | nan | 0.0 | 0.3301 | 0.0299 | 0.0 | 0.9236 | 0.8399 | 0.9567 | 0.0 | 0.0096 | 0.1091 | 0.0 | 0.0 | 0.7247 | 0.8746 | 0.4095 | 0.6009 | 0.4343 | nan | 0.3309 | 0.3346 | 0.0 | 0.7803 | 0.0 | 0.0 | nan | 0.0 | 0.4490 | 0.0 | 0.0 | 0.6423 | 0.0 | 0.3891 | 0.3310 | 0.0 | nan | 0.0 | 0.2522 | 0.0299 | 0.0 | 0.8416 | 0.7461 | 0.8986 | 0.0 | 0.0085 | 0.0979 | 0.0 | 11 |
| 0.4201 | 0.5216 | 0.2959 | 0.3557 | 0.8570 | 0.0 | 0.8772 | 0.9565 | 0.4502 | 0.7909 | 0.5969 | nan | 0.4384 | 0.5878 | 0.0028 | 0.9030 | 0.0 | 0.0 | nan | 0.0 | 0.6439 | 0.0 | 0.0 | 0.8963 | 0.0 | 0.5378 | 0.3621 | 0.0 | nan | 0.0 | 0.3043 | 0.0282 | 0.0 | 0.9206 | 0.8979 | 0.9562 | 0.0 | 0.0015 | 0.2284 | 0.0 | 0.0 | 0.7601 | 0.8843 | 0.3895 | 0.7020 | 0.4255 | nan | 0.3655 | 0.3689 | 0.0028 | 0.7794 | 0.0 | 0.0 | nan | 0.0 | 0.4512 | 0.0 | 0.0 | 0.6719 | 0.0 | 0.3923 | 0.3293 | 0.0 | nan | 0.0 | 0.2432 | 0.0281 | 0.0 | 0.8495 | 0.7423 | 0.9015 | 0.0 | 0.0014 | 0.1815 | 0.0 | 12 |
| 0.4155 | 0.5468 | 0.2913 | 0.3606 | 0.8525 | 0.0 | 0.8460 | 0.9363 | 0.5329 | 0.8216 | 0.6785 | nan | 0.4931 | 0.5219 | 0.0076 | 0.9172 | 0.0 | 0.0 | nan | 0.0 | 0.6537 | 0.0 | 0.0 | 0.9282 | 0.0 | 0.5048 | 0.3783 | 0.0 | nan | 0.0 | 0.3354 | 0.0529 | 0.0 | 0.9057 | 0.9100 | 0.9441 | 0.0 | 0.0143 | 0.1573 | 0.0 | 0.0 | 0.7598 | 0.8861 | 0.2428 | 0.7322 | 0.4286 | nan | 0.3793 | 0.3453 | 0.0076 | 0.7781 | 0.0 | 0.0 | nan | 0.0 | 0.4537 | 0.0 | 0.0 | 0.6496 | 0.0 | 0.3874 | 0.3258 | 0.0 | nan | 0.0 | 0.2583 | 0.0526 | 0.0 | 0.8479 | 0.7370 | 0.9009 | 0.0 | 0.0129 | 0.1355 | 0.0 | 13 |
| 0.3892 | 0.5175 | 0.3004 | 0.3618 | 0.8613 | 0.0 | 0.9000 | 0.9479 | 0.4376 | 0.8710 | 0.6180 | nan | 0.4245 | 0.6868 | 0.0094 | 0.8863 | 0.0 | 0.0 | nan | 0.0 | 0.6129 | 0.0 | 0.0 | 0.9115 | 0.0 | 0.4900 | 0.3562 | 0.0 | nan | 0.0 | 0.3215 | 0.1297 | 0.0 | 0.9228 | 0.9045 | 0.9555 | 0.0 | 0.0057 | 0.1855 | 0.0 | 0.0 | 0.7676 | 0.8928 | 0.3316 | 0.7655 | 0.4191 | nan | 0.3657 | 0.3880 | 0.0094 | 0.7938 | 0.0 | 0.0 | nan | 0.0 | 0.4393 | 0.0 | 0.0 | 0.6601 | 0.0 | 0.4075 | 0.3296 | 0.0 | nan | 0.0 | 0.2502 | 0.1287 | 0.0 | 0.8530 | 0.7461 | 0.9041 | 0.0 | 0.0054 | 0.1551 | 0.0 | 14 |
| 0.3774 | 0.5600 | 0.2950 | 0.3518 | 0.8528 | 0.0 | 0.8595 | 0.9628 | 0.2409 | 0.7379 | 0.6461 | nan | 0.4255 | 0.5880 | 0.0124 | 0.8996 | 0.0 | 0.0 | nan | 0.0 | 0.6380 | 0.0 | 0.0 | 0.9198 | 0.0 | 0.5246 | 0.3877 | 0.0 | nan | 0.0 | 0.3241 | 0.1567 | 0.0 | 0.9042 | 0.8958 | 0.9545 | 0.0 | 0.0130 | 0.1680 | 0.0 | 0.0 | 0.7599 | 0.8747 | 0.2287 | 0.6677 | 0.4078 | nan | 0.3595 | 0.3946 | 0.0124 | 0.7930 | 0.0 | 0.0 | nan | 0.0 | 0.4603 | 0.0 | 0.0 | 0.6603 | 0.0 | 0.4156 | 0.3319 | 0.0 | nan | 0.0 | 0.2643 | 0.1535 | 0.0 | 0.8453 | 0.7478 | 0.9060 | 0.0 | 0.0121 | 0.1449 | 0.0 | 15 |
| 0.3713 | 0.5397 | 0.3053 | 0.3675 | 0.8565 | 0.0 | 0.8845 | 0.9473 | 0.5039 | 0.7752 | 0.6646 | nan | 0.4841 | 0.6128 | 0.0313 | 0.9142 | 0.0029 | 0.0 | nan | 0.0 | 0.6522 | 0.0 | 0.0 | 0.9018 | 0.0054 | 0.5113 | 0.3701 | 0.0 | nan | 0.0 | 0.3208 | 0.1426 | 0.0 | 0.9068 | 0.9156 | 0.9537 | 0.0011 | 0.0201 | 0.2376 | 0.0 | 0.0 | 0.7785 | 0.8791 | 0.4051 | 0.7029 | 0.4544 | nan | 0.3946 | 0.3946 | 0.0311 | 0.7865 | 0.0029 | 0.0 | nan | 0.0 | 0.4606 | 0.0 | 0.0 | 0.6823 | 0.0053 | 0.3835 | 0.3334 | 0.0 | nan | 0.0 | 0.2581 | 0.1383 | 0.0 | 0.8433 | 0.7196 | 0.9061 | 0.0007 | 0.0182 | 0.1920 | 0.0 | 16 |
| 0.3502 | 0.5369 | 0.3065 | 0.3627 | 0.8594 | 0.0 | 0.9004 | 0.9587 | 0.4626 | 0.7253 | 0.5631 | nan | 0.5189 | 0.5677 | 0.0020 | 0.9280 | 0.0247 | 0.0 | nan | 0.0 | 0.6415 | 0.0 | 0.0 | 0.8986 | 0.0008 | 0.5035 | 0.3569 | 0.0 | nan | 0.0 | 0.3524 | 0.2297 | 0.0 | 0.9422 | 0.8659 | 0.9595 | 0.0 | 0.0117 | 0.1931 | 0.0 | 0.0 | 0.7554 | 0.8846 | 0.4080 | 0.6569 | 0.4399 | nan | 0.4082 | 0.3910 | 0.0020 | 0.7829 | 0.0247 | 0.0 | nan | 0.0 | 0.4650 | 0.0 | 0.0 | 0.6756 | 0.0008 | 0.4035 | 0.3249 | 0.0 | nan | 0.0 | 0.2717 | 0.2187 | 0.0 | 0.8550 | 0.7560 | 0.9070 | 0.0 | 0.0113 | 0.1663 | 0.0 | 17 |
| 0.3405 | 0.5502 | 0.3062 | 0.3636 | 0.8594 | 0.0 | 0.9166 | 0.9532 | 0.4142 | 0.7531 | 0.6138 | nan | 0.4230 | 0.5584 | 0.0258 | 0.8984 | 0.0044 | 0.0 | nan | 0.0 | 0.6498 | 0.0 | 0.0 | 0.9063 | 0.0057 | 0.5072 | 0.3758 | 0.0 | nan | 0.0 | 0.3565 | 0.2564 | 0.0 | 0.9187 | 0.9228 | 0.9622 | 0.0021 | 0.0172 | 0.1947 | 0.0 | 0.0 | 0.7390 | 0.8967 | 0.3897 | 0.6933 | 0.4355 | nan | 0.3709 | 0.3863 | 0.0256 | 0.7977 | 0.0044 | 0.0 | nan | 0.0 | 0.4553 | 0.0 | 0.0 | 0.6760 | 0.0056 | 0.3945 | 0.3398 | 0.0 | nan | 0.0 | 0.2704 | 0.2408 | 0.0 | 0.8494 | 0.7349 | 0.9070 | 0.0020 | 0.0160 | 0.1688 | 0.0 | 18 |
| 0.3311 | 0.5097 | 0.3140 | 0.3734 | 0.8628 | 0.0 | 0.8703 | 0.9682 | 0.4415 | 0.8131 | 0.5973 | nan | 0.4837 | 0.6228 | 0.0409 | 0.9119 | 0.0300 | 0.0 | nan | 0.0 | 0.6709 | 0.0 | 0.0 | 0.8819 | 0.0123 | 0.5047 | 0.4101 | 0.0 | nan | 0.0 | 0.3714 | 0.2680 | 0.0 | 0.9125 | 0.9120 | 0.9623 | 0.0 | 0.0194 | 0.2439 | 0.0 | 0.0 | 0.7928 | 0.8843 | 0.3726 | 0.7384 | 0.4594 | nan | 0.3924 | 0.4183 | 0.0407 | 0.8016 | 0.0300 | 0.0 | nan | 0.0 | 0.4553 | 0.0 | 0.0 | 0.6842 | 0.0119 | 0.3877 | 0.3679 | 0.0 | nan | 0.0 | 0.2747 | 0.2447 | 0.0 | 0.8427 | 0.7257 | 0.9099 | 0.0 | 0.0182 | 0.1957 | 0.0 | 19 |
| 0.3200 | 0.5690 | 0.3125 | 0.3727 | 0.8588 | 0.0 | 0.8760 | 0.9569 | 0.4469 | 0.6968 | 0.7124 | nan | 0.4744 | 0.6537 | 0.0381 | 0.9262 | 0.0226 | 0.0 | nan | 0.0 | 0.6176 | 0.0 | 0.0 | 0.8978 | 0.0201 | 0.5154 | 0.3644 | 0.0 | nan | 0.0 | 0.3458 | 0.3278 | 0.0 | 0.9375 | 0.8824 | 0.9608 | 0.0 | 0.0238 | 0.2276 | 0.0 | 0.0 | 0.7867 | 0.8779 | 0.3945 | 0.6375 | 0.4626 | nan | 0.3969 | 0.4168 | 0.0376 | 0.7872 | 0.0226 | 0.0 | nan | 0.0 | 0.4612 | 0.0 | 0.0 | 0.6770 | 0.0192 | 0.4005 | 0.3394 | 0.0 | nan | 0.0 | 0.2703 | 0.2821 | 0.0 | 0.8552 | 0.7527 | 0.9104 | 0.0 | 0.0219 | 0.1902 | 0.0 | 20 |
| 0.3120 | 0.5155 | 0.3151 | 0.3760 | 0.8626 | 0.0 | 0.8875 | 0.9659 | 0.4640 | 0.7236 | 0.6301 | nan | 0.4876 | 0.5926 | 0.0416 | 0.9255 | 0.0116 | 0.0 | nan | 0.0 | 0.6732 | 0.0 | 0.0 | 0.9028 | 0.0270 | 0.5107 | 0.4032 | 0.0 | nan | 0.0 | 0.3817 | 0.3335 | 0.0 | 0.9350 | 0.8668 | 0.9556 | 0.0012 | 0.0434 | 0.2684 | 0.0 | 0.0 | 0.7974 | 0.8797 | 0.3892 | 0.6598 | 0.4580 | nan | 0.4000 | 0.3794 | 0.0414 | 0.7968 | 0.0116 | 0.0 | nan | 0.0 | 0.4536 | 0.0 | 0.0 | 0.6872 | 0.0259 | 0.4002 | 0.3626 | 0.0 | nan | 0.0 | 0.2821 | 0.2841 | 0.0 | 0.8603 | 0.7609 | 0.9128 | 0.0011 | 0.0331 | 0.2076 | 0.0 | 21 |
| 0.3017 | 0.5201 | 0.3199 | 0.3827 | 0.8670 | 0.0 | 0.9120 | 0.9521 | 0.5413 | 0.8270 | 0.6610 | nan | 0.4946 | 0.5627 | 0.0423 | 0.9207 | 0.0336 | 0.0 | nan | 0.0 | 0.6437 | 0.0 | 0.0 | 0.9003 | 0.0247 | 0.5331 | 0.4028 | 0.0 | nan | 0.0 | 0.3737 | 0.3892 | 0.0 | 0.9171 | 0.9109 | 0.9616 | 0.0146 | 0.0184 | 0.2092 | 0.0 | 0.0 | 0.7717 | 0.9034 | 0.3938 | 0.7287 | 0.4774 | nan | 0.4103 | 0.4008 | 0.0419 | 0.8044 | 0.0336 | 0.0 | nan | 0.0 | 0.4686 | 0.0 | 0.0 | 0.6813 | 0.0230 | 0.3958 | 0.3564 | 0.0 | nan | 0.0 | 0.2823 | 0.3297 | 0.0 | 0.8547 | 0.7570 | 0.9126 | 0.0121 | 0.0173 | 0.1786 | 0.0 | 22 |
| 0.3016 | 0.5527 | 0.3173 | 0.3811 | 0.8609 | 0.0 | 0.8912 | 0.9593 | 0.4638 | 0.6868 | 0.7110 | nan | 0.4653 | 0.6380 | 0.0447 | 0.9197 | 0.0362 | 0.0 | nan | 0.0 | 0.6737 | 0.0 | 0.0 | 0.8998 | 0.0303 | 0.5386 | 0.4095 | 0.0 | nan | 0.0 | 0.3663 | 0.3935 | 0.0 | 0.9284 | 0.8910 | 0.9618 | 0.0000 | 0.0297 | 0.2570 | 0.0 | 0.0 | 0.7536 | 0.8893 | 0.3672 | 0.6440 | 0.4740 | nan | 0.3968 | 0.4275 | 0.0444 | 0.7999 | 0.0361 | 0.0 | nan | 0.0 | 0.4593 | 0.0 | 0.0 | 0.6930 | 0.0291 | 0.3877 | 0.3651 | 0.0 | nan | 0.0 | 0.2839 | 0.3360 | 0.0 | 0.8568 | 0.7558 | 0.9154 | 0.0000 | 0.0277 | 0.2098 | 0.0 | 23 |
| 0.2903 | 0.5091 | 0.3204 | 0.3803 | 0.8666 | 0.0 | 0.9124 | 0.9512 | 0.5347 | 0.8343 | 0.6212 | nan | 0.5144 | 0.5947 | 0.0513 | 0.9161 | 0.0387 | 0.0 | nan | 0.0 | 0.6167 | 0.0 | 0.0 | 0.9043 | 0.0255 | 0.4947 | 0.3443 | 0.0 | nan | 0.0 | 0.3727 | 0.3979 | 0.0 | 0.9229 | 0.9170 | 0.9584 | 0.0134 | 0.0171 | 0.2150 | 0.0 | 0.0 | 0.8085 | 0.8943 | 0.4187 | 0.7402 | 0.4403 | nan | 0.4257 | 0.4229 | 0.0506 | 0.8021 | 0.0384 | 0.0 | nan | 0.0 | 0.4653 | 0.0 | 0.0 | 0.6774 | 0.0241 | 0.3744 | 0.3237 | 0.0 | nan | 0.0 | 0.2837 | 0.3375 | 0.0 | 0.8534 | 0.7470 | 0.9170 | 0.0119 | 0.0165 | 0.1784 | 0.0 | 24 |
| 0.2869 | 0.5304 | 0.3208 | 0.3853 | 0.8631 | 0.0 | 0.9204 | 0.9461 | 0.5684 | 0.7963 | 0.5569 | nan | 0.5264 | 0.6297 | 0.0769 | 0.9035 | 0.0469 | 0.0 | nan | 0.0 | 0.6724 | 0.0 | 0.0 | 0.9134 | 0.0107 | 0.4920 | 0.3848 | 0.0 | nan | 0.0 | 0.3858 | 0.4485 | 0.0 | 0.9322 | 0.8984 | 0.9576 | 0.0506 | 0.0301 | 0.1830 | 0.0 | 0.0 | 0.7513 | 0.9008 | 0.4016 | 0.7108 | 0.4265 | nan | 0.4086 | 0.4314 | 0.0752 | 0.8133 | 0.0455 | 0.0 | nan | 0.0 | 0.4575 | 0.0 | 0.0 | 0.6703 | 0.0105 | 0.4124 | 0.3496 | 0.0 | nan | 0.0 | 0.2943 | 0.3631 | 0.0 | 0.8537 | 0.7527 | 0.9175 | 0.0397 | 0.0272 | 0.1531 | 0.0 | 25 |
| 0.2870 | 0.6210 | 0.3239 | 0.3867 | 0.8601 | 0.0 | 0.9226 | 0.9662 | 0.5159 | 0.6950 | 0.5032 | nan | 0.4978 | 0.5125 | 0.0764 | 0.9003 | 0.0567 | 0.0 | nan | 0.0 | 0.6650 | 0.0 | 0.0 | 0.9188 | 0.0165 | 0.4980 | 0.4263 | 0.0 | nan | 0.0 | 0.3906 | 0.4679 | 0.0 | 0.9084 | 0.9043 | 0.9576 | 0.3333 | 0.0473 | 0.1939 | 0.0 | 0.0 | 0.7746 | 0.8841 | 0.4054 | 0.6361 | 0.4274 | nan | 0.4203 | 0.3645 | 0.0747 | 0.7990 | 0.0559 | 0.0 | nan | 0.0 | 0.4630 | 0.0 | 0.0 | 0.6670 | 0.0159 | 0.4031 | 0.3693 | 0.0 | nan | 0.0 | 0.2942 | 0.3689 | 0.0 | 0.8547 | 0.7547 | 0.9169 | 0.2169 | 0.0353 | 0.1634 | 0.0 | 26 |
| 0.2759 | 0.5174 | 0.3291 | 0.3943 | 0.8685 | 0.0 | 0.9123 | 0.9615 | 0.3826 | 0.8113 | 0.5876 | nan | 0.5130 | 0.6677 | 0.0862 | 0.9153 | 0.0635 | 0.0 | nan | 0.0 | 0.6666 | 0.0 | 0.0 | 0.8736 | 0.0716 | 0.5387 | 0.4437 | 0.0 | nan | 0.0 | 0.4190 | 0.4781 | 0.0 | 0.9194 | 0.9174 | 0.9674 | 0.1048 | 0.0253 | 0.2898 | 0.0 | 0.0 | 0.8047 | 0.8922 | 0.3331 | 0.7338 | 0.4635 | nan | 0.4187 | 0.4413 | 0.0833 | 0.8020 | 0.0626 | 0.0 | nan | 0.0 | 0.4483 | 0.0 | 0.0 | 0.6970 | 0.0629 | 0.4012 | 0.3818 | 0.0 | nan | 0.0 | 0.2971 | 0.3643 | 0.0 | 0.8551 | 0.7489 | 0.9153 | 0.0753 | 0.0232 | 0.2256 | 0.0 | 27 |
| 0.2791 | 0.5138 | 0.3264 | 0.3888 | 0.8699 | 0.0 | 0.9082 | 0.9556 | 0.2156 | 0.8500 | 0.6538 | nan | 0.4691 | 0.6238 | 0.1046 | 0.9234 | 0.0362 | 0.0 | nan | 0.0 | 0.6441 | 0.0 | 0.0 | 0.9040 | 0.0592 | 0.5193 | 0.4442 | 0.0 | nan | 0.0 | 0.3749 | 0.4394 | 0.0 | 0.9309 | 0.8876 | 0.9605 | 0.2636 | 0.0214 | 0.2527 | 0.0 | 0.0 | 0.8137 | 0.8936 | 0.1818 | 0.7657 | 0.4724 | nan | 0.4108 | 0.4396 | 0.1005 | 0.8096 | 0.0355 | 0.0 | nan | 0.0 | 0.4640 | 0.0 | 0.0 | 0.6934 | 0.0564 | 0.3842 | 0.3718 | 0.0 | nan | 0.0 | 0.2963 | 0.3541 | 0.0 | 0.8566 | 0.7598 | 0.9180 | 0.1539 | 0.0190 | 0.1941 | 0.0 | 28 |
| 0.2728 | 0.5249 | 0.3275 | 0.3886 | 0.8672 | 0.0 | 0.9286 | 0.9447 | 0.4269 | 0.7825 | 0.6896 | nan | 0.4984 | 0.5883 | 0.1082 | 0.9204 | 0.0530 | 0.0 | nan | 0.0 | 0.6501 | 0.0 | 0.0 | 0.8936 | 0.0489 | 0.5277 | 0.4051 | 0.0 | nan | 0.0 | 0.3965 | 0.5070 | 0.0 | 0.9425 | 0.8916 | 0.9645 | 0.0 | 0.0170 | 0.2491 | 0.0 | 0.0 | 0.7636 | 0.8983 | 0.3799 | 0.7214 | 0.4693 | nan | 0.4124 | 0.4454 | 0.1 | 0.8081 | 0.0510 | 0.0 | nan | 0.0 | 0.4717 | 0.0 | 0.0 | 0.6914 | 0.0449 | 0.3906 | 0.3600 | 0.0 | nan | 0.0 | 0.2989 | 0.3980 | 0.0 | 0.8612 | 0.7709 | 0.9191 | 0.0 | 0.0160 | 0.2067 | 0.0 | 29 |
| 0.2556 | 0.5577 | 0.3283 | 0.3899 | 0.8668 | 0.0 | 0.8620 | 0.9599 | 0.4681 | 0.8029 | 0.6812 | nan | 0.5151 | 0.6041 | 0.1296 | 0.9314 | 0.0517 | 0.0 | nan | 0.0 | 0.6206 | 0.0 | 0.0 | 0.9120 | 0.0537 | 0.5219 | 0.3835 | 0.0 | nan | 0.0 | 0.3950 | 0.4880 | 0.0 | 0.9270 | 0.9054 | 0.9584 | 0.0697 | 0.0167 | 0.2185 | 0.0 | 0.0 | 0.7991 | 0.8851 | 0.3274 | 0.7335 | 0.4697 | nan | 0.4272 | 0.4588 | 0.1172 | 0.8067 | 0.0492 | 0.0 | nan | 0.0 | 0.4710 | 0.0 | 0.0 | 0.6861 | 0.0498 | 0.4047 | 0.3498 | 0.0 | nan | 0.0 | 0.3004 | 0.3865 | 0.0 | 0.8604 | 0.7579 | 0.9202 | 0.0519 | 0.0158 | 0.1782 | 0.0 | 30 |
| 0.2480 | 0.4931 | 0.3356 | 0.3974 | 0.8742 | 0.0 | 0.9152 | 0.9569 | 0.4642 | 0.8997 | 0.7098 | nan | 0.5025 | 0.6441 | 0.1322 | 0.9129 | 0.0527 | 0.0 | nan | 0.0 | 0.6643 | 0.0 | 0.0 | 0.9249 | 0.0377 | 0.4957 | 0.3950 | 0.0 | nan | 0.0 | 0.4079 | 0.4869 | 0.0 | 0.9195 | 0.8827 | 0.9639 | 0.1584 | 0.0280 | 0.1628 | 0.0 | 0.0 | 0.8160 | 0.9034 | 0.3920 | 0.8109 | 0.5004 | nan | 0.4275 | 0.4552 | 0.1180 | 0.8086 | 0.0495 | 0.0 | nan | 0.0 | 0.4663 | 0.0 | 0.0 | 0.6714 | 0.0363 | 0.4029 | 0.3486 | 0.0 | nan | 0.0 | 0.3016 | 0.3987 | 0.0 | 0.8584 | 0.7624 | 0.9200 | 0.1279 | 0.0253 | 0.1375 | 0.0 | 31 |
| 0.2467 | 0.5091 | 0.3348 | 0.3964 | 0.8752 | 0.0 | 0.8971 | 0.9568 | 0.5112 | 0.8863 | 0.7226 | nan | 0.5276 | 0.6090 | 0.1273 | 0.9138 | 0.0480 | 0.0 | nan | 0.0 | 0.6762 | 0.0 | 0.0 | 0.9223 | 0.0349 | 0.5263 | 0.3831 | 0.0 | nan | 0.0 | 0.3999 | 0.4924 | 0.0 | 0.9358 | 0.8774 | 0.9652 | 0.0667 | 0.0259 | 0.1795 | 0.0 | 0.0 | 0.8313 | 0.9006 | 0.3902 | 0.8000 | 0.5045 | nan | 0.4362 | 0.4580 | 0.1173 | 0.8150 | 0.0460 | 0.0 | nan | 0.0 | 0.4735 | 0.0 | 0.0 | 0.6802 | 0.0342 | 0.4123 | 0.3509 | 0.0 | nan | 0.0 | 0.3049 | 0.3913 | 0.0 | 0.8605 | 0.7658 | 0.9220 | 0.0444 | 0.0242 | 0.1502 | 0.0 | 32 |
| 0.2430 | 0.5393 | 0.3325 | 0.3917 | 0.8704 | 0.0 | 0.8953 | 0.9632 | 0.5036 | 0.8081 | 0.5941 | nan | 0.5130 | 0.6436 | 0.1049 | 0.9329 | 0.0502 | 0.0 | nan | 0.0 | 0.6347 | 0.0 | 0.0 | 0.8882 | 0.0652 | 0.5115 | 0.4234 | 0.0 | nan | 0.0 | 0.3974 | 0.4341 | 0.0 | 0.9334 | 0.9137 | 0.9621 | 0.0518 | 0.0264 | 0.2825 | 0.0 | 0.0 | 0.8197 | 0.8887 | 0.4044 | 0.7367 | 0.4565 | nan | 0.4257 | 0.4769 | 0.0954 | 0.8060 | 0.0493 | 0.0 | nan | 0.0 | 0.4708 | 0.0 | 0.0 | 0.6965 | 0.0594 | 0.3946 | 0.3766 | 0.0 | nan | 0.0 | 0.2996 | 0.3611 | 0.0 | 0.8589 | 0.7588 | 0.9239 | 0.0379 | 0.0233 | 0.2182 | 0.0 | 33 |
| 0.2362 | 0.5184 | 0.3370 | 0.3951 | 0.8716 | 0.0 | 0.9045 | 0.9681 | 0.4519 | 0.8066 | 0.5724 | nan | 0.5364 | 0.6084 | 0.1367 | 0.9124 | 0.0536 | 0.0 | nan | 0.0 | 0.6666 | 0.0 | 0.0 | 0.8956 | 0.0637 | 0.5331 | 0.4525 | 0.0 | nan | 0.0 | 0.3985 | 0.5169 | 0.0 | 0.9250 | 0.9175 | 0.9598 | 0.0945 | 0.0224 | 0.2478 | 0.0 | 0.0 | 0.7909 | 0.8971 | 0.4231 | 0.7306 | 0.4766 | nan | 0.4453 | 0.4660 | 0.1233 | 0.8117 | 0.0523 | 0.0 | nan | 0.0 | 0.4698 | 0.0 | 0.0 | 0.6947 | 0.0584 | 0.4099 | 0.3913 | 0.0 | nan | 0.0 | 0.3072 | 0.4038 | 0.0 | 0.8585 | 0.7556 | 0.9243 | 0.0807 | 0.0205 | 0.1925 | 0.0 | 34 |
| 0.2358 | 0.5466 | 0.3343 | 0.3997 | 0.8678 | 0.0 | 0.8973 | 0.9654 | 0.5114 | 0.7614 | 0.6166 | nan | 0.5178 | 0.6873 | 0.1800 | 0.9305 | 0.0724 | 0.0 | nan | 0.0 | 0.6812 | 0.0 | 0.0 | 0.8828 | 0.0516 | 0.5776 | 0.4319 | 0.0 | nan | 0.0 | 0.3922 | 0.4584 | 0.0 | 0.9252 | 0.8982 | 0.9661 | 0.1076 | 0.0260 | 0.2528 | 0.0 | 0.0 | 0.7946 | 0.8884 | 0.4043 | 0.6950 | 0.4749 | nan | 0.4265 | 0.4765 | 0.1554 | 0.8143 | 0.0699 | 0.0 | nan | 0.0 | 0.4759 | 0.0 | 0.0 | 0.6963 | 0.0467 | 0.3978 | 0.3722 | 0.0 | nan | 0.0 | 0.3100 | 0.3513 | 0.0 | 0.8631 | 0.7633 | 0.9233 | 0.0772 | 0.0217 | 0.1994 | 0.0 | 35 |
| 0.2315 | 0.5484 | 0.3353 | 0.4007 | 0.8682 | 0.0 | 0.9169 | 0.9540 | 0.4890 | 0.7525 | 0.7001 | nan | 0.5193 | 0.5919 | 0.1294 | 0.9241 | 0.0515 | 0.0 | nan | 0.0 | 0.6725 | 0.0 | 0.0 | 0.9040 | 0.0487 | 0.5260 | 0.4268 | 0.0 | nan | 0.0 | 0.4055 | 0.4971 | 0.0 | 0.9350 | 0.8842 | 0.9607 | 0.3005 | 0.0279 | 0.2059 | 0.0 | 0.0 | 0.8133 | 0.8922 | 0.3714 | 0.6883 | 0.4819 | nan | 0.4364 | 0.4540 | 0.1181 | 0.8160 | 0.0504 | 0.0 | nan | 0.0 | 0.4709 | 0.0 | 0.0 | 0.6853 | 0.0429 | 0.3983 | 0.3721 | 0.0 | nan | 0.0 | 0.3094 | 0.4027 | 0.0 | 0.8633 | 0.7562 | 0.9264 | 0.1865 | 0.0244 | 0.1685 | 0.0 | 36 |
| 0.2377 | 0.5324 | 0.3407 | 0.4028 | 0.8737 | 0.0 | 0.9091 | 0.9632 | 0.4779 | 0.8515 | 0.5949 | nan | 0.5091 | 0.6103 | 0.1728 | 0.9162 | 0.0652 | 0.0 | nan | 0.0 | 0.6746 | 0.0 | 0.0 | 0.9118 | 0.0507 | 0.5309 | 0.4420 | 0.0 | nan | 0.0 | 0.4037 | 0.4908 | 0.0 | 0.9208 | 0.9137 | 0.9634 | 0.2771 | 0.0202 | 0.2192 | 0.0 | 0.0 | 0.8315 | 0.8955 | 0.3856 | 0.7536 | 0.4891 | nan | 0.4326 | 0.4828 | 0.1421 | 0.8130 | 0.0630 | 0.0 | nan | 0.0 | 0.4685 | 0.0 | 0.0 | 0.6849 | 0.0465 | 0.4297 | 0.3804 | 0.0 | nan | 0.0 | 0.3098 | 0.3899 | 0.0 | 0.8617 | 0.7556 | 0.9258 | 0.1701 | 0.0188 | 0.1713 | 0.0 | 37 |
| 0.2319 | 0.5114 | 0.3332 | 0.3942 | 0.8745 | 0.0 | 0.9208 | 0.9566 | 0.4723 | 0.8581 | 0.6419 | nan | 0.5400 | 0.6353 | 0.1534 | 0.9246 | 0.0637 | 0.0 | nan | 0.0 | 0.6620 | 0.0 | 0.0 | 0.9143 | 0.0362 | 0.4700 | 0.3914 | 0.0 | nan | 0.0 | 0.3948 | 0.5296 | 0.0 | 0.9311 | 0.9119 | 0.9656 | 0.0039 | 0.0294 | 0.2069 | 0.0 | 0.0 | 0.8163 | 0.9056 | 0.3885 | 0.7729 | 0.4815 | nan | 0.4515 | 0.4610 | 0.1360 | 0.8119 | 0.0627 | 0.0 | nan | 0.0 | 0.4623 | 0.0 | 0.0 | 0.6784 | 0.0339 | 0.3920 | 0.3473 | 0.0 | nan | 0.0 | 0.3070 | 0.4080 | 0.0 | 0.8647 | 0.7644 | 0.9245 | 0.0030 | 0.0262 | 0.1634 | 0.0 | 38 |
| 0.2241 | 0.5676 | 0.3349 | 0.3986 | 0.8699 | 0.0 | 0.9146 | 0.9628 | 0.4670 | 0.7622 | 0.6285 | nan | 0.4889 | 0.6428 | 0.1805 | 0.9249 | 0.0685 | 0.0 | nan | 0.0 | 0.6784 | 0.0 | 0.0 | 0.8917 | 0.1012 | 0.5618 | 0.4185 | 0.0 | nan | 0.0018 | 0.4246 | 0.5487 | 0.0 | 0.9293 | 0.9124 | 0.9698 | 0.0025 | 0.0220 | 0.2520 | 0.0 | 0.0 | 0.8114 | 0.8907 | 0.4002 | 0.6840 | 0.4842 | nan | 0.4160 | 0.4777 | 0.1501 | 0.8191 | 0.0649 | 0.0 | nan | 0.0 | 0.4701 | 0.0 | 0.0 | 0.7077 | 0.0873 | 0.4025 | 0.3659 | 0.0 | nan | 0.0012 | 0.3255 | 0.3939 | 0.0 | 0.8634 | 0.7621 | 0.9232 | 0.0019 | 0.0200 | 0.1947 | 0.0 | 39 |
| 0.2247 | 0.5314 | 0.3358 | 0.3991 | 0.8724 | 0.0 | 0.8966 | 0.9546 | 0.5034 | 0.8269 | 0.7241 | nan | 0.5320 | 0.6062 | 0.1736 | 0.9394 | 0.0618 | 0.0 | nan | 0.0 | 0.6274 | 0.0 | 0.0 | 0.9066 | 0.0810 | 0.5121 | 0.3887 | 0.0 | nan | 0.0 | 0.4009 | 0.5482 | 0.0 | 0.9264 | 0.9119 | 0.9692 | 0.0039 | 0.0267 | 0.2479 | 0.0 | 0.0 | 0.8026 | 0.8995 | 0.3869 | 0.7462 | 0.4877 | nan | 0.4497 | 0.4812 | 0.1402 | 0.8222 | 0.0583 | 0.0 | nan | 0.0 | 0.4663 | 0.0 | 0.0 | 0.6983 | 0.0746 | 0.4058 | 0.3539 | 0.0 | nan | 0.0 | 0.3113 | 0.3935 | 0.0 | 0.8585 | 0.7587 | 0.9242 | 0.0030 | 0.0231 | 0.2000 | 0.0 | 40 |
| 0.2199 | 0.5197 | 0.3375 | 0.4039 | 0.8734 | 0.0 | 0.9161 | 0.9549 | 0.5890 | 0.8122 | 0.6603 | nan | 0.5412 | 0.6391 | 0.2229 | 0.9251 | 0.0588 | 0.0 | nan | 0.0 | 0.6700 | 0.0 | 0.0 | 0.9168 | 0.0601 | 0.5176 | 0.3964 | 0.0 | nan | 0.0 | 0.4129 | 0.5517 | 0.0 | 0.9323 | 0.9033 | 0.9677 | 0.0014 | 0.0303 | 0.2459 | 0.0 | 0.0 | 0.8182 | 0.9002 | 0.3683 | 0.7333 | 0.4846 | nan | 0.4468 | 0.4840 | 0.1819 | 0.8270 | 0.0558 | 0.0 | nan | 0.0 | 0.4704 | 0.0 | 0.0 | 0.6912 | 0.0581 | 0.4220 | 0.3575 | 0.0 | nan | 0.0 | 0.3214 | 0.4031 | 0.0 | 0.8634 | 0.7671 | 0.9256 | 0.0010 | 0.0263 | 0.1930 | 0.0 | 41 |
| 0.2194 | 0.4928 | 0.3427 | 0.4041 | 0.8795 | 0.0 | 0.9068 | 0.9650 | 0.4578 | 0.9049 | 0.6337 | nan | 0.5321 | 0.6180 | 0.2096 | 0.9413 | 0.1196 | 0.0 | nan | 0.0 | 0.6733 | 0.0 | 0.0 | 0.9081 | 0.0807 | 0.5106 | 0.4412 | 0.0 | nan | 0.0 | 0.4201 | 0.5280 | 0.0 | 0.9298 | 0.8972 | 0.9636 | 0.0001 | 0.0292 | 0.2592 | 0.0 | 0.0 | 0.8208 | 0.9101 | 0.3823 | 0.8072 | 0.4956 | nan | 0.4523 | 0.4851 | 0.1586 | 0.8233 | 0.1146 | 0.0 | nan | 0.0 | 0.4769 | 0.0 | 0.0 | 0.6997 | 0.0757 | 0.4192 | 0.3787 | 0.0 | nan | 0.0 | 0.3206 | 0.3747 | 0.0 | 0.8626 | 0.7575 | 0.9263 | 0.0001 | 0.0251 | 0.1998 | 0.0 | 42 |
| 0.2058 | 0.5290 | 0.3408 | 0.4066 | 0.8745 | 0.0 | 0.9202 | 0.9545 | 0.5309 | 0.8185 | 0.6809 | nan | 0.5379 | 0.6631 | 0.2210 | 0.9236 | 0.0830 | 0.0 | nan | 0.0 | 0.6756 | 0.0 | 0.0 | 0.9003 | 0.0643 | 0.5398 | 0.4282 | 0.0 | nan | 0.0 | 0.4030 | 0.5014 | 0.0 | 0.9338 | 0.9110 | 0.9669 | 0.0853 | 0.0317 | 0.2364 | 0.0 | 0.0 | 0.8073 | 0.9043 | 0.3907 | 0.7392 | 0.4986 | nan | 0.4517 | 0.4827 | 0.1848 | 0.8260 | 0.0820 | 0.0 | nan | 0.0 | 0.4661 | 0.0 | 0.0 | 0.6968 | 0.0612 | 0.4189 | 0.3671 | 0.0 | nan | 0.0 | 0.3188 | 0.3823 | 0.0 | 0.8635 | 0.7621 | 0.9276 | 0.0609 | 0.0263 | 0.1877 | 0.0 | 43 |
| 0.2086 | 0.5585 | 0.3373 | 0.4003 | 0.8724 | 0.0 | 0.9171 | 0.9605 | 0.5070 | 0.7666 | 0.6387 | nan | 0.5526 | 0.6278 | 0.1947 | 0.9270 | 0.0758 | 0.0 | nan | 0.0 | 0.6633 | 0.0 | 0.0 | 0.9033 | 0.0584 | 0.5314 | 0.3971 | 0.0 | nan | 0.0 | 0.4408 | 0.5355 | 0.0 | 0.9321 | 0.9149 | 0.9689 | 0.0094 | 0.0216 | 0.2668 | 0.0 | 0.0 | 0.8046 | 0.8988 | 0.3859 | 0.7018 | 0.4837 | nan | 0.4593 | 0.4939 | 0.1485 | 0.8276 | 0.0747 | 0.0 | nan | 0.0 | 0.4694 | 0.0 | 0.0 | 0.7058 | 0.0565 | 0.4122 | 0.3619 | 0.0 | nan | 0.0 | 0.3300 | 0.3997 | 0.0 | 0.8611 | 0.7559 | 0.9262 | 0.0070 | 0.0195 | 0.2110 | 0.0 | 44 |
| 0.2016 | 0.5442 | 0.3452 | 0.4156 | 0.8718 | 0.0 | 0.8940 | 0.9587 | 0.4970 | 0.7906 | 0.6744 | nan | 0.5719 | 0.6411 | 0.1822 | 0.9436 | 0.1722 | 0.0 | nan | 0.0 | 0.6901 | 0.0 | 0.0 | 0.8519 | 0.1355 | 0.5906 | 0.4452 | 0.0 | nan | 0.0844 | 0.4539 | 0.4714 | 0.0 | 0.9380 | 0.8876 | 0.9650 | 0.0503 | 0.0246 | 0.3849 | 0.0 | 0.0 | 0.8165 | 0.8892 | 0.3722 | 0.7187 | 0.4869 | nan | 0.4709 | 0.4788 | 0.1384 | 0.8176 | 0.1673 | 0.0 | nan | 0.0 | 0.4677 | 0.0 | 0.0 | 0.7170 | 0.0963 | 0.3922 | 0.3739 | 0.0 | nan | 0.0596 | 0.3315 | 0.3576 | 0.0 | 0.8666 | 0.7659 | 0.9290 | 0.0324 | 0.0192 | 0.2801 | 0.0 | 45 |
| 0.2021 | 0.5369 | 0.3409 | 0.4049 | 0.8753 | 0.0 | 0.9249 | 0.9608 | 0.5473 | 0.7979 | 0.6534 | nan | 0.5548 | 0.6555 | 0.1623 | 0.9311 | 0.1317 | 0.0 | nan | 0.0 | 0.6821 | 0.0 | 0.0 | 0.9151 | 0.0621 | 0.5285 | 0.4378 | 0.0 | nan | 0.0029 | 0.4205 | 0.5321 | 0.0 | 0.9301 | 0.8972 | 0.9677 | 0.0025 | 0.0281 | 0.2294 | 0.0 | 0.0 | 0.8155 | 0.8981 | 0.3979 | 0.7396 | 0.4890 | nan | 0.4634 | 0.4928 | 0.1352 | 0.8283 | 0.1299 | 0.0 | nan | 0.0 | 0.4784 | 0.0 | 0.0 | 0.6949 | 0.0602 | 0.4199 | 0.3775 | 0.0 | nan | 0.0018 | 0.3280 | 0.3792 | 0.0 | 0.8698 | 0.7691 | 0.9283 | 0.0020 | 0.0237 | 0.1856 | 0.0 | 46 |
| 0.2056 | 0.5118 | 0.3421 | 0.4052 | 0.8747 | 0.0 | 0.9441 | 0.9624 | 0.5219 | 0.7370 | 0.6816 | nan | 0.5360 | 0.6510 | 0.2372 | 0.9316 | 0.0854 | 0.0 | nan | 0.0 | 0.6607 | 0.0 | 0.0 | 0.8952 | 0.0777 | 0.5614 | 0.4327 | 0.0 | nan | 0.0 | 0.4403 | 0.4597 | 0.0 | 0.9350 | 0.8918 | 0.9650 | 0.0312 | 0.0296 | 0.2984 | 0.0 | 0.0 | 0.8039 | 0.9033 | 0.4243 | 0.7185 | 0.4949 | nan | 0.4494 | 0.4876 | 0.1929 | 0.8314 | 0.0849 | 0.0 | nan | 0.0 | 0.4694 | 0.0 | 0.0 | 0.7160 | 0.0697 | 0.3959 | 0.3751 | 0.0 | nan | 0.0 | 0.3319 | 0.3668 | 0.0 | 0.8638 | 0.7612 | 0.9299 | 0.0246 | 0.0251 | 0.2269 | 0.0 | 47 |
| 0.2038 | 0.5407 | 0.3318 | 0.3995 | 0.8701 | 0.0 | 0.9396 | 0.9574 | 0.6559 | 0.7289 | 0.6166 | nan | 0.5573 | 0.6550 | 0.1861 | 0.9233 | 0.0852 | 0.0 | nan | 0.0 | 0.6396 | 0.0 | 0.0 | 0.9241 | 0.0504 | 0.4811 | 0.4240 | 0.0 | nan | 0.0001 | 0.3993 | 0.4905 | 0.0 | 0.9225 | 0.9142 | 0.9672 | 0.0155 | 0.0175 | 0.2326 | 0.0 | 0.0 | 0.8068 | 0.9021 | 0.3095 | 0.7049 | 0.4679 | nan | 0.4504 | 0.4687 | 0.1686 | 0.8264 | 0.0847 | 0.0 | nan | 0.0 | 0.4721 | 0.0 | 0.0 | 0.6820 | 0.0482 | 0.3826 | 0.3686 | 0.0 | nan | 0.0001 | 0.3217 | 0.3838 | 0.0 | 0.8636 | 0.7679 | 0.9297 | 0.0114 | 0.0156 | 0.1809 | 0.0 | 48 |
| 0.1964 | 0.5617 | 0.3482 | 0.4212 | 0.8714 | 0.0 | 0.9191 | 0.9583 | 0.5878 | 0.7730 | 0.6079 | nan | 0.5577 | 0.6959 | 0.2221 | 0.9300 | 0.1310 | 0.0 | nan | 0.0 | 0.6927 | 0.0 | 0.0 | 0.9088 | 0.0611 | 0.4825 | 0.4370 | 0.0 | nan | 0.0 | 0.4339 | 0.4711 | 0.0 | 0.9172 | 0.9151 | 0.9692 | 0.5121 | 0.0265 | 0.2676 | 0.0 | 0.0 | 0.7902 | 0.9044 | 0.4201 | 0.6963 | 0.4879 | nan | 0.4607 | 0.4926 | 0.1864 | 0.8243 | 0.1278 | 0.0 | nan | 0.0 | 0.4759 | 0.0 | 0.0 | 0.6973 | 0.0588 | 0.3754 | 0.3760 | 0.0 | nan | 0.0 | 0.3330 | 0.3555 | 0.0 | 0.8600 | 0.7533 | 0.9288 | 0.3089 | 0.0223 | 0.2055 | 0.0 | 49 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
stoves/Ermurachi_Vadim | stoves | 2023-12-03T19:11:04Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-11-10T11:47:08Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of gjdfophge person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
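The card ships without a usage example; assuming the repository contains diffusers-format SDXL LoRA weights (the usual AutoTrain DreamBooth output), inference could be sketched as follows, with the prompt taken from the `instance_prompt` above:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the DreamBooth LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("stoves/Ermurachi_Vadim")  # assumes LoRA weights are present in the repo

image = pipe("photo of gjdfophge person", num_inference_steps=25).images[0]
image.save("sample.png")
```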
|
ThuyNT03/KLTN_COQE_viT5_SOPAL_v2 | ThuyNT03 | 2023-12-03T19:08:01Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T18:22:18Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_SOPAL_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_SOPAL_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow17 | FounderOfHuggingface | 2023-12-03T19:07:14Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T19:07:10Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
AndresBriC/ppo-LunarLander-v2 | AndresBriC | 2023-12-03T19:05:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T19:05:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.34 +/- 22.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
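Until the author fills in the snippet above, a minimal, unofficial sketch with `huggingface_sb3` might look like this; the checkpoint filename is an assumption based on the usual push-to-hub naming, so check the repository's file list:

```python
import gymnasium as gym  # use `import gym` for stable-baselines3 < 2.0
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# The .zip filename is assumed; check the repo's file listing for the actual name
checkpoint = load_from_hub(repo_id="AndresBriC/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```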
|
sade-adrien/Mistral-7B-Instruct-v0.1-LC16k-PI | sade-adrien | 2023-12-03T19:00:50Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-01T18:54:32Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral-7B-Instruct-v0.1-LC16k-PI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.1-LC16k-PI
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- training_steps: 1000
- mixed_precision_training: Native AMP
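For reference, these settings correspond roughly to the following `TrainingArguments` sketch (output directory is a placeholder; unlisted options keep their defaults):

```python
from transformers import TrainingArguments

# Sketch of the run configuration above; "output_dir" is a placeholder
training_args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.1-LC16k-PI",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=32,  # effective batch size: 1 x 32 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=30,
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed precision
)
```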
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6742 | 0.12 | 100 | 1.6880 |
| 1.6683 | 0.24 | 200 | 1.6711 |
| 1.7301 | 0.36 | 300 | 1.6636 |
| 1.6867 | 0.47 | 400 | 1.6588 |
| 1.4718 | 0.59 | 500 | 1.6557 |
| 1.6843 | 0.71 | 600 | 1.6519 |
| 1.5966 | 0.83 | 700 | 1.6492 |
| 1.9016 | 0.95 | 800 | 1.6472 |
| 1.7488 | 1.07 | 900 | 1.6461 |
| 1.5596 | 1.19 | 1000 | 1.6455 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow16 | FounderOfHuggingface | 2023-12-03T18:55:36Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T18:55:33Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
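A minimal loading sketch for this LoRA adapter, assuming the standard PEFT pattern (not confirmed elsewhere in this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gpt2 base model, then attach this adapter on top of it
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow16"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```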
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Liogl/RL-Course | Liogl | 2023-12-03T18:48:43Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T18:48:06Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-MLP
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.46 +/- 32.22
name: mean_reward
verified: false
---
# **PPO-MLP** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
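One possible way to complete the TODO above — a minimal sketch that assumes the checkpoint file is named `ppo-LunarLander-v2.zip` (the actual filename is not stated in this card; check the repository files):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="Liogl/RL-Course", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```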
|
JFJFJFen/JFJFJFen | JFJFJFen | 2023-12-03T18:43:15Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
] | null | 2023-12-03T18:42:34Z | ---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
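A minimal loading sketch for this adapter, assuming the standard PEFT pattern (not confirmed elsewhere in this card):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the flan-t5-large base model, then attach this adapter
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "JFJFJFen/JFJFJFen")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
```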
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.6.3.dev0 |
Roxysun/cs2fi-wav2vec2-large-xls-r-300m-cs-colab-phoneme | Roxysun | 2023-12-03T18:43:12Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:facebook/wav2vec2-lv-60-espeak-cv-ft",
"base_model:finetune:facebook/wav2vec2-lv-60-espeak-cv-ft",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-03T18:23:27Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-lv-60-espeak-cv-ft
tags:
- generated_from_trainer
datasets:
- voxpopuli
metrics:
- wer
model-index:
- name: cs2fi-wav2vec2-large-xls-r-300m-cs-colab-phoneme
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: voxpopuli
type: voxpopuli
config: fi
split: test
args: fi
metrics:
- name: Wer
type: wer
value: 0.5253440751930178
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cs2fi-wav2vec2-large-xls-r-300m-cs-colab-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2508
- Wer: 0.5253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8061 | 16.67 | 100 | 2.7437 | 0.5693 |
| 2.871 | 33.33 | 200 | 2.4209 | 0.5314 |
| 2.525 | 50.0 | 300 | 2.2508 | 0.5253 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
syed789/zephyr-7b-beta-fhir-ft10 | syed789 | 2023-12-03T18:38:54Z | 2 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-12-03T18:38:53Z | ---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
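A minimal loading sketch using the standard PEFT pattern (not confirmed elsewhere in this card); the base model can also be loaded in 4-bit with the `bitsandbytes` settings listed under "Training procedure" below:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the zephyr-7b-beta base model (device_map="auto" requires accelerate), then attach the adapter
base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", device_map="auto")
model = PeftModel.from_pretrained(base, "syed789/zephyr-7b-beta-fhir-ft10")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```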
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
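For reference, a sketch of how the listed values map onto `transformers.BitsAndBytesConfig` if you want to reload the base model the same way (this reconstruction is an assumption, not code provided by the author):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta",
    quantization_config=bnb_config,
    device_map="auto",
)
```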
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow13 | FounderOfHuggingface | 2023-12-03T18:20:46Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T18:20:43Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
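A minimal loading sketch for this LoRA adapter, assuming the standard PEFT pattern (not confirmed elsewhere in this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gpt2 base model, then attach this adapter on top of it
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow13"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```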
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
pijarcandra22/t5Indo2Sunda | pijarcandra22 | 2023-12-03T18:11:46Z | 11 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T16:42:27Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: pijarcandra22/t5Indo2Sunda
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pijarcandra22/t5Indo2Sunda
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1941
- Validation Loss: 2.1230
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.3724 | 3.9124 | 0 |
| 3.9887 | 3.6348 | 1 |
| 3.7534 | 3.4215 | 2 |
| 3.5819 | 3.2847 | 3 |
| 3.4632 | 3.1902 | 4 |
| 3.3751 | 3.1139 | 5 |
| 3.3039 | 3.0493 | 6 |
| 3.2447 | 2.9955 | 7 |
| 3.1911 | 2.9481 | 8 |
| 3.1455 | 2.9082 | 9 |
| 3.1068 | 2.8718 | 10 |
| 3.0697 | 2.8387 | 11 |
| 3.0381 | 2.8105 | 12 |
| 3.0050 | 2.7825 | 13 |
| 2.9796 | 2.7568 | 14 |
| 2.9510 | 2.7350 | 15 |
| 2.9259 | 2.7096 | 16 |
| 2.9053 | 2.6881 | 17 |
| 2.8833 | 2.6696 | 18 |
| 2.8599 | 2.6510 | 19 |
| 2.8403 | 2.6328 | 20 |
| 2.8207 | 2.6171 | 21 |
| 2.8046 | 2.5999 | 22 |
| 2.7861 | 2.5857 | 23 |
| 2.7715 | 2.5699 | 24 |
| 2.7557 | 2.5542 | 25 |
| 2.7387 | 2.5420 | 26 |
| 2.7225 | 2.5299 | 27 |
| 2.7085 | 2.5182 | 28 |
| 2.6950 | 2.5081 | 29 |
| 2.6818 | 2.4951 | 30 |
| 2.6687 | 2.4864 | 31 |
| 2.6578 | 2.4760 | 32 |
| 2.6461 | 2.4651 | 33 |
| 2.6334 | 2.4559 | 34 |
| 2.6213 | 2.4477 | 35 |
| 2.6096 | 2.4373 | 36 |
| 2.5993 | 2.4297 | 37 |
| 2.5906 | 2.4208 | 38 |
| 2.5778 | 2.4100 | 39 |
| 2.5703 | 2.4025 | 40 |
| 2.5594 | 2.3962 | 41 |
| 2.5521 | 2.3901 | 42 |
| 2.5414 | 2.3808 | 43 |
| 2.5318 | 2.3726 | 44 |
| 2.5235 | 2.3684 | 45 |
| 2.5165 | 2.3592 | 46 |
| 2.5060 | 2.3507 | 47 |
| 2.4972 | 2.3466 | 48 |
| 2.4892 | 2.3388 | 49 |
| 2.4807 | 2.3325 | 50 |
| 2.4732 | 2.3281 | 51 |
| 2.4654 | 2.3210 | 52 |
| 2.4592 | 2.3138 | 53 |
| 2.4525 | 2.3100 | 54 |
| 2.4439 | 2.3046 | 55 |
| 2.4349 | 2.2980 | 56 |
| 2.4283 | 2.2926 | 57 |
| 2.4222 | 2.2884 | 58 |
| 2.4139 | 2.2824 | 59 |
| 2.4071 | 2.2759 | 60 |
| 2.4008 | 2.2705 | 61 |
| 2.3941 | 2.2664 | 62 |
| 2.3882 | 2.2588 | 63 |
| 2.3813 | 2.2566 | 64 |
| 2.3759 | 2.2498 | 65 |
| 2.3674 | 2.2461 | 66 |
| 2.3618 | 2.2425 | 67 |
| 2.3534 | 2.2377 | 68 |
| 2.3522 | 2.2314 | 69 |
| 2.3398 | 2.2269 | 70 |
| 2.3391 | 2.2241 | 71 |
| 2.3303 | 2.2184 | 72 |
| 2.3275 | 2.2137 | 73 |
| 2.3190 | 2.2100 | 74 |
| 2.3159 | 2.2048 | 75 |
| 2.3078 | 2.2011 | 76 |
| 2.3048 | 2.1971 | 77 |
| 2.3005 | 2.1936 | 78 |
| 2.2938 | 2.1899 | 79 |
| 2.2892 | 2.1859 | 80 |
| 2.2824 | 2.1819 | 81 |
| 2.2758 | 2.1787 | 82 |
| 2.2739 | 2.1757 | 83 |
| 2.2689 | 2.1716 | 84 |
| 2.2623 | 2.1664 | 85 |
| 2.2574 | 2.1657 | 86 |
| 2.2537 | 2.1618 | 87 |
| 2.2483 | 2.1563 | 88 |
| 2.2407 | 2.1554 | 89 |
| 2.2387 | 2.1510 | 90 |
| 2.2351 | 2.1469 | 91 |
| 2.2286 | 2.1436 | 92 |
| 2.2226 | 2.1413 | 93 |
| 2.2171 | 2.1395 | 94 |
| 2.2159 | 2.1342 | 95 |
| 2.2109 | 2.1314 | 96 |
| 2.2041 | 2.1284 | 97 |
| 2.1999 | 2.1260 | 98 |
| 2.1941 | 2.1230 | 99 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mtolgakbaba/mT5-general-purpose | mtolgakbaba | 2023-12-03T18:10:14Z | 2 | 0 | peft | [
"peft",
"safetensors",
"mt5",
"arxiv:1910.09700",
"base_model:google/mt5-base",
"base_model:adapter:google/mt5-base",
"region:us"
] | null | 2023-12-02T07:09:58Z | ---
library_name: peft
base_model: google/mt5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
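A minimal loading sketch for this adapter, assuming the standard PEFT pattern (not confirmed elsewhere in this card):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the mt5-base model, then attach this adapter
base = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")
model = PeftModel.from_pretrained(base, "mtolgakbaba/mT5-general-purpose")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
```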
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.6.3.dev0 |
viktoriaohorilko/ppo-LunarLander-v2 | viktoriaohorilko | 2023-12-03T18:07:26Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T17:12:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.43 +/- 43.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
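One possible way to complete the TODO above — a sketch that loads the agent and rolls it out in the environment; the checkpoint filename is an assumption (check the repository files):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download and load the checkpoint (filename is an assumption)
checkpoint = load_from_hub(repo_id="viktoriaohorilko/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out the policy
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```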
|
li-ping/river_retriver_416data_testing_v2 | li-ping | 2023-12-03T18:04:42Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-12-03T18:04:33Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# li-ping/river_retriver_416data_testing_v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('li-ping/river_retriver_416data_testing_v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=li-ping/river_retriver_416data_testing_v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 791 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 400,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 791,
"weight_decay": 0.01
}
```
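As a rough illustration, these parameters correspond to a `fit()` call along the following lines (a sketch only: `model` is a `SentenceTransformer` instance, and `train_dataloader` / `dev_evaluator` are assumed to be built from the author's data):

```python
from sentence_transformers import losses

# MultipleNegativesRankingLoss with the scale / similarity settings listed above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=dev_evaluator,  # TripletEvaluator per the parameters above
    epochs=10,
    evaluation_steps=400,
    warmup_steps=791,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```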
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
digiplay/LemonTea2.5D | digiplay | 2023-12-03T18:03:32Z | 358 | 6 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-30T12:38:45Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# EulerDiscreteScheduler Version
Model info:
https://civitai.com/models/70692/lemontea-mix-painterly-25d
This is the same model as "digiplay/LemonteaMixPainterly2_v1",
but with the default scheduler type configured as Euler (EulerDiscreteScheduler).
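A minimal usage sketch with 🤗 diffusers; the prompt is just an example, and the explicit scheduler swap is shown only to make the Euler default visible:

```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("digiplay/LemonTea2.5D", torch_dtype=torch.float16)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("1girl, lemon tea, cafe, soft lighting, painterly 2.5d style").images[0]
image.save("sample.png")
```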
|
hkivancoral/smids_1x_beit_base_rms_0001_fold2 | hkivancoral | 2023-12-03T18:03:05Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-02T09:44:03Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_beit_base_rms_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.740432612312812
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_beit_base_rms_0001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9358
- Accuracy: 0.7404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0437 | 1.0 | 75 | 0.9679 | 0.5042 |
| 0.9234 | 2.0 | 150 | 0.8669 | 0.5208 |
| 1.0795 | 3.0 | 225 | 0.7926 | 0.5874 |
| 0.9543 | 4.0 | 300 | 0.8244 | 0.5507 |
| 0.8239 | 5.0 | 375 | 0.7959 | 0.5857 |
| 0.7924 | 6.0 | 450 | 0.7928 | 0.5890 |
| 0.8468 | 7.0 | 525 | 0.7806 | 0.6256 |
| 0.8608 | 8.0 | 600 | 0.9027 | 0.5408 |
| 0.7878 | 9.0 | 675 | 0.7544 | 0.6373 |
| 0.9079 | 10.0 | 750 | 0.7732 | 0.6190 |
| 0.7705 | 11.0 | 825 | 0.7349 | 0.6290 |
| 0.7586 | 12.0 | 900 | 0.7322 | 0.6306 |
| 0.7794 | 13.0 | 975 | 0.7224 | 0.6323 |
| 0.7123 | 14.0 | 1050 | 0.7252 | 0.6572 |
| 0.744 | 15.0 | 1125 | 0.7450 | 0.5990 |
| 0.7086 | 16.0 | 1200 | 0.6962 | 0.6639 |
| 0.7295 | 17.0 | 1275 | 0.7508 | 0.6489 |
| 0.7289 | 18.0 | 1350 | 0.6978 | 0.6722 |
| 0.6947 | 19.0 | 1425 | 0.7112 | 0.6739 |
| 0.6923 | 20.0 | 1500 | 0.7131 | 0.6805 |
| 0.7545 | 21.0 | 1575 | 0.7480 | 0.6223 |
| 0.68 | 22.0 | 1650 | 0.6683 | 0.6839 |
| 0.7107 | 23.0 | 1725 | 0.6889 | 0.6772 |
| 0.6933 | 24.0 | 1800 | 0.6566 | 0.6822 |
| 0.6429 | 25.0 | 1875 | 0.6381 | 0.7005 |
| 0.6742 | 26.0 | 1950 | 0.6536 | 0.6822 |
| 0.6753 | 27.0 | 2025 | 0.6462 | 0.6889 |
| 0.6228 | 28.0 | 2100 | 0.6368 | 0.7022 |
| 0.6193 | 29.0 | 2175 | 0.6115 | 0.7171 |
| 0.5568 | 30.0 | 2250 | 0.6625 | 0.7188 |
| 0.584 | 31.0 | 2325 | 0.6680 | 0.6922 |
| 0.581 | 32.0 | 2400 | 0.5723 | 0.7654 |
| 0.5698 | 33.0 | 2475 | 0.6173 | 0.7205 |
| 0.5032 | 34.0 | 2550 | 0.6176 | 0.7338 |
| 0.5019 | 35.0 | 2625 | 0.6137 | 0.7438 |
| 0.4921 | 36.0 | 2700 | 0.5855 | 0.7571 |
| 0.453 | 37.0 | 2775 | 0.6724 | 0.7271 |
| 0.4913 | 38.0 | 2850 | 0.6043 | 0.7720 |
| 0.3871 | 39.0 | 2925 | 0.6124 | 0.7704 |
| 0.4014 | 40.0 | 3000 | 0.6591 | 0.7521 |
| 0.4698 | 41.0 | 3075 | 0.6575 | 0.7604 |
| 0.375 | 42.0 | 3150 | 0.6735 | 0.7471 |
| 0.317 | 43.0 | 3225 | 0.7867 | 0.7504 |
| 0.2968 | 44.0 | 3300 | 0.7423 | 0.7521 |
| 0.2919 | 45.0 | 3375 | 0.8253 | 0.7504 |
| 0.2598 | 46.0 | 3450 | 0.8629 | 0.7421 |
| 0.1951 | 47.0 | 3525 | 0.8586 | 0.7704 |
| 0.1905 | 48.0 | 3600 | 0.9010 | 0.7438 |
| 0.1278 | 49.0 | 3675 | 0.9354 | 0.7454 |
| 0.2294 | 50.0 | 3750 | 0.9358 | 0.7404 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ahams02/q-FrozenLake-v1-4x4-noSlippery | ahams02 | 2023-12-03T18:01:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T18:01:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # older course notebooks used `import gym`

# load_from_hub is the pickle-loading helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="ahams02/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MatteoWood/bert-sexism-classifier | MatteoWood | 2023-12-03T18:01:35Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sexism",
"dataset:MatteoWood/bert-sexism-classifier-dataset",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-11-30T13:39:58Z | ---
tags:
- text-classification
- sexism
widget:
- text: I love women
- text: I hate women
datasets:
- MatteoWood/bert-sexism-classifier-dataset
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
# Training parameters
- lr: 5e-05
- epochs: 5
- batch size: 8
## Validation Metrics
loss: 0.36799120903015137
f1: 0.7044155844155844
precision: 0.7251336898395722
recall: 0.6848484848484848
auc: 0.8958072418537535
accuracy: 0.85775 |
MatteoWood/albert-sexism-classifier | MatteoWood | 2023-12-03T18:01:19Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"sexism",
"dataset:MatteoWood/autotrain-data-albert-sexism-classifier",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-02T14:45:33Z |
---
tags:
- text-classification
- sexism
widget:
- text: I love women
- text: I hate women
datasets:
- MatteoWood/autotrain-data-albert-sexism-classifier
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
# Training parameters
- lr: 5e-05
- epochs: 5
- batch size: 8
## Validation Metrics
loss: 0.40832215547561646
f1: 0.6293779404077365
precision: 0.6522210184182016
recall: 0.6080808080808081
auc: 0.8514470284237725
accuracy: 0.82275
|
MatteoWood/hatebert-sexism-classifier | MatteoWood | 2023-12-03T18:00:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sexism",
"dataset:MatteoWood/autotrain-data-hatebert-sexism-classifier",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-02T13:05:20Z | ---
tags:
- text-classification
- sexism
widget:
- text: I love women
- text: I hate women
datasets:
- MatteoWood/autotrain-data-hatebert-sexism-classifier
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
# Training parameters
- lr: 5e-05
- epochs: 5
- batch size: 8
## Validation Metrics
loss: 0.3898809254169464
f1: 0.7137662337662338
precision: 0.73475935828877
recall: 0.693939393939394
auc: 0.8955589113728649
accuracy: 0.86225
|
Roxysun/wav2vec2-large-xls-r-300m-cs-colab-phoneme | Roxysun | 2023-12-03T17:53:18Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:facebook/wav2vec2-lv-60-espeak-cv-ft",
"base_model:finetune:facebook/wav2vec2-lv-60-espeak-cv-ft",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-03T16:51:56Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-lv-60-espeak-cv-ft
tags:
- generated_from_trainer
datasets:
- voxpopuli
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-cs-colab-phoneme
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: voxpopuli
type: voxpopuli
config: cs
split: test
args: cs
metrics:
- name: Wer
type: wer
value: 0.44625639804785144
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-cs-colab-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9388
- Wer: 0.4463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow10 | FounderOfHuggingface | 2023-12-03T17:46:02Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T17:45:58Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
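A minimal loading sketch for this LoRA adapter, assuming the standard PEFT pattern (not confirmed elsewhere in this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gpt2 base model, then attach this adapter on top of it
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow10"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```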
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
francesco-vo/bert_model_v2 | francesco-vo | 2023-12-03T17:42:54Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T17:27:06Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_model_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_model_v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5556
- Accuracy: 0.7469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 423 | 0.5205 | 0.7345 |
| 0.5638 | 2.0 | 846 | 0.5143 | 0.7545 |
| 0.4089 | 3.0 | 1269 | 0.5556 | 0.7469 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
SaraPiscitelli/dqn-SpaceInvadersNoFrameskip-v4 | SaraPiscitelli | 2023-12-03T17:35:56Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T13:53:17Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 554.00 +/- 186.01
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SaraPiscitelli -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SaraPiscitelli -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SaraPiscitelli
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Jerry46/zephyr-7b-sft-lora | Jerry46 | 2023-12-03T17:29:01Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-03T05:25:09Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9866 | 0.67 | 272 | 0.9901 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
NousResearch/Nous-Hermes-2-Vision-Alpha | NousResearch | 2023-12-03T17:25:41Z | 40 | 303 | transformers | [
"transformers",
"pytorch",
"llava_mistral",
"text-generation",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"multimodal",
"llava",
"conversational",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-28T23:18:55Z | ---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- multimodal
- llava
model-index:
- name: Nous-Hermes-2-Vision
results: []
license: apache-2.0
language:
- en
---
# Nous-Hermes-2-Vision - Mistral 7B

*In the tapestry of Greek mythology, Hermes reigns as the eloquent Messenger of the Gods, a deity who deftly bridges the realms through the art of communication. It is in homage to this divine mediator that I name this advanced LLM "Hermes," a system crafted to navigate the complex intricacies of human discourse with celestial finesse.*
## Model description
Nous-Hermes-2-Vision stands as a pioneering Vision-Language Model, leveraging advancements from the renowned **OpenHermes-2.5-Mistral-7B** by teknium. This model incorporates two pivotal enhancements, setting it apart as a cutting-edge solution:
- **SigLIP-400M Integration**: Diverging from traditional approaches that rely on substantial 3B vision encoders, Nous-Hermes-2-Vision harnesses the formidable SigLIP-400M. This strategic choice not only streamlines the model's architecture, making it more lightweight, but also capitalizes on SigLIP's remarkable capabilities. The result? A remarkable boost in performance that defies conventional expectations.
- **Custom Dataset Enriched with Function Calling**: Our model's training data includes a unique feature – function calling. This distinctive addition transforms Nous-Hermes-2-Vision into a **Vision-Language Action Model**. Developers now have a versatile tool at their disposal, primed for crafting a myriad of ingenious automations.
This project is led by [qnguyen3](https://twitter.com/stablequan) and [teknium](https://twitter.com/Teknium1).
## Training
### Dataset
- 220K from **LVIS-INSTRUCT4V**
- 60K from **ShareGPT4V**
- 150K Private **Function Calling Data**
- 50K conversations from teknium's **OpenHermes-2.5**
## Usage
### Prompt Format
- Like other LLaVA variants, this model uses Vicuna-V1 as its prompt template (a rough sketch of the format is shown after this list). Please refer to `conv_llava_v1` in [this file](https://github.com/qnguyen3/hermes-llava/blob/main/llava/conversation.py)
- For Gradio UI, please visit this [GitHub Repo](https://github.com/qnguyen3/hermes-llava)
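A rough sketch of that Vicuna-V1 style prompt, approximated from LLaVA's `conv_llava_v1` (the exact system string, separators, and `<image>` token placement should be verified against the linked `conversation.py`):
```python
# Approximation only -- verify against conv_llava_v1 in the linked conversation.py
system = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions."
)
question = "What is shown in this image?"  # hypothetical user turn
prompt = f"{system} USER: <image>\n{question} ASSISTANT:"
```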
### Function Calling
- For function calling, the message should start with a `<fn_call>` tag. Here is an example:
```json
<fn_call>{
"type": "object",
"properties": {
"bus_colors": {
"type": "array",
"description": "The colors of the bus in the image.",
"items": {
"type": "string",
"enum": ["red", "blue", "green", "white"]
}
},
"bus_features": {
"type": "string",
"description": "The features seen on the back of the bus."
},
"bus_location": {
"type": "string",
"description": "The location of the bus (driving or pulled off to the side).",
"enum": ["driving", "pulled off to the side"]
}
}
}
```
Output:
```json
{
"bus_colors": ["red", "white"],
"bus_features": "An advertisement",
"bus_location": "driving"
}
```
## Example
### Chat

### Function Calling
Input image:

Input message:
```json
<fn_call>{
"type": "object",
"properties": {
"food_list": {
"type": "array",
"description": "List of all the food",
"items": {
"type": "string",
}
},
}
}
```
Output:
```json
{
"food_list": [
"Double Burger",
"Cheeseburger",
"French Fries",
"Shakes",
"Coffee"
]
}
```
|
lnxdx/19_2000_1e-5_hp-base | lnxdx | 2023-12-03T17:25:05Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:masoudmzb/wav2vec2-xlsr-multilingual-53-fa",
"base_model:finetune:masoudmzb/wav2vec2-xlsr-multilingual-53-fa",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-01T00:10:01Z | ---
base_model: masoudmzb/wav2vec2-xlsr-multilingual-53-fa
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-persian-asr-shemo_lnxdx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-persian-asr-shemo_lnxdx
This model is a fine-tuned version of [masoudmzb/wav2vec2-xlsr-multilingual-53-fa](https://huggingface.co/masoudmzb/wav2vec2-xlsr-multilingual-53-fa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7064
- Wer: 0.3344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9455 | 0.62 | 100 | 1.4247 | 0.4831 |
| 1.4824 | 1.25 | 200 | 1.1107 | 0.4331 |
| 1.2516 | 1.88 | 300 | 0.9141 | 0.4136 |
| 1.0859 | 2.5 | 400 | 0.8360 | 0.3975 |
| 1.0357 | 3.12 | 500 | 0.8097 | 0.3814 |
| 1.0472 | 3.75 | 600 | 0.7550 | 0.3753 |
| 0.9963 | 4.38 | 700 | 0.7533 | 0.3636 |
| 0.9767 | 5.0 | 800 | 0.7424 | 0.3589 |
| 0.9667 | 5.62 | 900 | 0.7360 | 0.3516 |
| 0.9385 | 6.25 | 1000 | 0.7355 | 0.3487 |
| 0.9805 | 6.88 | 1100 | 0.7237 | 0.3464 |
| 0.976 | 7.5 | 1200 | 0.7078 | 0.3455 |
| 0.88 | 8.12 | 1300 | 0.7229 | 0.3438 |
| 0.9421 | 8.75 | 1400 | 0.7180 | 0.3432 |
| 0.9584 | 9.38 | 1500 | 0.7059 | 0.3364 |
| 0.88 | 10.0 | 1600 | 0.7106 | 0.3364 |
| 0.9113 | 10.62 | 1700 | 0.7125 | 0.3344 |
| 0.912 | 11.25 | 1800 | 0.7091 | 0.3353 |
| 0.9607 | 11.88 | 1900 | 0.7066 | 0.3344 |
| 0.8974 | 12.5 | 2000 | 0.7064 | 0.3344 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
VitaliiVrublevskyi/bert-base-multilingual-uncased-finetuned-mrpc | VitaliiVrublevskyi | 2023-12-03T17:23:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T16:03:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8817204301075269
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4928
- Accuracy: 0.8382
- F1: 0.8817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 97
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.4251 | 0.7843 | 0.8358 |
| No log | 2.0 | 230 | 0.3601 | 0.8260 | 0.8702 |
| No log | 3.0 | 345 | 0.3677 | 0.8333 | 0.8727 |
| No log | 4.0 | 460 | 0.4801 | 0.8260 | 0.8707 |
| 0.3223 | 5.0 | 575 | 0.4928 | 0.8382 | 0.8817 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
dodisbeaver/ppo-LunarLander-v2-100000 | dodisbeaver | 2023-12-03T17:13:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T17:13:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -106.55 +/- 54.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
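A minimal sketch of what that usage might look like; the checkpoint filename below is an assumption, so check the repository's file list for the actual `.zip` name:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; see the repo's file list for the real checkpoint name.
checkpoint = load_from_hub(
    repo_id="dodisbeaver/ppo-LunarLander-v2-100000",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```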
|
messawey/my_awesome_qa_model | messawey | 2023-12-03T17:02:34Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-12-03T17:00:55Z | ---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_keras_callback
model-index:
- name: messawey/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# messawey/my_awesome_qa_model
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5375
- Validation Loss: 4.1401
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 68, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.5375 | 4.1401 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Gairiangsui/my-wolf-wlf | Gairiangsui | 2023-12-03T16:58:14Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-03T16:54:13Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### my-wolf--wlf Dreambooth model trained by Gairiangsui following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 215022
Sample pictures of this concept:



|
dpv/Mistral7bInstruct-MarketingEmailAdaptor4QV | dpv | 2023-12-03T16:54:21Z | 6 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2023-12-03T16:13:48Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
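For reference, a minimal sketch of how the 8-bit configuration listed above might be reproduced when loading this adapter; the base model name is an assumption inferred from the adapter's name:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base model, not stated in this card

# Mirror the 8-bit settings above (the fp4 fields only apply to 4-bit loading)
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "dpv/Mistral7bInstruct-MarketingEmailAdaptor4QV")
```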
### Framework versions
- PEFT 0.4.0
|
Anwaarma/BP-MM05 | Anwaarma | 2023-12-03T16:51:44Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Anwaarma/BP-INT-I04",
"base_model:finetune:Anwaarma/BP-INT-I04",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T16:30:21Z | ---
license: mit
base_model: Anwaarma/BP-test4
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: BP-S02andInt03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BP-S02andInt03
This model is a fine-tuned version of [Anwaarma/BP-test4](https://huggingface.co/Anwaarma/BP-test4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4031
- Accuracy: 0.82
- F1: 0.8097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.0 | 50 | 1.1241 | 0.54 | 0.4220 |
| No log | 0.01 | 100 | 0.8697 | 0.51 | 0.4402 |
| No log | 0.01 | 150 | 0.7063 | 0.37 | 0.3740 |
| No log | 0.02 | 200 | 0.6895 | 0.51 | 0.4184 |
| No log | 0.02 | 250 | 0.6880 | 0.52 | 0.4467 |
| No log | 0.02 | 300 | 0.6874 | 0.52 | 0.4778 |
| No log | 0.03 | 350 | 0.6842 | 0.52 | 0.4778 |
| No log | 0.03 | 400 | 0.6889 | 0.5 | 0.4491 |
| No log | 0.04 | 450 | 0.6727 | 0.55 | 0.5398 |
| 0.7977 | 0.04 | 500 | 0.6617 | 0.59 | 0.5877 |
| 0.7977 | 0.04 | 550 | 0.6514 | 0.59 | 0.5877 |
| 0.7977 | 0.05 | 600 | 0.6597 | 0.59 | 0.5877 |
| 0.7977 | 0.05 | 650 | 0.6322 | 0.59 | 0.5877 |
| 0.7977 | 0.06 | 700 | 0.5898 | 0.57 | 0.5655 |
| 0.7977 | 0.06 | 750 | 0.5406 | 0.7 | 0.7015 |
| 0.7977 | 0.06 | 800 | 0.4813 | 0.8 | 0.7862 |
| 0.7977 | 0.07 | 850 | 0.4706 | 0.8 | 0.7862 |
| 0.7977 | 0.07 | 900 | 0.4743 | 0.79 | 0.7768 |
| 0.7977 | 0.08 | 950 | 0.4578 | 0.8 | 0.7862 |
| 0.5646 | 0.08 | 1000 | 0.4571 | 0.8 | 0.7862 |
| 0.5646 | 0.08 | 1050 | 0.4536 | 0.8 | 0.7862 |
| 0.5646 | 0.09 | 1100 | 0.4461 | 0.8 | 0.7862 |
| 0.5646 | 0.09 | 1150 | 0.4451 | 0.8 | 0.7862 |
| 0.5646 | 0.1 | 1200 | 0.4398 | 0.81 | 0.7956 |
| 0.5646 | 0.1 | 1250 | 0.4360 | 0.8 | 0.7862 |
| 0.5646 | 0.1 | 1300 | 0.4325 | 0.81 | 0.7956 |
| 0.5646 | 0.11 | 1350 | 0.4316 | 0.81 | 0.7956 |
| 0.5646 | 0.11 | 1400 | 0.4310 | 0.81 | 0.7956 |
| 0.5646 | 0.12 | 1450 | 0.4301 | 0.81 | 0.7956 |
| 0.4672 | 0.12 | 1500 | 0.4275 | 0.81 | 0.7956 |
| 0.4672 | 0.12 | 1550 | 0.4271 | 0.8 | 0.7862 |
| 0.4672 | 0.13 | 1600 | 0.4258 | 0.81 | 0.7956 |
| 0.4672 | 0.13 | 1650 | 0.4211 | 0.81 | 0.7956 |
| 0.4672 | 0.14 | 1700 | 0.4154 | 0.82 | 0.8097 |
| 0.4672 | 0.14 | 1750 | 0.4153 | 0.81 | 0.7956 |
| 0.4672 | 0.14 | 1800 | 0.4120 | 0.81 | 0.7956 |
| 0.4672 | 0.15 | 1850 | 0.4134 | 0.8 | 0.7862 |
| 0.4672 | 0.15 | 1900 | 0.4119 | 0.8 | 0.7862 |
| 0.4672 | 0.16 | 1950 | 0.4119 | 0.82 | 0.8097 |
| 0.4371 | 0.16 | 2000 | 0.4094 | 0.82 | 0.8097 |
| 0.4371 | 0.16 | 2050 | 0.4113 | 0.82 | 0.8097 |
| 0.4371 | 0.17 | 2100 | 0.4136 | 0.83 | 0.8259 |
| 0.4371 | 0.17 | 2150 | 0.4096 | 0.82 | 0.8097 |
| 0.4371 | 0.18 | 2200 | 0.4116 | 0.82 | 0.8097 |
| 0.4371 | 0.18 | 2250 | 0.4039 | 0.82 | 0.8097 |
| 0.4371 | 0.18 | 2300 | 0.4044 | 0.82 | 0.8097 |
| 0.4371 | 0.19 | 2350 | 0.4031 | 0.82 | 0.8097 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
pijarcandra22/t5Sunda2Indo | pijarcandra22 | 2023-12-03T16:41:20Z | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T14:43:36Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: pijarcandra22/t5Sunda2Indo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pijarcandra22/t5Sunda2Indo
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6406
- Validation Loss: 1.5932
- Epoch: 140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9668 | 3.4054 | 0 |
| 3.5451 | 3.1460 | 1 |
| 3.3386 | 2.9773 | 2 |
| 3.1917 | 2.8549 | 3 |
| 3.0808 | 2.7568 | 4 |
| 2.9921 | 2.6780 | 5 |
| 2.9187 | 2.6135 | 6 |
| 2.8536 | 2.5547 | 7 |
| 2.8010 | 2.5040 | 8 |
| 2.7482 | 2.4601 | 9 |
| 2.7056 | 2.4227 | 10 |
| 2.6709 | 2.3870 | 11 |
| 2.6325 | 2.3554 | 12 |
| 2.6016 | 2.3233 | 13 |
| 2.5685 | 2.2965 | 14 |
| 2.5407 | 2.2710 | 15 |
| 2.5158 | 2.2486 | 16 |
| 2.4905 | 2.2248 | 17 |
| 2.4680 | 2.2037 | 18 |
| 2.4459 | 2.1856 | 19 |
| 2.4257 | 2.1684 | 20 |
| 2.4045 | 2.1495 | 21 |
| 2.3861 | 2.1325 | 22 |
| 2.3676 | 2.1189 | 23 |
| 2.3479 | 2.1028 | 24 |
| 2.3333 | 2.0873 | 25 |
| 2.3192 | 2.0743 | 26 |
| 2.3035 | 2.0647 | 27 |
| 2.2889 | 2.0504 | 28 |
| 2.2742 | 2.0395 | 29 |
| 2.2608 | 2.0285 | 30 |
| 2.2471 | 2.0166 | 31 |
| 2.2344 | 2.0078 | 32 |
| 2.2211 | 1.9999 | 33 |
| 2.2086 | 1.9893 | 34 |
| 2.1965 | 1.9790 | 35 |
| 2.1879 | 1.9724 | 36 |
| 2.1774 | 1.9637 | 37 |
| 2.1663 | 1.9537 | 38 |
| 2.1573 | 1.9461 | 39 |
| 2.1470 | 1.9389 | 40 |
| 2.1344 | 1.9329 | 41 |
| 2.1259 | 1.9257 | 42 |
| 2.1192 | 1.9158 | 43 |
| 2.1098 | 1.9092 | 44 |
| 2.0993 | 1.9021 | 45 |
| 2.0930 | 1.8970 | 46 |
| 2.0831 | 1.8909 | 47 |
| 2.0729 | 1.8845 | 48 |
| 2.0669 | 1.8799 | 49 |
| 2.0587 | 1.8746 | 50 |
| 2.0519 | 1.8662 | 51 |
| 2.0446 | 1.8605 | 52 |
| 2.0338 | 1.8552 | 53 |
| 2.0297 | 1.8494 | 54 |
| 2.0207 | 1.8441 | 55 |
| 2.0151 | 1.8404 | 56 |
| 2.0116 | 1.8346 | 57 |
| 2.0029 | 1.8286 | 58 |
| 1.9942 | 1.8243 | 59 |
| 1.9894 | 1.8177 | 60 |
| 1.9820 | 1.8145 | 61 |
| 1.9753 | 1.8100 | 62 |
| 1.9698 | 1.8054 | 63 |
| 1.9635 | 1.8001 | 64 |
| 1.9588 | 1.7963 | 65 |
| 1.9533 | 1.7895 | 66 |
| 1.9443 | 1.7888 | 67 |
| 1.9381 | 1.7846 | 68 |
| 1.9320 | 1.7806 | 69 |
| 1.9281 | 1.7755 | 70 |
| 1.9232 | 1.7697 | 71 |
| 1.9141 | 1.7672 | 72 |
| 1.9128 | 1.7655 | 73 |
| 1.9041 | 1.7611 | 74 |
| 1.8987 | 1.7558 | 75 |
| 1.8955 | 1.7498 | 76 |
| 1.8874 | 1.7493 | 77 |
| 1.8845 | 1.7433 | 78 |
| 1.8793 | 1.7403 | 79 |
| 1.8723 | 1.7370 | 80 |
| 1.8669 | 1.7319 | 81 |
| 1.8626 | 1.7323 | 82 |
| 1.8593 | 1.7268 | 83 |
| 1.8540 | 1.7235 | 84 |
| 1.8504 | 1.7204 | 85 |
| 1.8468 | 1.7180 | 86 |
| 1.8398 | 1.7130 | 87 |
| 1.8358 | 1.7088 | 88 |
| 1.8321 | 1.7081 | 89 |
| 1.8271 | 1.7042 | 90 |
| 1.8203 | 1.7016 | 91 |
| 1.8189 | 1.6985 | 92 |
| 1.8117 | 1.6967 | 93 |
| 1.8089 | 1.6924 | 94 |
| 1.8035 | 1.6898 | 95 |
| 1.7984 | 1.6904 | 96 |
| 1.7957 | 1.6836 | 97 |
| 1.7925 | 1.6833 | 98 |
| 1.7880 | 1.6786 | 99 |
| 1.7816 | 1.6770 | 100 |
| 1.7772 | 1.6739 | 101 |
| 1.7754 | 1.6733 | 102 |
| 1.7712 | 1.6678 | 103 |
| 1.7677 | 1.6638 | 104 |
| 1.7641 | 1.6627 | 105 |
| 1.7601 | 1.6609 | 106 |
| 1.7573 | 1.6585 | 107 |
| 1.7516 | 1.6559 | 108 |
| 1.7478 | 1.6523 | 109 |
| 1.7443 | 1.6523 | 110 |
| 1.7406 | 1.6498 | 111 |
| 1.7361 | 1.6475 | 112 |
| 1.7326 | 1.6435 | 113 |
| 1.7285 | 1.6422 | 114 |
| 1.7244 | 1.6398 | 115 |
| 1.7205 | 1.6386 | 116 |
| 1.7187 | 1.6347 | 117 |
| 1.7157 | 1.6335 | 118 |
| 1.7113 | 1.6317 | 119 |
| 1.7094 | 1.6308 | 120 |
| 1.7074 | 1.6267 | 121 |
| 1.7007 | 1.6252 | 122 |
| 1.6965 | 1.6241 | 123 |
| 1.6931 | 1.6231 | 124 |
| 1.6913 | 1.6201 | 125 |
| 1.6862 | 1.6174 | 126 |
| 1.6833 | 1.6176 | 127 |
| 1.6827 | 1.6122 | 128 |
| 1.6774 | 1.6127 | 129 |
| 1.6726 | 1.6119 | 130 |
| 1.6698 | 1.6079 | 131 |
| 1.6663 | 1.6077 | 132 |
| 1.6631 | 1.6055 | 133 |
| 1.6617 | 1.6043 | 134 |
| 1.6573 | 1.6019 | 135 |
| 1.6549 | 1.5994 | 136 |
| 1.6514 | 1.5990 | 137 |
| 1.6484 | 1.5965 | 138 |
| 1.6490 | 1.5942 | 139 |
| 1.6406 | 1.5932 | 140 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
oieieio/Orca-2-13b-awq | oieieio | 2023-12-03T16:38:40Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"orca",
"orca2",
"microsoft",
"arxiv:2311.11045",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2023-12-01T23:52:55Z | ---
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
license: other
license_name: microsoft-research-license
license_link: LICENSE
---
# Orca-2-13b-awq | Quantized 4 Bit
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning.
Note that:
1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack.
2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task.
3. Beyond reasoning, the model inherits capabilities and limitations of its base (LLAMA-2 base). We have already seen that the benefits of the Orca training can be applied to other base models too.
We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs.
## What is Orca 2’s intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for
building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitations caused by its training process,
including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrates very strong performance in zero-shot settings, it does not show
the same gains from few-shot learning as other, especially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca-2-13b-awq
**Inference with Hugging Face library**
```python
!pip install autoawq
```
```python
!pip install torch --upgrade --index-url https://download.pytorch.org/whl/cu121
```
```python
#@title Live Chat with Orca
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model
quant_path = "oieieio/Orca-2-13b-awq"
tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path)
# Move the model to GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
# Initial system message
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant..."
while True:
# User input
user_message = input("User: ")
if user_message.lower() == 'quit':
break
# Construct the prompt
prompt = f"system\n{system_message}\nuser\n{user_message}\nassistant"
# Encode and generate response
inputs = tokenizer(prompt, return_tensors='pt').to(device)
output_ids = model.generate(inputs["input_ids"], max_length=512)
answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# Print the response
print("AI: ", answer)
```
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model
quant_path = "oieieio/Orca-2-13b-awq"
tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path)
# Move the model to GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
# Initial system message
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant..."
while True:
user_message = input("User: ")
if user_message.lower() == 'quit':
break
prompt = f"system\n{system_message}\nuser\n{user_message}\nassistant"
inputs = tokenizer(prompt, return_tensors='pt').to(device)
output_ids = model.generate(
inputs["input_ids"],
max_new_tokens=50, # Adjust the number of generated tokens
temperature=0.7, # Adjust for randomness
top_k=50, # Adjust the number of highest probability tokens to consider
top_p=0.95, # Adjust the cumulative probability threshold
do_sample=True # Use sampling instead of greedy decoding
)
answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print("AI: ", answer)
```
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model from Hugging Face Hub
quant_path = "oieieio/Orca-2-13b-awq"
tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path)
# Move the model to GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
# First turn of the conversation
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"system\n{system_message}\nuser\n{user_message}\nassistant"
# Encode the first prompt
inputs = tokenizer(prompt, return_tensors='pt').to(device)
output_ids = model.generate(inputs["input_ids"], max_length=512)
# Decode the first response
answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# Print the first response
print(answer)
# Second turn of the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# Append the second turn message to the already generated ids without adding special tokens
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False).to(device)
second_turn_input_ids = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
# Generate the second response
output_ids_2 = model.generate(second_turn_input_ids, max_length=1024)
# Decode the second response
second_turn_answer = tokenizer.decode(output_ids_2[0], skip_special_tokens=True)
# Print the second response
print(second_turn_answer)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
Ahmed107/nllb200-ar-en-v2 | Ahmed107 | 2023-12-03T16:37:32Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:nadsoft/Faseeh-v0.1-beta",
"base_model:finetune:nadsoft/Faseeh-v0.1-beta",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-12-03T15:19:28Z | ---
license: cc-by-nc-4.0
base_model: nadsoft/Faseeh-v0.1-beta
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nllb200-ar-en-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb200-ar-en-v2
This model is a fine-tuned version of [nadsoft/Faseeh-v0.1-beta](https://huggingface.co/nadsoft/Faseeh-v0.1-beta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8491
- Bleu: 49.3964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hedderich/ppo-LunarLander-v2 | hedderich | 2023-12-03T16:37:05Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T15:45:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.33 +/- 19.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
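A minimal loading sketch; the checkpoint filename is assumed, so check the repository's file list for the actual name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust to the actual .zip in the repository.
checkpoint = load_from_hub(repo_id="hedderich/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```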
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow4 | FounderOfHuggingface | 2023-12-03T16:36:23Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T16:36:20Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Ghali20/test_1 | Ghali20 | 2023-12-03T16:27:58Z | 4 | 1 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:adapter:HuggingFaceH4/zephyr-7b-alpha",
"region:us"
] | null | 2023-12-03T16:27:56Z | ---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-alpha
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
|
JaiSurya/q-FrozenLake-v1-4x4-noSlippery | JaiSurya | 2023-12-03T16:27:06Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T16:27:03Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="JaiSurya/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
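`load_from_hub` above refers to the helper used in the Deep RL course notebooks rather than an importable library function; a sketch of what it might look like (an assumption, not part of this repository):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved Q-table dictionary from the Hub."""
    pickled_model = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickled_model, "rb") as f:
        return pickle.load(f)
```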
|
annawleo/mistral-finetuned-alpaca | annawleo | 2023-12-03T16:16:20Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:generator",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-03T14:40:05Z | ---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: mistral-finetuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-alpaca
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
yily/glm-lora-stroke-47000 | yily | 2023-12-03T16:15:39Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-12-03T16:14:41Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Kyriota/fastchat-3b-lora-gsm8k-baseline | Kyriota | 2023-12-03T16:12:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:lmsys/fastchat-t5-3b-v1.0",
"base_model:adapter:lmsys/fastchat-t5-3b-v1.0",
"region:us"
] | null | 2023-12-03T11:54:03Z | ---
library_name: peft
base_model: lmsys/fastchat-t5-3b-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
AlvinxLukilah/TextGenerationV2 | AlvinxLukilah | 2023-12-03T16:10:43Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-10-29T13:11:20Z | ---
license: creativeml-openrail-m
---
|
yily/glm-lora-stroke-46000 | yily | 2023-12-03T16:10:11Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-12-03T16:09:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
VitaliiVrublevskyi/bert-base-cased-finetuned-mrpc | VitaliiVrublevskyi | 2023-12-03T16:01:47Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T14:41:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8873720136518771
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5719
- Accuracy: 0.8382
- F1: 0.8874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 91
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.5228 | 0.7549 | 0.8471 |
| No log | 2.0 | 230 | 0.4315 | 0.8088 | 0.8673 |
| No log | 3.0 | 345 | 0.4212 | 0.8284 | 0.8785 |
| No log | 4.0 | 460 | 0.5462 | 0.8382 | 0.8889 |
| 0.3551 | 5.0 | 575 | 0.5719 | 0.8382 | 0.8874 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow0 | FounderOfHuggingface | 2023-12-03T15:49:59Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T15:49:56Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
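Until the authors add official usage code, here is a minimal sketch assuming this repository holds a standard PEFT LoRA adapter on top of the `gpt2` base model declared in the metadata; the example prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the gpt2 base model and tokenizer, then attach this LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow0",
)

# Illustrative generation call
inputs = tokenizer("The history of the region begins", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```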
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Protao/ppo-Huggy | Protao | 2023-12-03T15:45:28Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-03T15:43:48Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Protao/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
You can interact with the trained agent at https://huggingface.co/spaces/ThomasSimonini/Huggy |
nu-dialogue/japanese-vlm-fp16-lm-sfcoco2023-lora | nu-dialogue | 2023-12-03T15:42:29Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:stabilityai/japanese-stablelm-instruct-gamma-7b",
"base_model:adapter:stabilityai/japanese-stablelm-instruct-gamma-7b",
"region:us"
] | null | 2023-12-03T15:42:25Z | ---
library_name: peft
base_model: stabilityai/japanese-stablelm-instruct-gamma-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
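As a placeholder until the authors provide usage code, here is a minimal sketch assuming a standard PEFT LoRA adapter on top of the base model named in the metadata; the dtype and device placement are illustrative choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "stabilityai/japanese-stablelm-instruct-gamma-7b"
adapter_id = "nu-dialogue/japanese-vlm-fp16-lm-sfcoco2023-lora"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```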
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
cozgo/llama-2-7b-advisor-low-rank | cozgo | 2023-12-03T15:35:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2023-12-03T14:24:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
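For reference, the flags above can be expressed with `transformers`' `BitsAndBytesConfig`; this is a sketch, not code shipped with the adapter, and unspecified arguments keep their defaults.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization flags listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```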
### Framework versions
- PEFT 0.4.0
|
sezinarseven/falcon-7b-instruct-medicalQA | sezinarseven | 2023-12-03T15:28:51Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:finetune:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] | null | 2023-12-03T15:28:39Z | ---
license: apache-2.0
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-instruct-medicalQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-instruct-medicalQA
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 180
- mixed_precision_training: Native AMP
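For readers who want to reproduce the run, the list above maps onto `transformers.TrainingArguments` roughly as follows; this is a sketch, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-7b-instruct-medicalQA",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=180,
    fp16=True,  # "Native AMP" mixed precision
)
```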
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
byteflow/xxmix9realxl | byteflow | 2023-12-03T15:26:01Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-03T15:04:01Z | ---
license: creativeml-openrail-m
---
|
BadreddineHug/zyph_OCR | BadreddineHug | 2023-12-03T15:12:13Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:adapter:HuggingFaceH4/zephyr-7b-alpha",
"region:us"
] | null | 2023-12-03T15:12:11Z | ---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-alpha
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
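Until the authors add official usage code, here is a minimal sketch assuming a standard PEFT LoRA adapter on top of the zephyr-7b-alpha base named in the metadata, loaded with the 4-bit `bitsandbytes` settings listed under "Training procedure" below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-alpha"
adapter_id = "BadreddineHug/zyph_OCR"  # this repository

# Mirrors the bitsandbytes config listed further down in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```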
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
|
byteflow/drmshprxl | byteflow | 2023-12-03T15:11:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-03T15:05:53Z | ---
license: creativeml-openrail-m
---
|
KnutJaegersberg/black_goo_recipe_e | KnutJaegersberg | 2023-12-03T15:11:02Z | 2,463 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-04T21:08:02Z | ---
license: cc-by-nc-4.0
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
``` |
KnutJaegersberg/deacon-13b-awq | KnutJaegersberg | 2023-12-03T15:10:33Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"custom_code",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-20T17:48:03Z | ---
license: cc-by-nc-4.0
---

This model was fine-tuned on AI-filtered subsets of the GPT-4-based subset of the Dolphin dataset and EvolInstruct V2.
It has not been explicitly aligned to positive, negative or bureaucratically prescribed value systems.
It might kill us all! Time to shit your pants, regulators. I literally put black goo on Dolphin-7B sperm, which then fertilized Evolved Instructions...
What's different is evil... ;)
I intend to train 3 sizes.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
``` |
KnutJaegersberg/black_goo_recipe_c | KnutJaegersberg | 2023-12-03T15:09:33Z | 2,503 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-01T06:35:49Z | ---
license: cc-by-nc-4.0
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
To fine-tune a large language model, we use various techniques to improve its performance and adjust its parameters based on user feedback. These techniques include:
1. Fine tuning the original model with additional data. For example, adding more training examples or using specialized datasets such as question answering or natural language inference tasks. This helps the model learn new concepts and skills by interacting with additional information sources.
2. Using transfer learning methods to incorporate knowledge from other models that have been pre-trained for specific domains. For instance, using pre-trained embeddings from neural networks trained on text classification tasks like sentiment analysis or question answering to help fine-tuning the main language model.
3. Improving the quality of the training data. We may optimize the training process by increasing the accuracy of provided data sets (e.g., by improving word embedding quality), refining the selection of relevant examples, and/or providing additional supervision signals during training.
4. Incorporating knowledge from external resources such as domain-specific lexicons or domain-specific knowledge bases. For example, incorporating domain-specific rules and facts into the model to help it understand domain-specific terminology and semantics.
5. Employing techniques like dropout regularization and dropout sampling to avoid overfitting and reduce overconfidence in the model's predictions.
6. Utilizing meta-learning methods such as reinforcement learning or self-supervised learning to enable the model to adapt quickly to changing situations without being explicitly programmed.
7. Incorporating human input through crowdsourcing or user testing to validate the model's performance and ensure user satisfaction.
8. Implementing various optimization strategies to improve efficiency and reduce computational costs while maintaining high performance levels.
In conclusion, fine tuning a large language model requires combining various techniques including fine tuning, transfer learning, improved data quality, external resources, meta-learning, optimization strategies, crowd sourcing, and user testing. These approaches aim to improve the model's ability to handle various tasks and provide users with better experiences in their daily interactions with language systems.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__black_goo_recipe_c)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 34.19 |
| ARC (25-shot) | 38.74 |
| HellaSwag (10-shot) | 66.83 |
| MMLU (5-shot) | 26.57 |
| TruthfulQA (0-shot) | 36.54 |
| Winogrande (5-shot) | 64.72 |
| GSM8K (5-shot) | 0.68 |
| DROP (3-shot) | 5.23 |
|
KnutJaegersberg/Deacon-34B-200k-AWQ | KnutJaegersberg | 2023-12-03T15:08:12Z | 8 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2023-11-11T03:19:30Z | ---
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- totally-not-an-llm/EverythingLM-data-V3
pipeline_tag: text-generation
---

In this case the tokenizer is the yi_tokenizer; loading it requires trust_remote_code=True.
Have some fun with this fellow.
It can eat a lot of VRAM; whether it is usable on two 24 GB GPUs depends on the settings:
Without fused attention it takes about 27 GB of VRAM, and you will need some headroom if you do stuff.

You can also let it have fused attention and just reduce the max_seq_length to something way smaller yet still useful

License
The Yi series models are fully open for academic research and free commercial usage with permission via applications. All usage must adhere to the Model License Agreement 2.0. To apply for the official commercial license, please contact us ([email protected]).
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
``` |
KnutJaegersberg/Deacon-34b-4bit | KnutJaegersberg | 2023-12-03T15:07:59Z | 3 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"feature-extraction",
"text-generation",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2023-11-09T17:06:15Z |
---
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- totally-not-an-llm/EverythingLM-data-V3
pipeline_tag: text-generation
---

The perfect organism.
An adapter for KnutJaegersberg/Yi-34B-Llamafied. 5 epochs with NEFTune.
The brand new bitsandbytes 4-bit file format.
Run `pip install -U bitsandbytes` and install this pull-requested transformers version:
https://github.com/poedator/transformers/tree/save4
Might still be buggy.
It was a bit too early.
I'll wait until the functionality is officially supported, due to some CUDA issues. I don't want to resolve those; I'll just wait a couple of days.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```

|
KnutJaegersberg/deacon-13b | KnutJaegersberg | 2023-12-03T15:06:45Z | 2,489 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"dataset:KnutJaegersberg/facehugger",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-20T06:35:12Z | ---
license: cc-by-nc-4.0
datasets:
- KnutJaegersberg/facehugger
---

This model was fine-tuned on AI-filtered subsets of the GPT-4-based subset of the Dolphin dataset and EvolInstruct V2.
It has not been explicitly aligned to positive, negative or bureaucratically prescribed value systems.
It might kill us all! Time to shit your pants, regulators. I literally put black goo on Dolphin-7B sperm, which then fertilized Evolved Instructions...
What's different is evil... ;)
I intend to train 3 sizes.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
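A minimal generation sketch using the template above (whitespace in the template is approximated, and the generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "KnutJaegersberg/deacon-13b"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", trust_remote_code=True)

# Prompt built from the template shown above
prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete the task as "
    "faithfully as you can. While performing the task think step-by-step and justify your steps.\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```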
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__deacon-13b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.78 |
| ARC (25-shot) | 57.85 |
| HellaSwag (10-shot) | 82.63 |
| MMLU (5-shot) | 55.25 |
| TruthfulQA (0-shot) | 39.33 |
| Winogrande (5-shot) | 76.32 |
| GSM8K (5-shot) | 10.39 |
| DROP (3-shot) | 5.67 |
|
Miloou/Reinforce-Pixelcopter1 | Miloou | 2023-12-03T15:05:33Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T15:05:29Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.50 +/- 35.31
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ichir3o4/syofh | ichir3o4 | 2023-12-03T15:04:59Z | 0 | 0 | allennlp | [
"allennlp",
"ab",
"dataset:HuggingFaceH4/no_robots",
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-12-03T15:04:04Z | ---
license: bigscience-openrail-m
datasets:
- HuggingFaceH4/no_robots
language:
- ab
metrics:
- bleu
library_name: allennlp
--- |
Korabbit/Llama-2-7b-chat-hf-afr-200step-merged | Korabbit | 2023-12-03T14:46:35Z | 1,551 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:vicgalle/alpaca-gpt4",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-21T08:59:51Z | ---
license: llama2
datasets:
- vicgalle/alpaca-gpt4
language:
- en
---
Hi there! This is a test of my "AFR training" approach.
The base model used is "Llama-2-7b-chat".
System Prompt: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. \n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
Instruction: Implement binary search in python.
Response: Sure! Here is an example of how to implement a binary search algorithm in Python:
```
def binary_search(arr, target):
    # Initialize the search bounds
    low = 0
    high = len(arr) - 1
    # Loop until the target element is found or the search range is empty
    while low <= high:
        # Calculate the middle index of the current range
        mid = (low + high) // 2
        # Check if the target element is at the middle index
        if arr[mid] == target:
            return mid
        # Otherwise, keep searching in the half that can still contain the target
        if arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    # If the target element is not found in the array, return -1
    return -1

# Test the binary search function
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
result = binary_search(arr, target)
print(f"The target element {target} is at index {result}")
```
This code defines a `binary_search` function that takes an array and a target element as input, and returns the index of the target element in the array if it is found, or -1 if it is not found.
The function works by iteratively dividing the range of the array into two sub-ranges, one containing elements less than the target, and one containing elements greater than the target. It then checks the middle index of each sub-range to see if the target element is present. If the target element is found, the function returns the index of the middle index. If the target element is not found, the function returns -1.
The code also includes a test case to demonstrate how to use the `binary_search` function. In this case, the target element is 5, and the function returns the index of the element 5 in the array.
I hope this helps! Let me know if you have any questions. |
musika/the-beatles | musika | 2023-12-03T14:46:15Z | 0 | 1 | null | [
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
] | null | 2023-12-03T14:46:06Z | ---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: the_beatles
## Model provided by: nobitachainsaw
Pretrained the_beatles model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained the_beatles model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
Yanni8/google-query-rating | Yanni8 | 2023-12-03T14:39:09Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:google_wellformed_query",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T14:12:09Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- google_wellformed_query
model-index:
- name: google-query-rating
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-query-rating
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the google_wellformed_query dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
li-ping/supervised_ft_embedding_1203_v37 | li-ping | 2023-12-03T14:34:47Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-12-03T14:34:39Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# li-ping/supervised_ft_embedding_1203_v37
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('li-ping/supervised_ft_embedding_1203_v37')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=li-ping/supervised_ft_embedding_1203_v37)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 400,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 30,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
TechRoC123/carmodel | TechRoC123 | 2023-12-03T14:34:09Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-03T14:33:37Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: carmodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# carmodel
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0675
- F1: 0.9931
## Model description
More information needed
## Intended uses & limitations
More information needed
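A minimal inference sketch (the image path is a placeholder, and the label set depends on the unspecified fine-tuning dataset):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="TechRoC123/carmodel")
print(classifier("path/to/car_photo.jpg"))  # placeholder path
```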
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1732 | 0.31 | 500 | 0.6651 | 0.8403 |
| 0.3989 | 0.62 | 1000 | 0.2942 | 0.9167 |
| 0.2136 | 0.93 | 1500 | 0.1782 | 0.9542 |
| 0.0549 | 1.23 | 2000 | 0.2001 | 0.9639 |
| 0.0287 | 1.54 | 2500 | 0.1304 | 0.9819 |
| 0.0091 | 1.85 | 3000 | 0.1112 | 0.9819 |
| 0.0039 | 2.16 | 3500 | 0.0667 | 0.9917 |
| 0.0023 | 2.47 | 4000 | 0.0708 | 0.9903 |
| 0.0002 | 2.78 | 4500 | 0.0635 | 0.9931 |
| 0.0002 | 3.09 | 5000 | 0.0619 | 0.9931 |
| 0.0002 | 3.4 | 5500 | 0.0730 | 0.9917 |
| 0.0 | 3.7 | 6000 | 0.0684 | 0.9917 |
| 0.0009 | 4.01 | 6500 | 0.0696 | 0.9917 |
| 0.0 | 4.32 | 7000 | 0.0693 | 0.9917 |
| 0.0 | 4.63 | 7500 | 0.0686 | 0.9931 |
| 0.0004 | 4.94 | 8000 | 0.0675 | 0.9931 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
srijan2024/my_awesome_billsum_model | srijan2024 | 2023-12-03T14:32:59Z | 46 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-02T06:10:18Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0467
- Rouge1: 0.7832
- Rouge2: 0.692
- Rougel: 0.781
- Rougelsum: 0.7805
- Gen Len: 11.6071
## Model description
More information needed
## Intended uses & limitations
More information needed
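A minimal inference sketch, assuming the model is used through the summarization pipeline (the input text is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="srijan2024/my_awesome_billsum_model")
text = "The bill establishes a new grant program to expand rural broadband access."  # illustrative
print(summarizer(text))
```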
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 56 | 4.9294 | 0.0 | 0.0 | 0.0 | 0.0 | 16.558 |
| No log | 2.0 | 112 | 2.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 13.5357 |
| No log | 3.0 | 168 | 0.4763 | 0.0045 | 0.0045 | 0.0045 | 0.0045 | 10.6518 |
| No log | 4.0 | 224 | 0.1138 | 0.7232 | 0.6205 | 0.7245 | 0.7236 | 11.5893 |
| No log | 5.0 | 280 | 0.0654 | 0.7417 | 0.6339 | 0.7417 | 0.7402 | 11.6607 |
| No log | 6.0 | 336 | 0.0587 | 0.7321 | 0.6205 | 0.7321 | 0.7309 | 11.5938 |
| No log | 7.0 | 392 | 0.0552 | 0.7496 | 0.6473 | 0.7491 | 0.7491 | 11.625 |
| No log | 8.0 | 448 | 0.0533 | 0.7714 | 0.6786 | 0.7714 | 0.7709 | 11.6562 |
| 1.6431 | 9.0 | 504 | 0.0518 | 0.781 | 0.692 | 0.7832 | 0.7805 | 11.6161 |
| 1.6431 | 10.0 | 560 | 0.0505 | 0.764 | 0.6652 | 0.7632 | 0.7614 | 11.6607 |
| 1.6431 | 11.0 | 616 | 0.0494 | 0.7778 | 0.6875 | 0.78 | 0.7773 | 11.6116 |
| 1.6431 | 12.0 | 672 | 0.0488 | 0.7778 | 0.6875 | 0.78 | 0.7773 | 11.6116 |
| 1.6431 | 13.0 | 728 | 0.0483 | 0.781 | 0.692 | 0.7815 | 0.7805 | 11.6161 |
| 1.6431 | 14.0 | 784 | 0.0479 | 0.781 | 0.692 | 0.7815 | 0.7805 | 11.6071 |
| 1.6431 | 15.0 | 840 | 0.0475 | 0.7852 | 0.6964 | 0.7839 | 0.7842 | 11.6205 |
| 1.6431 | 16.0 | 896 | 0.0471 | 0.7812 | 0.692 | 0.781 | 0.7805 | 11.5982 |
| 1.6431 | 17.0 | 952 | 0.0469 | 0.7884 | 0.7009 | 0.7879 | 0.7869 | 11.625 |
| 0.062 | 18.0 | 1008 | 0.0468 | 0.7832 | 0.692 | 0.781 | 0.7805 | 11.6071 |
| 0.062 | 19.0 | 1064 | 0.0467 | 0.7864 | 0.6964 | 0.7839 | 0.7837 | 11.6027 |
| 0.062 | 20.0 | 1120 | 0.0467 | 0.7832 | 0.692 | 0.781 | 0.7805 | 11.6071 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Protao/ppo-MountainCar-v0 | Protao | 2023-12-03T14:26:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T14:26:14Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -138.00 +/- 31.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual SB3 Hub naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repository's file list if loading fails
checkpoint = load_from_hub(repo_id="Protao/ppo-MountainCar-v0", filename="ppo-MountainCar-v0.zip")
model = PPO.load(checkpoint)
```
|
digiplay/Gap_2.6 | digiplay | 2023-12-03T14:25:58Z | 583 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-21T13:20:16Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/166637?modelVersionId=187469
Sample image generated by Hugging Face's API:
*Gap_2.6_mangledVAE_.safetensors

Sample images generated by AUTOMATIC1111:


|
Schandkroete/distilbert-base-uncased_SLC-Sentiment-Finetuned | Schandkroete | 2023-12-03T14:16:02Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T14:12:32Z | ---
language:
- en
pipeline_tag: text-classification
--- |
Konstantin91/Konstantin91 | Konstantin91 | 2023-12-03T14:12:39Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2023-12-03T14:11:54Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 4.999999873689376e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | mixed_float16 |
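For reference, the optimizer settings in the table can be reconstructed in Keras roughly as follows (a sketch, not the original training script):

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-5,  # 4.999999873689376e-05 rounded
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```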
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |