modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
tsfeith/ppo-LunarLander-v2 | tsfeith | 2024-02-27T14:06:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T14:05:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.53 +/- 20.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the standard SB3 Hub naming; check the Files tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; adjust to the actual file in this repo.
checkpoint = load_from_hub("tsfeith/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
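Once loaded, the policy can be rolled out directly; a short sketch (assumes `gymnasium` with the Box2D extras installed):
```python
import gymnasium as gym

# Roll out one episode with the loaded policy.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```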
|
interrobang/OpenHermes-2.5-Mistral-7B-GGUF-ukrainian-imatrix | interrobang | 2024-02-27T14:04:54Z | 32 | 3 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T17:04:31Z | ---
license: apache-2.0
---
A test quantization of OpenHermes-2.5-Mistral-7B by teknium, using importance matrices computed on Ukrainian text. The hope is to reduce the coherence hit after quantization in Ukrainian, at the cost of some performance in other languages.
The importance matrix was computed in roughly 20 minutes on a Ryzen 5 3550H and a GTX 1650 with 8 layers offloaded, using a context size of 512.
The calibration data is a mix of my personal GPT chats, random words, and random Wikipedia articles, totaling roughly 15k tokens. It is definitely not optimal, but it is in the repo for anyone to tinker with, along with the computed imatrix.
Will be updated with perplexity testing later, probably? 😭 I haven't run proper tests yet; it feels better than the old quants when chatting in Ukrainian. Hopefully I get around to actually benchmarking it somehow.
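For quick testing, a minimal loading sketch with the `llama-cpp-python` bindings (the quant filename pattern is an assumption; pick an actual file from the repo):
```python
from llama_cpp import Llama

# Filename pattern is hypothetical; see the repo's Files tab for the actual quants.
llm = Llama.from_pretrained(
    repo_id="interrobang/OpenHermes-2.5-Mistral-7B-GGUF-ukrainian-imatrix",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
print(llm("Привіт! Як справи?", max_tokens=64)["choices"][0]["text"])
```
|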
emaeon/solar-hansol-pretrain-merge | emaeon | 2024-02-27T14:00:16Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-25T16:50:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shritama/codellama-7B-IT-NL-SQL-3 | Shritama | 2024-02-27T14:00:08Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-27T13:57:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lolbruhs/Kats | Lolbruhs | 2024-02-27T13:58:07Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:freecryptobasics/KanyeAlbumCoverLora",
"base_model:adapter:freecryptobasics/KanyeAlbumCoverLora",
"region:us"
] | text-to-image | 2024-02-27T13:58:07Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/275 sin título_20240130215710~2.png
base_model: freecryptobasics/KanyeAlbumCoverLora
instance_prompt: null
---
# Cata
<Gallery />
## Download model
[Download](/Lolbruhs/Kats/tree/main) the model files in the Files & versions tab.
|
nithyarajkumar/tinyllama-finetune-tourism-v1 | nithyarajkumar | 2024-02-27T13:57:00Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-02-27T04:19:13Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tinyllama-finetune-tourism-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-finetune-tourism-v1
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
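A minimal sketch of loading this adapter onto its base model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "nithyarajkumar/tinyllama-finetune-tourism-v1")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```
|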
Maqqq/Nous-Finetuning-Subnet | Maqqq | 2024-02-27T13:46:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-23T16:03:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gohzy/fine-tuned-singlish-toxic-bert-LoRA-35000-1 | gohzy | 2024-02-27T13:45:20Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T13:42:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pribadihcr/aniket-math-small-gpt | pribadihcr | 2024-02-27T13:41:47Z | 4 | 0 | peft | [
"peft",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-02-21T14:24:09Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: aniket-math-small-gpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aniket-math-small-gpt
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 2
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2 |
gohzy/fine-tuned-singlish-toxic-bert-LoRA-35000-1.5 | gohzy | 2024-02-27T13:40:34Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T13:40:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ketskapow/distilbert-base-uncased-finetuned-cola | Ketskapow | 2024-02-27T13:36:14Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T13:12:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4587
- Matthews Correlation: 0.5306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5228 | 1.0 | 535 | 0.4535 | 0.4629 |
| 0.3477 | 2.0 | 1070 | 0.4587 | 0.5306 |
| 0.2316 | 3.0 | 1605 | 0.6278 | 0.5193 |
| 0.1694 | 4.0 | 2140 | 0.8088 | 0.5087 |
| 0.1202 | 5.0 | 2675 | 0.8539 | 0.5256 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
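A quick usage sketch for acceptability scoring (the example sentence is hypothetical, and the labels follow the default `LABEL_0`/`LABEL_1` convention unless renamed):
```python
from transformers import pipeline

# Score a sentence for grammatical acceptability (CoLA-style binary classification).
clf = pipeline("text-classification", model="Ketskapow/distilbert-base-uncased-finetuned-cola")
print(clf("The book was written by the author."))  # returns [{'label': ..., 'score': ...}]
```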
|
vicky6/dummy-model_2 | vicky6 | 2024-02-27T13:35:53Z | 58 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-27T13:35:15Z | ---
license: mit
tags:
- generated_from_keras_callback
base_model: camembert-base
model-index:
- name: dummy-model_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model_2
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Tokenizers 0.15.2
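A minimal fill-mask sketch (CamemBERT uses `<mask>` as its mask token):
```python
from transformers import pipeline

# Predict the masked token in a French sentence.
fill = pipeline("fill-mask", model="vicky6/dummy-model_2")
print(fill("Le camembert est <mask> !"))
```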
|
seedmanc/isna | seedmanc | 2024-02-27T13:33:56Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stablediffusionapi/anything-v5",
"base_model:adapter:stablediffusionapi/anything-v5",
"license:other",
"region:us"
] | text-to-image | 2024-02-27T13:33:49Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
isna-style, breasts, plump, gradient background, rough lineart, 1 girl,
serval, kemono friends, masterpiece, best quality, rough strokes, rough
lines, vector, anime shading, simplistic, lo-fi
parameters:
negative_prompt: >-
worst quality, large head, low quality, extra digits, bad eye,
EasyNegativeV2, ng_deepnegative_v1_75t, thin linear, narrow strokes, fine
details
output:
url: images/e61c5484-1c86-062e-202a-090570c97cee.jpeg
- text: '-'
output:
url: images/b30bac82-7114-f7f3-f1fd-4def63ccb410.jpeg
- text: >-
isna-style, large breasts, flat shading, thick outlines, gradient
background, 1girl, rough lineart, wide strokes, vivid, expressive eyes
parameters:
negative_prompt: >-
worst quality, large head, low quality, extra digits, bad eye,
EasyNegativeV2, ng_deepnegative_v1_75t
output:
url: images/cc713442-3b76-492b-a6aa-eca4ea99cbd5.webp
- text: '-'
output:
url: images/b64d7978-5176-f3aa-699d-43493a069eb4.jpeg
base_model: stablediffusionapi/anything-v5
instance_prompt: isna-style, plump, flat shading, thick outlines, gradient background
license: other
license_name: whocares
license_link: LICENSE
---
# Isna artstyle (イスナ)
<Gallery />
## Model description
Vivid colors and rough strokes, as well as expressive eyes and pleasant skin tones. Mostly for Kemono Friends; it produces plump characters.
## Trigger words
You should use `isna-style`, `plump`, `flat shading`, `thick outlines`, and `gradient background` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/seedmanc/isna/tree/main) them in the Files & versions tab.
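For reference, a minimal generation sketch with diffusers using the trigger words above (loading the LoRA straight from the repo root is an assumption; point `load_lora_weights` at the actual weight file if needed):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model this LoRA was trained against, then attach the LoRA.
pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/anything-v5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("seedmanc/isna")
image = pipe(
    "isna-style, plump, flat shading, thick outlines, gradient background, 1girl"
).images[0]
image.save("isna_sample.png")
```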
|
riotu-lab/ArabianGPT-01B | riotu-lab | 2024-02-27T13:31:53Z | 2,999 | 13 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arabic ",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-04T18:45:05Z | ---
license: apache-2.0
language:
- ar
pipeline_tag: text-generation
tags:
- 'arabic '
- text-generation
widget:
- text: "أعلنت وزارة الحج في المملكة العربية السعودية"
example_title: "مثال ١"
- text: "يبدو اليوم جميلا، سأقوم بتحضير"
example_title: "مثال ٢"
- text: "إن التقنيات الحديثة"
example_title: "مثال ٣"
---
# ArabianGPT Model Overview
## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation
<p style="color: red;">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.1B, and users engage with and apply the model's outputs at their own risk.</p>
> **Important Note:** Currently, we offer a raw pre-trained model. Our team is actively working on releasing instruction-based LLMs that are fine-tuned and augmented with RLHF. The first set of pre-trained models has been made available for community exploration. While we do have models fine-tuned for specific tasks such as summarization and sentiment analysis, they are still in the development phase.
## How Can You Use This Pre-Trained Model?
You are invited to utilize this pre-trained, native Arabic language model as an experimental tool to assess its capabilities, aid in its fine-tuning, and evaluate its performance across a variety of downstream tasks. We encourage you to review our technical report for a comprehensive understanding of the model's performance metrics and the specific downstream tasks it has been tested on. This will provide valuable insights into its applicability and effectiveness in diverse applications.
## Introduction
ArabianGPT-0.1B, developed under the ArabianLLM initiatives, is a specialized GPT-2 model optimized for Arabic language modeling.
It's a product of the collaborative efforts at Prince Sultan University's Robotics and Internet of Things Lab, focusing on enhancing natural language modeling and generation in Arabic.
This model represents a significant stride in LLM research, specifically addressing the linguistic complexities and nuances of the Arabic language.
## Key Features
- **Architecture**: GPT-2
- **Model Size**: 134 million parameters
- **Layers**: 12
- **Model Attention Layers (MAL)**: 12
- **Context Window Size**: 768 tokens
## Training
- **Dataset**: Scraped Arabic newspaper articles
- **Data Size**: 15.5 GB
- **Words**: 237.8 million
- **Tokenizer**: Aranizer 64K
- **Tokens**: Over 1.75 billion
- **Hardware**: 2 NVIDIA A100 GPUs
- **Training Scale**: 7.5 million examples
- **Training Duration**: 3 days
- **Performance**: Final loss of 3.97
## Role in ArabianLLM Initiatives
ArabianGPT-0.1B (Base Model) is crucial for advancing Arabic language processing, addressing challenges unique to Arabic morphology and dialects.
## Usage
Suitable for Arabic text generation tasks. Example usage with the Transformers pipeline:
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="riotu-lab/ArabianGPT-01B", max_new_tokens=512)
text = "أعلنت وزارة الحج في المملكة العربية السعودية"  # one of the widget examples above
print(pipe(text))
```
## Limitations and Ethical Considerations
- The model may have context understanding or text generation limitations in certain scenarios.
- Emphasis on ethical use to prevent misinformation or harmful content propagation.
## Acknowledgments
Special thanks to Prince Sultan University, particularly the Robotics and Internet of Things Lab.
## Contact Information
For inquiries: [[email protected]](mailto:[email protected]).
|
phoen1x/TF-Finetuned-xsum | phoen1x | 2024-02-27T13:28:55Z | 72 | 1 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"summarization",
"en",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-15T22:20:52Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TF-Finetuned-xsum
results: []
datasets:
- xsum
language:
- en
metrics:
- rouge
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TF-Finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the [xsum](https://huggingface.co/datasets/xsum) dataset.
It achieves the following results on the evaluation set:
- Train Loss:
- Validation Loss:
- Epoch:
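A minimal summarization sketch (the input text is a hypothetical stand-in; `framework="tf"` is passed because this checkpoint ships TensorFlow weights):
```python
from transformers import pipeline

# Summarize an article with the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="phoen1x/TF-Finetuned-xsum", framework="tf")
article = "The full text of a news article to be summarized goes here."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```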
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train RougeL | Epoch |
|:----------:|:---------------:|:------------:|:-----:|
|            |                 | 0.1999889    |       |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.12.1 |
auksliusninetwothree/test-model | auksliusninetwothree | 2024-02-27T13:27:58Z | 78 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-26T13:36:00Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: test-model
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: custom_data
split: test
args: custom_data
metrics:
- name: Wer
type: wer
value: 8.333333333333332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-model
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1513
- Wer Ortho: 8.3333
- Wer: 8.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 2
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 1.1613 | 2.71 | 19 | 1.1513 | 8.3333 | 8.3333 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
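A short transcription sketch (the audio path is hypothetical):
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="auksliusninetwothree/test-model")
print(asr("sample.wav")["text"])  # replace with a path to your own audio file
```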
|
misterwavey/flan-t5-base-cc1 | misterwavey | 2024-02-27T13:26:40Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T13:07:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yanwen9969/distilbert-base-uncased-finetuned-cola | Yanwen9969 | 2024-02-27T13:21:30Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T12:36:04Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8446
- Matthews Correlation: 0.5377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.517 | 1.0 | 535 | 0.4553 | 0.4460 |
| 0.3451 | 2.0 | 1070 | 0.4641 | 0.5255 |
| 0.2317 | 3.0 | 1605 | 0.6350 | 0.5186 |
| 0.1726 | 4.0 | 2140 | 0.8171 | 0.5081 |
| 0.1269 | 5.0 | 2675 | 0.8446 | 0.5377 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Arczisan/mechanical-parts | Arczisan | 2024-02-27T13:21:13Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | 2024-02-27T13:20:56Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0a\0 \0g\0i\0r\0l\0 \0 \0<\0l\0o\0r\0a\0:\0R\0e\0e\0l\0_\0m\0e\0c\0h\0a\0n\0i\0c\0a\0l\0_\0p\0a\0r\0t\0s\0_\0v\0_\01\0_\03\0:\01\0>\0,\0 \0r\0e\0e\0l\0m\0e\0c\0h\0,\0 \0f\0i\0g\0h\0t\0i\0n\0g\0 \0,\0 \0g\0l\0o\0w\0i\0n\0g\0 \0e\0y\0e\0s\0,\0 \0s\0h\0o\0r\0t\0 \0h\0a\0i\0r\0,\0t\0o\0r\0n\0 \0t\0i\0g\0h\0t\0 \0s\0u\0p\0e\0r\0s\0u\0i\0t\0,\0 \0i\0n\0 \0a\0 \0d\0e\0s\0t\0r\0o\0y\0e\0d\0 \0c\0i\0t\0y\0,\0 \0s\0m\0o\0k\0e\0 \0a\0n\0d\0 \0f\0i\0r\0e\0,\0 \0g\0l\0o\0w\0i\0n\0g\0 \0p\0o\0w\0e\0r\0 \0a\0u\0r\0a\0,\0 \0d\0y\0n\0a\0m\0i\0c\0 \0p\0o\0s\0e\0,\0 \0d\0y\0n\0a\0m\0i\0c\0 \0v\0i\0e\0w\0"
output:
url: images/00123-2389661969.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Mechanical Parts
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/mechanical-parts/tree/main) them in the Files & versions tab.
|
OwOpeepeepoopoo/easy_america5 | OwOpeepeepoopoo | 2024-02-27T13:18:50Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T12:57:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
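Pending author-provided instructions, a minimal sketch for this Gemma-architecture checkpoint (assumes standard `transformers` support and enough GPU memory):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOpeepeepoopoo/easy_america5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```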
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ryusangwon/5678_Llama-2-7b-hf | ryusangwon | 2024-02-27T13:18:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-27T13:18:20Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: 5678_Llama-2-7b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 5678_Llama-2-7b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
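In code, this corresponds roughly to the following `BitsAndBytesConfig` (a sketch; the original training script is not included in this card):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```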
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
peldrak/segformer-b4-ade-512-512-finetuned-coastTrain-grCoastline | peldrak | 2024-02-27T13:17:44Z | 188 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:peldrak/segformer-b4-ade-512-512-finetuned-coastTrain",
"base_model:finetune:peldrak/segformer-b4-ade-512-512-finetuned-coastTrain",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-27T11:36:31Z | ---
license: other
base_model: peldrak/segformer-b4-ade-512-512-finetuned-coastTrain
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b4-ade-512-512-finetuned-coastTrain-grCoastline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-ade-512-512-finetuned-coastTrain-grCoastline
This model is a fine-tuned version of [peldrak/segformer-b4-ade-512-512-finetuned-coastTrain](https://huggingface.co/peldrak/segformer-b4-ade-512-512-finetuned-coastTrain) on the peldrak/grCoastline_512 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1900
- Mean Iou: 0.8129
- Mean Accuracy: 0.8809
- Overall Accuracy: 0.9540
- Accuracy Water: 0.9875
- Accuracy Whitewater: 0.6312
- Accuracy Sediment: 0.9541
- Accuracy Other Natural Terrain: 0.8566
- Accuracy Vegetation: 0.8860
- Accuracy Development: 0.8526
- Accuracy Unknown: 0.9984
- Iou Water: 0.9631
- Iou Whitewater: 0.5490
- Iou Sediment: 0.8864
- Iou Other Natural Terrain: 0.7326
- Iou Vegetation: 0.8448
- Iou Development: 0.7176
- Iou Unknown: 0.9972
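For reference, a minimal inference sketch for this checkpoint (the input image path is a placeholder; label names come from the model config):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "peldrak/segformer-b4-ade-512-512-finetuned-coastTrain-grCoastline"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("coastline.jpg")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # per-pixel class indices
```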
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|
| 0.3829 | 0.24 | 20 | 0.4153 | 0.5484 | 0.6468 | 0.8693 | 0.9547 | 0.2281 | 0.9398 | 0.0617 | 0.9459 | 0.4008 | 0.9963 | 0.9176 | 0.1120 | 0.7770 | 0.0612 | 0.6193 | 0.3634 | 0.9882 |
| 2.0682 | 0.49 | 40 | 0.2991 | 0.6099 | 0.6956 | 0.8939 | 0.9735 | 0.1187 | 0.9464 | 0.3054 | 0.8869 | 0.6447 | 0.9938 | 0.9316 | 0.1123 | 0.7992 | 0.2941 | 0.6709 | 0.4733 | 0.9879 |
| 0.5418 | 0.73 | 60 | 0.2615 | 0.6607 | 0.7312 | 0.9192 | 0.9684 | 0.0686 | 0.9512 | 0.7257 | 0.8597 | 0.5526 | 0.9920 | 0.9252 | 0.0665 | 0.7986 | 0.6036 | 0.7632 | 0.4800 | 0.9881 |
| 0.4389 | 0.98 | 80 | 0.2421 | 0.6515 | 0.7159 | 0.9201 | 0.9756 | 0.0911 | 0.9662 | 0.6066 | 0.9156 | 0.4593 | 0.9971 | 0.9426 | 0.0898 | 0.7943 | 0.5455 | 0.7796 | 0.4163 | 0.9926 |
| 0.3756 | 1.22 | 100 | 0.2204 | 0.7025 | 0.7747 | 0.9295 | 0.9931 | 0.1656 | 0.9258 | 0.8460 | 0.8142 | 0.6870 | 0.9912 | 0.9124 | 0.1560 | 0.8274 | 0.7147 | 0.7806 | 0.5365 | 0.9896 |
| 0.7675 | 1.46 | 120 | 0.2169 | 0.7061 | 0.7876 | 0.9184 | 0.9774 | 0.4127 | 0.9614 | 0.8133 | 0.7635 | 0.5874 | 0.9976 | 0.9489 | 0.3887 | 0.8088 | 0.5972 | 0.7210 | 0.4855 | 0.9928 |
| 0.5434 | 1.71 | 140 | 0.2232 | 0.7104 | 0.7782 | 0.9308 | 0.9820 | 0.2467 | 0.9620 | 0.6425 | 0.8970 | 0.7228 | 0.9943 | 0.9529 | 0.2410 | 0.8545 | 0.5718 | 0.7752 | 0.5848 | 0.9925 |
| 0.8975 | 1.95 | 160 | 0.2187 | 0.7209 | 0.8231 | 0.9231 | 0.9757 | 0.3658 | 0.8885 | 0.8665 | 0.7545 | 0.9165 | 0.9945 | 0.9473 | 0.3471 | 0.8241 | 0.6442 | 0.7331 | 0.5589 | 0.9917 |
| 0.2799 | 2.2 | 180 | 0.1662 | 0.7404 | 0.8029 | 0.9418 | 0.9706 | 0.3506 | 0.9555 | 0.9022 | 0.8947 | 0.5523 | 0.9946 | 0.9425 | 0.3293 | 0.8567 | 0.7286 | 0.8378 | 0.4949 | 0.9928 |
| 0.2132 | 2.44 | 200 | 0.1616 | 0.7714 | 0.8442 | 0.9443 | 0.9777 | 0.4621 | 0.9536 | 0.7886 | 0.8819 | 0.8492 | 0.9963 | 0.9435 | 0.4076 | 0.8578 | 0.7267 | 0.8215 | 0.6498 | 0.9930 |
| 0.3068 | 2.68 | 220 | 0.2055 | 0.7345 | 0.8090 | 0.9318 | 0.9870 | 0.4136 | 0.9517 | 0.8730 | 0.8080 | 0.6367 | 0.9931 | 0.9370 | 0.3753 | 0.8034 | 0.7027 | 0.7935 | 0.5387 | 0.9909 |
| 0.1822 | 2.93 | 240 | 0.1367 | 0.7984 | 0.8640 | 0.9531 | 0.9886 | 0.5028 | 0.9106 | 0.8899 | 0.8992 | 0.8593 | 0.9977 | 0.9499 | 0.4617 | 0.8667 | 0.7874 | 0.8617 | 0.6675 | 0.9937 |
| 0.1504 | 3.17 | 260 | 0.1548 | 0.7794 | 0.8446 | 0.9471 | 0.9830 | 0.4763 | 0.9544 | 0.8496 | 0.8731 | 0.7774 | 0.9983 | 0.9482 | 0.4416 | 0.8516 | 0.7572 | 0.8427 | 0.6226 | 0.9917 |
| 0.2699 | 3.41 | 280 | 0.1543 | 0.7508 | 0.8024 | 0.9475 | 0.9889 | 0.2791 | 0.9421 | 0.8336 | 0.9211 | 0.6552 | 0.9967 | 0.9484 | 0.2719 | 0.8647 | 0.7216 | 0.8500 | 0.6056 | 0.9938 |
| 0.2272 | 3.66 | 300 | 0.1547 | 0.7618 | 0.8232 | 0.9490 | 0.9868 | 0.2820 | 0.9403 | 0.7567 | 0.9193 | 0.8808 | 0.9963 | 0.9565 | 0.2766 | 0.8735 | 0.7173 | 0.8339 | 0.6810 | 0.9938 |
| 0.0938 | 3.9 | 320 | 0.1776 | 0.7615 | 0.8290 | 0.9415 | 0.9889 | 0.4229 | 0.9468 | 0.7605 | 0.8807 | 0.8081 | 0.9953 | 0.9545 | 0.3961 | 0.8519 | 0.6753 | 0.8080 | 0.6522 | 0.9929 |
| 0.129 | 4.15 | 340 | 0.1708 | 0.7606 | 0.8281 | 0.9404 | 0.9839 | 0.5055 | 0.9591 | 0.8675 | 0.8540 | 0.6292 | 0.9977 | 0.9548 | 0.4638 | 0.8605 | 0.6874 | 0.8105 | 0.5533 | 0.9935 |
| 0.1929 | 4.39 | 360 | 0.1504 | 0.7864 | 0.8456 | 0.9493 | 0.9832 | 0.4961 | 0.9428 | 0.8054 | 0.9216 | 0.7725 | 0.9976 | 0.9551 | 0.4630 | 0.8737 | 0.7149 | 0.8442 | 0.6605 | 0.9936 |
| 0.1933 | 4.63 | 380 | 0.1572 | 0.7887 | 0.8610 | 0.9475 | 0.9875 | 0.4972 | 0.9328 | 0.8885 | 0.8523 | 0.8746 | 0.9939 | 0.9520 | 0.4646 | 0.8696 | 0.7369 | 0.8259 | 0.6796 | 0.9925 |
| 0.0642 | 4.88 | 400 | 0.1759 | 0.7988 | 0.8631 | 0.9504 | 0.9839 | 0.5690 | 0.9449 | 0.7954 | 0.9159 | 0.8352 | 0.9971 | 0.9585 | 0.5125 | 0.8850 | 0.7171 | 0.8326 | 0.6917 | 0.9943 |
| 0.1118 | 5.12 | 420 | 0.1461 | 0.8027 | 0.8728 | 0.9524 | 0.9854 | 0.5667 | 0.9487 | 0.8658 | 0.8783 | 0.8670 | 0.9973 | 0.9592 | 0.4847 | 0.8653 | 0.7586 | 0.8420 | 0.7145 | 0.9944 |
| 0.1145 | 5.37 | 440 | 0.1437 | 0.7884 | 0.8471 | 0.9517 | 0.9806 | 0.4749 | 0.9560 | 0.9182 | 0.8870 | 0.7163 | 0.9969 | 0.9578 | 0.4510 | 0.8813 | 0.7459 | 0.8526 | 0.6354 | 0.9947 |
| 0.2373 | 5.61 | 460 | 0.1429 | 0.8081 | 0.8807 | 0.9539 | 0.9875 | 0.6424 | 0.9413 | 0.8411 | 0.9063 | 0.8488 | 0.9976 | 0.9526 | 0.5048 | 0.8671 | 0.7674 | 0.8562 | 0.7140 | 0.9950 |
| 0.0863 | 5.85 | 480 | 0.1620 | 0.7747 | 0.8317 | 0.9497 | 0.9872 | 0.3768 | 0.9531 | 0.8926 | 0.8780 | 0.7383 | 0.9957 | 0.9520 | 0.3645 | 0.8553 | 0.7583 | 0.8462 | 0.6527 | 0.9942 |
| 0.1391 | 6.1 | 500 | 0.1639 | 0.7719 | 0.8332 | 0.9476 | 0.9856 | 0.3736 | 0.9277 | 0.7736 | 0.9158 | 0.8580 | 0.9980 | 0.9536 | 0.3623 | 0.8647 | 0.7044 | 0.8334 | 0.6903 | 0.9945 |
| 0.0976 | 6.34 | 520 | 0.1893 | 0.7449 | 0.8043 | 0.9405 | 0.9883 | 0.3225 | 0.9467 | 0.8313 | 0.8733 | 0.6724 | 0.9955 | 0.9496 | 0.3151 | 0.8619 | 0.6690 | 0.8097 | 0.6149 | 0.9938 |
| 0.1592 | 6.59 | 540 | 0.1842 | 0.7557 | 0.8210 | 0.9436 | 0.9888 | 0.3033 | 0.9475 | 0.8408 | 0.8482 | 0.8224 | 0.9958 | 0.9522 | 0.2846 | 0.8585 | 0.7011 | 0.8012 | 0.6982 | 0.9940 |
| 0.2569 | 6.83 | 560 | 0.1531 | 0.7984 | 0.8686 | 0.9495 | 0.9863 | 0.5709 | 0.9457 | 0.8419 | 0.8783 | 0.8619 | 0.9955 | 0.9554 | 0.5058 | 0.8757 | 0.7338 | 0.8258 | 0.6986 | 0.9940 |
| 0.1064 | 7.07 | 580 | 0.1944 | 0.7784 | 0.8474 | 0.9420 | 0.9895 | 0.5455 | 0.9459 | 0.7592 | 0.8764 | 0.8178 | 0.9975 | 0.9534 | 0.5047 | 0.8475 | 0.6847 | 0.8063 | 0.6572 | 0.9949 |
| 0.0979 | 7.32 | 600 | 0.1581 | 0.7959 | 0.8574 | 0.9508 | 0.9869 | 0.5223 | 0.9399 | 0.8773 | 0.8874 | 0.7912 | 0.9968 | 0.9522 | 0.4766 | 0.8740 | 0.7516 | 0.8359 | 0.6865 | 0.9945 |
| 0.045 | 7.56 | 620 | 0.1962 | 0.7990 | 0.8655 | 0.9479 | 0.9801 | 0.5896 | 0.9347 | 0.7952 | 0.9054 | 0.8546 | 0.9988 | 0.9485 | 0.5253 | 0.8764 | 0.7167 | 0.8204 | 0.7117 | 0.9938 |
| 0.0495 | 7.8 | 640 | 0.2135 | 0.7824 | 0.8541 | 0.9428 | 0.9834 | 0.5986 | 0.9555 | 0.8005 | 0.8692 | 0.7725 | 0.9987 | 0.9550 | 0.5221 | 0.8447 | 0.7026 | 0.8075 | 0.6500 | 0.9949 |
| 0.0389 | 8.05 | 660 | 0.1860 | 0.7856 | 0.8503 | 0.9469 | 0.9809 | 0.4908 | 0.9358 | 0.7910 | 0.9012 | 0.8538 | 0.9985 | 0.9547 | 0.4522 | 0.8837 | 0.6847 | 0.8158 | 0.7134 | 0.9946 |
| 0.177 | 8.29 | 680 | 0.2002 | 0.7719 | 0.8338 | 0.9478 | 0.9871 | 0.3745 | 0.9462 | 0.7507 | 0.9147 | 0.8683 | 0.9951 | 0.9535 | 0.3584 | 0.8738 | 0.7041 | 0.8278 | 0.6920 | 0.9937 |
| 0.0522 | 8.54 | 700 | 0.1619 | 0.7917 | 0.8564 | 0.9481 | 0.9875 | 0.5765 | 0.9388 | 0.8643 | 0.8908 | 0.7403 | 0.9963 | 0.9568 | 0.5202 | 0.8813 | 0.7152 | 0.8297 | 0.6440 | 0.9945 |
| 0.066 | 8.78 | 720 | 0.1800 | 0.7782 | 0.8539 | 0.9451 | 0.9850 | 0.4766 | 0.9503 | 0.8770 | 0.8304 | 0.8615 | 0.9963 | 0.9597 | 0.4398 | 0.8674 | 0.7291 | 0.7992 | 0.6581 | 0.9945 |
| 0.1114 | 9.02 | 740 | 0.1692 | 0.7867 | 0.8517 | 0.9476 | 0.9880 | 0.5068 | 0.9485 | 0.8157 | 0.8789 | 0.8257 | 0.9982 | 0.9569 | 0.4758 | 0.8787 | 0.7079 | 0.8205 | 0.6723 | 0.9951 |
| 0.1906 | 9.27 | 760 | 0.1724 | 0.7929 | 0.8617 | 0.9490 | 0.9820 | 0.5359 | 0.9464 | 0.8073 | 0.8928 | 0.8697 | 0.9978 | 0.9572 | 0.4821 | 0.8714 | 0.7178 | 0.8284 | 0.6980 | 0.9956 |
| 0.0562 | 9.51 | 780 | 0.1984 | 0.7811 | 0.8494 | 0.9449 | 0.9807 | 0.5865 | 0.9549 | 0.8392 | 0.8888 | 0.6984 | 0.9969 | 0.9569 | 0.5050 | 0.8750 | 0.6786 | 0.8232 | 0.6337 | 0.9952 |
| 0.1104 | 9.76 | 800 | 0.1972 | 0.7978 | 0.8687 | 0.9469 | 0.9855 | 0.5906 | 0.9419 | 0.7758 | 0.8914 | 0.9000 | 0.9955 | 0.9556 | 0.5334 | 0.8733 | 0.6863 | 0.8178 | 0.7237 | 0.9945 |
| 0.0451 | 10.0 | 820 | 0.2123 | 0.7769 | 0.8455 | 0.9415 | 0.9821 | 0.5810 | 0.9500 | 0.8132 | 0.8747 | 0.7191 | 0.9984 | 0.9537 | 0.5241 | 0.8658 | 0.6691 | 0.8074 | 0.6231 | 0.9949 |
| 0.1426 | 10.24 | 840 | 0.2210 | 0.7989 | 0.8745 | 0.9465 | 0.9800 | 0.6316 | 0.9435 | 0.7712 | 0.8897 | 0.9072 | 0.9983 | 0.9562 | 0.5525 | 0.8783 | 0.6823 | 0.8131 | 0.7147 | 0.9951 |
| 0.0683 | 10.49 | 860 | 0.2162 | 0.7964 | 0.8677 | 0.9473 | 0.9802 | 0.5774 | 0.9515 | 0.7715 | 0.8902 | 0.9058 | 0.9974 | 0.9549 | 0.5202 | 0.8777 | 0.6901 | 0.8156 | 0.7210 | 0.9954 |
| 0.0758 | 10.73 | 880 | 0.1898 | 0.8005 | 0.8774 | 0.9468 | 0.9863 | 0.6471 | 0.9326 | 0.8057 | 0.8757 | 0.8960 | 0.9984 | 0.9595 | 0.5466 | 0.8796 | 0.6810 | 0.8067 | 0.7347 | 0.9957 |
| 0.0496 | 10.98 | 900 | 0.1919 | 0.8019 | 0.8738 | 0.9469 | 0.9794 | 0.6404 | 0.9636 | 0.8149 | 0.8670 | 0.8526 | 0.9984 | 0.9598 | 0.5598 | 0.8726 | 0.6949 | 0.8065 | 0.7236 | 0.9959 |
| 0.0329 | 11.22 | 920 | 0.1862 | 0.8004 | 0.8689 | 0.9469 | 0.9891 | 0.5888 | 0.9460 | 0.8303 | 0.8556 | 0.8746 | 0.9977 | 0.9594 | 0.5378 | 0.8816 | 0.6882 | 0.8003 | 0.7395 | 0.9957 |
| 0.0808 | 11.46 | 940 | 0.2000 | 0.8016 | 0.8730 | 0.9485 | 0.9868 | 0.6599 | 0.9461 | 0.7912 | 0.8998 | 0.8292 | 0.9977 | 0.9617 | 0.5578 | 0.8811 | 0.6872 | 0.8208 | 0.7070 | 0.9960 |
| 0.0492 | 11.71 | 960 | 0.2148 | 0.7983 | 0.8672 | 0.9466 | 0.9895 | 0.6154 | 0.9459 | 0.8051 | 0.8765 | 0.8410 | 0.9968 | 0.9589 | 0.5439 | 0.8735 | 0.6870 | 0.8079 | 0.7210 | 0.9959 |
| 0.0629 | 11.95 | 980 | 0.2277 | 0.7941 | 0.8637 | 0.9456 | 0.9842 | 0.5803 | 0.9556 | 0.7909 | 0.8695 | 0.8680 | 0.9973 | 0.9604 | 0.5260 | 0.8744 | 0.6751 | 0.8009 | 0.7260 | 0.9958 |
| 0.1419 | 12.2 | 1000 | 0.2076 | 0.7977 | 0.8623 | 0.9483 | 0.9868 | 0.5485 | 0.9413 | 0.8046 | 0.8865 | 0.8710 | 0.9977 | 0.9584 | 0.5079 | 0.8817 | 0.6928 | 0.8139 | 0.7335 | 0.9957 |
| 0.153 | 12.44 | 1020 | 0.1835 | 0.7986 | 0.8608 | 0.9494 | 0.9833 | 0.5284 | 0.9457 | 0.8415 | 0.8814 | 0.8475 | 0.9979 | 0.9600 | 0.4923 | 0.8840 | 0.6992 | 0.8185 | 0.7404 | 0.9957 |
| 0.0377 | 12.68 | 1040 | 0.2033 | 0.7894 | 0.8567 | 0.9476 | 0.9826 | 0.4861 | 0.9596 | 0.8731 | 0.8436 | 0.8548 | 0.9971 | 0.9603 | 0.4626 | 0.8669 | 0.7198 | 0.8113 | 0.7089 | 0.9957 |
| 0.0474 | 12.93 | 1060 | 0.2220 | 0.7935 | 0.8586 | 0.9464 | 0.9890 | 0.5751 | 0.9476 | 0.7947 | 0.8835 | 0.8229 | 0.9974 | 0.9577 | 0.5129 | 0.8731 | 0.6790 | 0.8098 | 0.7261 | 0.9959 |
| 1.1161 | 13.17 | 1080 | 0.2110 | 0.7992 | 0.8716 | 0.9463 | 0.9821 | 0.6330 | 0.9614 | 0.8127 | 0.8632 | 0.8502 | 0.9983 | 0.9585 | 0.5585 | 0.8827 | 0.6897 | 0.8012 | 0.7076 | 0.9962 |
| 0.099 | 13.41 | 1100 | 0.2123 | 0.7984 | 0.8743 | 0.9472 | 0.9868 | 0.6128 | 0.9379 | 0.8127 | 0.8678 | 0.9040 | 0.9983 | 0.9570 | 0.5382 | 0.8878 | 0.6926 | 0.8075 | 0.7095 | 0.9962 |
| 0.0588 | 13.66 | 1120 | 0.1905 | 0.8032 | 0.8784 | 0.9485 | 0.9814 | 0.6312 | 0.9540 | 0.8039 | 0.8745 | 0.9050 | 0.9984 | 0.9603 | 0.5530 | 0.8902 | 0.6977 | 0.8097 | 0.7157 | 0.9961 |
| 0.0769 | 13.9 | 1140 | 0.1758 | 0.8017 | 0.8647 | 0.9500 | 0.9864 | 0.5469 | 0.9429 | 0.8562 | 0.8735 | 0.8484 | 0.9985 | 0.9568 | 0.5034 | 0.8897 | 0.7119 | 0.8164 | 0.7372 | 0.9962 |
| 0.121 | 14.15 | 1160 | 0.1858 | 0.8027 | 0.8701 | 0.9499 | 0.9816 | 0.5560 | 0.9518 | 0.8267 | 0.8789 | 0.8986 | 0.9969 | 0.9578 | 0.5182 | 0.8862 | 0.7072 | 0.8231 | 0.7310 | 0.9956 |
| 0.0663 | 14.39 | 1180 | 0.2045 | 0.7889 | 0.8640 | 0.9451 | 0.9897 | 0.5519 | 0.9317 | 0.8208 | 0.8533 | 0.9023 | 0.9981 | 0.9563 | 0.4946 | 0.8742 | 0.6857 | 0.8016 | 0.7136 | 0.9960 |
| 0.0254 | 14.63 | 1200 | 0.2105 | 0.8003 | 0.8676 | 0.9474 | 0.9816 | 0.6103 | 0.9605 | 0.8074 | 0.8776 | 0.8377 | 0.9982 | 0.9611 | 0.5442 | 0.8794 | 0.6838 | 0.8091 | 0.7281 | 0.9962 |
| 0.1533 | 14.88 | 1220 | 0.2133 | 0.7973 | 0.8680 | 0.9470 | 0.9870 | 0.6039 | 0.9465 | 0.8942 | 0.8418 | 0.8046 | 0.9985 | 0.9606 | 0.5294 | 0.8828 | 0.6974 | 0.8046 | 0.7100 | 0.9962 |
| 0.0389 | 15.12 | 1240 | 0.1854 | 0.8032 | 0.8722 | 0.9489 | 0.9893 | 0.6311 | 0.9484 | 0.8297 | 0.8745 | 0.8342 | 0.9984 | 0.9600 | 0.5509 | 0.8777 | 0.7049 | 0.8183 | 0.7145 | 0.9962 |
| 0.0361 | 15.37 | 1260 | 0.1864 | 0.7939 | 0.8565 | 0.9493 | 0.9896 | 0.5175 | 0.9430 | 0.8092 | 0.8878 | 0.8494 | 0.9990 | 0.9588 | 0.4678 | 0.8842 | 0.6940 | 0.8197 | 0.7371 | 0.9958 |
| 0.0211 | 15.61 | 1280 | 0.2172 | 0.7962 | 0.8720 | 0.9454 | 0.9853 | 0.5900 | 0.9531 | 0.8542 | 0.8295 | 0.8943 | 0.9973 | 0.9621 | 0.5328 | 0.8799 | 0.6886 | 0.7885 | 0.7253 | 0.9963 |
| 0.1093 | 15.85 | 1300 | 0.1688 | 0.8111 | 0.8728 | 0.9531 | 0.9880 | 0.5895 | 0.9482 | 0.8217 | 0.9014 | 0.8627 | 0.9979 | 0.9620 | 0.5364 | 0.8934 | 0.7190 | 0.8316 | 0.7392 | 0.9965 |
| 0.0733 | 16.1 | 1320 | 0.1827 | 0.8126 | 0.8845 | 0.9515 | 0.9869 | 0.6553 | 0.9503 | 0.8784 | 0.8587 | 0.8627 | 0.9990 | 0.9608 | 0.5654 | 0.8826 | 0.7278 | 0.8258 | 0.7300 | 0.9960 |
| 0.0708 | 16.34 | 1340 | 0.1822 | 0.8101 | 0.8783 | 0.9527 | 0.9896 | 0.6199 | 0.9476 | 0.8128 | 0.8992 | 0.8827 | 0.9967 | 0.9598 | 0.5407 | 0.8858 | 0.7244 | 0.8394 | 0.7250 | 0.9955 |
| 0.0522 | 16.59 | 1360 | 0.1780 | 0.8087 | 0.8748 | 0.9518 | 0.9864 | 0.5917 | 0.9509 | 0.8650 | 0.8725 | 0.8599 | 0.9974 | 0.9615 | 0.5372 | 0.8861 | 0.7247 | 0.8282 | 0.7270 | 0.9959 |
| 0.0453 | 16.83 | 1380 | 0.1880 | 0.8020 | 0.8735 | 0.9486 | 0.9891 | 0.5987 | 0.9476 | 0.8654 | 0.8475 | 0.8680 | 0.9983 | 0.9611 | 0.5376 | 0.8809 | 0.7100 | 0.8114 | 0.7165 | 0.9962 |
| 0.0351 | 17.07 | 1400 | 0.1885 | 0.8045 | 0.8758 | 0.9502 | 0.9880 | 0.5929 | 0.9435 | 0.8644 | 0.8591 | 0.8846 | 0.9982 | 0.9608 | 0.5261 | 0.8841 | 0.7161 | 0.8189 | 0.7295 | 0.9962 |
| 0.0629 | 17.32 | 1420 | 0.1721 | 0.8132 | 0.8780 | 0.9536 | 0.9840 | 0.6104 | 0.9586 | 0.8472 | 0.8888 | 0.8590 | 0.9982 | 0.9627 | 0.5470 | 0.8839 | 0.7381 | 0.8383 | 0.7260 | 0.9962 |
| 0.0547 | 17.56 | 1440 | 0.1993 | 0.8025 | 0.8734 | 0.9478 | 0.9877 | 0.6203 | 0.9555 | 0.8655 | 0.8430 | 0.8431 | 0.9987 | 0.9599 | 0.5544 | 0.8666 | 0.7186 | 0.8101 | 0.7115 | 0.9963 |
| 0.081 | 17.8 | 1460 | 0.2054 | 0.8034 | 0.8702 | 0.9493 | 0.9892 | 0.6097 | 0.9505 | 0.8118 | 0.8823 | 0.8502 | 0.9976 | 0.9603 | 0.5416 | 0.8769 | 0.6996 | 0.8204 | 0.7283 | 0.9964 |
| 0.04 | 18.05 | 1480 | 0.2196 | 0.7915 | 0.8572 | 0.9459 | 0.9893 | 0.5738 | 0.9500 | 0.8183 | 0.8706 | 0.8003 | 0.9979 | 0.9586 | 0.5216 | 0.8679 | 0.6870 | 0.8109 | 0.6976 | 0.9966 |
| 0.1213 | 18.29 | 1500 | 0.2320 | 0.7920 | 0.8620 | 0.9461 | 0.9850 | 0.5770 | 0.9594 | 0.8126 | 0.8628 | 0.8395 | 0.9980 | 0.9609 | 0.5168 | 0.8688 | 0.6891 | 0.8062 | 0.7053 | 0.9966 |
| 0.0496 | 18.54 | 1520 | 0.1928 | 0.8065 | 0.8745 | 0.9503 | 0.9842 | 0.6020 | 0.9449 | 0.8562 | 0.8736 | 0.8619 | 0.9984 | 0.9612 | 0.5367 | 0.8886 | 0.7098 | 0.8177 | 0.7348 | 0.9966 |
| 0.045 | 18.78 | 1540 | 0.2075 | 0.7988 | 0.8787 | 0.9460 | 0.9839 | 0.6246 | 0.9374 | 0.8428 | 0.8457 | 0.9183 | 0.9984 | 0.9607 | 0.5522 | 0.8790 | 0.6941 | 0.7982 | 0.7107 | 0.9968 |
| 0.0317 | 19.02 | 1560 | 0.1938 | 0.8051 | 0.8715 | 0.9505 | 0.9892 | 0.5835 | 0.9426 | 0.8278 | 0.8810 | 0.8782 | 0.9980 | 0.9598 | 0.5248 | 0.8878 | 0.7125 | 0.8174 | 0.7364 | 0.9967 |
| 0.0489 | 19.27 | 1580 | 0.1844 | 0.8074 | 0.8792 | 0.9500 | 0.9847 | 0.6251 | 0.9529 | 0.8445 | 0.8628 | 0.8855 | 0.9991 | 0.9613 | 0.5493 | 0.8817 | 0.7121 | 0.8174 | 0.7331 | 0.9966 |
| 0.091 | 19.51 | 1600 | 0.1976 | 0.7907 | 0.8478 | 0.9495 | 0.9910 | 0.5186 | 0.9449 | 0.8258 | 0.9006 | 0.7553 | 0.9986 | 0.9563 | 0.4818 | 0.8703 | 0.7123 | 0.8358 | 0.6818 | 0.9967 |
| 0.0308 | 19.76 | 1620 | 0.1722 | 0.8076 | 0.8722 | 0.9518 | 0.9861 | 0.5906 | 0.9516 | 0.8461 | 0.8820 | 0.8503 | 0.9987 | 0.9612 | 0.5347 | 0.8842 | 0.7192 | 0.8309 | 0.7259 | 0.9968 |
| 0.231 | 20.0 | 1640 | 0.1774 | 0.8073 | 0.8726 | 0.9523 | 0.9912 | 0.5837 | 0.9329 | 0.8215 | 0.8979 | 0.8822 | 0.9988 | 0.9554 | 0.5183 | 0.8759 | 0.7299 | 0.8404 | 0.7343 | 0.9966 |
| 0.0407 | 20.24 | 1660 | 0.2232 | 0.7988 | 0.8750 | 0.9464 | 0.9844 | 0.6060 | 0.9575 | 0.8624 | 0.8267 | 0.8891 | 0.9989 | 0.9609 | 0.5377 | 0.8691 | 0.7052 | 0.7968 | 0.7252 | 0.9964 |
| 0.0303 | 20.49 | 1680 | 0.2146 | 0.8002 | 0.8709 | 0.9479 | 0.9872 | 0.5865 | 0.9498 | 0.8449 | 0.8528 | 0.8774 | 0.9979 | 0.9612 | 0.5307 | 0.8760 | 0.7040 | 0.8067 | 0.7260 | 0.9965 |
| 0.0398 | 20.73 | 1700 | 0.2119 | 0.7977 | 0.8754 | 0.9465 | 0.9858 | 0.6453 | 0.9460 | 0.8314 | 0.8575 | 0.8631 | 0.9989 | 0.9632 | 0.5460 | 0.8824 | 0.6865 | 0.8007 | 0.7087 | 0.9967 |
| 0.6198 | 20.98 | 1720 | 0.2056 | 0.7992 | 0.8731 | 0.9472 | 0.9852 | 0.6495 | 0.9462 | 0.8761 | 0.8560 | 0.8006 | 0.9982 | 0.9628 | 0.5610 | 0.8854 | 0.6941 | 0.8091 | 0.6852 | 0.9969 |
| 0.0428 | 21.22 | 1740 | 0.1978 | 0.7970 | 0.8670 | 0.9483 | 0.9843 | 0.6512 | 0.9546 | 0.8829 | 0.8734 | 0.7246 | 0.9979 | 0.9629 | 0.5622 | 0.8830 | 0.7036 | 0.8269 | 0.6435 | 0.9967 |
| 0.04 | 21.46 | 1760 | 0.1939 | 0.7945 | 0.8675 | 0.9469 | 0.9860 | 0.6371 | 0.9491 | 0.8376 | 0.8718 | 0.7922 | 0.9988 | 0.9619 | 0.5536 | 0.8799 | 0.6914 | 0.8183 | 0.6592 | 0.9968 |
| 0.0279 | 21.71 | 1780 | 0.2210 | 0.7852 | 0.8516 | 0.9464 | 0.9866 | 0.5663 | 0.9519 | 0.8814 | 0.8654 | 0.7116 | 0.9979 | 0.9611 | 0.5143 | 0.8860 | 0.6954 | 0.8175 | 0.6253 | 0.9966 |
| 0.0619 | 21.95 | 1800 | 0.1971 | 0.7930 | 0.8654 | 0.9484 | 0.9867 | 0.6116 | 0.9498 | 0.8739 | 0.8676 | 0.7698 | 0.9981 | 0.9627 | 0.5353 | 0.8885 | 0.7080 | 0.8258 | 0.6342 | 0.9967 |
| 0.0203 | 22.2 | 1820 | 0.1964 | 0.7926 | 0.8679 | 0.9464 | 0.9832 | 0.6733 | 0.9576 | 0.8433 | 0.8750 | 0.7447 | 0.9984 | 0.9631 | 0.5685 | 0.8830 | 0.6827 | 0.8230 | 0.6308 | 0.9969 |
| 0.0907 | 22.44 | 1840 | 0.2107 | 0.7875 | 0.8587 | 0.9458 | 0.9896 | 0.5721 | 0.9506 | 0.8118 | 0.8643 | 0.8239 | 0.9988 | 0.9596 | 0.5017 | 0.8729 | 0.6816 | 0.8105 | 0.6896 | 0.9969 |
| 0.0378 | 22.68 | 1860 | 0.2036 | 0.8053 | 0.8807 | 0.9492 | 0.9850 | 0.6530 | 0.9489 | 0.8037 | 0.8791 | 0.8962 | 0.9989 | 0.9615 | 0.5578 | 0.8828 | 0.6954 | 0.8183 | 0.7247 | 0.9967 |
| 0.081 | 22.93 | 1880 | 0.2039 | 0.7989 | 0.8683 | 0.9488 | 0.9908 | 0.5842 | 0.9388 | 0.8415 | 0.8693 | 0.8548 | 0.9982 | 0.9593 | 0.5193 | 0.8801 | 0.7000 | 0.8198 | 0.7170 | 0.9969 |
| 0.0237 | 23.17 | 1900 | 0.2002 | 0.7899 | 0.8571 | 0.9471 | 0.9876 | 0.5706 | 0.9510 | 0.8307 | 0.8760 | 0.7854 | 0.9982 | 0.9612 | 0.5203 | 0.8814 | 0.6841 | 0.8224 | 0.6634 | 0.9968 |
| 0.0528 | 23.41 | 1920 | 0.2114 | 0.7850 | 0.8538 | 0.9461 | 0.9886 | 0.5398 | 0.9524 | 0.8423 | 0.8587 | 0.7965 | 0.9982 | 0.9603 | 0.4920 | 0.8728 | 0.6905 | 0.8156 | 0.6667 | 0.9970 |
| 0.0793 | 23.66 | 1940 | 0.1825 | 0.8003 | 0.8694 | 0.9498 | 0.9888 | 0.6138 | 0.9454 | 0.8474 | 0.8786 | 0.8130 | 0.9990 | 0.9604 | 0.5439 | 0.8800 | 0.7102 | 0.8346 | 0.6757 | 0.9969 |
| 0.0288 | 23.9 | 1960 | 0.1854 | 0.8051 | 0.8705 | 0.9520 | 0.9893 | 0.6176 | 0.9509 | 0.8519 | 0.8913 | 0.7944 | 0.9979 | 0.9608 | 0.5530 | 0.8781 | 0.7302 | 0.8473 | 0.6693 | 0.9968 |
| 0.0525 | 24.15 | 1980 | 0.1603 | 0.8141 | 0.8846 | 0.9550 | 0.9864 | 0.6848 | 0.9472 | 0.8581 | 0.9083 | 0.8084 | 0.9990 | 0.9620 | 0.5691 | 0.8876 | 0.7440 | 0.8616 | 0.6776 | 0.9966 |
| 0.026 | 24.39 | 2000 | 0.1684 | 0.8068 | 0.8752 | 0.9532 | 0.9877 | 0.6538 | 0.9479 | 0.8936 | 0.8893 | 0.7554 | 0.9989 | 0.9616 | 0.5547 | 0.8856 | 0.7362 | 0.8538 | 0.6586 | 0.9969 |
| 0.0397 | 24.63 | 2020 | 0.1692 | 0.8121 | 0.8760 | 0.9542 | 0.9870 | 0.5916 | 0.9529 | 0.8621 | 0.8871 | 0.8535 | 0.9979 | 0.9616 | 0.5381 | 0.8834 | 0.7400 | 0.8478 | 0.7173 | 0.9968 |
| 0.4272 | 24.88 | 2040 | 0.1785 | 0.8101 | 0.8749 | 0.9535 | 0.9868 | 0.5751 | 0.9539 | 0.8895 | 0.8697 | 0.8520 | 0.9975 | 0.9633 | 0.5204 | 0.8832 | 0.7380 | 0.8374 | 0.7316 | 0.9966 |
| 0.0399 | 25.12 | 2060 | 0.1765 | 0.8070 | 0.8682 | 0.9532 | 0.9885 | 0.5755 | 0.9559 | 0.8543 | 0.8893 | 0.8154 | 0.9983 | 0.9615 | 0.5252 | 0.8758 | 0.7359 | 0.8473 | 0.7064 | 0.9968 |
| 0.0456 | 25.37 | 2080 | 0.1777 | 0.8060 | 0.8668 | 0.9535 | 0.9900 | 0.5873 | 0.9487 | 0.8418 | 0.9061 | 0.7955 | 0.9983 | 0.9605 | 0.5270 | 0.8821 | 0.7299 | 0.8526 | 0.6928 | 0.9969 |
| 0.0414 | 25.61 | 2100 | 0.1844 | 0.8132 | 0.8847 | 0.9531 | 0.9864 | 0.6789 | 0.9515 | 0.8486 | 0.8916 | 0.8373 | 0.9984 | 0.9623 | 0.5671 | 0.8849 | 0.7266 | 0.8418 | 0.7130 | 0.9970 |
| 0.0925 | 25.85 | 2120 | 0.2120 | 0.8035 | 0.8663 | 0.9521 | 0.9886 | 0.5885 | 0.9500 | 0.8118 | 0.9077 | 0.8193 | 0.9982 | 0.9607 | 0.5215 | 0.8832 | 0.7125 | 0.8402 | 0.7097 | 0.9970 |
| 0.0443 | 26.1 | 2140 | 0.1615 | 0.8151 | 0.8790 | 0.9555 | 0.9882 | 0.5945 | 0.9449 | 0.8779 | 0.8874 | 0.8610 | 0.9992 | 0.9603 | 0.5309 | 0.8871 | 0.7511 | 0.8517 | 0.7281 | 0.9964 |
| 0.0728 | 26.34 | 2160 | 0.1701 | 0.8091 | 0.8771 | 0.9534 | 0.9872 | 0.6244 | 0.9493 | 0.8818 | 0.8816 | 0.8164 | 0.9989 | 0.9624 | 0.5469 | 0.8858 | 0.7362 | 0.8478 | 0.6876 | 0.9968 |
| 0.0484 | 26.59 | 2180 | 0.1720 | 0.8061 | 0.8707 | 0.9530 | 0.9895 | 0.6110 | 0.9487 | 0.8831 | 0.8852 | 0.7787 | 0.9987 | 0.9615 | 0.5429 | 0.8814 | 0.7374 | 0.8496 | 0.6727 | 0.9969 |
| 0.027 | 26.83 | 2200 | 0.1728 | 0.8060 | 0.8754 | 0.9525 | 0.9879 | 0.6263 | 0.9498 | 0.8718 | 0.8823 | 0.8111 | 0.9983 | 0.9620 | 0.5489 | 0.8825 | 0.7351 | 0.8472 | 0.6694 | 0.9970 |
| 0.0465 | 27.07 | 2220 | 0.1763 | 0.8075 | 0.8751 | 0.9534 | 0.9875 | 0.6402 | 0.9496 | 0.8776 | 0.8938 | 0.7791 | 0.9981 | 0.9623 | 0.5514 | 0.8842 | 0.7366 | 0.8533 | 0.6675 | 0.9970 |
| 0.0213 | 27.32 | 2240 | 0.1740 | 0.8085 | 0.8743 | 0.9538 | 0.9869 | 0.6184 | 0.9501 | 0.8787 | 0.8917 | 0.7963 | 0.9983 | 0.9632 | 0.5446 | 0.8852 | 0.7370 | 0.8523 | 0.6799 | 0.9971 |
| 0.022 | 27.56 | 2260 | 0.1923 | 0.7998 | 0.8675 | 0.9508 | 0.9863 | 0.5850 | 0.9506 | 0.8669 | 0.8777 | 0.8084 | 0.9979 | 0.9613 | 0.5253 | 0.8839 | 0.7155 | 0.8387 | 0.6769 | 0.9970 |
| 0.0311 | 27.8 | 2280 | 0.1871 | 0.8005 | 0.8657 | 0.9518 | 0.9877 | 0.5896 | 0.9482 | 0.8699 | 0.8900 | 0.7763 | 0.9980 | 0.9612 | 0.5235 | 0.8837 | 0.7229 | 0.8452 | 0.6701 | 0.9970 |
| 0.0281 | 28.05 | 2300 | 0.1984 | 0.7970 | 0.8665 | 0.9490 | 0.9881 | 0.6233 | 0.9441 | 0.9007 | 0.8659 | 0.7451 | 0.9984 | 0.9614 | 0.5465 | 0.8850 | 0.7081 | 0.8310 | 0.6496 | 0.9971 |
| 0.029 | 28.29 | 2320 | 0.1929 | 0.8018 | 0.8684 | 0.9508 | 0.9890 | 0.6266 | 0.9484 | 0.8783 | 0.8813 | 0.7572 | 0.9981 | 0.9612 | 0.5522 | 0.8820 | 0.7180 | 0.8409 | 0.6616 | 0.9970 |
| 0.0205 | 28.54 | 2340 | 0.1939 | 0.8127 | 0.8877 | 0.9536 | 0.9857 | 0.6927 | 0.9459 | 0.8321 | 0.9031 | 0.8553 | 0.9989 | 0.9628 | 0.5574 | 0.8940 | 0.7233 | 0.8410 | 0.7133 | 0.9969 |
| 0.1185 | 28.78 | 2360 | 0.2147 | 0.7963 | 0.8662 | 0.9476 | 0.9888 | 0.5806 | 0.9517 | 0.8657 | 0.8458 | 0.8323 | 0.9987 | 0.9615 | 0.5186 | 0.8772 | 0.6987 | 0.8088 | 0.7122 | 0.9971 |
| 0.0848 | 29.02 | 2380 | 0.1978 | 0.8047 | 0.8712 | 0.9510 | 0.9884 | 0.5966 | 0.9504 | 0.8398 | 0.8784 | 0.8459 | 0.9988 | 0.9618 | 0.5329 | 0.8799 | 0.7125 | 0.8299 | 0.7184 | 0.9972 |
| 0.028 | 29.27 | 2400 | 0.2065 | 0.8000 | 0.8675 | 0.9497 | 0.9878 | 0.6095 | 0.9561 | 0.8780 | 0.8647 | 0.7781 | 0.9983 | 0.9629 | 0.5428 | 0.8784 | 0.7121 | 0.8296 | 0.6767 | 0.9973 |
| 0.0232 | 29.51 | 2420 | 0.1912 | 0.8063 | 0.8750 | 0.9520 | 0.9887 | 0.6177 | 0.9491 | 0.8746 | 0.8742 | 0.8221 | 0.9983 | 0.9631 | 0.5401 | 0.8824 | 0.7242 | 0.8371 | 0.7000 | 0.9973 |
| 0.0241 | 29.76 | 2440 | 0.1768 | 0.8095 | 0.8797 | 0.9525 | 0.9871 | 0.6426 | 0.9506 | 0.8691 | 0.8781 | 0.8319 | 0.9986 | 0.9637 | 0.5552 | 0.8871 | 0.7228 | 0.8388 | 0.7015 | 0.9971 |
| 0.0249 | 30.0 | 2460 | 0.1885 | 0.8051 | 0.8734 | 0.9518 | 0.9885 | 0.6096 | 0.9517 | 0.8740 | 0.8714 | 0.8203 | 0.9986 | 0.9631 | 0.5348 | 0.8836 | 0.7230 | 0.8353 | 0.6989 | 0.9970 |
| 0.0314 | 30.24 | 2480 | 0.1853 | 0.8046 | 0.8698 | 0.9521 | 0.9882 | 0.6049 | 0.9524 | 0.8782 | 0.8786 | 0.7873 | 0.9989 | 0.9630 | 0.5373 | 0.8846 | 0.7241 | 0.8409 | 0.6853 | 0.9968 |
| 0.045 | 30.49 | 2500 | 0.1810 | 0.8134 | 0.8792 | 0.9542 | 0.9871 | 0.6099 | 0.9502 | 0.8672 | 0.8840 | 0.8573 | 0.9986 | 0.9621 | 0.5421 | 0.8876 | 0.7371 | 0.8430 | 0.7251 | 0.9971 |
| 0.0261 | 30.73 | 2520 | 0.1893 | 0.8172 | 0.8897 | 0.9548 | 0.9847 | 0.6630 | 0.9480 | 0.8516 | 0.8922 | 0.8897 | 0.9988 | 0.9619 | 0.5530 | 0.8890 | 0.7377 | 0.8441 | 0.7375 | 0.9972 |
| 0.0175 | 30.98 | 2540 | 0.1904 | 0.8155 | 0.8830 | 0.9553 | 0.9866 | 0.6192 | 0.9535 | 0.8526 | 0.8915 | 0.8791 | 0.9983 | 0.9635 | 0.5371 | 0.8874 | 0.7404 | 0.8472 | 0.7359 | 0.9972 |
| 0.0326 | 31.22 | 2560 | 0.1888 | 0.8126 | 0.8811 | 0.9535 | 0.9875 | 0.6216 | 0.9501 | 0.8821 | 0.8722 | 0.8559 | 0.9985 | 0.9627 | 0.5421 | 0.8861 | 0.7326 | 0.8392 | 0.7283 | 0.9970 |
| 0.0854 | 31.46 | 2580 | 0.1981 | 0.8043 | 0.8676 | 0.9523 | 0.9893 | 0.5619 | 0.9525 | 0.8688 | 0.8749 | 0.8275 | 0.9983 | 0.9613 | 0.5166 | 0.8798 | 0.7305 | 0.8396 | 0.7054 | 0.9969 |
| 0.0313 | 31.71 | 2600 | 0.2039 | 0.8109 | 0.8805 | 0.9522 | 0.9873 | 0.6476 | 0.9539 | 0.8586 | 0.8781 | 0.8404 | 0.9978 | 0.9621 | 0.5616 | 0.8829 | 0.7235 | 0.8372 | 0.7118 | 0.9968 |
| 0.0228 | 31.95 | 2620 | 0.2029 | 0.8079 | 0.8795 | 0.9515 | 0.9876 | 0.6392 | 0.9510 | 0.8668 | 0.8685 | 0.8450 | 0.9988 | 0.9620 | 0.5487 | 0.8832 | 0.7207 | 0.8320 | 0.7118 | 0.9970 |
| 0.0301 | 32.2 | 2640 | 0.2147 | 0.8037 | 0.8739 | 0.9499 | 0.9872 | 0.6339 | 0.9546 | 0.8759 | 0.8607 | 0.8062 | 0.9988 | 0.9620 | 0.5508 | 0.8801 | 0.7129 | 0.8259 | 0.6974 | 0.9971 |
| 0.0312 | 32.44 | 2660 | 0.2114 | 0.8016 | 0.8718 | 0.9496 | 0.9876 | 0.6201 | 0.9532 | 0.8755 | 0.8584 | 0.8090 | 0.9990 | 0.9616 | 0.5405 | 0.8791 | 0.7107 | 0.8252 | 0.6969 | 0.9969 |
| 0.0427 | 32.68 | 2680 | 0.2085 | 0.8015 | 0.8694 | 0.9506 | 0.9873 | 0.6277 | 0.9543 | 0.8743 | 0.8767 | 0.7668 | 0.9986 | 0.9622 | 0.5421 | 0.8828 | 0.7131 | 0.8356 | 0.6777 | 0.9970 |
| 0.0398 | 32.93 | 2700 | 0.2139 | 0.8062 | 0.8766 | 0.9507 | 0.9850 | 0.6461 | 0.9581 | 0.8557 | 0.8761 | 0.8176 | 0.9976 | 0.9612 | 0.5560 | 0.8817 | 0.7157 | 0.8308 | 0.7017 | 0.9967 |
| 0.0274 | 33.17 | 2720 | 0.2093 | 0.8094 | 0.8806 | 0.9516 | 0.9847 | 0.6481 | 0.9555 | 0.8572 | 0.8764 | 0.8440 | 0.9980 | 0.9615 | 0.5583 | 0.8860 | 0.7187 | 0.8323 | 0.7124 | 0.9969 |
| 0.0309 | 33.41 | 2740 | 0.2170 | 0.8068 | 0.8833 | 0.9505 | 0.9840 | 0.6723 | 0.9536 | 0.8855 | 0.8602 | 0.8290 | 0.9984 | 0.9632 | 0.5588 | 0.8902 | 0.7108 | 0.8261 | 0.7013 | 0.9969 |
| 0.0395 | 33.66 | 2760 | 0.2031 | 0.8060 | 0.8787 | 0.9513 | 0.9879 | 0.6361 | 0.9472 | 0.8725 | 0.8689 | 0.8401 | 0.9986 | 0.9624 | 0.5421 | 0.8871 | 0.7157 | 0.8327 | 0.7048 | 0.9970 |
| 0.0298 | 33.9 | 2780 | 0.1892 | 0.8082 | 0.8804 | 0.9522 | 0.9868 | 0.6612 | 0.9493 | 0.8657 | 0.8823 | 0.8189 | 0.9987 | 0.9630 | 0.5586 | 0.8887 | 0.7184 | 0.8409 | 0.6906 | 0.9970 |
| 0.0313 | 34.15 | 2800 | 0.1960 | 0.8064 | 0.8772 | 0.9522 | 0.9881 | 0.6294 | 0.9442 | 0.8685 | 0.8810 | 0.8310 | 0.9984 | 0.9623 | 0.5435 | 0.8893 | 0.7198 | 0.8407 | 0.6925 | 0.9970 |
| 0.0249 | 34.39 | 2820 | 0.1958 | 0.8086 | 0.8772 | 0.9527 | 0.9879 | 0.6079 | 0.9521 | 0.8570 | 0.8777 | 0.8597 | 0.9980 | 0.9625 | 0.5362 | 0.8852 | 0.7255 | 0.8383 | 0.7154 | 0.9969 |
| 0.0959 | 34.63 | 2840 | 0.2022 | 0.8077 | 0.8757 | 0.9520 | 0.9877 | 0.6105 | 0.9548 | 0.8691 | 0.8697 | 0.8401 | 0.9981 | 0.9627 | 0.5434 | 0.8838 | 0.7228 | 0.8347 | 0.7099 | 0.9969 |
| 0.0195 | 34.88 | 2860 | 0.1878 | 0.8089 | 0.8758 | 0.9526 | 0.9884 | 0.6187 | 0.9483 | 0.8695 | 0.8809 | 0.8262 | 0.9985 | 0.9624 | 0.5476 | 0.8880 | 0.7218 | 0.8411 | 0.7046 | 0.9969 |
| 0.0144 | 35.12 | 2880 | 0.1991 | 0.8099 | 0.8809 | 0.9523 | 0.9851 | 0.6489 | 0.9545 | 0.8723 | 0.8751 | 0.8324 | 0.9984 | 0.9637 | 0.5606 | 0.8891 | 0.7197 | 0.8377 | 0.7017 | 0.9969 |
| 0.0316 | 35.37 | 2900 | 0.2001 | 0.8057 | 0.8747 | 0.9515 | 0.9883 | 0.6212 | 0.9501 | 0.8815 | 0.8704 | 0.8137 | 0.9979 | 0.9625 | 0.5443 | 0.8877 | 0.7179 | 0.8351 | 0.6955 | 0.9968 |
| 0.0363 | 35.61 | 2920 | 0.2015 | 0.8048 | 0.8718 | 0.9516 | 0.9887 | 0.6135 | 0.9523 | 0.8771 | 0.8752 | 0.7983 | 0.9977 | 0.9624 | 0.5432 | 0.8856 | 0.7195 | 0.8383 | 0.6878 | 0.9968 |
| 0.1011 | 35.85 | 2940 | 0.1922 | 0.8089 | 0.8777 | 0.9529 | 0.9882 | 0.6236 | 0.9487 | 0.8627 | 0.8810 | 0.8407 | 0.9989 | 0.9621 | 0.5429 | 0.8874 | 0.7256 | 0.8414 | 0.7057 | 0.9970 |
| 0.0455 | 36.1 | 2960 | 0.2002 | 0.8059 | 0.8733 | 0.9519 | 0.9888 | 0.6067 | 0.9517 | 0.8642 | 0.8746 | 0.8285 | 0.9986 | 0.9621 | 0.5401 | 0.8853 | 0.7211 | 0.8371 | 0.6988 | 0.9970 |
| 0.0289 | 36.34 | 2980 | 0.1995 | 0.8096 | 0.8805 | 0.9521 | 0.9879 | 0.6477 | 0.9501 | 0.8688 | 0.8739 | 0.8365 | 0.9985 | 0.9625 | 0.5577 | 0.8869 | 0.7201 | 0.8362 | 0.7069 | 0.9971 |
| 0.091 | 36.59 | 3000 | 0.1941 | 0.8124 | 0.8867 | 0.9529 | 0.9882 | 0.6654 | 0.9482 | 0.8641 | 0.8745 | 0.8678 | 0.9987 | 0.9627 | 0.5555 | 0.8877 | 0.7251 | 0.8370 | 0.7217 | 0.9972 |
| 0.0635 | 36.83 | 3020 | 0.1858 | 0.8132 | 0.8837 | 0.9535 | 0.9875 | 0.6478 | 0.9525 | 0.8476 | 0.8842 | 0.8673 | 0.9988 | 0.9632 | 0.5549 | 0.8876 | 0.7265 | 0.8408 | 0.7220 | 0.9972 |
| 0.0244 | 37.07 | 3040 | 0.1862 | 0.8109 | 0.8797 | 0.9533 | 0.9875 | 0.6420 | 0.9502 | 0.8755 | 0.8815 | 0.8221 | 0.9987 | 0.9633 | 0.5540 | 0.8906 | 0.7254 | 0.8422 | 0.7034 | 0.9972 |
| 0.0265 | 37.32 | 3060 | 0.1864 | 0.8146 | 0.8844 | 0.9543 | 0.9867 | 0.6543 | 0.9477 | 0.8679 | 0.8891 | 0.8461 | 0.9987 | 0.9633 | 0.5578 | 0.8936 | 0.7306 | 0.8449 | 0.7146 | 0.9972 |
| 0.0344 | 37.56 | 3080 | 0.1838 | 0.8162 | 0.8873 | 0.9547 | 0.9862 | 0.6641 | 0.9524 | 0.8551 | 0.8905 | 0.8644 | 0.9988 | 0.9636 | 0.5604 | 0.8903 | 0.7340 | 0.8471 | 0.7211 | 0.9972 |
| 0.0267 | 37.8 | 3100 | 0.1903 | 0.8137 | 0.8841 | 0.9540 | 0.9870 | 0.6543 | 0.9499 | 0.8745 | 0.8841 | 0.8409 | 0.9983 | 0.9633 | 0.5565 | 0.8921 | 0.7309 | 0.8444 | 0.7119 | 0.9971 |
| 0.3041 | 38.05 | 3120 | 0.1891 | 0.8051 | 0.8701 | 0.9526 | 0.9903 | 0.5815 | 0.9478 | 0.8685 | 0.8792 | 0.8248 | 0.9985 | 0.9604 | 0.5197 | 0.8850 | 0.7270 | 0.8415 | 0.7051 | 0.9969 |
| 0.0272 | 38.29 | 3140 | 0.1971 | 0.8077 | 0.8754 | 0.9522 | 0.9877 | 0.6189 | 0.9552 | 0.8747 | 0.8726 | 0.8205 | 0.9983 | 0.9628 | 0.5447 | 0.8851 | 0.7236 | 0.8373 | 0.7036 | 0.9971 |
| 0.063 | 38.54 | 3160 | 0.1888 | 0.8125 | 0.8786 | 0.9542 | 0.9881 | 0.6109 | 0.9529 | 0.8503 | 0.8879 | 0.8613 | 0.9986 | 0.9625 | 0.5407 | 0.8857 | 0.7339 | 0.8452 | 0.7224 | 0.9971 |
| 0.0527 | 38.78 | 3180 | 0.1899 | 0.8121 | 0.8842 | 0.9531 | 0.9865 | 0.6598 | 0.9521 | 0.8748 | 0.8761 | 0.8415 | 0.9986 | 0.9634 | 0.5575 | 0.8896 | 0.7261 | 0.8392 | 0.7117 | 0.9972 |
| 0.0465 | 39.02 | 3200 | 0.1947 | 0.8108 | 0.8793 | 0.9532 | 0.9881 | 0.6358 | 0.9517 | 0.8722 | 0.8808 | 0.8284 | 0.9979 | 0.9626 | 0.5509 | 0.8879 | 0.7275 | 0.8422 | 0.7075 | 0.9970 |
| 0.0305 | 39.27 | 3220 | 0.1884 | 0.8118 | 0.8806 | 0.9538 | 0.9888 | 0.6265 | 0.9468 | 0.8655 | 0.8838 | 0.8547 | 0.9983 | 0.9620 | 0.5422 | 0.8885 | 0.7313 | 0.8439 | 0.7177 | 0.9971 |
| 0.0167 | 39.51 | 3240 | 0.1935 | 0.8111 | 0.8805 | 0.9533 | 0.9879 | 0.6352 | 0.9505 | 0.8737 | 0.8794 | 0.8383 | 0.9981 | 0.9629 | 0.5473 | 0.8891 | 0.7279 | 0.8414 | 0.7120 | 0.9971 |
| 0.0507 | 39.76 | 3260 | 0.1888 | 0.8109 | 0.8783 | 0.9535 | 0.9883 | 0.6259 | 0.9528 | 0.8630 | 0.8829 | 0.8363 | 0.9986 | 0.9625 | 0.5459 | 0.8854 | 0.7306 | 0.8435 | 0.7110 | 0.9971 |
| 0.0354 | 40.0 | 3280 | 0.1900 | 0.8129 | 0.8809 | 0.9540 | 0.9875 | 0.6312 | 0.9541 | 0.8566 | 0.8860 | 0.8526 | 0.9984 | 0.9631 | 0.5490 | 0.8864 | 0.7326 | 0.8448 | 0.7176 | 0.9972 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
|
anoop3/autotrain-be1zs-exv75 | anoop3 | 2024-02-27T13:06:28Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-27T13:06:25Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: moni female
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
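AutoTrain DreamBooth runs against SDXL usually export LoRA weights; assuming that holds for this repo, a minimal sketch would be:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumption: the repo contains diffusers-format LoRA weights.
pipe.load_lora_weights("anoop3/autotrain-be1zs-exv75")

image = pipe("a portrait photo of moni female").images[0]  # instance prompt
image.save("moni.png")
```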
|
zzttbrdd/gemcy_v1_2 | zzttbrdd | 2024-02-27T13:06:24Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T13:04:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
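In the absence of author instructions, a pipeline-based sketch (assumes a standard text-generation checkpoint):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="zzttbrdd/gemcy_v1_2", device_map="auto")
print(generator("Once upon a time", max_new_tokens=64)[0]["generated_text"])
```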
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ibunescu/Phi-2_GDPR_10_3e | ibunescu | 2024-02-27T13:03:12Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T12:59:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
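The repo is tagged `custom_code`, so loading likely requires `trust_remote_code=True`; a sketch assuming a standard Phi-2-style checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibunescu/Phi-2_GDPR_10_3e"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

prompt = "Summarize the purpose of the GDPR in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0]))
```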
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zzttbrdd/gemcy_v1_1 | zzttbrdd | 2024-02-27T13:01:50Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T12:59:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Neomedallion/dqn-SpaceInvadersNoFrameskip-v4 | Neomedallion | 2024-02-27T12:59:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-05T07:14:06Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 329.00 +/- 157.97
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Neomedallion -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Neomedallion -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Neomedallion
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
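For readers who want to train a comparable agent without the RL Zoo, here is a minimal sketch using Stable-Baselines3 directly. It mirrors the hyperparameter table above; the Zoo's exact preprocessing and evaluation setup may differ.
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# AtariWrapper preprocessing plus 4-frame stacking, matching env_wrapper/frame_stack above
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    buffer_size=100_000,
    learning_rate=1e-4,
    batch_size=32,
    learning_starts=100_000,
    target_update_interval=1000,
    train_freq=4,
    gradient_steps=1,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    verbose=1,
)
# note: with n_timesteps below learning_starts (as in the table above),
# this run mostly fills the replay buffer before gradient updates begin
model.learn(total_timesteps=10_000)
model.save("dqn_space_invaders")
```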
|
FKunneman/distilbert-base-uncased-finetuned-cola | FKunneman | 2024-02-27T12:58:16Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T10:58:00Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8294
- Matthews Correlation: 0.5466
## Model description
More information needed
## Intended uses & limitations
More information needed
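Until the sections above are filled in, a minimal hedged sketch for inference; the repo id is assumed from the card title, and since CoLA is a linguistic-acceptability task, the predicted labels reflect grammaticality.
```python
from transformers import pipeline

# repo id assumed from the card title; adjust if the checkpoint lives elsewhere
clf = pipeline("text-classification", model="FKunneman/distilbert-base-uncased-finetuned-cola")

print(clf("The book was written by John."))  # grammatical input
print(clf("The book was wrote by John."))    # ungrammatical input
```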
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5181 | 1.0 | 535 | 0.4541 | 0.4504 |
| 0.3411 | 2.0 | 1070 | 0.4744 | 0.5094 |
| 0.2321 | 3.0 | 1605 | 0.6309 | 0.5391 |
| 0.1737 | 4.0 | 2140 | 0.7876 | 0.5369 |
| 0.1265 | 5.0 | 2675 | 0.8294 | 0.5466 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ibunescu/Phi-2_GDPR_9_3e_adapter | ibunescu | 2024-02-27T12:57:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T11:09:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mohammadhabp/flan-t5-small-esnli-lora | mohammadhabp | 2024-02-27T12:52:27Z | 4 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-02-25T15:02:02Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- rouge
- f1
base_model: google/flan-t5-base
model-index:
- name: flan-t5-small-esnli-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-esnli-lora
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6258
- Rouge1: 0.6257
- Rouge2: 0.4156
- Rougel: 0.5682
- F1: 0.8850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|
| 1.1793 | 0.25 | 8584 | 1.7215 | 0.6161 | 0.4037 | 0.5577 | 0.8738 |
| 1.1408 | 0.5 | 17168 | 1.6903 | 0.6194 | 0.4096 | 0.5615 | 0.8730 |
| 1.1122 | 0.75 | 25752 | 1.6155 | 0.6267 | 0.4179 | 0.5693 | 0.8832 |
| 1.0929 | 1.0 | 34336 | 1.6258 | 0.6257 | 0.4156 | 0.5682 | 0.8850 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1 |
jayakushwaha/my-favourite-character | jayakushwaha | 2024-02-27T12:50:21Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-27T12:48:17Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Favourite-Character Dreambooth model trained by jayakushwaha following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 0206CS221107
Sample pictures of this concept:
.png)
|
Ayus077BCT014Bhandari/vartat5-using-100K-plus-24 | Ayus077BCT014Bhandari | 2024-02-27T12:48:22Z | 96 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T10:49:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
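Pending an official snippet, a minimal hedged sketch for running this T5-style model; the repo id is assumed from the card title, and the expected input format is undocumented, so the prompt is a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Ayus077BCT014Bhandari/vartat5-using-100K-plus-24"  # assumed from the card title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("placeholder input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```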
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CreazyAI/gemma-Code-Instruct-Finetune-test | CreazyAI | 2024-02-27T12:46:53Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T12:40:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
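In the absence of an official snippet, here is a minimal hedged sketch for text generation; the repo id is assumed from the card title and the prompt is illustrative only.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CreazyAI/gemma-Code-Instruct-Finetune-test"  # assumed from the card title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# illustrative prompt; the model's expected instruction format is not documented here
inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```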
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yashraj8959/dreambooth-project | yashraj8959 | 2024-02-27T12:45:04Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-27T12:40:27Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Dreambooth-Project Dreambooth model trained by yashraj8959 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 0967CS211067
Sample pictures of this concept:
|
tejasreereddy/mistral-test | tejasreereddy | 2024-02-27T12:40:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T10:19:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
airesearch/WangchanLion7B | airesearch | 2024-02-27T12:40:22Z | 38 | 7 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"th",
"en",
"dataset:laion/OIG",
"dataset:databricks/databricks-dolly-15k",
"dataset:thaisum",
"dataset:scb_mt_enth_2020",
"dataset:garage-bAInd/Open-Platypus",
"dataset:iapp_wiki_qa_squad",
"dataset:pythainlp/han-instruct-dataset-v1.0",
"dataset:cognitivecomputations/dolphin",
"dataset:Hello-SimpleAI/HC3",
"dataset:Muennighoff/xP3x",
"dataset:openai/summarize_from_feedback",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-26T05:42:07Z | ---
license: apache-2.0
language:
- th
- en
datasets:
- laion/OIG
- databricks/databricks-dolly-15k
- thaisum
- scb_mt_enth_2020
- garage-bAInd/Open-Platypus
- iapp_wiki_qa_squad
- pythainlp/han-instruct-dataset-v1.0
- cognitivecomputations/dolphin
- Hello-SimpleAI/HC3
- Muennighoff/xP3x
- openai/summarize_from_feedback
---
# Model Card for WangChanLion 7B - The Multilingual Instruction-Following Model
WangChanLion is a multilingual, instruction-finetuned version of SEA-LION 7B, a model pretrained on Southeast Asian languages. It was finetuned on open-source, commercially permissible datasets sampled from LAION OIG chip2 and infill_dbpedia, DataBricks Dolly v2, OpenAI TL;DR, Hello-SimpleAI HC3, dolphin, iapp_wiki_qa_squad, thaisum, xlsum, scb_mt_enth_2020, the han dataset, xP3x, and Open-Platypus, for a total of ~500k samples. Non-commercial datasets were filtered out. The model is released under the Apache 2.0 license. It is trained to perform the subset of instruction-following tasks we found most relevant: reading comprehension, brainstorming, and creative writing. For this model, we focus on Thai and English datasets. We perform Vicuna-style evaluation with human judges. As with Dolly v2, we only use open-source, commercially permissive pretrained models and datasets, so our models are restricted neither by non-commercial clauses, like LLaMA-based models, nor by non-compete clauses, like models that use self-instruct datasets from ChatGPT.
- Developers: PyThaiNLP and VISTEC-depa AI Research Institute of Thailand
- Model type: SEA-LION 7B (MPT architecture)
## Model Sources
- Repository: https://github.com/vistec-AI/WangchanLion
- Demo: [demo_WangchanLion.ipynb - Colaboratory](https://colab.research.google.com/drive/1y_7oOU3ZJI0h4chUrXFL3K4kelW_OI2G?usp=sharing#scrollTo=4yN3Bo6iAH2L)
# Use cases
## Direct Use
Intended to be used as an instruction-following model for reading comprehension, brainstorming, and creative writing.
## Downstream Use
The model can be finetuned for any typical instruction-following use cases.
## Out-of-Scope Use
We do not expect the models to perform well on math problems, reasoning, or factuality.
## Bias, Risks, and Limitations
We noticed limitations similar to those of other finetuned instruction followers, such as weaknesses in math, reasoning, and factuality. Even though the models do not perform at a level that we expect to invite abuse, they do contain undesirable biases and toxicity and should be further optimized for your particular use cases.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Get Started
Use the code [here](https://colab.research.google.com/drive/1y_7oOU3ZJI0h4chUrXFL3K4kelW_OI2G?usp=sharing#scrollTo=4yN3Bo6iAH2L) to get started with the model.
Or use the snippet below:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("airesearch/WangchanLion7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "airesearch/WangchanLion7B",
    trust_remote_code=True,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
    torch_dtype=torch.float16,
    offload_folder="./",
    low_cpu_mem_usage=True,
)

def get_prompt(question: str, context: str = None) -> str:
    if context is not None:
        return """พื้นหลัง:\n\n{context}\n\nคำถาม:{question}\n\nตอบ:""".format(context=context, question=question)
    return """คำถาม:{question}\n\nตอบ:""".format(question=question)
question = "เกิดอะไรขึ้นที่เทียนอันเหมินตอนปี 1989"
full_prompt = get_prompt(question=question)
tokens = tokenizer(full_prompt, return_tensors="pt").to("cuda")
output = model.generate(
input_ids=tokens['input_ids'],
attention_mask=tokens['attention_mask'],
max_new_tokens=256,
early_stopping=True,
top_k=50, top_p=0.95,
do_sample=True,
temperature=0.3,
repetition_penalty = 1.2,
eos_token_id = tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
# Training Details
## Training Data
Finetuning datasets are sourced from [LAION OIG chip2 and infill_dbpedia (Apache-2.0)](https://huggingface.co/datasets/laion/OIG), [DataBricks Dolly v2 (Apache-2.0)](https://github.com/databrickslabs/dolly), [OpenAI TL;DR (MIT)](https://github.com/openai/summarize-from-feedback), [Hello-SimpleAI HC3 (CC-BY SA)](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [dolphin](https://huggingface.co/datasets/ehartford/dolphin), [iapp_wiki_qa_squad](https://huggingface.co/datasets/iapp_wiki_qa_squad) , [thaisum](https://huggingface.co/datasets/thaisum), [xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum), [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020), [han dataset](https://huggingface.co/datasets/pythainlp/han-instruct-dataset-v1.0), [xp3x](https://huggingface.co/datasets/Muennighoff/xP3x) and [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
## Training regime
- QLoRA on 4 A100 GPUs (40GB)
# Evaluation
We performed human and machine evaluations in zero-shot and one-shot settings on XQuAD and iAPP Wiki QA:
## XQuAD
| Model | F1 (Zero-shot) | F1 (One-shot) |
|:--------------:|:--------------:|:-------------:|
| openthaigpt7B | 27.3487 | 34.3104 |
| SeaLLM7B V2 | 16.1104 | 25.7399 |
| Typhoon-7b | 34.46 | **54.03** |
| WangchanLion7B | **45.8763** | 49.9145 |
## iAPP Wiki QA
| Model | F1 (Zero-shot) | F1 (One-shot) |
|:--------------:|:--------------:|:-------------:|
| openthaigpt7B | 40.0614 | 46.6883 |
| SeaLLM7B V2 | 23.6425 | 28.9934 |
| WangchanLion7B | **58.9051** | **62.9776** |
# What WangchanLion offers:
- Transparent pretrained model: The development of SEA-LION is community-driven, with different ASEAN collaborators contributing pretraining datasets. The SEA-LION developers ensure that all datasets are safe and can be utilized without commercial restrictions. This transparency extends to the provision of pretraining code, ensuring anyone can replicate SEA-LION using the provided datasets.
- Transparent finetuning data: In the spirit of open science, we make the finetuning data for WangchanLion accessible to all. This commitment to openness empowers the community by providing complete visibility into the instruction finetuning data that shapes WangchanLion.
- Transparent finetuning code: The finetuning code for WangchanLion is readily available for distribution. By sharing our methods and processes, we invite others to learn from, build upon, and innovate alongside us. |
loubnabnl/outputs | loubnabnl | 2024-02-27T12:35:45Z | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-02-27T12:35:11Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: bigcode/starcoder2-3b
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
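Since this checkpoint is a PEFT (LoRA) adapter on top of starcoder2-3b, here is a minimal hedged sketch for loading it; the adapter id is assumed from this repo's name and the base id comes from the card metadata.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoder2-3b"   # base model named in the card metadata
adapter_id = "loubnabnl/outputs"    # assumed id for this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```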
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 4
- training_steps: 20
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.1
- Datasets 2.16.1
- Tokenizers 0.15.1 |
laishram/bloom-560m-lora-merged-tagger | laishram | 2024-02-27T12:31:26Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-27T12:29:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Metin/gemma-2b-tr-inst | Metin | 2024-02-27T12:30:32Z | 155 | 4 | transformers | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"tr",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-25T19:00:18Z | ---
license: cc-by-nc-4.0
language:
- tr
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
gemma-2b-tr fine-tuned with Turkish Instruction-Response pairs.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Language(s) (NLP):** Turkish, English
- **License:** Creative Commons Attribution Non Commercial 4.0
- **Finetuned from model [optional]:** gemma-2b-tr (https://huggingface.co/Metin/gemma-2b-tr)
## Uses
The model is designed for Turkish instruction following and question answering. Its current response quality is limited, likely due to the small instruction set and model size. It is not recommended for real-world applications at this stage.
## Restrictions
Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms
Please refer to the Gemma use restrictions before you start using the model.
https://ai.google.dev/gemma/terms#3.2-use
## How to Get Started with the Model
```Python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Metin/gemma-2b-tr-inst")
model = AutoModelForCausalLM.from_pretrained("Metin/gemma-2b-tr-inst")
system_prompt = "You are a helpful assistant. Always reply in Turkish."
instruction = "Ankara hangi ülkenin başkentidir?"
prompt = f"{system_prompt} [INST] {instruction} [/INST]"
input_ids = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
As can be seen from the example above, instructions should be framed within the following structure:
SYSTEM_PROMPT [INST] \<Your instruction here\> [/INST]
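Note that `generate` returns the prompt tokens followed by the completion, so the quick-start example above prints the prompt as well. A small sketch, reusing the variable names from that example, for printing only the model's answer:
```python
# decode only the newly generated tokens, dropping the echoed prompt
prompt_len = input_ids["input_ids"].shape[1]
answer = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(answer)
```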
## Training Details
### Training Data
- Dataset: Turkish instructions from the Aya dataset (https://huggingface.co/datasets/CohereForAI/aya_dataset)
- Dataset size: ~550K tokens or ~5K instruction-response pairs.
### Training Procedure
#### Training Hyperparameters
- **Adapter:** QLoRA
- **Epochs:** 1
- **Context length:** 1024
- **LoRA Rank:** 32
- **LoRA Alpha:** 32
- **LoRA Dropout:** 0.05 |
zykrr/tinyllama | zykrr | 2024-02-27T12:30:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T12:29:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** zykrr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
harikanaidu2k4/my-pet-dog | harikanaidu2k4 | 2024-02-27T12:30:04Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-27T12:26:08Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by harikanaidu2k4 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
AlGM93/PPO-PyramidsRND | AlGM93 | 2024-02-27T12:25:04Z | 15 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2024-02-27T12:25:01Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AlGM93/PPO-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
fzzhang/mistral_gsm8k_s_tunes | fzzhang | 2024-02-27T12:17:20Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-27T08:57:38Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral_gsm8k_s_tunes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_gsm8k_s_tunes
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
MichaelKim/train_results | MichaelKim | 2024-02-27T12:17:19Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"base_model:adapter:LDCC/LDCC-SOLAR-10.7B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-02-27T07:25:00Z | ---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: LDCC/LDCC-SOLAR-10.7B
model-index:
- name: train_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_results
This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2 |
AdrienB134/ColBERTv2.0-spanish-mmarcoES | AdrienB134 | 2024-02-27T12:17:06Z | 40 | 2 | transformers | [
"transformers",
"safetensors",
"bert",
"colbert",
"ColBERT",
"es",
"dataset:unicamp-dl/mmarco",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T06:08:45Z | ---
license: mit
datasets:
- unicamp-dl/mmarco
language:
- es
tags:
- colbert
- ColBERT
---
## Training
#### Details
The model is initialized from the [ColBERTv1.0-bert-based-spanish-mmarcoES](https://huggingface.co/AdrienB134/ColBERTv1.0-bert-based-spanish-mmarcoES) checkpoint and trained following the ColBERTv2 training recipe.
It was trained on 2 Tesla T4 GPUs with 16GB of memory each, with 20k warmup steps, a batch size of 64, and the AdamW optimizer at a constant learning rate of 1e-05.
Total training time was around 60 hours.
#### Data
The model is fine-tuned on the Spanish version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset, a multi-lingual machine-translated version of the MS MARCO dataset.
## Evaluation
The model is evaluated on the smaller development set of mMARCO-es, which consists of 6,980 queries for a corpus of 8.8M candidate passages. We report the mean reciprocal rank (MRR) and recall at various cut-offs (R@k).
| model | Vocab. | #Param. | Size | MRR@10 | R@50 | R@1000 |
|:------------------------------------------------------------------------------------------------------------------------|:--------|--------:|------:|---------:|-------:|--------:|
| **ColBERTv2.0-spanish-mmarcoES** | spanish | 110M | 440MB | **32.86** | **76.46** | **81.06** |
| **ColBERTv1.0-bert-based-spanish-mmarcoES** | spanish | 110M | 440MB | 24.70 | 59.23 | 63.86 | |
AdrienB134/ColBERTv1.0-bert-based-spanish-mmarcoES | AdrienB134 | 2024-02-27T12:16:34Z | 38 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"colbert",
"ColBERT",
"es",
"dataset:unicamp-dl/mmarco",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T06:23:03Z | ---
license: mit
datasets:
- unicamp-dl/mmarco
language:
- es
tags:
- colbert
- ColBERT
---
New Spanish ColBERTv2 model available [here](https://huggingface.co/AdrienB134/ColBERTv2.0-spanish-mmarcoES)
## Training
#### Details
The model is initialized from the [bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) checkpoint and fine-tuned on 10M triples via pairwise softmax cross-entropy loss over the computed scores of the positive and negative passages associated with a query. It was trained on a single Tesla A100 GPU with 40GB of memory for 200k steps (10% of them warmup steps), using a batch size of 96 and the AdamW optimizer with a constant learning rate of 3e-06. Total training time was around 12 hours.
#### Data
The model is fine-tuned on the Spanish version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset, a multi-lingual machine-translated version of the MS MARCO dataset.
The triples are sampled from the ~39.8M triples of [triples.train.small.tsv](https://microsoft.github.io/msmarco/Datasets.html#passage-ranking-dataset)
## Evaluation
The model is evaluated on the smaller development set of mMARCO-es, which consists of 6,980 queries for a corpus of 8.8M candidate passages. We report the mean reciprocal rank (MRR) and recall at various cut-offs (R@k).
| model | Vocab. | #Param. | Size | MRR@10 | R@50 | R@1000 |
|:------------------------------------------------------------------------------------------------------------------------|:--------|--------:|------:|---------:|-------:|--------:|
| **ColBERTv1.0-bert-based-spanish-mmarcoES** | spanish | 110M | 440MB | 24.70 | 59.23 | 63.86 | |
SumaGeethika/my-pet-dog | SumaGeethika | 2024-02-27T12:09:16Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-27T12:02:11Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by SumaGeethika following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-10 | alinerodrigues | 2024-02-27T12:03:25Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-27T07:54:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-10
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1766
- Wer: 0.0913
- Cer: 0.0291
## Model description
More information needed
## Intended uses & limitations
More information needed
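No usage example is given; a minimal hedged sketch for Portuguese transcription, assuming the checkpoint is published under the repo id in the card title.
```python
from transformers import pipeline

# repo id assumed from the card title; "audio.wav" is a placeholder file
asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-10",
)
print(asr("audio.wav")["text"])
```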
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 21.8058 | 1.0 | 67 | 8.5869 | 0.9997 | 0.9859 |
| 12.0549 | 2.0 | 134 | 5.9897 | 1.0 | 0.9719 |
| 5.5651 | 3.0 | 201 | 3.0869 | 1.0 | 1.0 |
| 5.5651 | 4.0 | 268 | 2.9406 | 1.0 | 1.0 |
| 2.9913 | 5.0 | 335 | 2.9153 | 1.0 | 1.0 |
| 2.9244 | 6.0 | 402 | 2.8824 | 1.0 | 1.0 |
| 2.9244 | 7.0 | 469 | 2.8784 | 1.0 | 1.0 |
| 2.8927 | 8.0 | 536 | 2.8759 | 1.0 | 1.0 |
| 2.8858 | 9.0 | 603 | 2.8599 | 1.0 | 1.0 |
| 2.8858 | 10.0 | 670 | 2.8328 | 1.0 | 1.0 |
| 2.8512 | 11.0 | 737 | 2.5475 | 1.0 | 0.9997 |
| 2.515 | 12.0 | 804 | 1.5878 | 1.0 | 0.5086 |
| 2.515 | 13.0 | 871 | 0.7009 | 0.8586 | 0.1889 |
| 1.3219 | 14.0 | 938 | 0.4050 | 0.2310 | 0.0598 |
| 0.6694 | 15.0 | 1005 | 0.3246 | 0.1755 | 0.0487 |
| 0.6694 | 16.0 | 1072 | 0.2909 | 0.1562 | 0.0439 |
| 0.4976 | 17.0 | 1139 | 0.2679 | 0.1420 | 0.0415 |
| 0.4105 | 18.0 | 1206 | 0.2519 | 0.1292 | 0.0384 |
| 0.4105 | 19.0 | 1273 | 0.2421 | 0.1194 | 0.0366 |
| 0.3865 | 20.0 | 1340 | 0.2325 | 0.1184 | 0.0354 |
| 0.317 | 21.0 | 1407 | 0.2251 | 0.1096 | 0.0341 |
| 0.317 | 22.0 | 1474 | 0.2195 | 0.1092 | 0.0339 |
| 0.2925 | 23.0 | 1541 | 0.2106 | 0.1018 | 0.0320 |
| 0.2721 | 24.0 | 1608 | 0.2072 | 0.0981 | 0.0317 |
| 0.2721 | 25.0 | 1675 | 0.2106 | 0.0981 | 0.0317 |
| 0.2531 | 26.0 | 1742 | 0.2046 | 0.1042 | 0.0330 |
| 0.2634 | 27.0 | 1809 | 0.2071 | 0.1001 | 0.0321 |
| 0.2634 | 28.0 | 1876 | 0.2028 | 0.1042 | 0.0328 |
| 0.2391 | 29.0 | 1943 | 0.1973 | 0.0957 | 0.0308 |
| 0.2232 | 30.0 | 2010 | 0.2017 | 0.0974 | 0.0313 |
| 0.2232 | 31.0 | 2077 | 0.1987 | 0.0974 | 0.0308 |
| 0.2111 | 32.0 | 2144 | 0.1898 | 0.0920 | 0.0298 |
| 0.2121 | 33.0 | 2211 | 0.2006 | 0.0954 | 0.0314 |
| 0.2121 | 34.0 | 2278 | 0.1934 | 0.0920 | 0.0303 |
| 0.1868 | 35.0 | 2345 | 0.1921 | 0.0944 | 0.0306 |
| 0.1869 | 36.0 | 2412 | 0.1884 | 0.0893 | 0.0292 |
| 0.1869 | 37.0 | 2479 | 0.1903 | 0.0866 | 0.0292 |
| 0.1935 | 38.0 | 2546 | 0.1867 | 0.0900 | 0.0294 |
| 0.1957 | 39.0 | 2613 | 0.1874 | 0.0927 | 0.0298 |
| 0.1957 | 40.0 | 2680 | 0.1845 | 0.0923 | 0.0297 |
| 0.1772 | 41.0 | 2747 | 0.1862 | 0.0927 | 0.0298 |
| 0.1748 | 42.0 | 2814 | 0.1894 | 0.0906 | 0.0297 |
| 0.1748 | 43.0 | 2881 | 0.1816 | 0.0933 | 0.0301 |
| 0.1498 | 44.0 | 2948 | 0.1795 | 0.0920 | 0.0296 |
| 0.1606 | 45.0 | 3015 | 0.1867 | 0.0906 | 0.0299 |
| 0.1606 | 46.0 | 3082 | 0.1866 | 0.0886 | 0.0294 |
| 0.1599 | 47.0 | 3149 | 0.1883 | 0.0920 | 0.0300 |
| 0.1487 | 48.0 | 3216 | 0.1802 | 0.0933 | 0.0298 |
| 0.1487 | 49.0 | 3283 | 0.1808 | 0.0937 | 0.0298 |
| 0.148 | 50.0 | 3350 | 0.1824 | 0.0916 | 0.0292 |
| 0.1457 | 51.0 | 3417 | 0.1843 | 0.0893 | 0.0293 |
| 0.1457 | 52.0 | 3484 | 0.1822 | 0.0923 | 0.0293 |
| 0.1472 | 53.0 | 3551 | 0.1766 | 0.0913 | 0.0291 |
| 0.1413 | 54.0 | 3618 | 0.1811 | 0.0933 | 0.0292 |
| 0.1413 | 55.0 | 3685 | 0.1807 | 0.0906 | 0.0291 |
| 0.1357 | 56.0 | 3752 | 0.1808 | 0.0879 | 0.0284 |
| 0.1382 | 57.0 | 3819 | 0.1810 | 0.0933 | 0.0296 |
| 0.1382 | 58.0 | 3886 | 0.1817 | 0.0910 | 0.0287 |
| 0.1371 | 59.0 | 3953 | 0.1844 | 0.0889 | 0.0286 |
| 0.141 | 60.0 | 4020 | 0.1883 | 0.0883 | 0.0284 |
| 0.141 | 61.0 | 4087 | 0.1864 | 0.0930 | 0.0290 |
| 0.147 | 62.0 | 4154 | 0.1861 | 0.0920 | 0.0289 |
| 0.1316 | 63.0 | 4221 | 0.1863 | 0.0950 | 0.0296 |
| 0.1316 | 64.0 | 4288 | 0.1909 | 0.0950 | 0.0302 |
| 0.1329 | 65.0 | 4355 | 0.1880 | 0.0913 | 0.0291 |
| 0.1326 | 66.0 | 4422 | 0.1851 | 0.0930 | 0.0291 |
| 0.1326 | 67.0 | 4489 | 0.1842 | 0.0937 | 0.0292 |
| 0.1345 | 68.0 | 4556 | 0.1856 | 0.0957 | 0.0297 |
| 0.1371 | 69.0 | 4623 | 0.1840 | 0.0927 | 0.0291 |
| 0.1371 | 70.0 | 4690 | 0.1845 | 0.0923 | 0.0292 |
| 0.1325 | 71.0 | 4757 | 0.1806 | 0.0920 | 0.0288 |
| 0.1264 | 72.0 | 4824 | 0.1810 | 0.0923 | 0.0289 |
| 0.1264 | 73.0 | 4891 | 0.1836 | 0.0944 | 0.0295 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
adarsh12x/mistral_7b_samantha___ | adarsh12x | 2024-02-27T11:52:53Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-26T10:42:07Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
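In the absence of an official snippet, here is a minimal generation sketch based only on the repository tags (Mistral architecture, 8-bit bitsandbytes); the prompt is illustrative, and on recent transformers versions 8-bit loading may need to go through a `BitsAndBytesConfig` instead:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adarsh12x/mistral_7b_samantha___"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", load_in_8bit=True  # mirrors the 8-bit tag
)

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```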
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Syedhur786/mistral-finetuned-samsum | Syedhur786 | 2024-02-27T11:48:59Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-02-27T10:39:03Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
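For reference, the configuration above corresponds roughly to the following `TrainingArguments` sketch (`output_dir` is a placeholder, not taken from the original run):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-finetuned-samsum",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,  # "Native AMP" mixed precision
)
```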
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
casque/jacquard_pantyhose | casque | 2024-02-27T11:37:19Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-27T11:36:22Z | ---
license: creativeml-openrail-m
---
|
neerajnarwal/Mistral-7B-Instruct-Question-Answering | neerajnarwal | 2024-02-27T11:21:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-02-27T10:08:26Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
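Since no snippet is provided, here is a minimal sketch for loading the adapter on top of its declared base model (the prompt format follows the usual Mistral-Instruct convention and is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "neerajnarwal/Mistral-7B-Instruct-Question-Answering"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] What is question answering? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```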
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
Solenya-ai/CLIP-ViT-B-16-DataComp.XL-s13B-b90K | Solenya-ai | 2024-02-27T11:19:15Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2024-02-27T11:17:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
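Since no snippet is provided, here is a minimal zero-shot image classification sketch; it assumes the repository includes the CLIP processor files, and the image path and labels are illustrative:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "Solenya-ai/CLIP-ViT-B-16-DataComp.XL-s13B-b90K"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```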
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
svitse/model_1890_wie | svitse | 2024-02-27T11:17:47Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:GroNLP/bert-base-dutch-cased",
"base_model:finetune:GroNLP/bert-base-dutch-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T10:55:02Z | ---
base_model: GroNLP/bert-base-dutch-cased
tags:
- generated_from_trainer
model-index:
- name: model_1890_wie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_1890_wie
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
Di1/chatd5k | Di1 | 2024-02-27T11:16:29Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-02-27T11:16:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
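The values above map directly onto a `transformers` `BitsAndBytesConfig`, as in this sketch:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```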
### Framework versions
- PEFT 0.4.0
|
Kinjal123/content | Kinjal123 | 2024-02-27T11:16:04Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T11:15:03Z | ---
license: other
base_model: facebook/opt-350m
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: content
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# content
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0419 | 0.2 | 50 | 3.8081 |
| 3.816 | 0.4 | 100 | 3.7579 |
| 3.78 | 0.6 | 150 | 3.7016 |
| 3.753 | 0.8 | 200 | 3.6749 |
| 3.6787 | 1.0 | 250 | 3.6132 |
| 2.987 | 1.2 | 300 | 3.6374 |
| 3.0092 | 1.4 | 350 | 3.6043 |
| 3.0088 | 1.6 | 400 | 3.5676 |
| 2.945 | 1.8 | 450 | 3.5404 |
| 2.9204 | 2.0 | 500 | 3.5082 |
| 2.2216 | 2.2 | 550 | 3.6194 |
| 2.212 | 2.4 | 600 | 3.6117 |
| 2.198 | 2.6 | 650 | 3.6019 |
| 2.1787 | 2.8 | 700 | 3.5973 |
| 2.1878 | 3.0 | 750 | 3.5972 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
oeg/RoBERTa-Repository-Proposal | oeg | 2024-02-27T11:08:20Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"English",
"RoBERTa-base",
"Text Classification",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-20T14:14:05Z | ---
license: cc-by-nc-4.0
language:
- en
tags:
- English
- RoBERTa-base
- Text Classification
pipeline_tag: text-classification
---
# RoBERTa base Fine-Tuned for Proposal Sentence Classification
## Overview
- **Language**: English
- **Model Name**: oeg/RoBERTa_Repository_Proposal
## Description
This model is a fine-tuned RoBERTa-base model that classifies sentences into two classes: proposal and non-proposal. The training data consists of sentences proposing a software or data repository, so the model learns to recognize and classify such sentences accurately.
## How to use
To use this model in Python:
```python
from transformers import RobertaForSequenceClassification, RobertaTokenizer
import torch

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("oeg/RoBERTa-Repository-Proposal")
model.eval()

sentence = "Your input sentence here."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
probabilities = torch.nn.functional.softmax(outputs.logits, dim=1)
# Map the winning index to a class name; the exact mapping is stored in the
# model configuration (see model.config.id2label).
predicted_class = probabilities.argmax(dim=1).item()
print(predicted_class, probabilities)
```
|
aslez123/segmentation-train | aslez123 | 2024-02-27T11:04:50Z | 34 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T10:31:47Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: segmentation-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segmentation-train
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
peldrak/segformer-b3-ade-512-512-finetuned-coastTrain-grCoastline | peldrak | 2024-02-27T11:04:05Z | 188 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:peldrak/segformer-b3-ade-512-512-finetuned-coastTrain",
"base_model:finetune:peldrak/segformer-b3-ade-512-512-finetuned-coastTrain",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-27T10:12:59Z | ---
license: other
base_model: peldrak/segformer-b3-ade-512-512-finetuned-coastTrain
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b3-ade-512-512-finetuned-coastTrain-grCoastline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b3-ade-512-512-finetuned-coastTrain-grCoastline
This model is a fine-tuned version of [peldrak/segformer-b3-ade-512-512-finetuned-coastTrain](https://huggingface.co/peldrak/segformer-b3-ade-512-512-finetuned-coastTrain) on the peldrak/grCoastline_512 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2531
- Mean Iou: 0.7677
- Mean Accuracy: 0.8502
- Overall Accuracy: 0.9340
- Accuracy Water: 0.9810
- Accuracy Whitewater: 0.5654
- Accuracy Sediment: 0.8995
- Accuracy Other Natural Terrain: 0.7891
- Accuracy Vegetation: 0.8969
- Accuracy Development: 0.8221
- Accuracy Unknown: 0.9974
- Iou Water: 0.9535
- Iou Whitewater: 0.4217
- Iou Sediment: 0.8288
- Iou Other Natural Terrain: 0.6339
- Iou Vegetation: 0.8151
- Iou Development: 0.7244
- Iou Unknown: 0.9964
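Since the card lacks a usage snippet, here is a minimal per-pixel inference sketch (the image path is illustrative; SegFormer emits logits at 1/4 of the input resolution, so they are upsampled before the argmax):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "peldrak/segformer-b3-ade-512-512-finetuned-coastTrain-grCoastline"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("coastline.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]
```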
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|
| 1.2485 | 0.24 | 20 | 0.4622 | 0.5668 | 0.6676 | 0.8536 | 0.9523 | 0.2722 | 0.9426 | 0.0 | 0.9170 | 0.5995 | 0.9897 | 0.9004 | 0.2295 | 0.6418 | 0.0 | 0.6785 | 0.5280 | 0.9890 |
| 0.4894 | 0.49 | 40 | 0.3645 | 0.6103 | 0.7022 | 0.8775 | 0.9738 | 0.2414 | 0.9207 | 0.1627 | 0.9260 | 0.6974 | 0.9932 | 0.9318 | 0.2117 | 0.6742 | 0.1589 | 0.7242 | 0.5807 | 0.9904 |
| 0.4931 | 0.73 | 60 | 0.3257 | 0.6374 | 0.7168 | 0.8940 | 0.9790 | 0.1616 | 0.8723 | 0.2837 | 0.9522 | 0.7731 | 0.9955 | 0.9359 | 0.1529 | 0.7241 | 0.2724 | 0.7487 | 0.6360 | 0.9920 |
| 0.1766 | 0.98 | 80 | 0.2769 | 0.6970 | 0.7755 | 0.9123 | 0.9726 | 0.3993 | 0.9157 | 0.6302 | 0.9080 | 0.6049 | 0.9976 | 0.9443 | 0.3230 | 0.7710 | 0.5134 | 0.7866 | 0.5485 | 0.9921 |
| 0.6156 | 1.22 | 100 | 0.2895 | 0.6691 | 0.7372 | 0.9115 | 0.9696 | 0.1418 | 0.9271 | 0.5343 | 0.9293 | 0.6601 | 0.9981 | 0.9435 | 0.1371 | 0.7252 | 0.4855 | 0.7931 | 0.6066 | 0.9929 |
| 0.4116 | 1.46 | 120 | 0.2715 | 0.7026 | 0.7775 | 0.9135 | 0.9521 | 0.2680 | 0.9245 | 0.6565 | 0.8937 | 0.7554 | 0.9922 | 0.9225 | 0.2401 | 0.7913 | 0.5413 | 0.7691 | 0.6637 | 0.9905 |
| 0.261 | 1.71 | 140 | 0.2459 | 0.7193 | 0.8036 | 0.9186 | 0.9770 | 0.3963 | 0.8738 | 0.6637 | 0.8956 | 0.8211 | 0.9973 | 0.9387 | 0.3074 | 0.7904 | 0.5513 | 0.7829 | 0.6708 | 0.9933 |
| 0.2603 | 1.95 | 160 | 0.2538 | 0.7189 | 0.7987 | 0.9159 | 0.9752 | 0.3829 | 0.9032 | 0.7231 | 0.8610 | 0.7478 | 0.9975 | 0.9288 | 0.3082 | 0.7996 | 0.5457 | 0.7713 | 0.6843 | 0.9943 |
| 0.3266 | 2.2 | 180 | 0.2468 | 0.7232 | 0.8227 | 0.9118 | 0.9734 | 0.4478 | 0.9127 | 0.8202 | 0.7856 | 0.8226 | 0.9967 | 0.9337 | 0.3322 | 0.8097 | 0.5550 | 0.7399 | 0.6988 | 0.9936 |
| 0.1754 | 2.44 | 200 | 0.2850 | 0.7269 | 0.8031 | 0.9209 | 0.9764 | 0.3942 | 0.8998 | 0.6822 | 0.8980 | 0.7752 | 0.9957 | 0.9300 | 0.3147 | 0.8116 | 0.5579 | 0.7858 | 0.6944 | 0.9937 |
| 0.1391 | 2.68 | 220 | 0.2787 | 0.7316 | 0.8041 | 0.9264 | 0.9678 | 0.4164 | 0.9050 | 0.6179 | 0.9497 | 0.7737 | 0.9982 | 0.9474 | 0.3268 | 0.8058 | 0.5714 | 0.8035 | 0.6720 | 0.9946 |
| 0.1294 | 2.93 | 240 | 0.2869 | 0.7176 | 0.8170 | 0.9100 | 0.9752 | 0.4576 | 0.9232 | 0.9012 | 0.7608 | 0.7034 | 0.9976 | 0.9455 | 0.3430 | 0.7795 | 0.5795 | 0.7315 | 0.6496 | 0.9948 |
| 0.3478 | 3.17 | 260 | 0.2799 | 0.7348 | 0.8127 | 0.9238 | 0.9792 | 0.4352 | 0.8693 | 0.6888 | 0.9183 | 0.8048 | 0.9935 | 0.9461 | 0.3369 | 0.8102 | 0.5753 | 0.7904 | 0.6923 | 0.9926 |
| 0.1053 | 3.41 | 280 | 0.2963 | 0.7227 | 0.7986 | 0.9234 | 0.9837 | 0.4045 | 0.8366 | 0.5931 | 0.9610 | 0.8150 | 0.9966 | 0.9409 | 0.3144 | 0.7928 | 0.5578 | 0.8011 | 0.6577 | 0.9939 |
| 0.3786 | 3.66 | 300 | 0.2416 | 0.7282 | 0.8228 | 0.9137 | 0.9689 | 0.4874 | 0.9182 | 0.8059 | 0.8081 | 0.7729 | 0.9984 | 0.9455 | 0.3525 | 0.8143 | 0.5534 | 0.7431 | 0.6944 | 0.9937 |
| 0.3046 | 3.9 | 320 | 0.2374 | 0.7406 | 0.8148 | 0.9279 | 0.9820 | 0.3912 | 0.8961 | 0.7267 | 0.8996 | 0.8111 | 0.9966 | 0.9438 | 0.3180 | 0.8193 | 0.5968 | 0.8014 | 0.7111 | 0.9940 |
| 0.1098 | 4.15 | 340 | 0.2479 | 0.7278 | 0.8012 | 0.9258 | 0.9816 | 0.2957 | 0.8885 | 0.7045 | 0.8956 | 0.8445 | 0.9977 | 0.9488 | 0.2557 | 0.8194 | 0.5799 | 0.7923 | 0.7036 | 0.9948 |
| 0.1654 | 4.39 | 360 | 0.2757 | 0.7484 | 0.8304 | 0.9290 | 0.9751 | 0.4714 | 0.8718 | 0.7298 | 0.9119 | 0.8562 | 0.9965 | 0.9508 | 0.3615 | 0.8222 | 0.6089 | 0.8021 | 0.6989 | 0.9944 |
| 0.1079 | 4.63 | 380 | 0.2821 | 0.7171 | 0.8052 | 0.9095 | 0.9789 | 0.3882 | 0.9159 | 0.8147 | 0.7865 | 0.7563 | 0.9959 | 0.9358 | 0.3245 | 0.7857 | 0.5545 | 0.7318 | 0.6930 | 0.9942 |
| 0.1849 | 4.88 | 400 | 0.2637 | 0.7398 | 0.8191 | 0.9250 | 0.9773 | 0.4225 | 0.8793 | 0.6972 | 0.9008 | 0.8600 | 0.9967 | 0.9472 | 0.3367 | 0.8212 | 0.5715 | 0.7886 | 0.7189 | 0.9946 |
| 0.1643 | 5.12 | 420 | 0.3350 | 0.7221 | 0.7861 | 0.9244 | 0.9782 | 0.3296 | 0.9235 | 0.5713 | 0.9536 | 0.7526 | 0.9939 | 0.9458 | 0.2901 | 0.7973 | 0.5429 | 0.8014 | 0.6843 | 0.9927 |
| 0.1595 | 5.37 | 440 | 0.2582 | 0.7366 | 0.8255 | 0.9196 | 0.9769 | 0.4560 | 0.8771 | 0.7617 | 0.8518 | 0.8582 | 0.9965 | 0.9464 | 0.3440 | 0.8231 | 0.5601 | 0.7647 | 0.7237 | 0.9946 |
| 0.3171 | 5.61 | 460 | 0.2579 | 0.7433 | 0.8317 | 0.9261 | 0.9681 | 0.5031 | 0.9291 | 0.7243 | 0.8871 | 0.8137 | 0.9965 | 0.9506 | 0.3517 | 0.8177 | 0.5841 | 0.7930 | 0.7112 | 0.9950 |
| 0.2955 | 5.85 | 480 | 0.2975 | 0.7288 | 0.8072 | 0.9226 | 0.9758 | 0.4648 | 0.9149 | 0.6695 | 0.9146 | 0.7139 | 0.9967 | 0.9498 | 0.3452 | 0.7954 | 0.5683 | 0.7901 | 0.6579 | 0.9948 |
| 0.0857 | 6.1 | 500 | 0.2707 | 0.7307 | 0.8236 | 0.9194 | 0.9792 | 0.5026 | 0.9181 | 0.7281 | 0.8591 | 0.7821 | 0.9957 | 0.9512 | 0.3523 | 0.8017 | 0.5624 | 0.7724 | 0.6806 | 0.9944 |
| 0.109 | 6.34 | 520 | 0.2674 | 0.7488 | 0.8316 | 0.9312 | 0.9738 | 0.5295 | 0.9087 | 0.6734 | 0.9363 | 0.8021 | 0.9977 | 0.9524 | 0.3633 | 0.8258 | 0.5984 | 0.8127 | 0.6945 | 0.9948 |
| 0.0593 | 6.59 | 540 | 0.2806 | 0.7376 | 0.8273 | 0.9204 | 0.9756 | 0.4729 | 0.9084 | 0.8021 | 0.8373 | 0.7988 | 0.9962 | 0.9491 | 0.3463 | 0.8215 | 0.5723 | 0.7676 | 0.7121 | 0.9947 |
| 0.099 | 6.83 | 560 | 0.2874 | 0.7421 | 0.8331 | 0.9237 | 0.9626 | 0.4619 | 0.9334 | 0.7321 | 0.8586 | 0.8852 | 0.9982 | 0.9486 | 0.3640 | 0.8238 | 0.5798 | 0.7802 | 0.7024 | 0.9957 |
| 0.0665 | 7.07 | 580 | 0.2642 | 0.7462 | 0.8177 | 0.9291 | 0.9780 | 0.5176 | 0.8784 | 0.6685 | 0.9544 | 0.7287 | 0.9983 | 0.9493 | 0.3841 | 0.8119 | 0.5980 | 0.8057 | 0.6788 | 0.9953 |
| 0.1285 | 7.32 | 600 | 0.2347 | 0.7495 | 0.8500 | 0.9236 | 0.9799 | 0.5332 | 0.8905 | 0.8549 | 0.8181 | 0.8766 | 0.9968 | 0.9483 | 0.3906 | 0.8197 | 0.6088 | 0.7745 | 0.7102 | 0.9946 |
| 0.1299 | 7.56 | 620 | 0.2630 | 0.7506 | 0.8232 | 0.9311 | 0.9751 | 0.4683 | 0.9292 | 0.6923 | 0.9228 | 0.7773 | 0.9976 | 0.9489 | 0.3754 | 0.8070 | 0.6084 | 0.8172 | 0.7028 | 0.9948 |
| 0.0504 | 7.8 | 640 | 0.2964 | 0.7358 | 0.8238 | 0.9172 | 0.9790 | 0.4852 | 0.9113 | 0.7586 | 0.8335 | 0.8016 | 0.9976 | 0.9481 | 0.3776 | 0.8139 | 0.5465 | 0.7591 | 0.7105 | 0.9953 |
| 0.0795 | 8.05 | 660 | 0.2654 | 0.7443 | 0.8427 | 0.9198 | 0.9764 | 0.5517 | 0.9103 | 0.8286 | 0.8150 | 0.8193 | 0.9978 | 0.9506 | 0.3918 | 0.8173 | 0.5768 | 0.7619 | 0.7163 | 0.9953 |
| 0.0614 | 8.29 | 680 | 0.2904 | 0.7452 | 0.8165 | 0.9303 | 0.9763 | 0.4578 | 0.8990 | 0.6315 | 0.9536 | 0.8003 | 0.9973 | 0.9496 | 0.3518 | 0.8172 | 0.5805 | 0.8118 | 0.7105 | 0.9951 |
| 0.1476 | 8.54 | 700 | 0.2814 | 0.7498 | 0.8324 | 0.9276 | 0.9785 | 0.5138 | 0.9093 | 0.7335 | 0.8910 | 0.8028 | 0.9979 | 0.9489 | 0.3821 | 0.8168 | 0.5909 | 0.7981 | 0.7165 | 0.9956 |
| 0.0669 | 8.78 | 720 | 0.2774 | 0.7483 | 0.8374 | 0.9250 | 0.9741 | 0.5476 | 0.9217 | 0.7477 | 0.8710 | 0.8020 | 0.9980 | 0.9313 | 0.3931 | 0.8220 | 0.5795 | 0.7975 | 0.7196 | 0.9953 |
| 0.142 | 9.02 | 740 | 0.2362 | 0.7624 | 0.8555 | 0.9327 | 0.9784 | 0.6280 | 0.8959 | 0.7341 | 0.9111 | 0.8425 | 0.9983 | 0.9516 | 0.4124 | 0.8211 | 0.6182 | 0.8158 | 0.7224 | 0.9952 |
| 0.1258 | 9.27 | 760 | 0.2666 | 0.7597 | 0.8357 | 0.9329 | 0.9762 | 0.4810 | 0.9161 | 0.7439 | 0.9043 | 0.8311 | 0.9975 | 0.9554 | 0.3793 | 0.8262 | 0.6165 | 0.8111 | 0.7338 | 0.9954 |
| 0.1541 | 9.51 | 780 | 0.2484 | 0.7630 | 0.8423 | 0.9334 | 0.9797 | 0.5260 | 0.9084 | 0.7548 | 0.9043 | 0.8259 | 0.9973 | 0.9543 | 0.3991 | 0.8244 | 0.6228 | 0.8147 | 0.7307 | 0.9953 |
| 0.1689 | 9.76 | 800 | 0.2151 | 0.7710 | 0.8619 | 0.9341 | 0.9747 | 0.6199 | 0.9143 | 0.8381 | 0.8751 | 0.8127 | 0.9985 | 0.9553 | 0.4257 | 0.8333 | 0.6421 | 0.8117 | 0.7341 | 0.9952 |
| 0.0931 | 10.0 | 820 | 0.2422 | 0.7506 | 0.8239 | 0.9325 | 0.9783 | 0.4062 | 0.9162 | 0.7139 | 0.9116 | 0.8443 | 0.9969 | 0.9528 | 0.3324 | 0.8236 | 0.6103 | 0.8137 | 0.7265 | 0.9953 |
| 0.1109 | 10.24 | 840 | 0.2336 | 0.7522 | 0.8271 | 0.9321 | 0.9774 | 0.4327 | 0.9191 | 0.7334 | 0.9027 | 0.8263 | 0.9981 | 0.9530 | 0.3442 | 0.8194 | 0.6136 | 0.8110 | 0.7283 | 0.9960 |
| 0.0561 | 10.49 | 860 | 0.2991 | 0.7572 | 0.8445 | 0.9284 | 0.9743 | 0.5846 | 0.9094 | 0.7471 | 0.8933 | 0.8066 | 0.9966 | 0.9514 | 0.4134 | 0.8200 | 0.5952 | 0.7985 | 0.7261 | 0.9955 |
| 0.0701 | 10.73 | 880 | 0.2647 | 0.7554 | 0.8481 | 0.9286 | 0.9774 | 0.6203 | 0.9098 | 0.7382 | 0.8929 | 0.7991 | 0.9990 | 0.9534 | 0.4082 | 0.8163 | 0.5967 | 0.8006 | 0.7175 | 0.9951 |
| 0.1528 | 10.98 | 900 | 0.2988 | 0.7626 | 0.8573 | 0.9310 | 0.9713 | 0.6322 | 0.9123 | 0.7464 | 0.8980 | 0.8442 | 0.9969 | 0.9548 | 0.4138 | 0.8368 | 0.6036 | 0.8022 | 0.7317 | 0.9956 |
| 0.0514 | 11.22 | 920 | 0.2537 | 0.7528 | 0.8314 | 0.9302 | 0.9749 | 0.4371 | 0.9270 | 0.8595 | 0.8548 | 0.7694 | 0.9970 | 0.9527 | 0.3550 | 0.8248 | 0.6354 | 0.8004 | 0.7063 | 0.9951 |
| 0.0959 | 11.46 | 940 | 0.2897 | 0.7458 | 0.8233 | 0.9279 | 0.9835 | 0.4569 | 0.8963 | 0.7962 | 0.8808 | 0.7523 | 0.9974 | 0.9499 | 0.3569 | 0.8096 | 0.6191 | 0.7974 | 0.6918 | 0.9958 |
| 0.1997 | 11.71 | 960 | 0.3142 | 0.7512 | 0.8251 | 0.9295 | 0.9745 | 0.5071 | 0.9290 | 0.6819 | 0.9251 | 0.7615 | 0.9964 | 0.9537 | 0.3946 | 0.8181 | 0.5902 | 0.8052 | 0.7017 | 0.9953 |
| 0.0724 | 11.95 | 980 | 0.2794 | 0.7525 | 0.8318 | 0.9290 | 0.9822 | 0.4696 | 0.9038 | 0.7310 | 0.8897 | 0.8489 | 0.9973 | 0.9557 | 0.3727 | 0.8276 | 0.5890 | 0.7970 | 0.7299 | 0.9957 |
| 0.0668 | 12.2 | 1000 | 0.2911 | 0.7447 | 0.8175 | 0.9321 | 0.9844 | 0.3514 | 0.9008 | 0.7032 | 0.9105 | 0.8749 | 0.9970 | 0.9500 | 0.2946 | 0.8281 | 0.6032 | 0.8126 | 0.7290 | 0.9953 |
| 0.0574 | 12.44 | 1020 | 0.2565 | 0.7619 | 0.8407 | 0.9330 | 0.9797 | 0.5386 | 0.9173 | 0.7298 | 0.9096 | 0.8120 | 0.9982 | 0.9545 | 0.3984 | 0.8306 | 0.6073 | 0.8131 | 0.7338 | 0.9956 |
| 0.0696 | 12.68 | 1040 | 0.2657 | 0.7595 | 0.8366 | 0.9339 | 0.9808 | 0.4966 | 0.9101 | 0.7057 | 0.9197 | 0.8458 | 0.9979 | 0.9520 | 0.3767 | 0.8308 | 0.6096 | 0.8173 | 0.7345 | 0.9956 |
| 0.3274 | 12.93 | 1060 | 0.2586 | 0.7465 | 0.8222 | 0.9297 | 0.9793 | 0.3965 | 0.9265 | 0.7214 | 0.8877 | 0.8457 | 0.9983 | 0.9539 | 0.3307 | 0.8268 | 0.5935 | 0.8017 | 0.7235 | 0.9953 |
| 0.0817 | 13.17 | 1080 | 0.2783 | 0.7569 | 0.8496 | 0.9265 | 0.9772 | 0.5303 | 0.9224 | 0.8566 | 0.8235 | 0.8395 | 0.9975 | 0.9548 | 0.4030 | 0.8286 | 0.6126 | 0.7783 | 0.7250 | 0.9963 |
| 0.0787 | 13.41 | 1100 | 0.2517 | 0.7489 | 0.8218 | 0.9294 | 0.9802 | 0.4487 | 0.9232 | 0.7094 | 0.9058 | 0.7883 | 0.9968 | 0.9527 | 0.3654 | 0.8222 | 0.5939 | 0.8027 | 0.7105 | 0.9951 |
| 0.1024 | 13.66 | 1120 | 0.2590 | 0.7569 | 0.8290 | 0.9327 | 0.9812 | 0.4873 | 0.9080 | 0.7041 | 0.9259 | 0.7986 | 0.9977 | 0.9543 | 0.3833 | 0.8266 | 0.6094 | 0.8118 | 0.7171 | 0.9960 |
| 0.0888 | 13.9 | 1140 | 0.2647 | 0.7489 | 0.8352 | 0.9228 | 0.9812 | 0.5251 | 0.9169 | 0.7856 | 0.8447 | 0.7955 | 0.9973 | 0.9491 | 0.4024 | 0.8276 | 0.5748 | 0.7738 | 0.7184 | 0.9958 |
| 0.0946 | 14.15 | 1160 | 0.2453 | 0.7571 | 0.8370 | 0.9298 | 0.9823 | 0.5318 | 0.9002 | 0.7619 | 0.8942 | 0.7913 | 0.9973 | 0.9531 | 0.4046 | 0.8240 | 0.6086 | 0.8008 | 0.7125 | 0.9960 |
| 0.0529 | 14.39 | 1180 | 0.2514 | 0.7596 | 0.8460 | 0.9298 | 0.9808 | 0.5804 | 0.9014 | 0.7354 | 0.8979 | 0.8289 | 0.9970 | 0.9559 | 0.4197 | 0.8307 | 0.5978 | 0.7990 | 0.7183 | 0.9958 |
| 0.0495 | 14.63 | 1200 | 0.2323 | 0.7634 | 0.8491 | 0.9324 | 0.9790 | 0.5831 | 0.9267 | 0.7848 | 0.8862 | 0.7861 | 0.9977 | 0.9550 | 0.4175 | 0.8280 | 0.6217 | 0.8103 | 0.7151 | 0.9962 |
| 0.0401 | 14.88 | 1220 | 0.2248 | 0.7677 | 0.8467 | 0.9366 | 0.9796 | 0.5337 | 0.9256 | 0.8125 | 0.8943 | 0.7834 | 0.9981 | 0.9524 | 0.4037 | 0.8250 | 0.6561 | 0.8282 | 0.7126 | 0.9961 |
| 0.053 | 15.12 | 1240 | 0.2280 | 0.7701 | 0.8541 | 0.9362 | 0.9805 | 0.5488 | 0.9155 | 0.8259 | 0.8830 | 0.8280 | 0.9974 | 0.9542 | 0.4107 | 0.8259 | 0.6577 | 0.8224 | 0.7233 | 0.9961 |
| 0.0764 | 15.37 | 1260 | 0.2350 | 0.7741 | 0.8577 | 0.9370 | 0.9777 | 0.5690 | 0.9145 | 0.8514 | 0.8825 | 0.8119 | 0.9972 | 0.9574 | 0.4265 | 0.8296 | 0.6666 | 0.8216 | 0.7212 | 0.9962 |
| 0.0568 | 15.61 | 1280 | 0.2420 | 0.7629 | 0.8407 | 0.9343 | 0.9813 | 0.5093 | 0.9057 | 0.8360 | 0.8871 | 0.7680 | 0.9973 | 0.9560 | 0.3937 | 0.8246 | 0.6536 | 0.8142 | 0.7025 | 0.9960 |
| 0.1199 | 15.85 | 1300 | 0.2545 | 0.7620 | 0.8463 | 0.9321 | 0.9752 | 0.5904 | 0.9180 | 0.7196 | 0.9135 | 0.8090 | 0.9981 | 0.9557 | 0.4209 | 0.8276 | 0.6064 | 0.8098 | 0.7171 | 0.9964 |
| 0.7094 | 16.1 | 1320 | 0.2446 | 0.7584 | 0.8409 | 0.9314 | 0.9790 | 0.5580 | 0.9151 | 0.7301 | 0.9070 | 0.7993 | 0.9979 | 0.9542 | 0.4042 | 0.8218 | 0.6091 | 0.8088 | 0.7145 | 0.9963 |
| 0.0321 | 16.34 | 1340 | 0.2652 | 0.7585 | 0.8329 | 0.9340 | 0.9787 | 0.5076 | 0.9089 | 0.6924 | 0.9365 | 0.8091 | 0.9974 | 0.9538 | 0.3925 | 0.8214 | 0.6173 | 0.8211 | 0.7075 | 0.9962 |
| 0.1328 | 16.59 | 1360 | 0.2322 | 0.7587 | 0.8403 | 0.9327 | 0.9805 | 0.5174 | 0.9092 | 0.7077 | 0.9123 | 0.8570 | 0.9977 | 0.9558 | 0.3966 | 0.8276 | 0.6071 | 0.8146 | 0.7132 | 0.9959 |
| 0.0637 | 16.83 | 1380 | 0.2331 | 0.7615 | 0.8441 | 0.9322 | 0.9831 | 0.5529 | 0.8983 | 0.7412 | 0.9048 | 0.8305 | 0.9976 | 0.9530 | 0.4111 | 0.8243 | 0.6155 | 0.8104 | 0.7201 | 0.9961 |
| 0.3028 | 17.07 | 1400 | 0.2446 | 0.7572 | 0.8367 | 0.9312 | 0.9804 | 0.5135 | 0.9061 | 0.7279 | 0.9044 | 0.8263 | 0.9981 | 0.9548 | 0.3970 | 0.8212 | 0.6048 | 0.8084 | 0.7181 | 0.9962 |
| 0.0479 | 17.32 | 1420 | 0.2556 | 0.7609 | 0.8506 | 0.9295 | 0.9778 | 0.6127 | 0.9095 | 0.7802 | 0.8835 | 0.7929 | 0.9974 | 0.9556 | 0.4318 | 0.8241 | 0.6105 | 0.7988 | 0.7095 | 0.9963 |
| 0.0645 | 17.56 | 1440 | 0.2530 | 0.7587 | 0.8480 | 0.9283 | 0.9769 | 0.5933 | 0.9080 | 0.7729 | 0.8775 | 0.8091 | 0.9985 | 0.9543 | 0.4244 | 0.8268 | 0.5993 | 0.7945 | 0.7155 | 0.9962 |
| 0.0513 | 17.8 | 1460 | 0.2451 | 0.7598 | 0.8467 | 0.9306 | 0.9794 | 0.5549 | 0.9064 | 0.7567 | 0.8863 | 0.8448 | 0.9985 | 0.9547 | 0.4093 | 0.8259 | 0.6076 | 0.8039 | 0.7214 | 0.9961 |
| 0.0387 | 18.05 | 1480 | 0.2374 | 0.7625 | 0.8446 | 0.9344 | 0.9810 | 0.5392 | 0.9007 | 0.7115 | 0.9214 | 0.8609 | 0.9975 | 0.9539 | 0.4025 | 0.8249 | 0.6196 | 0.8218 | 0.7193 | 0.9958 |
| 0.0903 | 18.29 | 1500 | 0.2353 | 0.7662 | 0.8468 | 0.9351 | 0.9820 | 0.5342 | 0.9097 | 0.8100 | 0.8918 | 0.8030 | 0.9971 | 0.9549 | 0.4053 | 0.8261 | 0.6487 | 0.8190 | 0.7130 | 0.9961 |
| 0.0832 | 18.54 | 1520 | 0.2372 | 0.7677 | 0.8428 | 0.9375 | 0.9807 | 0.5172 | 0.9098 | 0.7757 | 0.9184 | 0.8007 | 0.9970 | 0.9563 | 0.3981 | 0.8264 | 0.6574 | 0.8284 | 0.7113 | 0.9961 |
| 0.0601 | 18.78 | 1540 | 0.2473 | 0.7741 | 0.8607 | 0.9366 | 0.9771 | 0.6169 | 0.9135 | 0.8532 | 0.8866 | 0.7798 | 0.9976 | 0.9559 | 0.4365 | 0.8295 | 0.6630 | 0.8221 | 0.7154 | 0.9965 |
| 0.0516 | 19.02 | 1560 | 0.2363 | 0.7731 | 0.8556 | 0.9369 | 0.9792 | 0.5591 | 0.9075 | 0.8329 | 0.8883 | 0.8243 | 0.9981 | 0.9563 | 0.4189 | 0.8305 | 0.6623 | 0.8215 | 0.7256 | 0.9964 |
| 0.0782 | 19.27 | 1580 | 0.2454 | 0.7651 | 0.8426 | 0.9341 | 0.9813 | 0.5266 | 0.9008 | 0.7847 | 0.9005 | 0.8059 | 0.9981 | 0.9543 | 0.4057 | 0.8280 | 0.6356 | 0.8141 | 0.7217 | 0.9961 |
| 0.0221 | 19.51 | 1600 | 0.2540 | 0.7660 | 0.8474 | 0.9333 | 0.9785 | 0.5946 | 0.9085 | 0.7579 | 0.9117 | 0.7828 | 0.9976 | 0.9543 | 0.4320 | 0.8265 | 0.6259 | 0.8134 | 0.7131 | 0.9964 |
| 0.0283 | 19.76 | 1620 | 0.2623 | 0.7662 | 0.8588 | 0.9320 | 0.9750 | 0.6488 | 0.9141 | 0.7503 | 0.8988 | 0.8266 | 0.9981 | 0.9549 | 0.4395 | 0.8271 | 0.6123 | 0.8092 | 0.7240 | 0.9964 |
| 0.1029 | 20.0 | 1640 | 0.2747 | 0.7633 | 0.8522 | 0.9299 | 0.9767 | 0.6031 | 0.9118 | 0.7738 | 0.8809 | 0.8213 | 0.9981 | 0.9539 | 0.4336 | 0.8317 | 0.6053 | 0.7991 | 0.7235 | 0.9962 |
| 0.0731 | 20.24 | 1660 | 0.2650 | 0.7621 | 0.8529 | 0.9294 | 0.9794 | 0.6019 | 0.9106 | 0.7555 | 0.8790 | 0.8453 | 0.9983 | 0.9552 | 0.4310 | 0.8288 | 0.5977 | 0.7971 | 0.7288 | 0.9963 |
| 0.2587 | 20.49 | 1680 | 0.2767 | 0.7602 | 0.8430 | 0.9311 | 0.9794 | 0.5600 | 0.9158 | 0.7217 | 0.9023 | 0.8234 | 0.9985 | 0.9552 | 0.4160 | 0.8239 | 0.5994 | 0.8073 | 0.7237 | 0.9961 |
| 0.1071 | 20.73 | 1700 | 0.2826 | 0.7607 | 0.8443 | 0.9311 | 0.9805 | 0.5820 | 0.9125 | 0.7122 | 0.9091 | 0.8156 | 0.9980 | 0.9548 | 0.4239 | 0.8225 | 0.5976 | 0.8080 | 0.7220 | 0.9963 |
| 0.0323 | 20.98 | 1720 | 0.2620 | 0.7621 | 0.8522 | 0.9293 | 0.9801 | 0.6139 | 0.9065 | 0.7633 | 0.8829 | 0.8215 | 0.9975 | 0.9551 | 0.4342 | 0.8296 | 0.5996 | 0.7968 | 0.7231 | 0.9963 |
| 0.1072 | 21.22 | 1740 | 0.2592 | 0.7614 | 0.8426 | 0.9315 | 0.9824 | 0.5483 | 0.9094 | 0.7402 | 0.8997 | 0.8211 | 0.9974 | 0.9546 | 0.4149 | 0.8257 | 0.6074 | 0.8078 | 0.7230 | 0.9961 |
| 0.0732 | 21.46 | 1760 | 0.2598 | 0.7654 | 0.8497 | 0.9327 | 0.9795 | 0.5819 | 0.9130 | 0.7528 | 0.8994 | 0.8240 | 0.9976 | 0.9557 | 0.4265 | 0.8275 | 0.6163 | 0.8106 | 0.7248 | 0.9963 |
| 0.0603 | 21.71 | 1780 | 0.2581 | 0.7650 | 0.8503 | 0.9320 | 0.9790 | 0.5874 | 0.9130 | 0.7452 | 0.8984 | 0.8315 | 0.9974 | 0.9556 | 0.4282 | 0.8302 | 0.6095 | 0.8070 | 0.7280 | 0.9963 |
| 0.0548 | 21.95 | 1800 | 0.2520 | 0.7649 | 0.8598 | 0.9300 | 0.9747 | 0.6344 | 0.9218 | 0.7885 | 0.8683 | 0.8322 | 0.9988 | 0.9560 | 0.4352 | 0.8339 | 0.6082 | 0.7956 | 0.7291 | 0.9963 |
| 0.0503 | 22.2 | 1820 | 0.2521 | 0.7657 | 0.8528 | 0.9317 | 0.9799 | 0.6030 | 0.9077 | 0.7726 | 0.8890 | 0.8193 | 0.9983 | 0.9558 | 0.4312 | 0.8324 | 0.6150 | 0.8034 | 0.7257 | 0.9964 |
| 0.0356 | 22.44 | 1840 | 0.2491 | 0.7669 | 0.8551 | 0.9328 | 0.9783 | 0.6119 | 0.9086 | 0.7470 | 0.9004 | 0.8412 | 0.9983 | 0.9568 | 0.4338 | 0.8310 | 0.6140 | 0.8091 | 0.7271 | 0.9966 |
| 0.0381 | 22.68 | 1860 | 0.2660 | 0.7644 | 0.8458 | 0.9330 | 0.9805 | 0.5651 | 0.9095 | 0.7344 | 0.9094 | 0.8242 | 0.9973 | 0.9559 | 0.4213 | 0.8289 | 0.6129 | 0.8116 | 0.7240 | 0.9964 |
| 0.0671 | 22.93 | 1880 | 0.2633 | 0.7664 | 0.8517 | 0.9332 | 0.9787 | 0.5970 | 0.9067 | 0.7371 | 0.9083 | 0.8360 | 0.9982 | 0.9559 | 0.4307 | 0.8286 | 0.6140 | 0.8129 | 0.7261 | 0.9966 |
| 0.1123 | 23.17 | 1900 | 0.2462 | 0.7659 | 0.8489 | 0.9337 | 0.9790 | 0.5848 | 0.9088 | 0.7332 | 0.9121 | 0.8253 | 0.9987 | 0.9556 | 0.4257 | 0.8275 | 0.6161 | 0.8154 | 0.7251 | 0.9963 |
| 0.0476 | 23.41 | 1920 | 0.2498 | 0.7665 | 0.8490 | 0.9336 | 0.9789 | 0.5651 | 0.9133 | 0.7813 | 0.8942 | 0.8124 | 0.9978 | 0.9557 | 0.4204 | 0.8297 | 0.6267 | 0.8128 | 0.7238 | 0.9966 |
| 0.0999 | 23.66 | 1940 | 0.2556 | 0.7678 | 0.8539 | 0.9335 | 0.9779 | 0.6072 | 0.9123 | 0.7735 | 0.8972 | 0.8109 | 0.9985 | 0.9561 | 0.4335 | 0.8291 | 0.6245 | 0.8127 | 0.7223 | 0.9966 |
| 0.0639 | 23.9 | 1960 | 0.2467 | 0.7659 | 0.8480 | 0.9337 | 0.9799 | 0.5431 | 0.9126 | 0.8278 | 0.8777 | 0.7960 | 0.9987 | 0.9547 | 0.4119 | 0.8279 | 0.6398 | 0.8137 | 0.7167 | 0.9964 |
| 0.0718 | 24.15 | 1980 | 0.2469 | 0.7667 | 0.8484 | 0.9341 | 0.9804 | 0.5609 | 0.9068 | 0.7785 | 0.8983 | 0.8160 | 0.9982 | 0.9546 | 0.4190 | 0.8271 | 0.6306 | 0.8159 | 0.7232 | 0.9965 |
| 0.0578 | 24.39 | 2000 | 0.2466 | 0.7668 | 0.8490 | 0.9339 | 0.9795 | 0.5737 | 0.9089 | 0.7591 | 0.9041 | 0.8194 | 0.9983 | 0.9551 | 0.4249 | 0.8271 | 0.6246 | 0.8155 | 0.7241 | 0.9965 |
| 0.0664 | 24.63 | 2020 | 0.2530 | 0.7640 | 0.8449 | 0.9328 | 0.9784 | 0.5888 | 0.9101 | 0.7452 | 0.9125 | 0.7808 | 0.9987 | 0.9549 | 0.4305 | 0.8255 | 0.6185 | 0.8132 | 0.7088 | 0.9963 |
| 0.7601 | 24.88 | 2040 | 0.2531 | 0.7677 | 0.8502 | 0.9340 | 0.9810 | 0.5654 | 0.8995 | 0.7891 | 0.8969 | 0.8221 | 0.9974 | 0.9535 | 0.4217 | 0.8288 | 0.6339 | 0.8151 | 0.7244 | 0.9964 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
|
johnhse/pokemon-lora | johnhse | 2024-02-27T11:01:47Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-27T08:33:39Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
base_model: runwayml/stable-diffusion-v1-5
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - johnhse/pokemon-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch (not an official snippet): load the base model and attach
# these LoRA weights; requires a recent diffusers release with LoRA support.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("johnhse/pokemon-lora")

image = pipe("a cute green pokemon with large eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
The model was fine-tuned on the [lambdalabs/pokemon-blip-captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset, which pairs Pokémon images with BLIP-generated captions. |
huseinzol05/conformer-2M-ctc | huseinzol05 | 2024-02-27T10:57:07Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"conformer",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-02-27T10:56:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
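The repository is tagged `custom_code`, so loading presumably requires trusting the remote modeling code; treat this as an unverified sketch:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("huseinzol05/conformer-2M-ctc", trust_remote_code=True)
```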
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
peldrak/segformer-b3-ade-finetuned-512-512-finetuned-grCoastline_512 | peldrak | 2024-02-27T10:52:00Z | 189 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b3-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b3-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-27T09:29:58Z | ---
license: other
base_model: nvidia/segformer-b3-finetuned-ade-512-512
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b3-ade-finetuned-512-512-finetuned-grCoastline_512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b3-ade-finetuned-512-512-finetuned-grCoastline_512
This model is a fine-tuned version of [nvidia/segformer-b3-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b3-finetuned-ade-512-512) on the peldrak/grCoastline_512 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2343
- Mean Iou: 0.7550
- Mean Accuracy: 0.8157
- Overall Accuracy: 0.9435
- Accuracy Water: 0.9775
- Accuracy Whitewater: 0.2651
- Accuracy Sediment: 0.9504
- Accuracy Other Natural Terrain: 0.8170
- Accuracy Vegetation: 0.8956
- Accuracy Development: 0.8082
- Accuracy Unknown: 0.9956
- Iou Water: 0.9507
- Iou Whitewater: 0.2519
- Iou Sediment: 0.8719
- Iou Other Natural Terrain: 0.7298
- Iou Vegetation: 0.7848
- Iou Development: 0.7015
- Iou Unknown: 0.9946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
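For readers who want to set up a comparable run, the settings above map onto 🤗 `TrainingArguments` roughly as sketched below. This is a minimal sketch, not the original training script; the `output_dir` and the 7-class head are assumptions inferred from the per-class metrics above.

```python
from transformers import SegformerForSemanticSegmentation, TrainingArguments

# Sketch only: the 7-class label set is inferred from the metrics above;
# the checkpoint ships a 150-class ADE20K head, hence the size override.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/segformer-b3-finetuned-ade-512-512",
    num_labels=7,
    ignore_mismatched_sizes=True,
)
args = TrainingArguments(
    output_dir="segformer-b3-grCoastline",  # placeholder path
    learning_rate=6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,       # Adam betas/epsilon as listed above (library defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```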
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|
| 1.3152 | 0.24 | 20 | 1.1371 | 0.3592 | 0.4647 | 0.7315 | 0.9430 | 0.0 | 0.2992 | 0.0547 | 0.7991 | 0.1760 | 0.9809 | 0.6706 | 0.0 | 0.2742 | 0.0502 | 0.4041 | 0.1413 | 0.9741 |
| 1.2237 | 0.49 | 40 | 0.7602 | 0.4366 | 0.5444 | 0.8084 | 0.9568 | 0.0 | 0.8684 | 0.0674 | 0.8738 | 0.0645 | 0.9803 | 0.8767 | 0.0 | 0.5931 | 0.0668 | 0.4759 | 0.0643 | 0.9792 |
| 0.8849 | 0.73 | 60 | 0.5319 | 0.5122 | 0.6082 | 0.8591 | 0.9603 | 0.0 | 0.9492 | 0.3503 | 0.9236 | 0.0824 | 0.9915 | 0.9197 | 0.0 | 0.6535 | 0.3355 | 0.6080 | 0.0822 | 0.9867 |
| 0.7138 | 0.98 | 80 | 0.4433 | 0.5645 | 0.6474 | 0.8886 | 0.9732 | 0.0 | 0.9297 | 0.6001 | 0.9305 | 0.1018 | 0.9962 | 0.9208 | 0.0 | 0.7143 | 0.5271 | 0.7005 | 0.1015 | 0.9872 |
| 0.4057 | 1.22 | 100 | 0.3726 | 0.6034 | 0.6801 | 0.9021 | 0.9746 | 0.0 | 0.9397 | 0.6993 | 0.8983 | 0.2557 | 0.9930 | 0.9325 | 0.0 | 0.7428 | 0.6061 | 0.7024 | 0.2508 | 0.9894 |
| 0.5947 | 1.46 | 120 | 0.3407 | 0.6003 | 0.6767 | 0.9043 | 0.9809 | 0.0 | 0.9493 | 0.7424 | 0.8936 | 0.1789 | 0.9916 | 0.9264 | 0.0 | 0.7549 | 0.6223 | 0.7327 | 0.1776 | 0.9881 |
| 0.324 | 1.71 | 140 | 0.3482 | 0.6087 | 0.6906 | 0.8992 | 0.9661 | 0.0 | 0.9787 | 0.8298 | 0.7234 | 0.3403 | 0.9958 | 0.9342 | 0.0 | 0.6806 | 0.6506 | 0.6853 | 0.3202 | 0.9901 |
| 0.5283 | 1.95 | 160 | 0.2851 | 0.6729 | 0.7389 | 0.9265 | 0.9757 | 0.0 | 0.9427 | 0.8225 | 0.8647 | 0.5717 | 0.9954 | 0.9346 | 0.0 | 0.8134 | 0.7038 | 0.7573 | 0.5116 | 0.9895 |
| 0.4225 | 2.2 | 180 | 0.2628 | 0.6674 | 0.7330 | 0.9232 | 0.9642 | 0.0 | 0.9175 | 0.8301 | 0.9002 | 0.5261 | 0.9928 | 0.9183 | 0.0 | 0.8127 | 0.7082 | 0.7438 | 0.4982 | 0.9904 |
| 0.2334 | 2.44 | 200 | 0.2544 | 0.6811 | 0.7448 | 0.9282 | 0.9783 | 0.0 | 0.9488 | 0.7218 | 0.9115 | 0.6576 | 0.9956 | 0.9326 | 0.0 | 0.8252 | 0.6780 | 0.7556 | 0.5856 | 0.9905 |
| 0.3386 | 2.68 | 220 | 0.2385 | 0.6844 | 0.7511 | 0.9279 | 0.9773 | 0.0 | 0.9358 | 0.7831 | 0.8913 | 0.6838 | 0.9864 | 0.9298 | 0.0 | 0.8409 | 0.6831 | 0.7590 | 0.5939 | 0.9845 |
| 0.2847 | 2.93 | 240 | 0.2321 | 0.6817 | 0.7530 | 0.9275 | 0.9694 | 0.0 | 0.9472 | 0.8888 | 0.7828 | 0.6848 | 0.9980 | 0.9334 | 0.0 | 0.8187 | 0.7156 | 0.7399 | 0.5752 | 0.9895 |
| 0.6314 | 3.17 | 260 | 0.2118 | 0.6995 | 0.7630 | 0.9362 | 0.9725 | 0.0 | 0.9494 | 0.8463 | 0.8667 | 0.7094 | 0.9965 | 0.9404 | 0.0 | 0.8324 | 0.7431 | 0.7783 | 0.6104 | 0.9916 |
| 0.2687 | 3.41 | 280 | 0.2140 | 0.6992 | 0.7599 | 0.9359 | 0.9739 | 0.0 | 0.9480 | 0.8333 | 0.8776 | 0.6886 | 0.9976 | 0.9348 | 0.0 | 0.8242 | 0.7413 | 0.7869 | 0.6166 | 0.9908 |
| 0.4188 | 3.66 | 300 | 0.2088 | 0.7045 | 0.7646 | 0.9382 | 0.9743 | 0.0 | 0.9594 | 0.8251 | 0.8940 | 0.7055 | 0.9936 | 0.9368 | 0.0 | 0.8483 | 0.7412 | 0.7886 | 0.6260 | 0.9909 |
| 0.1609 | 3.9 | 320 | 0.1991 | 0.7059 | 0.7740 | 0.9372 | 0.9758 | 0.0 | 0.9463 | 0.8677 | 0.8325 | 0.8036 | 0.9923 | 0.9388 | 0.0 | 0.8620 | 0.7343 | 0.7721 | 0.6440 | 0.9900 |
| 0.1591 | 4.15 | 340 | 0.2563 | 0.6544 | 0.7303 | 0.9157 | 0.9718 | 0.0 | 0.9700 | 0.8359 | 0.7659 | 0.5772 | 0.9915 | 0.9427 | 0.0 | 0.7907 | 0.6447 | 0.6947 | 0.5180 | 0.9901 |
| 0.4456 | 4.39 | 360 | 0.2374 | 0.6846 | 0.7496 | 0.9283 | 0.9789 | 0.0 | 0.9396 | 0.7259 | 0.8976 | 0.7094 | 0.9956 | 0.9436 | 0.0 | 0.8668 | 0.6313 | 0.7349 | 0.6230 | 0.9928 |
| 0.1556 | 4.63 | 380 | 0.2165 | 0.6871 | 0.7568 | 0.9294 | 0.9701 | 0.0 | 0.9681 | 0.8800 | 0.7831 | 0.6998 | 0.9963 | 0.9401 | 0.0 | 0.8319 | 0.7082 | 0.7283 | 0.6095 | 0.9918 |
| 0.3269 | 4.88 | 400 | 0.2230 | 0.6995 | 0.7612 | 0.9340 | 0.9709 | 0.0 | 0.9544 | 0.7420 | 0.9106 | 0.7533 | 0.9969 | 0.9432 | 0.0 | 0.8551 | 0.6806 | 0.7497 | 0.6751 | 0.9929 |
| 0.2006 | 5.12 | 420 | 0.2233 | 0.6884 | 0.7545 | 0.9303 | 0.9727 | 0.0 | 0.9573 | 0.8825 | 0.8049 | 0.6679 | 0.9960 | 0.9405 | 0.0 | 0.8366 | 0.7042 | 0.7340 | 0.6110 | 0.9929 |
| 0.1007 | 5.37 | 440 | 0.2047 | 0.7084 | 0.7769 | 0.9383 | 0.9683 | 0.0 | 0.9583 | 0.8657 | 0.8279 | 0.8214 | 0.9966 | 0.9428 | 0.0 | 0.8463 | 0.7494 | 0.7689 | 0.6592 | 0.9921 |
| 0.1391 | 5.61 | 460 | 0.2110 | 0.7102 | 0.7698 | 0.9386 | 0.9775 | 0.0 | 0.9289 | 0.8196 | 0.9082 | 0.7632 | 0.9914 | 0.9384 | 0.0 | 0.8686 | 0.7287 | 0.7739 | 0.6716 | 0.9899 |
| 0.1582 | 5.85 | 480 | 0.1974 | 0.7070 | 0.7756 | 0.9371 | 0.9688 | 0.0 | 0.9503 | 0.7850 | 0.8715 | 0.8567 | 0.9971 | 0.9420 | 0.0 | 0.8565 | 0.7161 | 0.7626 | 0.6792 | 0.9928 |
| 0.118 | 6.1 | 500 | 0.2184 | 0.6937 | 0.7602 | 0.9312 | 0.9800 | 0.0 | 0.9321 | 0.8670 | 0.8032 | 0.7424 | 0.9969 | 0.9372 | 0.0 | 0.8535 | 0.6900 | 0.7355 | 0.6468 | 0.9929 |
| 0.1206 | 6.34 | 520 | 0.2585 | 0.6906 | 0.7549 | 0.9312 | 0.9766 | 0.0 | 0.9652 | 0.7906 | 0.8669 | 0.6927 | 0.9922 | 0.9364 | 0.0 | 0.8134 | 0.7155 | 0.7562 | 0.6218 | 0.9910 |
| 0.1531 | 6.59 | 540 | 0.2177 | 0.7048 | 0.7699 | 0.9370 | 0.9788 | 0.0 | 0.9361 | 0.8889 | 0.8270 | 0.7644 | 0.9944 | 0.9426 | 0.0 | 0.8647 | 0.7215 | 0.7629 | 0.6492 | 0.9926 |
| 0.141 | 6.83 | 560 | 0.2271 | 0.7014 | 0.7655 | 0.9362 | 0.9625 | 0.0 | 0.9726 | 0.8432 | 0.8655 | 0.7186 | 0.9957 | 0.9403 | 0.0 | 0.8254 | 0.7420 | 0.7758 | 0.6330 | 0.9931 |
| 0.1233 | 7.07 | 580 | 0.2128 | 0.7108 | 0.7723 | 0.9398 | 0.9739 | 0.0 | 0.9529 | 0.8535 | 0.8659 | 0.7646 | 0.9954 | 0.9430 | 0.0 | 0.8579 | 0.7390 | 0.7766 | 0.6661 | 0.9932 |
| 0.0518 | 7.32 | 600 | 0.2460 | 0.6911 | 0.7617 | 0.9304 | 0.9659 | 0.0 | 0.9711 | 0.8530 | 0.7962 | 0.7486 | 0.9969 | 0.9426 | 0.0 | 0.8348 | 0.6982 | 0.7278 | 0.6405 | 0.9941 |
| 0.1164 | 7.56 | 620 | 0.2446 | 0.6992 | 0.7670 | 0.9337 | 0.9747 | 0.0 | 0.9534 | 0.8651 | 0.8042 | 0.7752 | 0.9963 | 0.9432 | 0.0 | 0.8550 | 0.7027 | 0.7379 | 0.6615 | 0.9941 |
| 0.1448 | 7.8 | 640 | 0.2159 | 0.7115 | 0.7726 | 0.9394 | 0.9771 | 0.0 | 0.9510 | 0.8102 | 0.8822 | 0.7928 | 0.9952 | 0.9435 | 0.0 | 0.8560 | 0.7265 | 0.7725 | 0.6886 | 0.9932 |
| 0.4327 | 8.05 | 660 | 0.2056 | 0.7150 | 0.7766 | 0.9418 | 0.9736 | 0.0 | 0.9551 | 0.8341 | 0.8786 | 0.7980 | 0.9966 | 0.9443 | 0.0 | 0.8579 | 0.7494 | 0.7862 | 0.6734 | 0.9935 |
| 0.1197 | 8.29 | 680 | 0.2153 | 0.7068 | 0.7647 | 0.9389 | 0.9767 | 0.0 | 0.9434 | 0.8219 | 0.9024 | 0.7126 | 0.9963 | 0.9417 | 0.0 | 0.8681 | 0.7183 | 0.7794 | 0.6469 | 0.9935 |
| 0.1376 | 8.54 | 700 | 0.2252 | 0.7069 | 0.7755 | 0.9375 | 0.9676 | 0.0 | 0.9668 | 0.8520 | 0.8228 | 0.8217 | 0.9977 | 0.9441 | 0.0 | 0.8448 | 0.7397 | 0.7594 | 0.6667 | 0.9934 |
| 0.3054 | 8.78 | 720 | 0.2291 | 0.7079 | 0.7708 | 0.9382 | 0.9773 | 0.0 | 0.9544 | 0.8451 | 0.8599 | 0.7667 | 0.9924 | 0.9404 | 0.0 | 0.8559 | 0.7306 | 0.7749 | 0.6626 | 0.9912 |
| 0.1884 | 9.02 | 740 | 0.2077 | 0.7194 | 0.7823 | 0.9422 | 0.9733 | 0.0 | 0.9366 | 0.8561 | 0.8671 | 0.8472 | 0.9957 | 0.9449 | 0.0 | 0.8750 | 0.7354 | 0.7776 | 0.7092 | 0.9936 |
| 0.0987 | 9.27 | 760 | 0.2207 | 0.7112 | 0.7780 | 0.9385 | 0.9753 | 0.0 | 0.9640 | 0.8870 | 0.8026 | 0.8232 | 0.9938 | 0.9414 | 0.0 | 0.8349 | 0.7599 | 0.7583 | 0.6912 | 0.9924 |
| 0.1062 | 9.51 | 780 | 0.2697 | 0.6999 | 0.7640 | 0.9335 | 0.9747 | 0.0 | 0.9493 | 0.7366 | 0.8967 | 0.7950 | 0.9957 | 0.9429 | 0.0 | 0.8455 | 0.6892 | 0.7403 | 0.6884 | 0.9932 |
| 0.1437 | 9.76 | 800 | 0.2240 | 0.7069 | 0.7715 | 0.9380 | 0.9692 | 0.0 | 0.9578 | 0.8028 | 0.8838 | 0.7906 | 0.9960 | 0.9451 | 0.0 | 0.8487 | 0.7271 | 0.7701 | 0.6635 | 0.9935 |
| 0.0806 | 10.0 | 820 | 0.2262 | 0.7068 | 0.7692 | 0.9388 | 0.9788 | 0.0 | 0.9596 | 0.8313 | 0.8618 | 0.7573 | 0.9960 | 0.9447 | 0.0 | 0.8471 | 0.7398 | 0.7748 | 0.6479 | 0.9934 |
| 0.1172 | 10.24 | 840 | 0.2594 | 0.6971 | 0.7598 | 0.9336 | 0.9751 | 0.0 | 0.9497 | 0.7498 | 0.8983 | 0.7482 | 0.9975 | 0.9432 | 0.0 | 0.8596 | 0.6762 | 0.7500 | 0.6570 | 0.9934 |
| 0.3204 | 10.49 | 860 | 0.2114 | 0.7167 | 0.7775 | 0.9417 | 0.9735 | 0.0 | 0.9405 | 0.8401 | 0.8791 | 0.8117 | 0.9977 | 0.9440 | 0.0 | 0.8751 | 0.7310 | 0.7812 | 0.6925 | 0.9933 |
| 0.18 | 10.73 | 880 | 0.2234 | 0.7163 | 0.7774 | 0.9415 | 0.9708 | 0.0 | 0.9606 | 0.8125 | 0.8921 | 0.8111 | 0.9950 | 0.9458 | 0.0 | 0.8635 | 0.7310 | 0.7799 | 0.7000 | 0.9937 |
| 0.1388 | 10.98 | 900 | 0.2211 | 0.7231 | 0.7820 | 0.9435 | 0.9745 | 0.0 | 0.9282 | 0.8094 | 0.9115 | 0.8533 | 0.9971 | 0.9442 | 0.0 | 0.8842 | 0.7275 | 0.7822 | 0.7296 | 0.9942 |
| 0.3495 | 11.22 | 920 | 0.2246 | 0.7172 | 0.7803 | 0.9420 | 0.9735 | 0.0 | 0.9535 | 0.8467 | 0.8540 | 0.8361 | 0.9985 | 0.9443 | 0.0 | 0.8700 | 0.7384 | 0.7796 | 0.6941 | 0.9940 |
| 0.1129 | 11.46 | 940 | 0.2116 | 0.7187 | 0.7770 | 0.9431 | 0.9764 | 0.0 | 0.9480 | 0.8252 | 0.9009 | 0.7927 | 0.9954 | 0.9460 | 0.0 | 0.8772 | 0.7303 | 0.7890 | 0.6944 | 0.9941 |
| 0.1725 | 11.71 | 960 | 0.2028 | 0.7199 | 0.7820 | 0.9432 | 0.9799 | 0.0 | 0.9457 | 0.8285 | 0.8723 | 0.8515 | 0.9962 | 0.9442 | 0.0 | 0.8773 | 0.7348 | 0.7872 | 0.7019 | 0.9941 |
| 0.1882 | 11.95 | 980 | 0.2180 | 0.7168 | 0.7759 | 0.9422 | 0.9743 | 0.0 | 0.9556 | 0.8052 | 0.8990 | 0.7998 | 0.9974 | 0.9460 | 0.0 | 0.8570 | 0.7326 | 0.7887 | 0.6984 | 0.9945 |
| 0.1523 | 12.2 | 1000 | 0.2185 | 0.7172 | 0.7792 | 0.9416 | 0.9750 | 0.0000 | 0.9527 | 0.8253 | 0.8779 | 0.8287 | 0.9947 | 0.9444 | 0.0000 | 0.8701 | 0.7322 | 0.7781 | 0.7019 | 0.9936 |
| 0.0959 | 12.44 | 1020 | 0.2232 | 0.7192 | 0.7806 | 0.9424 | 0.9781 | 0.0 | 0.9452 | 0.8150 | 0.8840 | 0.8469 | 0.9953 | 0.9444 | 0.0 | 0.8769 | 0.7281 | 0.7792 | 0.7115 | 0.9941 |
| 0.0786 | 12.68 | 1040 | 0.2383 | 0.7129 | 0.7741 | 0.9403 | 0.9771 | 0.0001 | 0.9592 | 0.8173 | 0.8741 | 0.7949 | 0.9956 | 0.9449 | 0.0001 | 0.8567 | 0.7321 | 0.7734 | 0.6892 | 0.9942 |
| 0.1079 | 12.93 | 1060 | 0.2410 | 0.7144 | 0.7772 | 0.9401 | 0.9786 | 0.0040 | 0.9510 | 0.7919 | 0.8845 | 0.8357 | 0.9945 | 0.9451 | 0.0040 | 0.8585 | 0.7261 | 0.7709 | 0.7030 | 0.9933 |
| 0.1476 | 13.17 | 1080 | 0.2192 | 0.7130 | 0.7782 | 0.9403 | 0.9718 | 0.0008 | 0.9582 | 0.8334 | 0.8548 | 0.8309 | 0.9975 | 0.9458 | 0.0008 | 0.8536 | 0.7367 | 0.7733 | 0.6859 | 0.9950 |
| 0.1231 | 13.41 | 1100 | 0.2260 | 0.7157 | 0.7790 | 0.9417 | 0.9800 | 0.0002 | 0.9430 | 0.8018 | 0.8889 | 0.8443 | 0.9951 | 0.9456 | 0.0002 | 0.8722 | 0.7265 | 0.7836 | 0.6882 | 0.9939 |
| 0.0879 | 13.66 | 1120 | 0.2403 | 0.7133 | 0.7767 | 0.9407 | 0.9709 | 0.0 | 0.9670 | 0.8450 | 0.8523 | 0.8043 | 0.9974 | 0.9472 | 0.0 | 0.8539 | 0.7357 | 0.7748 | 0.6864 | 0.9951 |
| 0.116 | 13.9 | 1140 | 0.2334 | 0.7191 | 0.7810 | 0.9425 | 0.9767 | 0.0165 | 0.9465 | 0.8118 | 0.8896 | 0.8284 | 0.9974 | 0.9471 | 0.0164 | 0.8677 | 0.7354 | 0.7863 | 0.6862 | 0.9946 |
| 0.1264 | 14.15 | 1160 | 0.2366 | 0.7162 | 0.7765 | 0.9414 | 0.9789 | 0.0146 | 0.9504 | 0.8165 | 0.8936 | 0.7870 | 0.9943 | 0.9462 | 0.0145 | 0.8657 | 0.7314 | 0.7849 | 0.6774 | 0.9934 |
| 0.0761 | 14.39 | 1180 | 0.2227 | 0.7198 | 0.7792 | 0.9427 | 0.9789 | 0.0180 | 0.9509 | 0.8127 | 0.8924 | 0.8044 | 0.9968 | 0.9469 | 0.0179 | 0.8670 | 0.7339 | 0.7858 | 0.6921 | 0.9948 |
| 0.0437 | 14.63 | 1200 | 0.2192 | 0.7188 | 0.7819 | 0.9415 | 0.9769 | 0.0248 | 0.9490 | 0.8470 | 0.8566 | 0.8219 | 0.9970 | 0.9466 | 0.0246 | 0.8645 | 0.7379 | 0.7761 | 0.6873 | 0.9948 |
| 0.0732 | 14.88 | 1220 | 0.2396 | 0.7289 | 0.7897 | 0.9421 | 0.9760 | 0.0861 | 0.9477 | 0.8180 | 0.8934 | 0.8121 | 0.9948 | 0.9465 | 0.0849 | 0.8654 | 0.7368 | 0.7824 | 0.6923 | 0.9936 |
| 0.1376 | 15.12 | 1240 | 0.2280 | 0.7314 | 0.7915 | 0.9430 | 0.9753 | 0.0944 | 0.9480 | 0.8444 | 0.8766 | 0.8039 | 0.9976 | 0.9469 | 0.0925 | 0.8660 | 0.7413 | 0.7854 | 0.6934 | 0.9946 |
| 0.0518 | 15.37 | 1260 | 0.2378 | 0.7289 | 0.7880 | 0.9422 | 0.9746 | 0.0885 | 0.9501 | 0.8424 | 0.8832 | 0.7807 | 0.9963 | 0.9460 | 0.0864 | 0.8661 | 0.7345 | 0.7828 | 0.6920 | 0.9946 |
| 0.0599 | 15.61 | 1280 | 0.2288 | 0.7242 | 0.7848 | 0.9418 | 0.9784 | 0.0559 | 0.9443 | 0.8397 | 0.8761 | 0.8037 | 0.9952 | 0.9468 | 0.0555 | 0.8645 | 0.7356 | 0.7789 | 0.6943 | 0.9939 |
| 0.0967 | 15.85 | 1300 | 0.2416 | 0.7301 | 0.7939 | 0.9413 | 0.9749 | 0.1088 | 0.9523 | 0.8402 | 0.8542 | 0.8291 | 0.9979 | 0.9493 | 0.1074 | 0.8613 | 0.7326 | 0.7718 | 0.6933 | 0.9952 |
| 0.0593 | 16.1 | 1320 | 0.2691 | 0.7212 | 0.7812 | 0.9396 | 0.9777 | 0.0661 | 0.9444 | 0.7484 | 0.9244 | 0.8117 | 0.9958 | 0.9483 | 0.0651 | 0.8732 | 0.6970 | 0.7657 | 0.7051 | 0.9944 |
| 0.2264 | 16.34 | 1340 | 0.2362 | 0.7237 | 0.7854 | 0.9420 | 0.9756 | 0.0505 | 0.9512 | 0.8124 | 0.8846 | 0.8266 | 0.9969 | 0.9469 | 0.0499 | 0.8610 | 0.7362 | 0.7802 | 0.6966 | 0.9948 |
| 0.0592 | 16.59 | 1360 | 0.2423 | 0.7367 | 0.8007 | 0.9391 | 0.9729 | 0.1878 | 0.9486 | 0.8662 | 0.8410 | 0.7923 | 0.9964 | 0.9477 | 0.1838 | 0.8581 | 0.7238 | 0.7609 | 0.6875 | 0.9951 |
| 0.1125 | 16.83 | 1380 | 0.2339 | 0.7527 | 0.8150 | 0.9431 | 0.9762 | 0.2572 | 0.9448 | 0.8334 | 0.8824 | 0.8143 | 0.9969 | 0.9485 | 0.2431 | 0.8608 | 0.7434 | 0.7846 | 0.6934 | 0.9951 |
| 0.0603 | 17.07 | 1400 | 0.2270 | 0.7406 | 0.8004 | 0.9417 | 0.9753 | 0.1855 | 0.9417 | 0.8159 | 0.8973 | 0.7897 | 0.9975 | 0.9475 | 0.1795 | 0.8640 | 0.7270 | 0.7791 | 0.6921 | 0.9951 |
| 0.0519 | 17.32 | 1420 | 0.2306 | 0.7467 | 0.8067 | 0.9430 | 0.9751 | 0.2083 | 0.9472 | 0.8316 | 0.8892 | 0.7984 | 0.9969 | 0.9491 | 0.2023 | 0.8660 | 0.7348 | 0.7845 | 0.6952 | 0.9950 |
| 0.0604 | 17.56 | 1440 | 0.2442 | 0.7355 | 0.7969 | 0.9424 | 0.9799 | 0.1369 | 0.9507 | 0.8123 | 0.8850 | 0.8175 | 0.9956 | 0.9483 | 0.1337 | 0.8670 | 0.7313 | 0.7826 | 0.6911 | 0.9945 |
| 0.0938 | 17.8 | 1460 | 0.2359 | 0.7392 | 0.7993 | 0.9422 | 0.9726 | 0.1637 | 0.9542 | 0.8012 | 0.8966 | 0.8084 | 0.9987 | 0.9483 | 0.1598 | 0.8663 | 0.7251 | 0.7808 | 0.6992 | 0.9947 |
| 0.1024 | 18.05 | 1480 | 0.2377 | 0.7305 | 0.7902 | 0.9418 | 0.9811 | 0.1062 | 0.9475 | 0.8259 | 0.8784 | 0.7962 | 0.9960 | 0.9482 | 0.1042 | 0.8690 | 0.7262 | 0.7776 | 0.6936 | 0.9946 |
| 0.0536 | 18.29 | 1500 | 0.2309 | 0.7358 | 0.7961 | 0.9424 | 0.9756 | 0.1348 | 0.9526 | 0.8292 | 0.8784 | 0.8046 | 0.9975 | 0.9502 | 0.1325 | 0.8695 | 0.7282 | 0.7786 | 0.6971 | 0.9948 |
| 0.07 | 18.54 | 1520 | 0.2380 | 0.7507 | 0.8127 | 0.9428 | 0.9747 | 0.2410 | 0.9478 | 0.8254 | 0.8836 | 0.8191 | 0.9971 | 0.9503 | 0.2312 | 0.8708 | 0.7276 | 0.7799 | 0.7001 | 0.9951 |
| 0.0692 | 18.78 | 1540 | 0.2429 | 0.7478 | 0.8104 | 0.9420 | 0.9745 | 0.2285 | 0.9462 | 0.8139 | 0.8853 | 0.8269 | 0.9973 | 0.9502 | 0.2198 | 0.8739 | 0.7180 | 0.7755 | 0.7018 | 0.9952 |
| 0.0673 | 19.02 | 1560 | 0.2288 | 0.7352 | 0.7966 | 0.9409 | 0.9769 | 0.1525 | 0.9459 | 0.8163 | 0.8795 | 0.8081 | 0.9974 | 0.9470 | 0.1472 | 0.8681 | 0.7192 | 0.7749 | 0.6955 | 0.9945 |
| 0.0406 | 19.27 | 1580 | 0.2302 | 0.7484 | 0.8096 | 0.9426 | 0.9757 | 0.2222 | 0.9442 | 0.8234 | 0.8849 | 0.8196 | 0.9974 | 0.9493 | 0.2126 | 0.8746 | 0.7252 | 0.7781 | 0.7042 | 0.9945 |
| 0.0686 | 19.51 | 1600 | 0.2228 | 0.7421 | 0.8031 | 0.9432 | 0.9755 | 0.1643 | 0.9524 | 0.8481 | 0.8669 | 0.8176 | 0.9970 | 0.9496 | 0.1599 | 0.8718 | 0.7337 | 0.7808 | 0.7039 | 0.9950 |
| 0.0559 | 19.76 | 1620 | 0.2335 | 0.7497 | 0.8141 | 0.9422 | 0.9753 | 0.2443 | 0.9511 | 0.8181 | 0.8749 | 0.8382 | 0.9967 | 0.9497 | 0.2320 | 0.8723 | 0.7237 | 0.7767 | 0.6985 | 0.9949 |
| 0.0726 | 20.0 | 1640 | 0.2381 | 0.7439 | 0.8067 | 0.9417 | 0.9796 | 0.2054 | 0.9492 | 0.8341 | 0.8658 | 0.8182 | 0.9949 | 0.9484 | 0.1952 | 0.8690 | 0.7270 | 0.7748 | 0.6985 | 0.9941 |
| 0.0381 | 20.24 | 1660 | 0.2289 | 0.7603 | 0.8246 | 0.9425 | 0.9763 | 0.3250 | 0.9474 | 0.8232 | 0.8778 | 0.8261 | 0.9967 | 0.9506 | 0.3039 | 0.8722 | 0.7244 | 0.7768 | 0.6991 | 0.9951 |
| 0.1459 | 20.49 | 1680 | 0.2320 | 0.7435 | 0.8034 | 0.9422 | 0.9795 | 0.2004 | 0.9497 | 0.8294 | 0.8781 | 0.7897 | 0.9967 | 0.9500 | 0.1939 | 0.8711 | 0.7228 | 0.7781 | 0.6936 | 0.9950 |
| 0.0515 | 20.73 | 1700 | 0.2366 | 0.7584 | 0.8212 | 0.9425 | 0.9772 | 0.3128 | 0.9445 | 0.8043 | 0.8924 | 0.8189 | 0.9980 | 0.9504 | 0.2910 | 0.8743 | 0.7188 | 0.7777 | 0.7017 | 0.9951 |
| 0.1066 | 20.98 | 1720 | 0.2450 | 0.7615 | 0.8249 | 0.9420 | 0.9723 | 0.3449 | 0.9531 | 0.8283 | 0.8804 | 0.7978 | 0.9973 | 0.9506 | 0.3225 | 0.8704 | 0.7218 | 0.7762 | 0.6941 | 0.9950 |
| 0.0677 | 21.22 | 1740 | 0.2302 | 0.7512 | 0.8141 | 0.9424 | 0.9777 | 0.2553 | 0.9465 | 0.8569 | 0.8526 | 0.8115 | 0.9982 | 0.9502 | 0.2412 | 0.8731 | 0.7264 | 0.7766 | 0.6954 | 0.9953 |
| 0.0745 | 21.46 | 1760 | 0.2343 | 0.7389 | 0.7999 | 0.9420 | 0.9807 | 0.1705 | 0.9453 | 0.8517 | 0.8613 | 0.7930 | 0.9966 | 0.9486 | 0.1644 | 0.8710 | 0.7263 | 0.7792 | 0.6877 | 0.9950 |
| 0.1053 | 21.71 | 1780 | 0.2453 | 0.7591 | 0.8219 | 0.9426 | 0.9747 | 0.3127 | 0.9517 | 0.8279 | 0.8834 | 0.8070 | 0.9958 | 0.9511 | 0.2943 | 0.8705 | 0.7253 | 0.7796 | 0.6977 | 0.9948 |
| 0.0898 | 21.95 | 1800 | 0.2224 | 0.7525 | 0.8145 | 0.9435 | 0.9767 | 0.2480 | 0.9498 | 0.8216 | 0.8848 | 0.8237 | 0.9972 | 0.9501 | 0.2357 | 0.8693 | 0.7325 | 0.7856 | 0.6988 | 0.9953 |
| 0.1724 | 22.2 | 1820 | 0.2210 | 0.7390 | 0.7982 | 0.9430 | 0.9795 | 0.1569 | 0.9503 | 0.8319 | 0.8831 | 0.7889 | 0.9966 | 0.9499 | 0.1527 | 0.8697 | 0.7298 | 0.7853 | 0.6903 | 0.9950 |
| 0.1683 | 22.44 | 1840 | 0.2362 | 0.7629 | 0.8257 | 0.9439 | 0.9774 | 0.3265 | 0.9405 | 0.8187 | 0.8975 | 0.8231 | 0.9966 | 0.9514 | 0.3020 | 0.8741 | 0.7299 | 0.7861 | 0.7016 | 0.9951 |
| 0.0613 | 22.68 | 1860 | 0.2266 | 0.7634 | 0.8267 | 0.9441 | 0.9769 | 0.3291 | 0.9418 | 0.8187 | 0.8939 | 0.8291 | 0.9974 | 0.9511 | 0.3024 | 0.8750 | 0.7301 | 0.7864 | 0.7032 | 0.9953 |
| 0.0638 | 22.93 | 1880 | 0.2321 | 0.7650 | 0.8286 | 0.9438 | 0.9755 | 0.3469 | 0.9500 | 0.8305 | 0.8831 | 0.8173 | 0.9972 | 0.9519 | 0.3208 | 0.8735 | 0.7297 | 0.7852 | 0.6986 | 0.9953 |
| 0.0865 | 23.17 | 1900 | 0.2402 | 0.7476 | 0.8078 | 0.9434 | 0.9773 | 0.2115 | 0.9529 | 0.8024 | 0.9005 | 0.8132 | 0.9965 | 0.9502 | 0.2039 | 0.8688 | 0.7291 | 0.7860 | 0.7006 | 0.9950 |
| 0.183 | 23.41 | 1920 | 0.2340 | 0.7545 | 0.8155 | 0.9437 | 0.9757 | 0.2616 | 0.9551 | 0.8198 | 0.8879 | 0.8106 | 0.9975 | 0.9512 | 0.2492 | 0.8685 | 0.7321 | 0.7853 | 0.7000 | 0.9953 |
| 0.0665 | 23.66 | 1940 | 0.2250 | 0.7580 | 0.8206 | 0.9439 | 0.9756 | 0.2861 | 0.9518 | 0.8188 | 0.8857 | 0.8279 | 0.9979 | 0.9513 | 0.2691 | 0.8718 | 0.7316 | 0.7853 | 0.7015 | 0.9953 |
| 0.0783 | 23.9 | 1960 | 0.2271 | 0.7541 | 0.8161 | 0.9435 | 0.9773 | 0.2581 | 0.9518 | 0.8318 | 0.8773 | 0.8199 | 0.9966 | 0.9511 | 0.2452 | 0.8718 | 0.7308 | 0.7832 | 0.7017 | 0.9952 |
| 0.0767 | 24.15 | 1980 | 0.2246 | 0.7616 | 0.8247 | 0.9438 | 0.9765 | 0.3151 | 0.9509 | 0.8143 | 0.8912 | 0.8285 | 0.9964 | 0.9515 | 0.2948 | 0.8724 | 0.7301 | 0.7856 | 0.7014 | 0.9951 |
| 0.112 | 24.39 | 2000 | 0.2314 | 0.7669 | 0.8310 | 0.9438 | 0.9743 | 0.3599 | 0.9503 | 0.8198 | 0.8892 | 0.8259 | 0.9972 | 0.9520 | 0.3326 | 0.8731 | 0.7290 | 0.7844 | 0.7016 | 0.9953 |
| 0.074 | 24.63 | 2020 | 0.2291 | 0.7603 | 0.8235 | 0.9437 | 0.9770 | 0.3074 | 0.9499 | 0.8136 | 0.8901 | 0.8296 | 0.9966 | 0.9513 | 0.2874 | 0.8723 | 0.7293 | 0.7851 | 0.7014 | 0.9951 |
| 0.0684 | 24.88 | 2040 | 0.2343 | 0.7550 | 0.8157 | 0.9435 | 0.9775 | 0.2651 | 0.9504 | 0.8170 | 0.8956 | 0.8082 | 0.9956 | 0.9507 | 0.2519 | 0.8719 | 0.7298 | 0.7848 | 0.7015 | 0.9946 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
AbstractPerspective/2xMistral | AbstractPerspective | 2024-02-27T10:48:03Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-v0.1",
"mlabonne/drmistral-7b",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"base_model:mlabonne/drmistral-7b",
"base_model:merge:mlabonne/drmistral-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T10:40:38Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- mlabonne/drmistral-7b
base_model:
- mistralai/Mistral-7B-v0.1
- mlabonne/drmistral-7b
---
# 2xMistral
2xMistral is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [mlabonne/drmistral-7b](https://huggingface.co/mlabonne/drmistral-7b)
## 🧩 Configuration
```yaml
base_model: mistralai/Mistral-7B-v0.1
gate_mode: cheap_embed
experts:
- source_model: mistralai/Mistral-7B-v0.1
positive_prompts: ["general"]
- source_model: mlabonne/drmistral-7b
positive_prompts: ["medical"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "AbstractPerspective/2xMistral"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
zeronin7/distilbert-base-uncased-finetuned-clinc | zeronin7 | 2024-02-27T10:44:29Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-26T16:48:37Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7989
- Accuracy: 0.9177
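For a quick smoke test, the checkpoint can be loaded with the standard `pipeline` API. This is a minimal sketch; the intent label names come from the config saved with the model, and the example utterance is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned intent classifier straight from the Hub.
classifier = pipeline(
    "text-classification",
    model="zeronin7/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Please set an alarm for 6 am tomorrow."))
```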
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3192 | 0.7390 |
| 3.8194 | 2.0 | 636 | 1.9157 | 0.8497 |
| 3.8194 | 3.0 | 954 | 1.1867 | 0.8981 |
| 1.7308 | 4.0 | 1272 | 0.8824 | 0.9177 |
| 0.9303 | 5.0 | 1590 | 0.7989 | 0.9177 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.15.2
|
mlx-community/CodeLlama-7b-Python-mlx | mlx-community | 2024-02-27T10:40:53Z | 27 | 10 | mlx | [
"mlx",
"llama",
"facebook",
"meta",
"llama-2",
"text-generation",
"license:llama2",
"region:us"
] | text-generation | 2023-12-06T17:02:15Z | ---
pipeline_tag: text-generation
library_name: mlx
inference: false
tags:
- facebook
- meta
- llama
- llama-2
- mlx
license: llama2
---
# **CodeLlama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters, designed for general code synthesis and understanding. This is the repository for the 7B Python fine-tuned model, in `npz` format suitable for use in Apple's MLX framework.
Weights have been converted to `float16` from the original `bfloat16` type, because `numpy` is not compatible with `bfloat16` out of the box.
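As a toy illustration of the dtype issue (a sketch, not the actual conversion script used for this repo):

```python
import numpy as np
import torch

w = torch.randn(4, dtype=torch.bfloat16)
# Calling w.numpy() here would raise: numpy has no native bfloat16 dtype,
# so the weights are cast to float16 before being written to .npz.
np.savez("weights.npz", w=w.to(torch.float16).numpy())
```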
How to use with [MLX](https://github.com/ml-explore/mlx).
```bash
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir CodeLlama-7b-Python-mlx mlx-llama/CodeLlama-7b-Python-mlx
# Run example
python mlx-examples/llms/llama/llama.py --prompt "def fibonacci(n):" CodeLlama-7b-Python-mlx/ CodeLlama-7b-Python-mlx/tokenizer.model --max-tokens 200
```
Please, refer to the [original model card](https://github.com/facebookresearch/codellama/blob/main/MODEL_CARD.md) for details on CodeLlama.
|
haripriya126/my-pet-dog | haripriya126 | 2024-02-27T10:27:15Z | 10 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-27T10:22:53Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by haripriya126 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
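A minimal inference sketch with 🧨 diffusers; the prompt token below is an assumption, so use the instance prompt the concept was actually trained with:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "haripriya126/my-pet-dog", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of my pet dog playing in a park").images[0]
image.save("my-pet-dog.png")
```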
Sample pictures of this concept:

|
bhafner/test | bhafner | 2024-02-27T10:27:04Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-27T08:12:38Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a bhafner person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
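A minimal usage sketch, assuming the AutoTrain run produced LoRA weights on top of the SDXL base listed above; if the repo instead holds full pipeline weights, load it directly with `from_pretrained`:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bhafner/test")  # assumption: LoRA weights in this repo
image = pipe("photo of a bhafner person").images[0]
```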
|
JohannesGaessler/cosmosage_v2-gguf | JohannesGaessler | 2024-02-27T10:26:20Z | 29 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-25T13:34:06Z | ---
license: apache-2.0
---
GGUF conversion of [Cosmosage v2](https://huggingface.co/Tijmen2/cosmosage_v2). The importance matrix for iq formats was calculated on the training set of Wikitext 2. The iq1\_s quant was incoherent and therefore not included.
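A minimal local-inference sketch with `llama-cpp-python`; the file name below is hypothetical, so substitute one of the quant files actually present in this repo:

```python
from llama_cpp import Llama

# Hypothetical file name; pick a real quant from this repo.
llm = Llama(model_path="cosmosage_v2.Q4_K_M.gguf", n_ctx=4096)
out = llm("What does the CMB power spectrum tell us about cosmology?", max_tokens=256)
print(out["choices"][0]["text"])
```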
|
Nishthaa321/autotrain-qr7os-gstst | Nishthaa321 | 2024-02-27T10:26:05Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:autotrain-qr7os-gstst/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T10:25:38Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-qr7os-gstst/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.2146722972393036
- f1: 1.0
- precision: 1.0
- recall: 1.0
- auc: 1.0
- accuracy: 1.0
|
RupE/alpaca-bitcoin-tweets-sentiment | RupE | 2024-02-27T10:13:40Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-02-27T10:13:39Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
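For reference, the listing above corresponds to the following `BitsAndBytesConfig` (a sketch reconstructed from the fields; values copied verbatim from the card):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```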
### Framework versions
- PEFT 0.4.0
|
FatmaYoussef/ppo-SnowballTarget | FatmaYoussef | 2024-02-27T10:01:59Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-02-27T10:01:52Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: FatmaYoussef/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
FINNUMBER/FINCH_TRAIN_ALL_3600_per400_NEW_Rationale_E4 | FINNUMBER | 2024-02-27T09:59:18Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T09:53:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexandrabenamar/bloomz-7b1-4Magic | alexandrabenamar | 2024-02-27T09:59:18Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T09:10:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FINNUMBER/FINCH_TRAIN_SA_200_per100_NEW_Rationale_E12 | FINNUMBER | 2024-02-27T09:59:10Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T09:53:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProphetOfBostrom/opus-v1-34b-4b8h-8192l-EXL2 | ProphetOfBostrom | 2024-02-27T09:55:42Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"axolotl",
"exllamav2",
"exl2",
"4bit",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T22:15:38Z | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
- exllamav2
- exl2
- 4bit
library_name: transformers
---
### quantized with the default exl2 calibration dataset, with a sequence length of 8192 and 400 calibration (stage 2, optimisation) lines instead of the default 2048/100. possibly microwaved, presumably better.
##### the resultant measurement file is present somewhere, though the default line count of 16 (still extended to 8192) was used for measurement (stage 1)
### tokenizer works. tokenizer.model is not required for use with exllama2. no promises about sketchy software by "oobabooga"* :) try tabbyAPI/tavern, or exui if you don't miss CFG
##### consider yourselves lucky it's not a safetensors.zpaq; this took all night to upload, and YES i did refresh my access tokens after the Whoopsie, sorry!
###### *I'm sure it's fine, it's just that I'll die if I ever see conda again.
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1-34b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
<img src="/dreamgen/opus-v1-34b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names.
</details>
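As a rough illustration, the format can be assembled like this (a minimal sketch, not the official `example/prompt/format.py`):

```python
def turn(role, content, names=None):
    # Note the spaces after "names=" and after ";" described above.
    header = role if not names else f"{role} names= " + "; ".join(names)
    return f"<|im_start|>{header}\n{content}<|im_end|>\n"

prompt = (
    turn("system", "Plot, style and character descriptions go here.")
    + turn("user", "Alice greets Bob.")
    + turn("text", '"Hello, Bob!" Alice said.', names=["Alice"])
    + "<|im_start|>text names= Bob\n"  # left open for the model to continue
)
```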
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks:
- Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing):
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
- [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
- [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
- [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the style of the story.
- [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
- And more...
### Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
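With vLLM, that starting point looks roughly like this (a sketch; it assumes a vLLM build that supports `min_p`, and the penalty values are illustrative, not recommendations from the model authors):

```python
from vllm import SamplingParams

params = SamplingParams(
    temperature=0.8,
    min_p=0.1,
    presence_penalty=0.1,    # illustrative; tune to taste
    frequency_penalty=0.1,
    repetition_penalty=1.05,
)
```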
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
### Running on DreamGen.com (free)
You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required.
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1 format**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model; it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is a simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
- [Settings](https://huggingface.co/{{REPO_ID}}/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- [Settings screenshot](configs/silly_tavern/settings_screenshot.webp)
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Just like ChatML, just changed "assistant" to "text" role.
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Just like ChatML, just changed "assistant" to "text" role.
## Known Issues
- **34B tokenization**:
- There seems to be a mismatch between the tokenizer of the base and fine-tuned model. It's unclear whether this also affected training, or whether it's just an incorrectly saved tokenizer (you can see `tokenizer.json` was not saved ([bug report](https://github.com/OpenAccess-AI-Collective/axolotl/issues/1322))).
- This affects BOS and EOS (which aren't really used by Yi) and the tokenization of the first input token.
- Overall impact should be minor.
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
  - The conversion might be broken; in my tests, even the `Q_8` quant of `opus-v1.2-7b` is much worse than the `fp16` version.
- **Ooba**:
  - The tokenization might be broken. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens.
## Community
Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
## License
- This model is intended for personal use only, other use is not permitted. |
DimalChathuranga/marian-finetuned-kde4-en-to-fr | DimalChathuranga | 2024-02-27T09:55:34Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-02-27T06:34:32Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
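Pending a proper usage section, here is a minimal inference sketch with the `transformers` translation pipeline; the input sentence is purely illustrative:

```python
# Minimal usage sketch with the transformers translation pipeline.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="DimalChathuranga/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads")[0]["translation_text"])
```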
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
Stopwolf/Mustra-7B-Instruct-v0.1 | Stopwolf | 2024-02-27T09:54:35Z | 46 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"gordicaleksa/YugoGPT",
"mistralai/Mistral-7B-Instruct-v0.2",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T09:50:23Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- gordicaleksa/YugoGPT
- mistralai/Mistral-7B-Instruct-v0.2
---
# Mustra-7B-Instruct-v0.1
Mustra-7B-Instruct-v0.1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [gordicaleksa/YugoGPT](https://huggingface.co/gordicaleksa/YugoGPT)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
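## 💻 Usage

A minimal inference sketch, assuming the merged model ships the base model's chat template; the prompt is purely illustrative:

```python
# Minimal usage sketch for the merged model.
# Assumes the tokenizer carries over Mistral-Instruct's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Stopwolf/Mustra-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Zdravo! Kako si?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```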
## 🧩 Configuration
```yaml
slices:
- sources:
- model: gordicaleksa/YugoGPT
layer_range: [0, 32]
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.75
dtype: bfloat16
``` |
FINNUMBER/Yi-Ko-6B-Finch-NQA-FULL-Hyper-epoch3 | FINNUMBER | 2024-02-27T09:48:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T07:20:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yvelos/Annotator_4_Mi | yvelos | 2024-02-27T09:46:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-23T20:06:31Z | ---
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jhanu/my-pet-dog | Jhanu | 2024-02-27T09:38:44Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-27T09:32:59Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Jhanu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
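In the absence of an official snippet, a minimal `diffusers` usage sketch; the instance prompt "my-pet-dog" is an assumption based on the concept name:

```python
# Hypothetical usage sketch with diffusers; the instance prompt is a guess.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Jhanu/my-pet-dog", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of my-pet-dog on the beach").images[0]
image.save("my-pet-dog.png")
```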
Sample pictures of this concept:

|
orzhan/bart-transcription-aggregation | orzhan | 2024-02-27T09:38:24Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: ru
---
BART model fine-tuned to aggregate crowd-sourced transcriptions.
Repository: [GitHub](https://github.com/orzhan/bart-transcription-aggregation) |
JinghuiLuAstronaut/PaDeLLM_llama2_7b_ace05 | JinghuiLuAstronaut | 2024-02-27T09:38:11Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T02:39:31Z | For inference code, see https://github.com/GeorgeLuImmortal/PaDeLLM_NER |
JinghuiLuAstronaut/PaDeLLM_llama2_7b_conll03 | JinghuiLuAstronaut | 2024-02-27T09:37:41Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T02:21:01Z | For inference code, see https://github.com/GeorgeLuImmortal/PaDeLLM_NER |
JinghuiLuAstronaut/PaDeLLM_baichuan2_7b_resume | JinghuiLuAstronaut | 2024-02-27T09:37:22Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"baichuan",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T09:00:16Z | For inference code, see https://github.com/GeorgeLuImmortal/PaDeLLM_NER |
habout632/EvolCodeLlama-7b | habout632 | 2024-02-27T09:35:24Z | 3 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-27T08:18:15Z | ---
license: llama2
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: EvolCodeLlama-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: codellama/CodeLlama-7b-hf
base_model_config: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
hub_model_id: EvolCodeLlama-7b
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: mlabonne/Evol-Instruct-Python-1k
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 2048
sample_packing: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# EvolCodeLlama-7b
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the [mlabonne/Evol-Instruct-Python-1k](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.3796
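Since this repository holds a QLoRA (PEFT) adapter, a minimal inference sketch might look like the following; the prompt and the dtype/device settings are illustrative assumptions:

```python
# Hypothetical inference sketch: load the QLoRA adapter on the base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "habout632/EvolCodeLlama-7b",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```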
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3178 | 0.01 | 1 | 0.5311 |
| 0.3147 | 0.03 | 4 | 0.5312 |
| 0.3626 | 0.07 | 8 | 0.5310 |
| 0.6265 | 0.1 | 12 | 0.5296 |
| 0.429 | 0.14 | 16 | 0.5270 |
| 0.5086 | 0.17 | 20 | 0.5205 |
| 0.4335 | 0.21 | 24 | 0.5067 |
| 0.3383 | 0.24 | 28 | 0.4842 |
| 0.3688 | 0.28 | 32 | 0.4603 |
| 0.2528 | 0.31 | 36 | 0.4403 |
| 0.3105 | 0.35 | 40 | 0.4251 |
| 0.4936 | 0.38 | 44 | 0.4162 |
| 0.4146 | 0.42 | 48 | 0.4086 |
| 0.3327 | 0.45 | 52 | 0.4024 |
| 0.3429 | 0.48 | 56 | 0.3971 |
| 0.3328 | 0.52 | 60 | 0.3937 |
| 0.1844 | 0.55 | 64 | 0.3901 |
| 0.3001 | 0.59 | 68 | 0.3887 |
| 0.3632 | 0.62 | 72 | 0.3872 |
| 0.1997 | 0.66 | 76 | 0.3847 |
| 0.2461 | 0.69 | 80 | 0.3823 |
| 0.2865 | 0.73 | 84 | 0.3812 |
| 0.26 | 0.76 | 88 | 0.3805 |
| 0.3191 | 0.8 | 92 | 0.3792 |
| 0.4642 | 0.83 | 96 | 0.3763 |
| 0.2649 | 0.87 | 100 | 0.3750 |
| 0.2095 | 0.9 | 104 | 0.3727 |
| 0.2738 | 0.94 | 108 | 0.3737 |
| 0.4274 | 0.97 | 112 | 0.3730 |
| 0.2722 | 1.0 | 116 | 0.3724 |
| 0.2164 | 1.02 | 120 | 0.3705 |
| 0.1549 | 1.05 | 124 | 0.3726 |
| 0.3051 | 1.08 | 128 | 0.3725 |
| 0.1873 | 1.12 | 132 | 0.3730 |
| 0.3388 | 1.15 | 136 | 0.3738 |
| 0.2504 | 1.19 | 140 | 0.3741 |
| 0.2851 | 1.22 | 144 | 0.3714 |
| 0.2365 | 1.26 | 148 | 0.3690 |
| 0.3986 | 1.29 | 152 | 0.3699 |
| 0.1913 | 1.33 | 156 | 0.3720 |
| 0.1963 | 1.36 | 160 | 0.3698 |
| 0.1824 | 1.4 | 164 | 0.3679 |
| 0.1453 | 1.43 | 168 | 0.3685 |
| 0.3073 | 1.47 | 172 | 0.3702 |
| 0.1501 | 1.5 | 176 | 0.3692 |
| 0.2167 | 1.53 | 180 | 0.3662 |
| 0.3007 | 1.57 | 184 | 0.3660 |
| 0.2203 | 1.6 | 188 | 0.3666 |
| 0.3978 | 1.64 | 192 | 0.3669 |
| 0.2397 | 1.67 | 196 | 0.3663 |
| 0.2161 | 1.71 | 200 | 0.3656 |
| 0.2593 | 1.74 | 204 | 0.3651 |
| 0.2113 | 1.78 | 208 | 0.3658 |
| 0.2435 | 1.81 | 212 | 0.3657 |
| 0.2625 | 1.85 | 216 | 0.3639 |
| 0.302 | 1.88 | 220 | 0.3624 |
| 0.2556 | 1.92 | 224 | 0.3611 |
| 0.2063 | 1.95 | 228 | 0.3609 |
| 0.1994 | 1.98 | 232 | 0.3612 |
| 0.2229 | 2.02 | 236 | 0.3613 |
| 0.1983 | 2.03 | 240 | 0.3634 |
| 0.1925 | 2.06 | 244 | 0.3725 |
| 0.1778 | 2.1 | 248 | 0.3832 |
| 0.1293 | 2.13 | 252 | 0.3834 |
| 0.2166 | 2.16 | 256 | 0.3789 |
| 0.2082 | 2.2 | 260 | 0.3760 |
| 0.1858 | 2.23 | 264 | 0.3761 |
| 0.1862 | 2.27 | 268 | 0.3763 |
| 0.1619 | 2.3 | 272 | 0.3783 |
| 0.174 | 2.34 | 276 | 0.3786 |
| 0.2414 | 2.37 | 280 | 0.3790 |
| 0.1977 | 2.41 | 284 | 0.3783 |
| 0.1678 | 2.44 | 288 | 0.3784 |
| 0.2263 | 2.48 | 292 | 0.3786 |
| 0.082 | 2.51 | 296 | 0.3783 |
| 0.2621 | 2.55 | 300 | 0.3784 |
| 0.1754 | 2.58 | 304 | 0.3795 |
| 0.1957 | 2.61 | 308 | 0.3802 |
| 0.1203 | 2.65 | 312 | 0.3803 |
| 0.1388 | 2.68 | 316 | 0.3796 |
| 0.1699 | 2.72 | 320 | 0.3796 |
| 0.161 | 2.75 | 324 | 0.3796 |
| 0.2394 | 2.79 | 328 | 0.3792 |
| 0.1465 | 2.82 | 332 | 0.3795 |
| 0.1746 | 2.86 | 336 | 0.3794 |
| 0.1839 | 2.89 | 340 | 0.3795 |
| 0.1581 | 2.93 | 344 | 0.3796 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.39.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.0 |
JinghuiLuAstronaut/PaDeLLM_baichuan2_7b_msra | JinghuiLuAstronaut | 2024-02-27T09:29:17Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"baichuan",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T03:32:04Z | For inference code, see https://github.com/GeorgeLuImmortal/PaDeLLM_NER |
JinghuiLuAstronaut/PaDeLLM_baichuan2_7b_weibo | JinghuiLuAstronaut | 2024-02-27T09:28:36Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"baichuan",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T04:15:20Z | For inference code, see https://github.com/GeorgeLuImmortal/PaDeLLM_NER |
DatPySci/pythia-1b-kto-iter0 | DatPySci | 2024-02-27T09:27:03Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:DatPySci/iter0",
"base_model:DatPySci/pythia-1b-sft-full",
"base_model:finetune:DatPySci/pythia-1b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T04:29:58Z | ---
license: apache-2.0
base_model: DatPySci/pythia-1b-sft-full
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- DatPySci/iter0
model-index:
- name: pythia-1b-kto-iter0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-1b-kto-iter0
This model is a fine-tuned version of [DatPySci/pythia-1b-sft-full](https://huggingface.co/DatPySci/pythia-1b-sft-full) on the DatPySci/iter0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2591
- Rewards/real: 0.0604
- Rewards/generated: -1.0267
- Rewards/accuracies: 0.9460
- Rewards/margins: 1.0871
- Logps/generated: -570.8114
- Logps/real: -468.1696
- Logits/generated: 0.2253
- Logits/real: -0.2820
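Pending a proper usage section, a minimal generation sketch; the prompt is purely illustrative:

```python
# Minimal usage sketch with the transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="DatPySci/pythia-1b-kto-iter0")
print(generator("Explain KTO in one sentence:", max_new_tokens=64)[0]["generated_text"])
```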
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/real | Rewards/generated | Rewards/accuracies | Rewards/margins | Logps/generated | Logps/real | Logits/generated | Logits/real |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------------:|:------------------:|:---------------:|:---------------:|:----------:|:----------------:|:-----------:|
| 0.2932 | 0.38 | 300 | 0.2962 | 0.0718 | -0.7855 | 0.9220 | 0.8572 | -568.3989 | -468.0556 | 0.2554 | -0.2530 |
| 0.2689 | 0.77 | 600 | 0.2591 | 0.0604 | -1.0267 | 0.9460 | 1.0871 | -570.8114 | -468.1696 | 0.2253 | -0.2820 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Ayus077BCT014Bhandari/vartat5-using-100K-plus-23 | Ayus077BCT014Bhandari | 2024-02-27T09:22:02Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T06:25:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hellod035/ppo-Huggy | hellod035 | 2024-02-27T09:21:55Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-02-27T09:21:49Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hellod035/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
archiMAD/LunarLander-ppo-from-scratch | archiMAD | 2024-02-27T09:20:34Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T09:20:18Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 76.00 +/- 114.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 1024,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 64,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'archiMAD/LunarLander-ppo-from-scratch',
 'batch_size': 4096,
 'minibatch_size': 64}
```
|
nagyadam0616/zephyr-x-twitter-5epocs-full-2 | nagyadam0616 | 2024-02-27T09:20:15Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T08:56:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mhmmterts/fine_tuned_model_on_SJP_dataset_it_balanced_2048_tokens | mhmmterts | 2024-02-27T09:16:52Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:swiss_judgment_prediction",
"base_model:joelniklaus/legal-swiss-roberta-large",
"base_model:finetune:joelniklaus/legal-swiss-roberta-large",
"license:cc",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T09:15:43Z | ---
license: cc
base_model: joelniklaus/legal-swiss-roberta-large
tags:
- generated_from_trainer
datasets:
- swiss_judgment_prediction
metrics:
- accuracy
model-index:
- name: fine_tuned_model_on_SJP_dataset_it_balanced_2048_tokens
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: swiss_judgment_prediction
type: swiss_judgment_prediction
config: it
split: test
args: it
metrics:
- name: Accuracy
type: accuracy
value: 0.8177339901477833
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_model_on_SJP_dataset_it_balanced_2048_tokens
This model is a fine-tuned version of [joelniklaus/legal-swiss-roberta-large](https://huggingface.co/joelniklaus/legal-swiss-roberta-large) on the swiss_judgment_prediction dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7964
- Accuracy: 0.8177
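Pending a proper usage section, a minimal classification sketch; the input is an illustrative Italian excerpt, not taken from the dataset:

```python
# Minimal usage sketch with the transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mhmmterts/fine_tuned_model_on_SJP_dataset_it_balanced_2048_tokens",
)
print(classifier("Il ricorso è respinto."))
```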
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7513 | 1.0 | 768 | 0.6783 | 0.7956 |
| 0.6008 | 2.0 | 1536 | 0.7964 | 0.8177 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.1
|
vlada-v/whisper-small-hi | vlada-v | 2024-02-27T09:07:39Z | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-09T07:37:40Z | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Hi - Kids
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Kids
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the PRG dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4077
- Wer: 95.2005
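Pending a proper usage section, a minimal transcription sketch; the audio path is an illustrative placeholder:

```python
# Minimal usage sketch with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vlada-v/whisper-small-hi")
print(asr("sample.wav")["text"])  # replace with a real audio file path
```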
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
alexandrabenamar/Mistral-7B-Instruct-v0.2-4Magic | alexandrabenamar | 2024-02-27T09:07:08Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T08:42:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaszki/nous-gemma-four | tomaszki | 2024-02-27T09:06:37Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T09:03:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ann-lab52/quan-1.8b-base-AWQ | ann-lab52 | 2024-02-27T08:59:41Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T08:39:26Z | ---
license: other
license_name: quan
license_link: https://huggingface.co/qnguyen3/quan-1.8b-base
---
|
Bong9/assemblydata | Bong9 | 2024-02-27T08:56:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T08:53:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Reniyas/phi-2-classification-merged | Reniyas | 2024-02-27T08:54:15Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T08:49:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
internlm/internlm-xcomposer2-7b-4bit | internlm | 2024-02-27T08:43:16Z | 41 | 10 | transformers | [
"transformers",
"internlm",
"feature-extraction",
"text-generation",
"custom_code",
"arxiv:2401.16420",
"license:other",
"region:us"
] | text-generation | 2024-02-06T12:38:00Z | ---
license: other
pipeline_tag: text-generation
---
<p align="center">
 <img src="logo_en.png" width="400"/>
</p>
<p align="center">
 <b><font size="6">InternLM-XComposer2</font></b>
</p>
<div align="center">
[💻Github Repo](https://github.com/InternLM/InternLM-XComposer)
[Paper](https://arxiv.org/abs/2401.16420)
</div>
**InternLM-XComposer2** is a vision-language large model (VLLM) based on [InternLM2](https://github.com/InternLM/InternLM) for advanced text-image comprehension and composition.
We release the InternLM-XComposer2 series in two versions:
- InternLM-XComposer2-VL: the pretrained VLLM, with InternLM2 as the LLM initialization, achieving strong performance on various multimodal benchmarks.
- InternLM-XComposer2: the finetuned VLLM for *Free-form Interleaved Text-Image Composition*.
This is the 4-bit version of InternLM-XComposer2; install the latest version of [auto_gptq](https://github.com/AutoGPTQ/AutoGPTQ#quick-installation) before use.
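If auto_gptq is not yet available in your environment, the PyPI package can be installed first (a minimal sketch; see the auto_gptq link above for the CUDA-specific wheels and build requirements):

```bash
pip install auto-gptq
```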
```python
import torch, auto_gptq
from PIL import Image
from transformers import AutoTokenizer
from auto_gptq.modeling import BaseGPTQForCausalLM

# Register the InternLM architecture so auto_gptq will load this checkpoint.
auto_gptq.modeling._base.SUPPORTED_MODELS = ["internlm"]
torch.set_grad_enabled(False)  # inference only, no gradients needed
class InternLMXComposer2QForCausalLM(BaseGPTQForCausalLM):
    # Describe the model layout to auto_gptq: where the transformer blocks
    # live, which modules stay in full precision (vision tower, embeddings,
    # final norm, output head), and which linear layers are quantized.
    layers_block_name = "model.layers"
    outside_layer_modules = [
        'vit', 'vision_proj', 'model.tok_embeddings', 'model.norm', 'output',
    ]
    inside_layer_modules = [
        ["attention.wqkv.linear"],
        ["attention.wo.linear"],
        ["feed_forward.w1.linear", "feed_forward.w3.linear"],
        ["feed_forward.w2.linear"],
    ]
# init model and tokenizer
model = InternLMXComposer2QForCausalLM.from_quantized(
    'internlm/internlm-xcomposer2-7b-4bit', trust_remote_code=True, device="cuda:0").eval()
tokenizer = AutoTokenizer.from_pretrained(
    'internlm/internlm-xcomposer2-7b-4bit', trust_remote_code=True)
img_path_list = [
    'panda.jpg',
    'bamboo.jpeg',
]
images = []
for img_path in img_path_list:
    image = Image.open(img_path).convert("RGB")
    image = model.vis_processor(image)  # resize + normalize for the vision encoder
    images.append(image)
image = torch.stack(images)  # [num_images, C, H, W]
# Each <ImageHere> placeholder in the query is bound to one image in the tensor.
query = '<ImageHere> <ImageHere>please write an article based on the images. Title: my favorite animal.'
with torch.cuda.amp.autocast():
    response, history = model.chat(tokenizer, query=query, image=image, history=[], do_sample=False)
print(response)
#My Favorite Animal: The Panda
#The panda, also known as the giant panda, is one of the most beloved animals in the world. These adorable creatures are native to China and can be found in the wild in a few select locations, but they are more commonly seen in captivity at zoos or wildlife reserves.
#Pandas have a distinct black-and-white coloration that makes them instantly recognizable. They are known for their love of bamboo, which they eat almost exclusively. In fact, pandas spend up to 14 hours a day eating, with the majority of their diet consisting of bamboo. Despite this seemingly unbalanced diet, pandas are actually quite healthy and have a low body fat percentage, thanks to their ability to digest bamboo efficiently.
#In addition to their unique eating habits, pandas are also known for their playful personalities. They are intelligent and curious creatures, often engaging in activities like playing with toys or climbing trees. However, they do not typically exhibit these behaviors in the wild, where they are solitary creatures who prefer to spend their time alone.
#One of the biggest threats to the panda's survival is habitat loss due to deforestation. As a result, many pandas now live in captivity, where they are cared for by dedicated staff and provided with enrichment opportunities to keep them engaged and stimulated. While it is important to protect these animals from extinction, it is also crucial to remember that they are still wild creatures and should be treated with respect and care.
#Overall, the panda is an amazing animal that has captured the hearts of people around the world. Whether you see them in the wild or in captivity, there is no denying the charm and allure of these gentle giants.
```
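For reference, here is a minimal single-image variant of the same `model.chat` API. This is a sketch that reuses the model, tokenizer, and `panda.jpg` from the example above; the query string is illustrative, not part of the original card:

```python
# One <ImageHere> placeholder per image; unsqueeze(0) adds the batch
# dimension that torch.stack provided in the two-image example above.
image = model.vis_processor(Image.open('panda.jpg').convert('RGB')).unsqueeze(0)
query = '<ImageHere>Describe this image in one sentence.'
with torch.cuda.amp.autocast():
    response, _ = model.chat(tokenizer, query=query, image=image, history=[], do_sample=False)
print(response)
```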
### Open Source License
The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English) / application form (Chinese). For other questions or collaborations, please contact [email protected].
|