modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-04 12:29:36) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 468 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-04 12:29:27) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Holarissun/gpt2full-airl_sft-imdb-seqsampler | Holarissun | 2024-03-10T15:22:09Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T15:21:51Z | ---
license: mit
base_model: gpt2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: gpt2full-airl_sft-imdb-seqsampler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2full-airl_sft-imdb-seqsampler
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
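The card omits a reproduction script, so here is a minimal TRL `SFTTrainer` sketch that wires the listed hyperparameters together. The training dataset is not named in the card (the repository name suggests IMDB), so `load_dataset("imdb")` and its `text` field are assumptions, and the TRL API of the releases contemporary with Transformers 4.38 is assumed (`dataset_text_field` later moved to `SFTConfig`).
```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Assumed dataset: the card says "unknown dataset"; the repo name hints at IMDB.
train_dataset = load_dataset("imdb", split="train")

args = TrainingArguments(
    output_dir="gpt2full-airl_sft-imdb-seqsampler",
    learning_rate=5e-5,              # from the card
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="openai-community/gpt2",   # base model from the card metadata
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",       # column holding the raw text
)
trainer.train()
```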
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Holarissun/gpt2full-airl_sft-imdb-randsampler | Holarissun | 2024-03-10T15:21:37Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T15:21:19Z | ---
license: mit
base_model: gpt2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: gpt2full-airl_sft-imdb-randsampler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2full-airl_sft-imdb-randsampler
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
jpodivin/upernet-swin-small-finetuned | jpodivin | 2024-03-10T15:18:13Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"upernet",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:openmmlab/upernet-swin-small",
"base_model:finetune:openmmlab/upernet-swin-small",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-03-10T13:10:48Z | ---
license: mit
base_model: openmmlab/upernet-swin-small
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: upernet-swin-small-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# upernet-swin-small-finetuned
This model is a fine-tuned version of [openmmlab/upernet-swin-small](https://huggingface.co/openmmlab/upernet-swin-small) on the jpodivin/plantorgans dataset.
It achieves the following results on the evaluation set (an inference sketch follows the metrics):
- Loss: 0.2914
- Mean Iou: 0.4182
- Mean Accuracy: 0.5282
- Overall Accuracy: 0.7341
- Accuracy Void: nan
- Accuracy Fruit: 0.8590
- Accuracy Leaf: 0.7032
- Accuracy Flower: 0.0
- Accuracy Stem: 0.5505
- Iou Void: 0.0
- Iou Fruit: 0.8554
- Iou Leaf: 0.6976
- Iou Flower: 0.0
- Iou Stem: 0.5381
- Median Iou: 0.5381
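The card gives no usage snippet; below is a minimal inference sketch, assuming the repository ships an image-processor config (otherwise load the processor from the base `openmmlab/upernet-swin-small`). `leaf.jpg` is a placeholder input path.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

repo = "jpodivin/upernet-swin-small-finetuned"
processor = AutoImageProcessor.from_pretrained(repo)
model = UperNetForSemanticSegmentation.from_pretrained(repo)

image = Image.open("leaf.jpg")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Upsample logits to the input resolution and take the per-pixel argmax.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```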
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Void | Accuracy Fruit | Accuracy Leaf | Accuracy Flower | Accuracy Stem | Iou Void | Iou Fruit | Iou Leaf | Iou Flower | Iou Stem | Median Iou |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------:|:--------------:|:-------------:|:---------------:|:-------------:|:--------:|:---------:|:--------:|:----------:|:--------:|:----------:|
| 0.8566 | 1.0 | 575 | 0.3365 | 0.3723 | 0.4705 | 0.6560 | nan | 0.8000 | 0.6122 | 0.0 | 0.4699 | 0.0 | 0.7976 | 0.6041 | 0.0 | 0.4598 | 0.4598 |
| 0.3338 | 2.0 | 1150 | 0.3030 | 0.3922 | 0.4937 | 0.7155 | nan | 0.8558 | 0.7024 | 0.0 | 0.4166 | 0.0 | 0.8517 | 0.6972 | 0.0 | 0.4119 | 0.4119 |
| 0.3477 | 3.0 | 1725 | 0.2914 | 0.4182 | 0.5282 | 0.7341 | nan | 0.8590 | 0.7032 | 0.0 | 0.5505 | 0.0 | 0.8554 | 0.6976 | 0.0 | 0.5381 | 0.5381 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
viki99/my-pet-cat | viki99 | 2024-03-10T15:17:29Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-03-10T15:15:36Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by viki99 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2167811242007
Sample pictures of this concept:
*(Five sample .jpg images omitted; the image links did not survive extraction.)*
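To try the concept, the model can be loaded with `diffusers`. A minimal sketch, assuming the instance token matches the concept name `my-pet-cat` (the card does not state the exact prompt token used during DreamBooth training):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "viki99/my-pet-cat", torch_dtype=torch.float16
).to("cuda")

# Assumed instance prompt; adjust to the token used during training.
image = pipe("a photo of my-pet-cat sitting on a windowsill").images[0]
image.save("sample.png")
```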
|
Sayyor/q-Taxi-v3-eval-seed | Sayyor | 2024-03-10T15:16:31Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T15:16:26Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-eval-seed
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.26 +/- 2.59
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # older course code uses `import gym`

# `load_from_hub` is the Deep RL course helper that downloads and unpickles the model.
model = load_from_hub(repo_id="Sayyor/q-Taxi-v3-eval-seed", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
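Once loaded, the agent can be evaluated by acting greedily on the Q-table. A minimal sketch, assuming the pickle follows the Deep RL course layout (a `qtable` key) and a Gymnasium-style five-tuple `step` API:
```python
import numpy as np

qtable = model["qtable"]  # assumed key, per the Deep RL course convention

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode reward: {total_reward}")
```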
|
ThuyNT03/CS505-Classifier-T4_predictLabel_a1_v5 | ThuyNT03 | 2024-03-10T15:15:03Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T14:54:59Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
model-index:
- name: CS505-Classifier-T4_predictLabel_a1_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-Classifier-T4_predictLabel_a1_v5
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.98 | 48 | 0.6517 |
| No log | 1.96 | 96 | 0.3227 |
| No log | 2.94 | 144 | 0.2342 |
| No log | 3.92 | 192 | 0.1815 |
| No log | 4.9 | 240 | 0.1703 |
| No log | 5.88 | 288 | 0.1231 |
| No log | 6.86 | 336 | 0.0730 |
| No log | 7.84 | 384 | 0.0803 |
| No log | 8.82 | 432 | 0.0476 |
| No log | 9.8 | 480 | 0.0384 |
| 0.2908 | 10.78 | 528 | 0.0281 |
| 0.2908 | 11.76 | 576 | 0.0329 |
| 0.2908 | 12.73 | 624 | 0.0234 |
| 0.2908 | 13.71 | 672 | 0.0119 |
| 0.2908 | 14.69 | 720 | 0.0101 |
| 0.2908 | 15.67 | 768 | 0.0081 |
| 0.2908 | 16.65 | 816 | 0.0137 |
| 0.2908 | 17.63 | 864 | 0.0075 |
| 0.2908 | 18.61 | 912 | 0.0053 |
| 0.2908 | 19.59 | 960 | 0.0035 |
| 0.0216 | 20.57 | 1008 | 0.0060 |
| 0.0216 | 21.55 | 1056 | 0.0028 |
| 0.0216 | 22.53 | 1104 | 0.0027 |
| 0.0216 | 23.51 | 1152 | 0.0026 |
| 0.0216 | 24.49 | 1200 | 0.0024 |
| 0.0216 | 25.47 | 1248 | 0.0023 |
| 0.0216 | 26.45 | 1296 | 0.0022 |
| 0.0216 | 27.43 | 1344 | 0.0022 |
| 0.0216 | 28.41 | 1392 | 0.0021 |
| 0.0216 | 29.39 | 1440 | 0.0020 |
| 0.0216 | 30.37 | 1488 | 0.0021 |
| 0.0043 | 31.35 | 1536 | 0.0020 |
| 0.0043 | 32.33 | 1584 | 0.0019 |
| 0.0043 | 33.31 | 1632 | 0.0019 |
| 0.0043 | 34.29 | 1680 | 0.0019 |
| 0.0043 | 35.27 | 1728 | 0.0019 |
| 0.0043 | 36.24 | 1776 | 0.0019 |
| 0.0043 | 37.22 | 1824 | 0.0019 |
| 0.0043 | 38.2 | 1872 | 0.0018 |
| 0.0043 | 39.18 | 1920 | 0.0018 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0309P1 | Litzy619 | 2024-03-10T15:00:59Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T03:00:57Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309P1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309P1
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
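The effective batch size here is 128 (4 per device x 32 accumulation steps). A sketch of how this setup maps onto `transformers.TrainingArguments` (`output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="V0309P1",            # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,  # 4 * 32 = 128 effective batch size
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed precision
)
```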
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4262 | 0.09 | 10 | 0.1204 |
| 0.1236 | 0.17 | 20 | 0.0907 |
| 0.1031 | 0.26 | 30 | 0.0766 |
| 0.0896 | 0.34 | 40 | 0.0691 |
| 0.0871 | 0.43 | 50 | 0.0719 |
| 0.0821 | 0.51 | 60 | 0.0751 |
| 0.0749 | 0.6 | 70 | 0.0676 |
| 0.0809 | 0.68 | 80 | 0.0624 |
| 0.068 | 0.77 | 90 | 0.0591 |
| 0.062 | 0.85 | 100 | 0.0666 |
| 0.0712 | 0.94 | 110 | 0.0643 |
| 0.0679 | 1.02 | 120 | 0.0600 |
| 0.0488 | 1.11 | 130 | 0.0758 |
| 0.0498 | 1.19 | 140 | 0.0573 |
| 0.0451 | 1.28 | 150 | 0.0649 |
| 0.0434 | 1.37 | 160 | 0.0692 |
| 0.0449 | 1.45 | 170 | 0.0639 |
| 0.0401 | 1.54 | 180 | 0.0697 |
| 0.0477 | 1.62 | 190 | 0.0633 |
| 0.0492 | 1.71 | 200 | 0.0609 |
| 0.0489 | 1.79 | 210 | 0.0632 |
| 0.0422 | 1.88 | 220 | 0.0679 |
| 0.0417 | 1.96 | 230 | 0.0633 |
| 0.034 | 2.05 | 240 | 0.0678 |
| 0.0247 | 2.13 | 250 | 0.0700 |
| 0.0234 | 2.22 | 260 | 0.0766 |
| 0.0187 | 2.3 | 270 | 0.0816 |
| 0.0231 | 2.39 | 280 | 0.0841 |
| 0.0245 | 2.47 | 290 | 0.0859 |
| 0.024 | 2.56 | 300 | 0.0848 |
| 0.0253 | 2.65 | 310 | 0.0847 |
| 0.0202 | 2.73 | 320 | 0.0841 |
| 0.0242 | 2.82 | 330 | 0.0814 |
| 0.0187 | 2.9 | 340 | 0.0820 |
| 0.0217 | 2.99 | 350 | 0.0820 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
lilyray/results | lilyray | 2024-03-10T14:59:22Z | 26 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-05T00:55:57Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set (an inference sketch follows):
- Loss: 0.2046
- Accuracy: 0.921
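A minimal inference sketch for the checkpoint; the label names returned depend on the `id2label` mapping pushed with the model config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lilyray/results")
print(classifier("I can't wait to see the results of this experiment!"))
# e.g. [{'label': 'joy', 'score': ...}] -- exact labels come from the model config
```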
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.507837996446784e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8349 | 1.0 | 1000 | 0.6184 | 0.7905 |
| 0.384 | 2.0 | 2000 | 0.3057 | 0.909 |
| 0.2544 | 3.0 | 3000 | 0.2316 | 0.926 |
| 0.2027 | 4.0 | 4000 | 0.2088 | 0.928 |
| 0.1757 | 5.0 | 5000 | 0.2030 | 0.9295 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
nikamazu/nikamazu | nikamazu | 2024-03-10T14:58:35Z | 0 | 0 | null | [
"dataset:HuggingFaceTB/cosmopedia",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2024-03-10T14:57:29Z | ---
license: mit
datasets:
- HuggingFaceTB/cosmopedia
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
samaxr/codellama_lora | samaxr | 2024-03-10T14:57:59Z | 5 | 0 | peft | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-10T09:06:37Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: codellama_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama_lora
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
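The repository stores a LoRA adapter rather than full weights, so it is loaded on top of the base model. A minimal sketch (plain fp16 loading is assumed here, although the tags indicate 4-bit bitsandbytes training):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "samaxr/codellama_lora")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
```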
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
gdupont/TinyLlama-1.1B-Chat-colors-v1.0_peft | gdupont | 2024-03-10T14:56:49Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T14:55:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
raminass/SCOTUS_AI_V15_CURCUIT | raminass | 2024-03-10T14:54:20Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T10:53:07Z | ---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
model-index:
- name: SCOTUS_AI_V15_CURCUIT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCOTUS_AI_V15_CURCUIT
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co/raminass/scotus-v10) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1082
- eval_accuracy: 0.7486
- eval_runtime: 75.9291
- eval_samples_per_second: 108.114
- eval_steps_per_second: 6.769
- epoch: 4.0
- step: 8184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
drMostert/segformer-b0-scene-parse-150 | drMostert | 2024-03-10T14:54:08Z | 22 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T14:37:22Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set (a metric-computation sketch follows):
- Loss: 3.5433
- Mean Iou: 0.0600
- Mean Accuracy: 0.1407
- Overall Accuracy: 0.4130
- Per Category Iou: [0.4725842300574752, 0.23752185781261304, 0.500907459865348, 0.26304551026233747, 0.20113818567783023, 0.2773168787458298, 0.41824906409273377, nan, 0.0, nan, 0.0011588462105728914, 0.0, 0.07620455691560078, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.09211967767850622, 0.21158826718063, 0.0, 0.009009009009009009, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan]
- Per Category Accuracy: [0.7019862011289986, 0.2599706832653203, 0.974451706755296, 0.7671708061606771, 0.8256484417005024, 0.9195901184609862, 0.558454659058402, nan, 0.0, nan, 0.0012131371727286764, 0.0, 0.08718056302201477, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7549277791078126, 0.3302933433621662, nan, 0.009011546043368065, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan]
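The `nan` entries mark categories absent from the evaluation masks. A sketch of how such numbers are typically produced with the `evaluate` library's `mean_iou` metric; the arrays below are toy stand-ins for real label maps:
```python
import evaluate
import numpy as np

metric = evaluate.load("mean_iou")
predictions = [np.zeros((4, 4), dtype=np.int64)]  # toy predicted label map
references = [np.zeros((4, 4), dtype=np.int64)]   # toy ground-truth label map
results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=150,    # scene_parse_150 category count
    ignore_index=255,  # ignored pixels; absent categories come back as nan
)
print(results["mean_iou"], results["overall_accuracy"])
```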
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.2364 | 1.0 | 20 | 4.1492 | 0.0409 | 0.1240 | 0.3995 | [0.5322293849075467, 0.23690897692857837, 0.4397872027790232, 0.19607643898903274, 0.36383498030038486, 0.12773088147613518, 0.009777174103954194, nan, 0.0, nan, 0.11339002834750708, 0.0, 0.1422973407586709, 9.40875390463287e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08988905804476369, 0.44466963923794084, nan, 0.0009037191518943343, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan] | [0.8962503067930806, 0.259095816281796, 0.589595813676267, 0.7087472147177173, 0.7379164580899938, 0.3320823679143687, 0.01257170387991388, nan, 0.0, nan, 0.11999029490261817, 0.0, 0.22044921132337708, 0.0012360939431396785, 0.0, 0.0, 0.0, nan, 0.0, 0.8528449445375469, 0.7219819481007897, nan, 0.0018304702900591382, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 4.2734 | 2.0 | 40 | 3.9214 | 0.0500 | 0.1198 | 0.3713 | [0.5414063519948691, 0.19841541146471395, 0.5368811396588854, 0.1932222222222222, 0.19532902970225716, 0.1522866572371523, 0.0, nan, 0.0008067375886524823, nan, 0.00181349238333199, 0.0, 0.05775538617646365, 0.0008988949791032748, 0.0, 0.0, 0.0, 0.0, 0.0, 0.07665371555439467, 0.5463317251705208, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.7765582815951422, 0.23428088058355365, 0.7064269767347887, 0.7664607122160645, 0.804457051745555, 0.3320334170936266, 0.0, nan, 0.0008153902672867217, nan, 0.0019851335553741976, 0.0, 0.06418241179015825, 0.21508034610630408, 0.0, 0.0, 0.0, nan, 0.0, 0.7494214348415928, 0.6175253854832644, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 3.4296 | 3.0 | 60 | 3.9287 | 0.0500 | 0.1247 | 0.3684 | [0.504898336414048, 0.16609815628654262, 0.461471733451624, 0.22065343315487834, 0.16518809916592642, 0.28398331595411885, 0.1604012425930234, nan, 0.0011706985763947845, nan, 0.02186771822907331, 0.0, 0.037805308927614856, 0.00042000840016800337, 0.0, 0.0, 0.0, 0.0, 0.0, 0.059326658998615056, 0.4647721010784854, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.6853021116454109, 0.19175037128736774, 0.9458424673448566, 0.7632938564630792, 0.7964291673619223, 0.44437555069673335, 0.17647058823529413, nan, 0.0011738035715885774, nan, 0.024218629375565213, 0.0, 0.043491942779970365, 0.0519159456118665, 0.0, 0.0, 0.0, nan, 0.0, 0.8067592370920118, 0.5550959007145544, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 3.6539 | 4.0 | 80 | 3.9424 | 0.0460 | 0.1303 | 0.3287 | [0.37671262071262074, 0.13443477431760276, 0.4269336776273018, 0.1963029676535461, 0.14844067652609796, 0.2914056148070209, 0.1012685049158097, nan, 0.0, nan, 0.015320700804571772, 0.0, 0.04650892929668009, 0.0008672882232266457, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08581872964530209, 0.4295496258647466, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.5141706587642829, 0.14704138526399546, 0.9420238612371071, 0.7765326194304557, 0.9070026141609656, 0.7953529354175505, 0.10793598217377558, nan, 0.0, nan, 0.01797648719588857, 0.0, 0.05255221786618065, 0.12855377008652658, 0.0, 0.0, 0.0, nan, 0.0, 0.7728034474503231, 0.572113576532531, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 3.8072 | 5.0 | 100 | 3.6808 | 0.0524 | 0.1296 | 0.3789 | [0.49848536561886225, 0.15669095400920174, 0.5116626603724406, 0.2285989936984026, 0.16470623593542788, 0.29551710026963546, 0.1565518949715135, nan, 0.0, nan, 0.0009195500620161669, 0.0, 0.05793396722251421, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0683129055515501, 0.32468649229666785, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.7248861456789899, 0.16903666136853918, 0.8923819818363656, 0.7639060064153315, 0.8128371989543356, 0.8601801390203309, 0.1712762914226351, nan, 0.0, nan, 0.0009484526986787834, 0.0, 0.06359237940393617, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.816614795307637, 0.42600601729973675, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 4.0198 | 6.0 | 120 | 3.7189 | 0.0502 | 0.1318 | 0.3460 | [0.39116293372838296, 0.13864719866417147, 0.40087800798076706, 0.2157543281871196, 0.16127116562617994, 0.3785288215728855, 0.20748449345279119, nan, 0.0, nan, 0.0037886043888214886, 0.0, 0.0654386250902401, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08008943702143075, 0.3613156909249782, nan, 0.006734878901696671, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.5573253097473843, 0.14608733605900429, 0.9829021406418895, 0.7742635836074405, 0.8156367614068265, 0.8987697027053487, 0.22663695629262712, nan, 0.0, nan, 0.00392615303174008, 0.0, 0.07368848912373635, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7674966084111404, 0.544283565250094, nan, 0.007321881160236553, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 3.3227 | 7.0 | 140 | 3.5359 | 0.0534 | 0.1315 | 0.4101 | [0.4770347521615892, 0.24974336818456752, 0.5108344403430883, 0.2366895974550102, 0.17451872484087896, 0.3132020145632557, 0.19149852704129844, nan, 0.0, nan, 0.0, 0.0, 0.07627803718584476, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08870909519706982, 0.24312130647518587, 0.0, 0.001126443255421008, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.7282076920979192, 0.2790960866601132, 0.9653547363848508, 0.7663709302230675, 0.8270945732984779, 0.8330777012694579, 0.20748580978334513, nan, 0.0, nan, 0.0, 0.0, 0.08289299434880092, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7354161679035991, 0.3597216998871756, nan, 0.001126443255421008, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 2.7606 | 8.0 | 160 | 3.5593 | 0.0594 | 0.1378 | 0.4101 | [0.491183620322603, 0.23218100723379267, 0.5228177173827064, 0.24633373487665636, 0.20350864022596432, 0.2936651680126143, 0.3681167890630956, nan, 0.0, nan, 2.0947672713561523e-05, 0.0, 0.056203414282279394, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.07741366689718485, 0.23988607300627762, nan, 0.001126443255421008, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.7214700615404194, 0.2512355323459375, 0.968173231369142, 0.7677584701148393, 0.8041604093664831, 0.8995202819567275, 0.4362155407338262, nan, 0.0, nan, 2.2057039504157753e-05, 0.0, 0.061297809013072496, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7637858111882532, 0.3880218127115457, nan, 0.001126443255421008, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 3.1471 | 9.0 | 180 | 3.5223 | 0.0611 | 0.1404 | 0.4096 | [0.4694048515016408, 0.2304776927428032, 0.5069242587551356, 0.25709018097468106, 0.21042235106866758, 0.26575785951918235, 0.40512733060482037, nan, 0.0, nan, 0.0, 0.0, 0.07140409542602592, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.089745061462668, 0.23717794365518902, nan, 0.007744297381019431, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.7018880273432174, 0.24945296672608552, 0.9716180585721645, 0.7671136721651336, 0.8235719450469993, 0.9215318343504226, 0.5365181649829115, nan, 0.0, nan, 0.0, 0.0, 0.07923479355422397, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7486633149788524, 0.37044001504324936, nan, 0.007744297381019431, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
| 2.7459 | 10.0 | 200 | 3.5433 | 0.0600 | 0.1407 | 0.4130 | [0.4725842300574752, 0.23752185781261304, 0.500907459865348, 0.26304551026233747, 0.20113818567783023, 0.2773168787458298, 0.41824906409273377, nan, 0.0, nan, 0.0011588462105728914, 0.0, 0.07620455691560078, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.09211967767850622, 0.21158826718063, 0.0, 0.009009009009009009, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] | [0.7019862011289986, 0.2599706832653203, 0.974451706755296, 0.7671708061606771, 0.8256484417005024, 0.9195901184609862, 0.558454659058402, nan, 0.0, nan, 0.0012131371727286764, 0.0, 0.08718056302201477, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7549277791078126, 0.3302933433621662, nan, 0.009011546043368065, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan] |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Jackline/Blip2-HateSpeech-PEFT-Whole-2.7b | Jackline | 2024-03-10T14:53:53Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Salesforce/blip2-opt-2.7b",
"base_model:adapter:Salesforce/blip2-opt-2.7b",
"region:us"
] | null | 2024-03-10T14:53:46Z | ---
library_name: peft
base_model: Salesforce/blip2-opt-2.7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
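For reference, the same settings expressed as a `transformers` `BitsAndBytesConfig` (a sketch mirroring the list above):

```python
from transformers import BitsAndBytesConfig

# Values mirror the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```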
### Framework versions
- PEFT 0.6.1
|
ankursinghbisht/a2c-PandaPickAndPlace-v3 | ankursinghbisht | 2024-03-10T14:53:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T14:49:30Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the model class comes from this card; the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption; check the repo's file list if loading fails.
checkpoint = load_from_hub("ankursinghbisht/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
afaji/fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-5 | afaji | 2024-03-10T14:48:45Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T14:48:13Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-5
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4393
- Accuracy: 0.5202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 2.3938 | 0.2475 |
| No log | 2.0 | 126 | 1.5164 | 0.3636 |
| No log | 3.0 | 189 | 1.1653 | 0.4646 |
| No log | 4.0 | 252 | 0.7958 | 0.4394 |
| No log | 5.0 | 315 | 0.5525 | 0.4596 |
| No log | 6.0 | 378 | 1.1572 | 0.4747 |
| No log | 7.0 | 441 | 0.3450 | 0.4798 |
| 1.7802 | 8.0 | 504 | 0.4393 | 0.5202 |
| 1.7802 | 9.0 | 567 | 0.5459 | 0.4343 |
| 1.7802 | 10.0 | 630 | 0.4935 | 0.5101 |
| 1.7802 | 11.0 | 693 | 0.3405 | 0.4697 |
| 1.7802 | 12.0 | 756 | 0.3275 | 0.4697 |
| 1.7802 | 13.0 | 819 | 0.2442 | 0.4646 |
| 1.7802 | 14.0 | 882 | 0.2561 | 0.4495 |
| 1.7802 | 15.0 | 945 | 0.2196 | 0.4495 |
| 0.215 | 16.0 | 1008 | 0.1943 | 0.4495 |
| 0.215 | 17.0 | 1071 | 0.1845 | 0.4545 |
| 0.215 | 18.0 | 1134 | 0.1702 | 0.4444 |
| 0.215 | 19.0 | 1197 | 0.1788 | 0.4545 |
| 0.215 | 20.0 | 1260 | 0.1747 | 0.4545 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Leokb24/ppo-LunarLander-v2 | Leokb24 | 2024-03-10T14:41:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T14:41:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.40 +/- 26.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub("Leokb24/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
afaji/fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-4 | afaji | 2024-03-10T14:41:44Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T14:41:09Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4186
- Accuracy: 0.5051
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 2.8693 | 0.2677 |
| No log | 2.0 | 126 | 2.2777 | 0.3485 |
| No log | 3.0 | 189 | 1.0399 | 0.4141 |
| No log | 4.0 | 252 | 1.8741 | 0.4293 |
| No log | 5.0 | 315 | 1.2779 | 0.4394 |
| No log | 6.0 | 378 | 0.7112 | 0.4646 |
| No log | 7.0 | 441 | 0.8380 | 0.4596 |
| 1.9226 | 8.0 | 504 | 0.7028 | 0.4697 |
| 1.9226 | 9.0 | 567 | 0.6589 | 0.4848 |
| 1.9226 | 10.0 | 630 | 0.6303 | 0.4495 |
| 1.9226 | 11.0 | 693 | 0.7083 | 0.4646 |
| 1.9226 | 12.0 | 756 | 0.4850 | 0.4899 |
| 1.9226 | 13.0 | 819 | 0.5145 | 0.4848 |
| 1.9226 | 14.0 | 882 | 0.7032 | 0.4697 |
| 1.9226 | 15.0 | 945 | 0.4812 | 0.4697 |
| 0.2279 | 16.0 | 1008 | 0.4186 | 0.5051 |
| 0.2279 | 17.0 | 1071 | 0.3735 | 0.5 |
| 0.2279 | 18.0 | 1134 | 0.3894 | 0.5051 |
| 0.2279 | 19.0 | 1197 | 0.3845 | 0.5051 |
| 0.2279 | 20.0 | 1260 | 0.3925 | 0.5051 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
dilip025/llama-2-7b | dilip025 | 2024-03-10T14:39:26Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T17:03:29Z | ---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 7B Chat
arxiv: 2307.09288
base_model: meta-llama/Llama-2-7b-chat-hf
inference: false
model_creator: Meta Llama 2
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are NutriLife chatbot, you are going to get questions related to food, nutrition, health, and diet by the users from Nepal. Answer them very shortly and accurately if the message is only about food, nutrition, and diet. Otherwise, ignore.
<</SYS>>
{prompt}[/INST]
'
quantized_by: Dilip Pokhrel
---
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 7B Chat -- Food and Nutrition
- Model creator: [Meta Llama 2]
- Original model: [Llama 2 7B Chat] <a href="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf">Original Model</a>
- Fine-tuned by: [Dilip Pokhrel] <a href="https://dilippokhrel.com.np">Profile</a>
#### Simple example code to load this model
```python
# Load the model directly, or apply a quantization technique if you have low GPU RAM
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("dilip025/llama-2-7b")
model = AutoModelForCausalLM.from_pretrained("dilip025/llama-2-7b")

system_message = 'You are NutriLife chatbot, you are going to get questions related to food, nutrition, health, and diet by the users from Nepal. Answer them very shortly and accurately if the message is only about food, nutrition, and diet. Otherwise, ignore.'
prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\nTell me some of the famous Nepali food recipes [/INST]"

num_new_tokens = 200  # Change to the number of new tokens you want to generate

# Count the number of tokens in the prompt
num_prompt_tokens = len(tokenizer(prompt)['input_ids'])

# Calculate the maximum length for the generation
max_length = num_prompt_tokens + num_new_tokens

gen = pipeline('text-generation', model=model, tokenizer=tokenizer, max_length=max_length)
result = gen(prompt)
print(result[0]['generated_text'].replace(prompt, ''))
```
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) |
turgutburak01/ppo-LunarLander-v2 | turgutburak01 | 2024-03-10T14:39:03Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"tensorboard",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-03T13:52:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.49 +/- 23.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub("turgutburak01/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
SHONOSUKE/Addtional_Trained_BERT_For_Legal_Domain_v1 | SHONOSUKE | 2024-03-10T14:36:31Z | 194 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-03-10T14:36:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
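Pending official instructions, a minimal fill-mask sketch (the example sentence is illustrative; substitute text in the model's training language and legal domain):

```python
from transformers import pipeline

# Repo id from this card; the masked sentence is only an illustration
fill = pipeline("fill-mask", model="SHONOSUKE/Addtional_Trained_BERT_For_Legal_Domain_v1")
masked = f"The parties shall settle any dispute by {fill.tokenizer.mask_token}."
for pred in fill(masked):
    print(pred["token_str"], round(pred["score"], 4))
```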
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afaji/fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-3 | afaji | 2024-03-10T14:34:45Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T14:34:13Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-3
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8510
- Accuracy: 0.5303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 2.8211 | 0.2879 |
| No log | 2.0 | 126 | 2.0292 | 0.3889 |
| No log | 3.0 | 189 | 1.3492 | 0.4293 |
| No log | 4.0 | 252 | 0.8583 | 0.5152 |
| No log | 5.0 | 315 | 0.8510 | 0.5303 |
| No log | 6.0 | 378 | 1.3129 | 0.4848 |
| No log | 7.0 | 441 | 0.7994 | 0.4444 |
| 1.9846 | 8.0 | 504 | 0.6454 | 0.4697 |
| 1.9846 | 9.0 | 567 | 0.8126 | 0.4899 |
| 1.9846 | 10.0 | 630 | 0.8618 | 0.4495 |
| 1.9846 | 11.0 | 693 | 0.5559 | 0.4848 |
| 1.9846 | 12.0 | 756 | 0.5902 | 0.4949 |
| 1.9846 | 13.0 | 819 | 0.5117 | 0.5051 |
| 1.9846 | 14.0 | 882 | 0.4989 | 0.4848 |
| 1.9846 | 15.0 | 945 | 0.4913 | 0.4697 |
| 0.2505 | 16.0 | 1008 | 0.4599 | 0.4949 |
| 0.2505 | 17.0 | 1071 | 0.3934 | 0.4949 |
| 0.2505 | 18.0 | 1134 | 0.4083 | 0.4848 |
| 0.2505 | 19.0 | 1197 | 0.4291 | 0.4798 |
| 0.2505 | 20.0 | 1260 | 0.4429 | 0.4747 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
CaptainPollutionTV/DoctorBlight-OJ4 | CaptainPollutionTV | 2024-03-10T14:33:06Z | 0 | 0 | null | [
"DreamBooth",
"OpenJourney4",
"license:cc",
"region:us"
] | null | 2024-03-10T10:47:13Z | ---
license: cc
tags:
- DreamBooth
- OpenJourney4
---
Made by CaptainPollutionTV using the getimg.ai Dreambooth tool.
Details about the model:
- **Base model:** Openjourney v4
- **Instance prompt:** doctorblight
- **Class prompt:** a woman
- **Learning rate:** 0.000001
- **Learning rate scheduler:** polynomial
- **Training steps:** 10000 (200 steps warmup)
- **Class images:** 1000
- **Model seed:** 327558656
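A minimal generation sketch, assuming the weights are hosted in diffusers format under this repo (the card itself does not confirm this):

```python
import torch
from diffusers import StableDiffusionPipeline

# Diffusers-format weights under this repo id are an assumption
pipe = StableDiffusionPipeline.from_pretrained(
    "CaptainPollutionTV/DoctorBlight-OJ4", torch_dtype=torch.float16
).to("cuda")

# "doctorblight" is the instance prompt listed above
image = pipe("a portrait of doctorblight, a woman, highly detailed").images[0]
image.save("doctorblight.png")
```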
Sample images:











































































 |
saleng/qlora_test_1k_lora | saleng | 2024-03-10T14:27:27Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-10T14:21:05Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openlm-research/open_llama_3b_v2
model-index:
- name: qlora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: openlm-research/open_llama_3b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
adapter: qlora
lora_model_dir:
sequence_len: 1024
sample_packing: true
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./qlora-out
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
gptq_groupsize:
gptq_model_v1:
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
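The LoRA settings in the config above correspond roughly to this `peft` configuration (a sketch; axolotl resolves the actual target modules at runtime, since `lora_target_linear: true` targets all linear layers):

```python
from peft import LoraConfig

# Values mirror lora_r / lora_alpha / lora_dropout in the axolotl config above
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```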
# qlora-out
This model is a fine-tuned version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2567 | 0.0 | 1 | 1.3469 |
| 1.1726 | 0.25 | 108 | 1.1364 |
| 1.1127 | 0.5 | 216 | 1.1218 |
| 1.4125 | 0.75 | 324 | 1.1111 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0 |
Holarissun/gpt2-airl_sft-imdb-randsampler | Holarissun | 2024-03-10T14:20:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:lvwerra/gpt2-imdb",
"base_model:adapter:lvwerra/gpt2-imdb",
"region:us"
] | null | 2024-03-10T14:20:46Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: lvwerra/gpt2-imdb
model-index:
- name: gpt2-airl_sft-imdb-randsampler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-airl_sft-imdb-randsampler
This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Holarissun/gpt2-airl_sft-imdb-seqsampler | Holarissun | 2024-03-10T14:20:17Z | 2 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:lvwerra/gpt2-imdb",
"base_model:adapter:lvwerra/gpt2-imdb",
"region:us"
] | null | 2024-03-10T14:20:15Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: lvwerra/gpt2-imdb
model-index:
- name: gpt2-airl_sft-imdb-seqsampler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-airl_sft-imdb-seqsampler
This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Litzy619/V0309O3 | Litzy619 | 2024-03-10T14:19:52Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T06:30:50Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309O3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309O3
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
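The list above maps roughly onto `transformers` `TrainingArguments` as follows (a sketch; the output directory is a placeholder and the actual training script is not published):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="V0309O3",              # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,    # 4 * 32 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    num_train_epochs=3,
    fp16=True,                         # "Native AMP" mixed precision
    seed=42,
)
```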
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0623 | 0.09 | 10 | 0.8505 |
| 0.3477 | 0.17 | 20 | 0.1055 |
| 0.1256 | 0.26 | 30 | 0.0916 |
| 0.1151 | 0.34 | 40 | 0.0848 |
| 0.1059 | 0.43 | 50 | 0.0765 |
| 0.0925 | 0.51 | 60 | 0.0806 |
| 0.0848 | 0.6 | 70 | 0.0722 |
| 0.0864 | 0.68 | 80 | 0.0734 |
| 0.0827 | 0.77 | 90 | 0.0735 |
| 0.0799 | 0.85 | 100 | 0.0722 |
| 0.081 | 0.94 | 110 | 0.0675 |
| 0.08 | 1.02 | 120 | 0.0697 |
| 0.0794 | 1.11 | 130 | 0.0636 |
| 0.0716 | 1.19 | 140 | 0.0634 |
| 0.0655 | 1.28 | 150 | 0.0625 |
| 0.0648 | 1.37 | 160 | 0.0660 |
| 0.0636 | 1.45 | 170 | 0.0658 |
| 0.0674 | 1.54 | 180 | 0.0681 |
| 0.0696 | 1.62 | 190 | 0.0658 |
| 0.0686 | 1.71 | 200 | 0.0615 |
| 0.0674 | 1.79 | 210 | 0.0598 |
| 0.0612 | 1.88 | 220 | 0.0593 |
| 0.0616 | 1.96 | 230 | 0.0560 |
| 0.0568 | 2.05 | 240 | 0.0580 |
| 0.0492 | 2.13 | 250 | 0.0608 |
| 0.05 | 2.22 | 260 | 0.0636 |
| 0.0469 | 2.3 | 270 | 0.0632 |
| 0.0535 | 2.39 | 280 | 0.0631 |
| 0.0526 | 2.47 | 290 | 0.0629 |
| 0.0502 | 2.56 | 300 | 0.0610 |
| 0.0559 | 2.65 | 310 | 0.0611 |
| 0.0491 | 2.73 | 320 | 0.0607 |
| 0.0488 | 2.82 | 330 | 0.0614 |
| 0.0466 | 2.9 | 340 | 0.0615 |
| 0.0506 | 2.99 | 350 | 0.0614 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
alex-atelo/unigram-tokenizer | alex-atelo | 2024-03-10T14:15:31Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T14:15:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
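A minimal sketch, assuming the repo hosts a tokenizer in the standard Hub format:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("alex-atelo/unigram-tokenizer")
print(tok.tokenize("Unigram tokenization splits text into subword units."))
```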
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HachiML/myBit-Llama2-jp-127M-test-7 | HachiML | 2024-03-10T14:15:26Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T13:46:58Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-test-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-test-7
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.6539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.0536 | 0.04 | 100 | 7.4802 |
| 6.8962 | 0.07 | 200 | 6.5875 |
| 6.3685 | 0.11 | 300 | 6.1149 |
| 5.8698 | 0.15 | 400 | 5.6208 |
| 5.6334 | 0.18 | 500 | 6.1096 |
| 8.8705 | 0.22 | 600 | 10.3915 |
| 10.5174 | 0.26 | 700 | 10.5752 |
| 10.5929 | 0.29 | 800 | 10.6066 |
| 10.6128 | 0.33 | 900 | 10.6187 |
| 10.6218 | 0.37 | 1000 | 10.6255 |
| 10.6274 | 0.4 | 1100 | 10.6302 |
| 10.6312 | 0.44 | 1200 | 10.6335 |
| 10.6343 | 0.48 | 1300 | 10.6363 |
| 10.6369 | 0.51 | 1400 | 10.6384 |
| 10.6391 | 0.55 | 1500 | 10.6404 |
| 10.6408 | 0.59 | 1600 | 10.6422 |
| 10.6426 | 0.62 | 1700 | 10.6438 |
| 10.6441 | 0.66 | 1800 | 10.6451 |
| 10.6454 | 0.7 | 1900 | 10.6464 |
| 10.6467 | 0.73 | 2000 | 10.6477 |
| 10.6479 | 0.77 | 2100 | 10.6486 |
| 10.649 | 0.81 | 2200 | 10.6496 |
| 10.6499 | 0.84 | 2300 | 10.6506 |
| 10.6508 | 0.88 | 2400 | 10.6515 |
| 10.6516 | 0.92 | 2500 | 10.6522 |
| 10.6524 | 0.95 | 2600 | 10.6531 |
| 10.6534 | 0.99 | 2700 | 10.6539 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
phamsonn/dummy-model | phamsonn | 2024-03-10T14:12:19Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-03-10T14:06:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
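A minimal fill-mask sketch, assuming the checkpoint is the CamemBERT model its tags suggest (the example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("phamsonn/dummy-model")
model = AutoModelForMaskedLM.from_pretrained("phamsonn/dummy-model")

text = f"Le camembert est {tokenizer.mask_token} !"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction at the mask position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```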
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
afaji/fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa-loop-2 | afaji | 2024-03-10T14:10:24Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T14:09:50Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa-distill-of-fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-2-layer-medmcqa-distill-of-fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9186
- Accuracy: 0.5152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 4.2492 | 0.2727 |
| No log | 2.0 | 126 | 3.2851 | 0.3889 |
| No log | 3.0 | 189 | 2.1889 | 0.4444 |
| No log | 4.0 | 252 | 4.3537 | 0.4646 |
| No log | 5.0 | 315 | 1.4476 | 0.4697 |
| No log | 6.0 | 378 | 1.1196 | 0.4646 |
| No log | 7.0 | 441 | 1.5751 | 0.4646 |
| 2.425 | 8.0 | 504 | 0.9802 | 0.4343 |
| 2.425 | 9.0 | 567 | 2.4061 | 0.4495 |
| 2.425 | 10.0 | 630 | 0.9186 | 0.5152 |
| 2.425 | 11.0 | 693 | 0.9569 | 0.4848 |
| 2.425 | 12.0 | 756 | 0.9649 | 0.4798 |
| 2.425 | 13.0 | 819 | 1.3807 | 0.4899 |
| 2.425 | 14.0 | 882 | 0.6900 | 0.4899 |
| 2.425 | 15.0 | 945 | 0.8787 | 0.4747 |
| 0.3146 | 16.0 | 1008 | 0.7985 | 0.4949 |
| 0.3146 | 17.0 | 1071 | 0.9305 | 0.4899 |
| 0.3146 | 18.0 | 1134 | 0.9062 | 0.4848 |
| 0.3146 | 19.0 | 1197 | 0.8571 | 0.5051 |
| 0.3146 | 20.0 | 1260 | 0.8674 | 0.5 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
jyesr/ppo-diy | jyesr | 2024-03-10T14:07:26Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T14:07:18Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.36 +/- 76.39
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.001
'num_envs': 8
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 16
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'jyesr/ppo-diy'
'batch_size': 1024
'minibatch_size': 64}
```
|
pmu/my-pet-dog | pmu | 2024-03-10T14:06:05Z | 0 | 0 | null | [
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-03-10T14:03:52Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by pmu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 4MC22IS075
Sample pictures of this concept:





|
sujith013/whisper-medium-tamil | sujith013 | 2024-03-10T14:05:02Z | 62 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ta",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-24T09:48:50Z | ---
language:
- ta
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ./whisper-medium-tamil-openslr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-tamil
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1628
- Wer: 35.6581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.9386 | 0.15 | 25 | 0.5501 | 43.7602 |
| 0.3073 | 0.31 | 50 | 0.2054 | 40.3324 |
| 0.174 | 0.46 | 75 | 0.1713 | 36.8452 |
| 0.1539 | 0.62 | 100 | 0.1628 | 35.6581 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
flammenai/flammen5-mistral-7B | flammenai | 2024-03-10T13:58:47Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:nbeerbower/Flammen-Kunoichi-7B",
"base_model:merge:nbeerbower/Flammen-Kunoichi-7B",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T13:51:18Z | ---
license: apache-2.0
base_model:
- nbeerbower/Flammen-Kunoichi-7B
- yam-peleg/Experiment26-7B
library_name: transformers
tags:
- mergekit
- merge
---
# flammen5-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/Flammen-Kunoichi-7B](https://huggingface.co/nbeerbower/Flammen-Kunoichi-7B)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/Flammen-Kunoichi-7B
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/Flammen-Kunoichi-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
herutriana44/llama-2-7b-drug-sequence-summarizer | herutriana44 | 2024-03-10T13:58:11Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-08T08:58:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
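A minimal text-generation sketch (the prompt format is an assumption; the card does not document one):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("herutriana44/llama-2-7b-drug-sequence-summarizer")
model = AutoModelForCausalLM.from_pretrained("herutriana44/llama-2-7b-drug-sequence-summarizer")

prompt = "Summarize the following drug sequence: ..."  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```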
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
geoartop/ppo-LunarLander-v2 | geoartop | 2024-03-10T13:47:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T13:47:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.54 +/- 19.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub("geoartop/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Bibek1129/distilgpt2-nepali-multiple-qs-generator | Bibek1129 | 2024-03-10T13:37:04Z | 2 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"ne",
"dataset:Bibek1129/nepali_SQuAD_multiple_qsns",
"base_model:Sakonii/distilgpt2-nepali",
"base_model:adapter:Sakonii/distilgpt2-nepali",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-03-10T05:54:56Z | ---
library_name: peft
base_model: Sakonii/distilgpt2-nepali
license: apache-2.0
datasets:
- Bibek1129/nepali_SQuAD_multiple_qsns
language:
- ne
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The model is finetuned from Sakonii/distilgpt2-nepali on the Bibek1129/nepali_SQuAD_multiple_qsns dataset, which was created by translating the SQuAD dataset to Nepali with the Nepali_nlp library.
- **Model type:** distilgpt2
- **Language(s) (NLP):** ne(Nepali)
- **Finetuned from model :** https://huggingface.co/Sakonii/distilgpt2-nepali
### Model Sources
<!-- Provide the basic links for the model. -->
For training snippets and inference, check the following repository.
- **Repository:** https://github.com/HordesOfGhost/Nepali_LLMs/
## How to Get Started with the Model
Use the code below to get started with the model.
```python
!pip install peft
!pip install transformers
!pip install sentencepiece
```
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM,AutoTokenizer
from transformers import pipeline
base_model = "Sakonii/distilgpt2-nepali"
adapter_model = "Bibek1129/distilgpt2-nepali-multiple-qs-generator"
tokenizer = AutoTokenizer.from_pretrained(base_model)
config = PeftConfig.from_pretrained(adapter_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)
model = model.merge_and_unload()
prompt = """तपाईं प्रश्नहरू उत्पन्न गर्ने मोडेल हुनुहुन्छ। तपाइँलाई एक सन्दर्भ दिइएको हुन्छ र तपाइँ त्यसमा आधारित प्रश्नहरू उत्पन्न गर्नुहुन्छ।
### सन्दर्भ:
राजनीति 'शहरका मामिलाहरू') गतिविधिहरूको सेट हो जुन समूहहरूमा निर्णय गर्न वा व्यक्तिहरू बीचको शक्ति सम्बन्धका अन्य रूपहरू, जस्तै स्रोत वा स्थितिको वितरणसँग सम्बन्धित छ। राजनीति र सरकारको अध्ययन गर्ने सामाजिक विज्ञानको शाखालाई राजनीति विज्ञान भनिन्छ।
यसलाई "राजनीतिक समाधान" को सन्दर्भमा सकारात्मक रूपमा प्रयोग गर्न सकिन्छ जुन सम्झौता र अहिंसात्मक छ, वा वर्णनात्मक रूपमा "सरकारको कला वा विज्ञान" को रूपमा, तर प्राय: नकारात्मक अर्थ पनि बोक्छ। अवधारणालाई विभिन्न तरिकामा परिभाषित गरिएको छ, र यसलाई
व्यापक रूपमा प्रयोग गर्ने वा सीमित रूपमा, प्रायोगिक वा सामान्य रूपमा, र यसको लागि द्वन्द्व वा सहयोग बढी आवश्यक छ कि छैन भन्ने बारेमा विभिन्न दृष्टिकोणहरूमा मौलिक रूपमा फरक फरक विचारहरू छन्।
### प्रश्नहरू:
"""
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=64)
def format_output(prompt,pipe):
inference = pipe(prompt)[0]["generated_text"]
# Select after प्रश्नहरू: and break line after each ?
inference = inference.split("प्रश्नहरू:")[-1].replace("?","?\n")
# Remove last incomplete question
index = inference.rfind("?")
inference = inference[:index+1]
return inference
print(format_output(prompt, pipe))
'''
Output:
राजनीतिशास्त्रले मानिसहरूलाई केको रूपमा देख्छ?
राजनीतिशास्त्र प्राय: कुन प्रकारको अभ्याससँग सम्बन्धित छ?
राजनीतिशास्त्रले मानिसलाई केको रूपमा देख्छ?
राजनीति विज्ञानमा केको भूमिका निर्भर छ?
राजनीतिक अर्थशास्त्रको शाखालाई कसरी प्रभावित गरेर समाजलाई सांस्कृतिक परिभाषामा के असर हुन्छ,?
'''
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The dataset was created by converting the SQuAD dataset to Nepali using the Nepali_nlp library; the model itself was finetuned with PEFT (LoRA).
https://huggingface.co/datasets/Bibek1129/nepali_SQuAD_multiple_qsns
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained with the LoRA config below (rank=32, lora_alpha=64, target_modules=["c_fc", "c_attn", "c_proj", "lm_head"]), with 512 tokens per instance, 4 instances per batch, and around 118.1K training steps.
#### Training Hyperparameters
Following are the training hyperparameters:
- learning_rate: 2e-4
- fp16: True
- optim: "paged_adamw_32bit"
- lr_scheduler_type: "constant"
- num_train_epochs: 48

LoRA config:
```python
config = {
    "alpha_pattern": {},
    "auto_mapping": None,
    "base_model_name_or_path": "Sakonii/distilgpt2-nepali",
    "bias": "none",
    "fan_in_fan_out": False,
    "inference_mode": True,
    "init_lora_weights": True,
    "layers_pattern": None,
    "layers_to_transform": None,
    "lora_alpha": 64,
    "lora_dropout": 0.05,
    "modules_to_save": None,
    "peft_type": "LORA",
    "r": 32,
    "rank_pattern": {},
    "revision": None,
    "target_modules": [
        "c_proj",
        "lm_head",
        "c_fc",
        "c_attn"
    ],
    "task_type": "CAUSAL_LM"
}
```
### Results
- train/loss: 3.1273
### Framework versions
- PEFT 0.9.0
|
adhityamw11/distilhubert-finetuned_distillhubert-ravdess | adhityamw11 | 2024-03-10T13:34:20Z | 147 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:adhityamw11/ravdess_distillhubert",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-10T12:22:19Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- adhityamw11/ravdess_distillhubert
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-ravdess
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-ravdess
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the RAVDESS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9331
- Accuracy: 0.8438
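A minimal inference sketch (the repository id is taken from this page; `ffmpeg` is assumed to be available for decoding audio, and the file path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="adhityamw11/distilhubert-finetuned_distillhubert-ravdess",
)

# "speech.wav" is a placeholder path to a local recording.
print(classifier("speech.wav"))
```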
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0641 | 1.0 | 144 | 2.0414 | 0.2778 |
| 1.751 | 2.0 | 288 | 1.7801 | 0.3854 |
| 1.5345 | 3.0 | 432 | 1.3610 | 0.5417 |
| 1.1913 | 4.0 | 576 | 1.1896 | 0.5417 |
| 0.8227 | 5.0 | 720 | 0.7924 | 0.7535 |
| 0.6563 | 6.0 | 864 | 0.6772 | 0.7743 |
| 0.4082 | 7.0 | 1008 | 0.6398 | 0.7847 |
| 0.5133 | 8.0 | 1152 | 0.6409 | 0.7951 |
| 0.0467 | 9.0 | 1296 | 0.7356 | 0.7951 |
| 0.0232 | 10.0 | 1440 | 0.8220 | 0.8160 |
| 0.0298 | 11.0 | 1584 | 0.7164 | 0.8438 |
| 0.0021 | 12.0 | 1728 | 0.7578 | 0.8611 |
| 0.0014 | 13.0 | 1872 | 0.6806 | 0.8507 |
| 0.0012 | 14.0 | 2016 | 0.6953 | 0.8507 |
| 0.0009 | 15.0 | 2160 | 0.7311 | 0.8403 |
| 0.0007 | 16.0 | 2304 | 0.7312 | 0.8472 |
| 0.0006 | 17.0 | 2448 | 0.7528 | 0.8438 |
| 0.0005 | 18.0 | 2592 | 0.7748 | 0.8299 |
| 0.0005 | 19.0 | 2736 | 0.7692 | 0.8472 |
| 0.0004 | 20.0 | 2880 | 0.7806 | 0.8403 |
| 0.0003 | 21.0 | 3024 | 0.7907 | 0.8438 |
| 0.0003 | 22.0 | 3168 | 0.7909 | 0.8438 |
| 0.0003 | 23.0 | 3312 | 0.8060 | 0.8472 |
| 0.0003 | 24.0 | 3456 | 0.8302 | 0.8438 |
| 0.0002 | 25.0 | 3600 | 0.8296 | 0.8438 |
| 0.0002 | 26.0 | 3744 | 0.8306 | 0.8403 |
| 0.0002 | 27.0 | 3888 | 0.8399 | 0.8438 |
| 0.0002 | 28.0 | 4032 | 0.8447 | 0.8438 |
| 0.0002 | 29.0 | 4176 | 0.8488 | 0.8403 |
| 0.0002 | 30.0 | 4320 | 0.8564 | 0.8472 |
| 0.0002 | 31.0 | 4464 | 0.8618 | 0.8472 |
| 0.0001 | 32.0 | 4608 | 0.8736 | 0.8438 |
| 0.0001 | 33.0 | 4752 | 0.8793 | 0.8403 |
| 0.0001 | 34.0 | 4896 | 0.8840 | 0.8438 |
| 0.0001 | 35.0 | 5040 | 0.8870 | 0.8438 |
| 0.0001 | 36.0 | 5184 | 0.8882 | 0.8472 |
| 0.0001 | 37.0 | 5328 | 0.9033 | 0.8403 |
| 0.0001 | 38.0 | 5472 | 0.8980 | 0.8403 |
| 0.0001 | 39.0 | 5616 | 0.9081 | 0.8472 |
| 0.0001 | 40.0 | 5760 | 0.9086 | 0.8472 |
| 0.0001 | 41.0 | 5904 | 0.9119 | 0.8438 |
| 0.0001 | 42.0 | 6048 | 0.9106 | 0.8507 |
| 0.0001 | 43.0 | 6192 | 0.9188 | 0.8438 |
| 0.0001 | 44.0 | 6336 | 0.9238 | 0.8438 |
| 0.0001 | 45.0 | 6480 | 0.9282 | 0.8438 |
| 0.0001 | 46.0 | 6624 | 0.9286 | 0.8438 |
| 0.0001 | 47.0 | 6768 | 0.9312 | 0.8438 |
| 0.0001 | 48.0 | 6912 | 0.9296 | 0.8472 |
| 0.0001 | 49.0 | 7056 | 0.9324 | 0.8438 |
| 0.0001 | 50.0 | 7200 | 0.9331 | 0.8438 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
hyuk2010/koalpaca-polyglot-12.8b-bill | hyuk2010 | 2024-03-10T13:33:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T13:33:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rk68/T5-small-lora-aqua-rat-gemma-rationales-400-samples | rk68 | 2024-03-10T13:32:17Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T13:31:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jojo80565/CartPole-v1-ppo | jojo80565 | 2024-03-10T13:21:57Z | 0 | 0 | null | [
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T12:41:54Z | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 32.80 +/- 9.59
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'CatePole.py'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 10000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'jojo80565/CartPole-v1-ppo'
'batch_size': 512
'minibatch_size': 128}
```
|
woonchae/distilbert-base-uncased-finetuned-emotion | woonchae | 2024-03-10T13:18:43Z | 95 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T12:30:54Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9225783519597501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9225
- F1: 0.9226
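A minimal inference sketch (label names depend on the checkpoint's `id2label` mapping and may render as `LABEL_0` … `LABEL_5` if it was not customized):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="woonchae/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy today!"))
```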
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7983 | 1.0 | 250 | 0.3130 | 0.9035 | 0.9029 |
| 0.246 | 2.0 | 500 | 0.2201 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Organika/StarCoder-1B-WoW-JSON | Organika | 2024-03-10T13:15:56Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T13:12:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jaekwanyda/hansol_2 | jaekwanyda | 2024-03-10T13:09:56Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T11:29:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CaptainPollutionTV/CaptainPlanet-DS | CaptainPollutionTV | 2024-03-10T13:09:11Z | 0 | 0 | null | [
"Dreambooth",
"DreamShaper",
"license:cc",
"region:us"
] | null | 2024-03-10T10:03:38Z | ---
license: cc
tags:
- Dreambooth
- DreamShaper
---
Made by CaptainPollutionTV using the getimg.ai Dreambooth tool.
Details about the model:
- **Base Model:** DreamShaper
- **Instance prompt:** captainplanet
- **Class prompt:** a man
- **Learning Rate:** 0.000001
- **Learning Rate Scheduler:** polynomial
- **Training Steps:** 10000 (200 steps warmup)
- **Class images:** 1000
- **Model seed:** 687176212
Sample images:











































































 |
Samvardhan777/gemma-2b-mt-German-to-English | Samvardhan777 | 2024-03-10T13:07:17Z | 45 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"translation",
"de",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-03-09T19:10:07Z | ---
license: mit
language:
- de
- en
pipeline_tag: translation
tags:
- text-generation-inference
---
# Description
## Gemma 2B German to English v0.1 Alpha [Experimental Release]
This is a German instruction-finetuned version of Google's Gemma 2B model. It is an experiment to see whether Gemma can translate German to English after expanding its vocabulary. While the responses may be rusty at times, it shows a lot of promise for a 2B-parameter model.
---
# Model description 🗄️:
- **Model type:** A 2B-parameter GPT-like model finetuned on 100,000 samples consisting of an equal proportion of English and German samples.
- **Language(s):** Bilingual. English and German.
- **License:** Google Gemma Terms of Use
- **Finetuned from model:** Samvardhan777/gemma-2b-mt-German-to-English
- **Training precision:** bfloat16
- **Training hardware:** Free Google Colab
- **Dataset:** kaitchup/opus-German-to-English
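
A minimal usage sketch (the prompt template below is an assumption; adjust it to whatever format was used during finetuning):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Samvardhan777/gemma-2b-mt-German-to-English"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# The instruction format is an assumption, not confirmed by this card.
prompt = "Translate from German to English:\nGuten Morgen, wie geht es dir?\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```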
--- |
ndieckow/dqn-SpaceInvaders-NoFrameskip-v4 | ndieckow | 2024-03-10T13:07:12Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T13:06:42Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 358.50 +/- 169.93
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ndieckow -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ndieckow -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ndieckow
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Sif10/my_awesome_model_imdb | Sif10 | 2024-03-10T13:05:54Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T12:08:16Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model_imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.85908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_imdb
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7781
- Accuracy: 0.8591
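A minimal inference sketch (the two labels are presumably positive/negative IMDb sentiment, but the exact `id2label` mapping should be checked in the checkpoint config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Sif10/my_awesome_model_imdb",
)

print(classifier("This movie was an absolute masterpiece."))
```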
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4013 | 1.0 | 782 | 0.3535 | 0.8445 |
| 0.2107 | 2.0 | 1564 | 0.3589 | 0.8550 |
| 0.1158 | 3.0 | 2346 | 0.5241 | 0.8576 |
| 0.0423 | 4.0 | 3128 | 0.7881 | 0.8545 |
| 0.0238 | 5.0 | 3910 | 0.7781 | 0.8591 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
DTempo/videomae-base-finetuned-ucf101-subset | DTempo | 2024-03-10T13:03:15Z | 47 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-02-15T18:55:31Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1105
- Accuracy: 0.9571
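A minimal inference sketch (the `video-classification` pipeline needs a video decoding backend such as `decord`; the file path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "video-classification",
    model="DTempo/videomae-base-finetuned-ucf101-subset",
)

# "clip.mp4" is a placeholder path to a short local video.
print(classifier("clip.mp4"))
```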
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5664 | 0.25 | 75 | 0.5633 | 0.7571 |
| 0.3826 | 1.25 | 150 | 0.3484 | 0.8286 |
| 0.0648 | 2.25 | 225 | 0.3219 | 0.8429 |
| 0.044 | 3.25 | 300 | 0.1105 | 0.9571 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
nocudaexe/Dark-Waifu-7b | nocudaexe | 2024-03-10T12:58:50Z | 16 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Endevor/InfinityRP-v1-7B",
"base_model:merge:Endevor/InfinityRP-v1-7B",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:Nitral-AI/Kunocchini-7b-128k-test",
"base_model:merge:Nitral-AI/Kunocchini-7b-128k-test",
"base_model:TeeZee/DarkSapling-7B-v2.0",
"base_model:merge:TeeZee/DarkSapling-7B-v2.0",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:merge:mlabonne/AlphaMonarch-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T12:55:20Z | ---
base_model:
- TeeZee/DarkSapling-7B-v2.0
- mlabonne/AlphaMonarch-7B
- Test157t/Kunocchini-7b-128k-test
- NeverSleep/Noromaid-7B-0.4-DPO
- Endevor/InfinityRP-v1-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) as a base.
### Models Merged
The following models were included in the merge:
* [TeeZee/DarkSapling-7B-v2.0](https://huggingface.co/TeeZee/DarkSapling-7B-v2.0)
* [Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mlabonne/AlphaMonarch-7B
# No parameters necessary for base model
- model: Test157t/Kunocchini-7b-128k-test
parameters:
density: 0.43
weight: 0.4
- model: TeeZee/DarkSapling-7B-v2.0
parameters:
density: 0.23
weight: 0.3
- model: NeverSleep/Noromaid-7B-0.4-DPO
parameters:
density: 0.23
weight: 0.3
- model: Endevor/InfinityRP-v1-7B
parameters:
density: 0.2
weight: 0.3
merge_method: dare_ties
base_model: mlabonne/AlphaMonarch-7B
parameters:
int8_mask: true
dtype: bfloat16
```
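
To reproduce a merge from this configuration, a sketch of the standard mergekit CLI invocation (assuming the YAML above is saved as `config.yaml` and mergekit is installed):

```bash
pip install mergekit
# Writes the merged weights to ./Dark-Waifu-7b
mergekit-yaml config.yaml ./Dark-Waifu-7b
```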
|
context-mt/scat-mbart50-1toM-ctx4-cwd1-en-fr | context-mt | 2024-03-10T12:51:55Z | 99 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"arxiv:2310.01188",
"contextual-mt",
"document-mt",
"translation",
"en",
"fr",
"dataset:inseq/scat",
"dataset:gsarti/iwslt2017_context",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-14T12:17:50Z | ---
license: apache-2.0
datasets:
- inseq/scat
- gsarti/iwslt2017_context
language:
- en
- fr
pipeline_tag: translation
tags:
- arxiv:2310.01188
- contextual-mt
- document-mt
---
This model corresponds to the [mBART 1-to-50 model](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) further trained on English-to-French translation on the [IWSLT17 dataset](https://huggingface.co/datasets/gsarti/iwslt2017_context) with context tags using the format:
```
Input: SOURCE_CTX <brk> SOURCE_CURR
Output: TARGET_CURR
```
and further fine-tuned on the training split of [SCAT+](https://huggingface.co/datasets/inseq/scat). The model was used in the evaluation of the paper [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](https://openreview.net/forum?id=XTHfNGI3zT) published at ICLR 2024, also available on [Arxiv](https://arxiv.org/abs/2310.01188). It can be used for English to French contextual and non-contextual translation. |
XavierScor/poca-SoccerTwos | XavierScor | 2024-03-10T12:51:37Z | 24 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-03-10T12:51:29Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: XavierScor/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
context-mt/scat-marian-big-ctx4-cwd1-en-fr | context-mt | 2024-03-10T12:42:48Z | 136 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"arxiv:2310.01188",
"contextual-mt",
"document-mt",
"translation",
"en",
"fr",
"dataset:inseq/scat",
"dataset:gsarti/iwslt2017_context",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-13T14:03:10Z | ---
license: apache-2.0
datasets:
- inseq/scat
- gsarti/iwslt2017_context
language:
- en
- fr
pipeline_tag: translation
tags:
- arxiv:2310.01188
- contextual-mt
- document-mt
---
This model corresponds to the [`Helsinki-NLP/opus-mt-tc-big-en-fr`](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr) model further trained on English-to-French translation on the [IWSLT17 dataset](https://huggingface.co/datasets/gsarti/iwslt2017_context) with context tags using the format:
```
Input: SOURCE_CTX <brk> SOURCE_CURR
Output: TARGET_CURR
```
and further fine-tuned on the training split of [SCAT+](https://huggingface.co/datasets/inseq/scat). The model was used in the evaluation of the paper [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](https://openreview.net/forum?id=XTHfNGI3zT) published at ICLR 2024, also available on [Arxiv](https://arxiv.org/abs/2310.01188). It can be used for English to French contextual and non-contextual translation.
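A minimal translation sketch using the tagged format above (the sentences are illustrative, and the tokenizer is assumed to handle the `<brk>` tag as it did during training):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "context-mt/scat-marian-big-ctx4-cwd1-en-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Context sentence(s) first, then <brk>, then the sentence to translate.
source = "The doctor greeted the nurse. <brk> She was very busy."
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|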
context-mt/scat-marian-small-target-ctx4-cwd0-en-fr | context-mt | 2024-03-10T12:41:21Z | 136 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"arxiv:2310.01188",
"contextual-mt",
"document-mt",
"translation",
"en",
"fr",
"dataset:inseq/scat",
"dataset:gsarti/iwslt2017_context",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-28T12:08:57Z | ---
license: apache-2.0
datasets:
- inseq/scat
- gsarti/iwslt2017_context
language:
- en
- fr
pipeline_tag: translation
tags:
- arxiv:2310.01188
- contextual-mt
- document-mt
---
This model corresponds to the [`Helsinki-NLP/opus-mt-en-fr`](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) model further trained on English-to-French translation on the [IWSLT17 dataset](https://huggingface.co/datasets/gsarti/iwslt2017_context) with context tags using the format:
```
Input: SOURCE_CTX <brk> SOURCE_CURR
Output: TARGET_CTX <brk> TARGET_CURR
```
and further fine-tuned on the training split of [SCAT+](https://huggingface.co/datasets/inseq/scat). The model was used in the evaluation of the paper [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](https://openreview.net/forum?id=XTHfNGI3zT) published at ICLR 2024, also available on [Arxiv](https://arxiv.org/abs/2310.01188). It can be used for English to French contextual and non-contextual translation. |
context-mt/scat-mbart50-1toM-target-ctx4-cwd0-en-fr | context-mt | 2024-03-10T12:40:40Z | 140 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"arxiv:2310.01188",
"contextual-mt",
"document-mt",
"translation",
"en",
"fr",
"dataset:inseq/scat",
"dataset:gsarti/iwslt2017_context",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-28T12:08:11Z | ---
license: apache-2.0
datasets:
- inseq/scat
- gsarti/iwslt2017_context
language:
- en
- fr
pipeline_tag: translation
tags:
- arxiv:2310.01188
- contextual-mt
- document-mt
---
This model corresponds to the [mBART 1-to-50 model](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) further trained on English-to-French translation on the [IWSLT17 dataset](https://huggingface.co/datasets/gsarti/iwslt2017_context) with context tags using the format:
```
Input: SOURCE_CTX <brk> SOURCE_CURR
Output: TARGET_CTX <brk> TARGET_CURR
```
and further fine-tuned on the training split of [SCAT+](https://huggingface.co/datasets/inseq/scat). The model was used in the evaluation of the paper [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](https://openreview.net/forum?id=XTHfNGI3zT) published at ICLR 2024, also available on [Arxiv](https://arxiv.org/abs/2310.01188). It can be used for English to French contextual and non-contextual translation. |
kgy0713/bert_kor_news_classification_model | kgy0713 | 2024-03-10T12:40:19Z | 95 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T12:30:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Model type:** BERT model
- **Language(s) (NLP):** Korean
- **Finetuned from model:** kykim/bert-kor-base
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
"kykim/bert-kor-base"을 기본 모델로 해서 "KETI-AIR/kor_ag_news" 데이터로 fine-tuning 한 모델입니다. 간단한 classification model입니다.
```python
labels = {"World", "Business", "Sci/Tech", "Sports"}
```
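A minimal inference sketch (the sample headline is illustrative; check the checkpoint's `id2label` config for the exact label mapping):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kgy0713/bert_kor_news_classification_model",
)

# Sample Korean headline: "Samsung Electronics breaks ground on a new semiconductor plant".
print(classifier("삼성전자, 새 반도체 공장 착공"))
```
|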
context-mt/scat-marian-big-target-ctx4-cwd0-en-fr | context-mt | 2024-03-10T12:39:40Z | 120 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"arxiv:2310.01188",
"contextual-mt",
"document-mt",
"translation",
"en",
"fr",
"dataset:inseq/scat",
"dataset:gsarti/iwslt2017_context",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-28T12:08:38Z | ---
license: apache-2.0
datasets:
- inseq/scat
- gsarti/iwslt2017_context
language:
- en
- fr
pipeline_tag: translation
tags:
- arxiv:2310.01188
- contextual-mt
- document-mt
---
This model corresponds to the [`Helsinki-NLP/opus-mt-tc-big-en-fr`](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr) model further trained on English-to-French translation on the [IWSLT17 dataset](https://huggingface.co/datasets/gsarti/iwslt2017_context) with context tags using the format:
```
Input: SOURCE_CTX <brk> SOURCE_CURR
Output: TARGET_CTX <brk> TARGET_CURR
```
and further fine-tuned on the training split of [SCAT+](https://huggingface.co/datasets/inseq/scat). The model was used in the evaluation of the paper [Quantifying the Plausibility of Context Reliance in Neural Machine Translation](https://openreview.net/forum?id=XTHfNGI3zT) published at ICLR 2024, also available on [Arxiv](https://arxiv.org/abs/2310.01188). It can be used for English to French contextual and non-contextual translation.
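A minimal usage sketch along the same lines (the example sentences and generation settings are illustrative assumptions, not from the original card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "context-mt/scat-marian-big-target-ctx4-cwd0-en-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Contextual input: preceding source sentences, then <brk>, then the current sentence.
source = "I bought a new lamp yesterday. It was quite cheap. <brk> Do you like it?"
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
|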
CaptainPollutionTV/DoctorBlight-AG | CaptainPollutionTV | 2024-03-10T12:30:12Z | 0 | 0 | null | [
"DreamBooth",
"Analog",
"license:cc",
"region:us"
] | null | 2024-03-10T10:38:05Z | ---
license: cc
tags:
- DreamBooth
- Analog
---
Made by CaptainPollutionTV using the getimg.ai Dreambooth tool.
Details about the model:
| Setting | Value |
| --- | --- |
| Base Model | Analog |
| Instance prompt | doctorblight |
| Class prompt | a woman |
| Learning Rate | 0.000001 |
| Learning Rate Scheduler | polynomial |
| Training Steps | 10000 (200 steps warmup) |
| Class images | 1000 |
| Model seed | 1520874964 |
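The model itself was trained with getimg.ai's DreamBooth tool, so the exact pipeline is not public; as a rough, hypothetical equivalent, the settings above map onto the flags of diffusers' `train_dreambooth.py` script roughly as follows (the base-model id and data directories are placeholders, not from this card):
```bash
# Sketch only: <base-model-id> and the data directories are placeholders.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="<base-model-id>" \
  --instance_data_dir="./doctorblight_images" \
  --class_data_dir="./class_images" \
  --instance_prompt="doctorblight" \
  --class_prompt="a woman" \
  --with_prior_preservation \
  --num_class_images=1000 \
  --learning_rate=1e-6 \
  --lr_scheduler="polynomial" \
  --lr_warmup_steps=200 \
  --max_train_steps=10000 \
  --seed=1520874964 \
  --output_dir="./doctorblight-model"
```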
Sample images:











































































 |
Peterwu4084/ppo-Huggy | Peterwu4084 | 2024-03-10T12:11:47Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-10T12:09:53Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Peterwu4084/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ghost-x/ghost-7b-v0.9.0 | ghost-x | 2024-03-10T12:10:11Z | 2,288 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"ghost",
"conversational",
"en",
"vi",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-29T16:01:08Z | ---
language:
- en
- vi
license: mit
library_name: transformers
tags:
- ghost
pipeline_tag: text-generation
base_model: HuggingFaceH4/zephyr-7b-beta
widget:
- text: '<|system|>
You are a helpful assistant.</s>
<|user|>
Thông tin về Peristernia despecta</s>
<|assistant|>
'
output:
text: Peristernia despecta là một loài ốc biển, là động vật thân mềm chân bụng
sống ở biển trong họ Fasciolariidae.
model-index:
- name: lamhieu/ghost-7b-v0.9.0
results:
- task:
type: text-generation
dataset:
name: VMLU
type: vmlu_v1.5
metrics:
- type: avg
value: 36.06
name: Average
verified: true
- type: stem
value: 33.54
name: STEM
verified: true
- type: ss
value: 38.74
name: Social science
verified: true
- type: hm
value: 37.15
name: Humanities
verified: true
- type: ot
value: 36.78
name: Other
verified: true
- task:
type: text-generation
dataset:
name: Open LLM Leaderboard
type: open_llm_leaderboard
metrics:
- type: avg
value: 56.89
name: Average
verified: true
- type: arc
value: 53.07
name: ARC
verified: true
- type: hs
value: 77.93
name: HellaSwag
verified: true
- type: mmlu
value: 55.09
name: MMLU
verified: true
- type: wg
value: 73.72
name: Winogrande
verified: true
- type: gsm8k
value: 33.74
name: GSM8K
verified: true
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 53.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.79
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 33.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0
name: Open LLM Leaderboard
---
# Model Card for Model ID
**Ghost 7B Alpha, flying, v0.9.0**
## Model Details
### Model Description
This model is fine-tuned from **HuggingFaceH4/zephyr-7b-beta** on a small synthetic dataset (about 200MB), roughly 50% English and 50% Vietnamese.
- **Developed by:** **Lam H**
- **Language(s) (NLP):** English, Vietnamese
- **License:** MIT
- **Finetuned from model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## Uses
This model supports both conversational chat and task-style prompts. Feel free to experiment and don't limit your creativity.
The simplest way to try it is to use the `pipeline` from `transformers`.
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="lamhieu/ghost-7b-v0.9.0",
torch_dtype=torch.bfloat16,
)
```
You can then try any of the code samples below, each of which formats the prompt using the chat template.
```python
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "nói tôi biết bệnh dịch hạch ở châu Âu do khuẩn nào gây ra"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
outputs = pipe.model.generate(**tokenized, max_new_tokens=512)
results = pipe.tokenizer.batch_decode(outputs)[0]
print(results)
# Bệnh dịch hạch ở châu Âu do khuẩn gây ra là do khuẩn Yersinia pestis.
```
```python
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Thông tin về Peristernia despecta"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
outputs = pipe.model.generate(**tokenized, max_new_tokens=512)
results = pipe.tokenizer.batch_decode(outputs)[0]
print(results)
# Peristernia despecta là một loài ốc biển, là động vật thân mềm chân bụng sống ở biển trong họ Fasciolariidae.
# ...
```
```python
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "do u know vietnam ?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
outputs = pipe.model.generate(**tokenized, max_new_tokens=512)
results = pipe.tokenizer.batch_decode(outputs)[0]
print(results)
# Yes, I have knowledge about Vietnam. Vietnam is a country in Southeast Asia, bordered by China to the north, Laos and Cambodia to the west, and the South China Sea to the east and south. Its capital city is Hanoi, and its largest city is Ho Chi Minh City (formerly known as Saigon). Vietnam has a population of approximately 100 million people and a diverse cultural heritage influenced by both Chinese and French colonialism. The country has a rich history, including periods of independence, colonization, and resistance, and has experienced significant economic growth in recent years.
```
```python
messages = [
{"role": "system", "content": "You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old."},
{"role": "user", "content": "Tôi yêu em nhiều hơn em nghĩ.\n\nWhich language is this?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
outputs = pipe.model.generate(**tokenized, max_new_tokens=512)
results = pipe.tokenizer.batch_decode(outputs)[0]
print(results)
# This is Vietnamese language. Vietnamese is a language spoken mainly in Vietnam and by the Vietnamese diaspora in many other countries. The sentence you provided means "I love you more than you think." It's like you have more love for someone than they realize.
```
Another example showing how to chat over multiple turns.
```python
messages = [
# {"role": "system", "content": "You are a helpful and knowledgeable assistant. You like to help and always give honest information, in its original language. In communication, you are always respectful, equal and promote positive behavior."},
{"role": "system", "content": "You are a helpful assistant."}, # Describe to your assistant, anything.
{"role": "user", "content": "Bla bla bla"},
{"role": "assistant", "content": "Bla bla bla"},
{"role": "user", "content": "Bla bla bla"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
outputs = pipe.model.generate(**tokenized, max_new_tokens=512)
results = pipe.tokenizer.batch_decode(outputs)[0]
print(results)
```
## Evaluation
### Results
#### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.89|
|AI2 Reasoning Challenge (25-Shot)|53.07|
|HellaSwag (10-Shot) |77.93|
|MMLU (5-Shot) |55.09|
|TruthfulQA (0-shot) |47.79|
|Winogrande (5-shot) |73.72|
|GSM8k (5-shot) |33.74|
#### VMLU
Below are the results from the VMLU evaluation suite, which is commonly used to evaluate models that work with Vietnamese.
Note: the results were obtained with the model in 4-bit quantization. I'm not sure whether this causes any loss in accuracy; if someone can help run the evaluation at full precision, that would be great.

<details>
<summary>Details</summary>
```python
{
"stem": {
"elementary_mathematics": 32.22,
"elementary_science": 56.11,
"high_school_biology": 32.78,
"high_school_chemistry": 27.78,
"high_school_mathematics": 33.78,
"high_school_physics": 26.11,
"introduction_to_chemistry": 26.82,
"introduction_to_physics": 33.53,
"introduction_to_programming": 39.66,
"metrology_engineer": 36.17,
"middle_school_biology": 40,
"middle_school_chemistry": 26.67,
"middle_school_mathematics": 27.78,
"middle_school_physics": 27.22,
"operating_system": 38.33,
"statistics_and_probability": 18.39,
"total": 33.54,
"applied_informatics": 47.78,
"computer_architecture": 36.11,
"computer_network": 41.34,
"discrete_mathematics": 29.7,
"electrical_engineering": 26.14
},
"other": {
"total": 36.78,
"accountant": 29.17,
"civil_servant": 29.82,
"clinical_pharmacology": 35.56,
"driving_license_certificate": 56.73,
"environmental_engineering": 32.16,
"internal_basic_medicine": 36.84,
"preschool_pedagogy": 45.1,
"tax_accountant": 24.71,
"tax_civil_servant": 40.94
},
"total": 36.06,
"humanity": {
"introduction_to_vietnam_culture": 31.11,
"logic": 28.16,
"middle_school_history": 38.33,
"administrative_law": 32.22,
"revolutionary_policy_of_the_vietnamese_commununist_part": 40.56,
"vietnamese_language_and_literature": 35.06,
"total": 37.15,
"middle_school_literature": 36.21,
"business_law": 38.55,
"civil_law": 48.33,
"criminal_law": 37.42,
"economic_law": 38.51,
"education_law": 36.75,
"elementary_history": 35.03,
"high_school_history": 27.78,
"high_school_literature": 32.78,
"history_of_world_civilization": 43.33,
"idealogical_and_moral_cultivation": 39.44,
"introduction_to_laws": 49.21
},
"social_science": {
"business_administration": 37.36,
"high_school_civil_education": 42.78,
"high_school_geography": 38.27,
"ho_chi_minh_ideology": 40.22,
"macroeconomics": 27.78,
"microeconomics": 36.67,
"middle_school_civil_education": 51.69,
"middle_school_geography": 32.65,
"principles_of_marxism_and_leninism": 35.56,
"sociology": 44.38,
"total": 38.74
}
}
```
</details>
## More Information
Many thanks to
- Datasets: [5CD-AI](https://huggingface.co/5CD-AI), [vilm](https://huggingface.co/vilm).
- Library: [unsloth](https://github.com/unslothai/unsloth)
## Model Card Contact
**Lam H** ([email protected])
|
ZainAli60/X | ZainAli60 | 2024-03-10T12:04:28Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T12:03:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
automerger/Multi_verse_modelM7-7B | automerger | 2024-03-10T12:00:54Z | 17 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:MTSAIR/multi_verse_model",
"base_model:merge:MTSAIR/multi_verse_model",
"base_model:liminerity/M7-7b",
"base_model:merge:liminerity/M7-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T08:38:04Z | ---
base_model:
- liminerity/M7-7b
- ammarali32/multi_verse_model
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) as a base.
### Models Merged
The following models were included in the merge:
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ammarali32/multi_verse_model
# No parameters necessary for base model
- model: liminerity/M7-7b
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: ammarali32/multi_verse_model
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
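As a usage note (an assumption on our part, not from the original card): with [mergekit](https://github.com/cg123/mergekit) installed and the YAML above saved as `config.yaml`, a merge like this is typically reproduced via the mergekit CLI:
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```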
|
CaptainPollutionTV/MegaMan-ICBINPF | CaptainPollutionTV | 2024-03-10T12:00:29Z | 0 | 0 | null | [
"DreamBooth",
"ICBINP Final",
"license:cc",
"region:us"
] | null | 2024-03-10T11:28:37Z | ---
license: cc
tags:
- DreamBooth
- ICBINP Final
---
Made by CaptainPollutionTV using the getimg.ai Dreambooth tool.
Details about the model:
| Setting | Value |
| --- | --- |
| Base Model | ICBINP Final |
| Instance prompt | megaman |
| Class prompt | a robot |
| Learning Rate | 0.000001 |
| Learning Rate Scheduler | polynomial |
| Training Steps | 10000 (200 steps warmup) |
| Class images | 1000 |
| Model seed | 612422015 |
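As with the table above, a rough hypothetical mapping of these settings onto diffusers' `train_dreambooth.py` flags (the base-model id and data directories are placeholders; the original training used getimg.ai's tool):
```bash
# Sketch only: <base-model-id> and the data directories are placeholders.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="<base-model-id>" \
  --instance_data_dir="./megaman_images" \
  --class_data_dir="./class_images" \
  --instance_prompt="megaman" \
  --class_prompt="a robot" \
  --with_prior_preservation \
  --num_class_images=1000 \
  --learning_rate=1e-6 \
  --lr_scheduler="polynomial" \
  --lr_warmup_steps=200 \
  --max_train_steps=10000 \
  --seed=612422015 \
  --output_dir="./megaman-model"
```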
Sample images:











































































 |
MoveScores18/ISNET_CARVENET97 | MoveScores18 | 2024-03-10T11:59:06Z | 0 | 0 | null | [
"onnx",
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-03-10T11:57:39Z | ---
license: bigscience-openrail-m
---
|
gotzmann/v0.8.2-adapter | gotzmann | 2024-03-10T11:54:16Z | 3 | 0 | peft | [
"peft",
"llama",
"generated_from_trainer",
"base_model:gotzmann/uni",
"base_model:adapter:gotzmann/uni",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-10T11:53:17Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: gotzmann/uni
model-index:
- name: home/exported
results: []
---
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
JoniJoniAl/diversetraining10maart | JoniJoniAl | 2024-03-10T11:52:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T11:51:50Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** JoniJoniAl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
etri-vilab/koala-1b | etri-vilab | 2024-03-10T11:50:50Z | 172 | 16 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"KOALA",
"arxiv:2312.04005",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-01-09T05:47:30Z | ---
tags:
- text-to-image
- KOALA
---
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/yosvi68jvyarbvymxc4hm/github_logo.png?rlkey=r9ouwcd7cqxjbvio43q9b3djd&dl=1" width="1024px" />
</div>
<div style="display:flex;justify-content: center">
<a href="https://youngwanlee.github.io/KOALA/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a>  
<a href="https://github.com/youngwanLEE/sdxl-koala"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>  
<a href="https://arxiv.org/abs/2312.04005"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv:KOALA&color=red&logo=arxiv"></a>  
</div>
# KOALA-1B Model Card
## KOALA Model Cards
|Model|link|
|:--|:--|
|koala-700m | https://huggingface.co/etri-vilab/koala-700m|
|koala-700m-llava-cap | https://huggingface.co/etri-vilab/koala-700m-llava-cap|
|koala-1b | https://huggingface.co/etri-vilab/koala-1b|
|koala-1b-llava-cap | https://huggingface.co/etri-vilab/koala-1b-llava-cap|
## Abstract
### TL;DR
> We propose a fast text-to-image model, called KOALA, built by compressing SDXL's U-Net and distilling knowledge from SDXL into our model. KOALA-700M can generate a 1024x1024 image in less than 1.5 seconds on an NVIDIA 4090 GPU, more than 2x faster than SDXL. KOALA-700M can serve as a decent middle ground between SDM and SDXL when resources are limited.
<details><summary>FULL abstract</summary>
Stable diffusion is the mainstay of the text-to-image (T2I) synthesis in the community due to its generation performance and open-source nature.
Recently, Stable Diffusion XL (SDXL), the successor of stable diffusion, has received a lot of attention due to its significant performance improvements with a higher resolution of 1024x1024 and a larger model.
However, its increased computation cost and model size require higher-end hardware (e.g., bigger VRAM GPU) for end-users, incurring higher costs of operation.
To address this problem, in this work, we propose an efficient latent diffusion model for text-to-image synthesis obtained by distilling the knowledge of SDXL.
To this end, we first perform an in-depth analysis of the denoising U-Net in SDXL, which is the main bottleneck of the model, and then design a more efficient U-Net based on the analysis.
Secondly, we explore how to effectively distill the generation capability of SDXL into an efficient U-Net and eventually identify four essential factors, the core of which is that self-attention is the most important part.
With our efficient U-Net and self-attention-based knowledge distillation strategy, we build our efficient T2I models, called KOALA-1B &-700M, while reducing the model size up to 54% and 69% of the original SDXL model.
In particular, KOALA-700M is more than twice as fast as SDXL while still retaining decent generation quality.
We hope that due to its balanced speed-performance tradeoff, our KOALA models can serve as a cost-effective alternative to SDXL in resource-constrained environments.
</details>
<br>
These 1024x1024 samples are generated by KOALA-700M with 25 denoising steps.
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/rjsqqgfney7be069y2yr7/teaser.png?rlkey=7lq0m90xpjcoqclzl4tieajpo&dl=1" width="1024px" />
</div>
## Architecture
There are two types of compressed U-Net, KOALA-1B and KOALA-700M, realized by reducing residual blocks and transformer blocks.
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/5ydeywgiyt1d3njw63dpk/arch.png?rlkey=1p6imbjs4lkmfpcxy153i1a2t&dl=1" width="1024px" />
</div>
### U-Net comparison
| U-Net | SDM-v2.0 | SDXL-Base-1.0 | KOALA-1B | KOALA-700M |
|-------|:----------:|:-----------:|:-----------:|:-------------:|
| Param. | 865M | 2,567M | 1,161M | 782M |
| CKPT size | 3.46GB | 10.3GB | 4.4GB | 3.0GB |
| Tx blocks | [1, 1, 1, 1] | [0, 2, 10] | [0, 2, 6] | [0, 2, 5] |
| Mid block | ✓ | ✓ | ✓ | ✗ |
| Latency | 1.131s | 3.133s | 1.604s | 1.257s |
- Tx means transformer block and CKPT means the trained checkpoint file.
- We measured latency with FP16 precision and 25 denoising steps on an NVIDIA 4090 GPU (24GB).
- SDM-v2.0 uses 768x768 resolution, while the SDXL and KOALA models use 1024x1024 resolution.
## Latency and memory usage comparison on different GPUs
We measure the inference time of SDM-v2.0 at 768x768 resolution and of the other models at 1024x1024, using a variety of consumer-grade GPUs: NVIDIA 3060Ti (8GB), 2080Ti (11GB), and 4090 (24GB). We use 25 denoising steps and FP16/FP32 precision. OOM means Out-of-Memory. Note that SDXL-Base cannot run on the 8GB GPU.
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/u1az20y0zfww1l5lhbcyd/latency_gpu.svg?rlkey=vjn3gpkmywmp7jpilar4km7sd&dl=1" width="1024px" />
</div>
## Key Features
- **Efficient U-Net Architecture**: The KOALA models use a simplified U-Net architecture that reduces model size by up to 54% (KOALA-1B) and 69% (KOALA-700M) compared to their predecessor, Stable Diffusion XL (SDXL).
- **Self-Attention-Based Knowledge Distillation**: The core technique in KOALA focuses on the distillation of self-attention features, which proves crucial for maintaining image generation quality.
## Model Description
- Developed by [ETRI Visual Intelligence Lab](https://huggingface.co/etri-vilab)
- Developer: [Youngwan Lee](https://youngwanlee.github.io/), [Kwanyong Park](https://pkyong95.github.io/), [Yoorhim Cho](https://ofzlo.github.io/), [Young-Ju Lee](https://scholar.google.com/citations?user=6goOQh8AAAAJ&hl=en), [Sung Ju Hwang](http://www.sungjuhwang.com/)
- Model Description: Latent-diffusion-based text-to-image generative model. The KOALA models use the same text encoders as [SDXL-Base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and only replace the denoising U-Net with the compressed U-Nets.
- Training data: [LAION-aesthetics-V2 6+](https://laion.ai/blog/laion-aesthetics/)
- Resources for more information: Check out [KOALA report on arXiv](https://arxiv.org/abs/2312.04005) and [project page](https://youngwanlee.github.io/KOALA/).
## Usage with 🤗[Diffusers library](https://github.com/huggingface/diffusers)
Inference code with 25 denoising steps:
```python
import torch
from diffusers import StableDiffusionXLPipeline
pipe = StableDiffusionXLPipeline.from_pretrained("etri-vilab/koala-1b", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "A portrait painting of a Golden Retriever like Leonard da Vinci"
negative = "worst quality, low quality, illustration, low resolution"
image = pipe(prompt=prompt, negative_prompt=negative).images[0]
```
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
- Text Rendering: The models face challenges in rendering long, legible text within images.
- Complex Prompts: KOALA sometimes struggles with complex prompts involving multiple attributes.
- Dataset Dependencies: The current limitations are partially attributed to the characteristics of the training dataset (LAION-aesthetics-V2 6+).
## Citation
```bibtex
@misc{lee2023koala,
title={KOALA: Self-Attention Matters in Knowledge Distillation of Latent Diffusion Models for Memory-Efficient and Fast Image Synthesis},
author={Youngwan Lee and Kwanyong Park and Yoorhim Cho and Yong-Ju Lee and Sung Ju Hwang},
year={2023},
eprint={2312.04005},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
AdamGrzesik/Mistral_7B_SamanthaPL-old-1-epoch | AdamGrzesik | 2024-03-10T11:46:38Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T10:43:52Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** AdamGrzesik
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
- **Dataset :** cognitivecomputations/samantha-data (Samantha_AG_PL.json)
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lukegbsn1/ring_ai_test | lukegbsn1 | 2024-03-10T11:33:59Z | 28 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-10T11:32:25Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ravi6923/QuizBot | Ravi6923 | 2024-03-10T11:28:46Z | 109 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T11:28:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
automerger/Experiment26Pastiche-7B | automerger | 2024-03-10T11:20:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"base_model:merge:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T11:19:39Z | ---
base_model:
- yam-peleg/Experiment26-7B
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
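To reproduce a SLERP merge like this one (a hypothetical invocation, assuming mergekit is installed and the YAML above is saved as `config.yaml`):
```bash
mergekit-yaml config.yaml ./merged-model
```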
|
drakrig/poca-SoccerTwos | drakrig | 2024-03-10T11:17:18Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-03-10T11:14:55Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: drakrig/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
samanjoy2/gemma7b-it_banglaNewsSum | samanjoy2 | 2024-03-10T11:07:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T11:07:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OwOOwO/eacc_contTrain_l2_g54l2-1 | OwOOwO | 2024-03-10T11:05:56Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T11:03:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afaji/fresh-12-layer-swag-distill-of-fresh-12-layer-gpqa | afaji | 2024-03-10T11:05:47Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T11:04:16Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-12-layer-swag-distill-of-fresh-12-layer-gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-12-layer-swag-distill-of-fresh-12-layer-gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 14.3157
- Accuracy: 0.3788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 14.5865 | 0.2424 |
| No log | 2.0 | 126 | 13.4274 | 0.2879 |
| No log | 3.0 | 189 | 14.5755 | 0.3434 |
| No log | 4.0 | 252 | 14.6965 | 0.3586 |
| No log | 5.0 | 315 | 14.6065 | 0.3737 |
| No log | 6.0 | 378 | 13.0578 | 0.3737 |
| No log | 7.0 | 441 | 13.1651 | 0.3586 |
| 1.7518 | 8.0 | 504 | 13.7708 | 0.3636 |
| 1.7518 | 9.0 | 567 | 13.5531 | 0.3535 |
| 1.7518 | 10.0 | 630 | 13.3979 | 0.3384 |
| 1.7518 | 11.0 | 693 | 13.8865 | 0.3434 |
| 1.7518 | 12.0 | 756 | 13.8410 | 0.3687 |
| 1.7518 | 13.0 | 819 | 15.6234 | 0.3283 |
| 1.7518 | 14.0 | 882 | 17.4878 | 0.3485 |
| 1.7518 | 15.0 | 945 | 16.2413 | 0.3081 |
| 2.2378 | 16.0 | 1008 | 14.6003 | 0.3232 |
| 2.2378 | 17.0 | 1071 | 16.5984 | 0.3232 |
| 2.2378 | 18.0 | 1134 | 14.3157 | 0.3788 |
| 2.2378 | 19.0 | 1197 | 13.5424 | 0.3485 |
| 2.2378 | 20.0 | 1260 | 13.3978 | 0.3586 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Sif10/my_awesome_model | Sif10 | 2024-03-10T10:59:44Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T10:01:42Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.85588
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8771
- Accuracy: 0.8559
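A minimal inference sketch (the review text is illustrative; label names depend on the saved config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Sif10/my_awesome_model")
print(classifier("This movie was absolutely wonderful!"))
```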
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3564 | 1.0 | 1563 | 0.3677 | 0.8426 |
| 0.2878 | 2.0 | 3126 | 0.3378 | 0.8588 |
| 0.2124 | 3.0 | 4689 | 0.4398 | 0.8550 |
| 0.1556 | 4.0 | 6252 | 0.5750 | 0.8555 |
| 0.1075 | 5.0 | 7815 | 0.6733 | 0.8558 |
| 0.0831 | 6.0 | 9378 | 0.7218 | 0.8561 |
| 0.0652 | 7.0 | 10941 | 0.7331 | 0.8564 |
| 0.0458 | 8.0 | 12504 | 0.8166 | 0.8538 |
| 0.0415 | 9.0 | 14067 | 0.8619 | 0.8568 |
| 0.0357 | 10.0 | 15630 | 0.8771 | 0.8559 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
trsdimi/q-FrozenLake-v1-4x4-noSlippery | trsdimi | 2024-03-10T10:55:36Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T10:55:33Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebooks.
import gymnasium as gym  # older course notebooks used `import gym`

model = load_from_hub(repo_id="trsdimi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
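A short greedy-rollout sketch follows; it assumes the course's artifact layout (a dict with a `qtable` entry) and a Gymnasium-style `step` API, so adjust the keys if your loaded pickle differs:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```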
|
ThuyNT03/CS505-Classifier-T4_predictLabel_a1 | ThuyNT03 | 2024-03-10T10:43:02Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T10:25:51Z | ---
base_model: vinai/phobert-base
tags:
- generated_from_trainer
model-index:
- name: CS505-Classifier-T4_predictLabel_a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-Classifier-T4_predictLabel_a1
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.98 | 48 | 1.0473 |
| No log | 1.96 | 96 | 0.5664 |
| No log | 2.94 | 144 | 0.3371 |
| No log | 3.92 | 192 | 0.2277 |
| No log | 4.9 | 240 | 0.1850 |
| No log | 5.88 | 288 | 0.1451 |
| No log | 6.86 | 336 | 0.1126 |
| No log | 7.84 | 384 | 0.0853 |
| No log | 8.82 | 432 | 0.0635 |
| No log | 9.8 | 480 | 0.0598 |
| 0.4029 | 10.78 | 528 | 0.0407 |
| 0.4029 | 11.76 | 576 | 0.0337 |
| 0.4029 | 12.73 | 624 | 0.0300 |
| 0.4029 | 13.71 | 672 | 0.0270 |
| 0.4029 | 14.69 | 720 | 0.0209 |
| 0.4029 | 15.67 | 768 | 0.0196 |
| 0.4029 | 16.65 | 816 | 0.0205 |
| 0.4029 | 17.63 | 864 | 0.0181 |
| 0.4029 | 18.61 | 912 | 0.0160 |
| 0.4029 | 19.59 | 960 | 0.0155 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
digiplay/AnalogMadness-realistic-model-v5 | digiplay | 2024-03-10T10:42:36Z | 52,810 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-10T10:08:42Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/8030/analog-madness-realistic-model
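A minimal diffusers sketch (the prompt, dtype, and device choices are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/AnalogMadness-realistic-model-v5", torch_dtype=torch.float16
).to("cuda")
image = pipe("analog style, portrait photo of a lighthouse at dusk, film grain").images[0]
image.save("analog_madness.png")
```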
|
ankursinghbisht/a2c-PandaReachDense-v3 | ankursinghbisht | 2024-03-10T10:42:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T10:34:46Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `algo-env_id.zip` naming on the Hub):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the common sb3 Hub naming convention.
checkpoint = load_from_hub(repo_id="ankursinghbisht/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
afaji/fresh-12-layer-medmcqa-distill-of-fresh-12-layer-gpqa | afaji | 2024-03-10T10:37:54Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T10:36:22Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-12-layer-medmcqa-distill-of-fresh-12-layer-gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-12-layer-medmcqa-distill-of-fresh-12-layer-gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.3380
- Accuracy: 0.5253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 14.7499 | 0.2929 |
| No log | 2.0 | 126 | 13.0612 | 0.3636 |
| No log | 3.0 | 189 | 13.1660 | 0.4293 |
| No log | 4.0 | 252 | 13.4796 | 0.4848 |
| No log | 5.0 | 315 | 11.9863 | 0.5101 |
| No log | 6.0 | 378 | 11.3380 | 0.5253 |
| No log | 7.0 | 441 | 11.5841 | 0.4242 |
| 4.5481 | 8.0 | 504 | 15.3570 | 0.3485 |
| 4.5481 | 9.0 | 567 | 14.1857 | 0.1465 |
| 4.5481 | 10.0 | 630 | 13.5387 | 0.1263 |
| 4.5481 | 11.0 | 693 | 13.4757 | 0.1566 |
| 4.5481 | 12.0 | 756 | 14.4836 | 0.0657 |
| 4.5481 | 13.0 | 819 | 13.8175 | 0.0707 |
| 4.5481 | 14.0 | 882 | 14.0705 | 0.1313 |
| 4.5481 | 15.0 | 945 | 14.3308 | 0.0 |
| 7.3037 | 16.0 | 1008 | 14.2806 | 0.1263 |
| 7.3037 | 17.0 | 1071 | 14.2719 | 0.0101 |
| 7.3037 | 18.0 | 1134 | 13.7977 | 0.2727 |
| 7.3037 | 19.0 | 1197 | 14.2746 | 0.0657 |
| 7.3037 | 20.0 | 1260 | 14.0949 | 0.0 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
ShadyML/distillbert-finetuned-finer-4-v3 | ShadyML | 2024-03-10T10:34:33Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-10T09:09:29Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distillbert-finetuned-finer-4-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-finetuned-finer-4-v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0223
- Precision: 0.9044
- Recall: 0.9318
- F1: 0.9179
- Accuracy: 0.9931
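A minimal inference sketch (the sentence is illustrative; the entity labels come from the FiNER tag set, which this card does not list):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ShadyML/distillbert-finetuned-finer-4-v3",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Revenue increased by $1.2 million compared to fiscal 2022."))
```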
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0248 | 1.0 | 2095 | 0.0271 | 0.8736 | 0.9052 | 0.8891 | 0.9907 |
| 0.0183 | 2.0 | 4190 | 0.0236 | 0.8864 | 0.9324 | 0.9088 | 0.9922 |
| 0.0118 | 3.0 | 6285 | 0.0223 | 0.9044 | 0.9318 | 0.9179 | 0.9931 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
HachiML/myBit-Llama2-jp-127M-test-6 | HachiML | 2024-03-10T10:30:13Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T10:04:17Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-test-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-test-6
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.8e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.8677 | 0.04 | 100 | 9.1385 |
| 8.4868 | 0.07 | 200 | 7.7575 |
| 7.2146 | 0.11 | 300 | 6.8688 |
| 6.6972 | 0.14 | 400 | 6.5702 |
| 6.4628 | 0.18 | 500 | 6.3746 |
| 6.3058 | 0.22 | 600 | 6.2362 |
| 6.1813 | 0.25 | 700 | 6.1241 |
| 6.0708 | 0.29 | 800 | 6.0228 |
| 5.963 | 0.33 | 900 | 5.9109 |
| 5.8577 | 0.36 | 1000 | 5.7948 |
| 5.7614 | 0.4 | 1100 | 5.7155 |
| 5.6876 | 0.43 | 1200 | 5.6376 |
| 5.6044 | 0.47 | 1300 | 5.5631 |
| 5.5538 | 0.51 | 1400 | 5.5045 |
| 5.5007 | 0.54 | 1500 | 5.4649 |
| 5.4556 | 0.58 | 1600 | 5.4282 |
| 5.4246 | 0.62 | 1700 | 5.3917 |
| 5.3982 | 0.65 | 1800 | 5.3762 |
| 5.3854 | 0.69 | 1900 | 5.3546 |
| 5.365 | 0.72 | 2000 | 5.3447 |
| 5.3579 | 0.76 | 2100 | 5.3473 |
| 5.3552 | 0.8 | 2200 | 5.3463 |
| 5.3682 | 0.83 | 2300 | 5.3630 |
| 5.3743 | 0.87 | 2400 | 5.3718 |
| 5.3957 | 0.91 | 2500 | 5.3887 |
| 5.4079 | 0.94 | 2600 | 5.4010 |
| 5.423 | 0.98 | 2700 | 5.4087 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
smorce/smorce-qlora-Qwen1.5-4B-Chat | smorce | 2024-03-10T10:29:22Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T10:26:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hemg/Face-Mask-Detection | Hemg | 2024-03-10T10:24:56Z | 311 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-10T09:18:32Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Face-Mask-Detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Face-Mask-Detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0239
- Accuracy: 0.9953
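A minimal inference sketch (the image path is a placeholder; the label set comes from the fine-tuning data, which this card does not document):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Hemg/Face-Mask-Detection")
print(classifier("face_photo.jpg"))  # local path or URL to a face image
```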
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1218 | 1.0 | 147 | 0.0251 | 0.9953 |
| 0.0186 | 1.99 | 294 | 0.0239 | 0.9953 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
digiplay/AnalogMadness-realistic-model-v6 | digiplay | 2024-03-10T10:21:50Z | 45,635 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-10T10:02:48Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/8030/analog-madness-realistic-model |
darkelf12/shawgpt-ft | darkelf12 | 2024-03-10T10:10:07Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T10:10:06Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: shawgpt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6612 | 0.91 | 8 | 1.3313 |
| 1.1146 | 1.94 | 17 | 1.1069 |
| 0.9461 | 2.97 | 26 | 0.9471 |
| 0.8283 | 4.0 | 35 | 0.8600 |
| 0.8635 | 4.91 | 43 | 0.8291 |
| 0.7343 | 5.94 | 52 | 0.8114 |
| 0.7219 | 6.97 | 61 | 0.8000 |
| 0.7052 | 8.0 | 70 | 0.7931 |
| 0.7785 | 8.91 | 78 | 0.7905 |
| 0.4625 | 9.14 | 80 | 0.7904 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
afaji/fresh-12-layer-gpqa | afaji | 2024-03-10T10:09:52Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T10:08:19Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-12-layer-gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-12-layer-gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1852
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 1.3865 | 0.2677 |
| No log | 2.0 | 26 | 1.3857 | 0.2374 |
| No log | 3.0 | 39 | 1.3852 | 0.2475 |
| No log | 4.0 | 52 | 1.3852 | 0.2929 |
| No log | 5.0 | 65 | 1.3847 | 0.2929 |
| No log | 6.0 | 78 | 1.3844 | 0.3081 |
| No log | 7.0 | 91 | 1.3839 | 0.2980 |
| No log | 8.0 | 104 | 1.3800 | 0.3081 |
| No log | 9.0 | 117 | 1.3751 | 0.3535 |
| No log | 10.0 | 130 | 1.2007 | 0.6263 |
| No log | 11.0 | 143 | 0.9272 | 0.6515 |
| No log | 12.0 | 156 | 1.0185 | 0.6768 |
| No log | 13.0 | 169 | 0.6580 | 0.7424 |
| No log | 14.0 | 182 | 0.4847 | 0.7828 |
| No log | 15.0 | 195 | 0.3170 | 0.8384 |
| No log | 16.0 | 208 | 0.2830 | 0.8485 |
| No log | 17.0 | 221 | 0.3068 | 0.9192 |
| No log | 18.0 | 234 | 0.2519 | 0.9141 |
| No log | 19.0 | 247 | 0.2426 | 0.9343 |
| No log | 20.0 | 260 | 0.1852 | 0.9394 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
learn3r/longt5_xl_gov_memsum_25 | learn3r | 2024-03-10T10:06:05Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-08T14:39:50Z | ---
tags:
- generated_from_trainer
model-index:
- name: longt5_xl_gov_memsum_25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_gov_memsum_25
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3918
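A minimal summarization sketch, assuming a standard seq2seq LongT5 checkpoint (the input text is a placeholder, and this is an XL-scale model, so expect several GB of memory use):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="learn3r/longt5_xl_gov_memsum_25")
report = "..."  # paste a (potentially very long) government report here
print(summarizer(report, max_length=512, truncation=True)[0]["summary_text"])
```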
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4372 | 1.0 | 68 | 1.6270 |
| 0.3678 | 1.99 | 136 | 1.8330 |
| 0.3026 | 2.99 | 204 | 1.8467 |
| 0.2785 | 3.99 | 272 | 1.9830 |
| 0.2489 | 5.0 | 341 | 2.1279 |
| 0.181 | 6.0 | 409 | 2.2981 |
| 0.1753 | 6.99 | 477 | 2.3683 |
| 0.1511 | 7.99 | 545 | 2.3130 |
| 0.1483 | 8.99 | 613 | 2.5342 |
| 0.2277 | 10.0 | 682 | 2.3054 |
| 0.1952 | 10.99 | 750 | 2.2331 |
| 0.1773 | 11.99 | 818 | 2.1944 |
| 0.1524 | 12.99 | 886 | 2.3607 |
| 0.1373 | 14.0 | 955 | 2.3946 |
| 0.1238 | 14.95 | 1020 | 2.3918 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
valxy/sd-class-butterflies-32 | valxy | 2024-03-10T09:58:46Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-03-10T09:44:47Z | ---
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('valxy/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
HachiML/myBit-Llama2-jp-127M-test-5 | HachiML | 2024-03-10T09:56:27Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T09:30:01Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-test-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-test-5
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.7481 | 0.04 | 100 | 8.9526 |
| 8.17 | 0.07 | 200 | 7.3998 |
| 6.9639 | 0.11 | 300 | 6.7999 |
| 6.5874 | 0.15 | 400 | 6.4947 |
| 6.3463 | 0.18 | 500 | 6.3007 |
| 6.18 | 0.22 | 600 | 6.1431 |
| 6.0112 | 0.26 | 700 | 5.9703 |
| 5.8465 | 0.29 | 800 | 5.8159 |
| 5.7114 | 0.33 | 900 | 5.7018 |
| 5.5979 | 0.36 | 1000 | 5.6067 |
| 5.518 | 0.4 | 1100 | 5.5270 |
| 5.4294 | 0.44 | 1200 | 5.4639 |
| 5.3976 | 0.47 | 1300 | 5.4143 |
| 5.3487 | 0.51 | 1400 | 5.3701 |
| 5.3162 | 0.55 | 1500 | 5.3509 |
| 5.2915 | 0.58 | 1600 | 5.3452 |
| 5.3009 | 0.62 | 1700 | 5.3910 |
| 5.3894 | 0.66 | 1800 | 5.5080 |
| 5.5553 | 0.69 | 1900 | 5.7414 |
| 5.9356 | 0.73 | 2000 | 6.2225 |
| 6.515 | 0.77 | 2100 | 6.8978 |
| 7.2177 | 0.8 | 2200 | 7.5843 |
| 7.8453 | 0.84 | 2300 | 8.1251 |
| 8.3069 | 0.88 | 2400 | 8.5042 |
| 8.6156 | 0.91 | 2500 | 8.7458 |
| 8.8104 | 0.95 | 2600 | 8.8901 |
| 8.9132 | 0.99 | 2700 | 8.9523 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Alistair-R/EncodeHackathonLevelGen | Alistair-R | 2024-03-10T09:54:28Z | 179 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T01:57:07Z | ---
license: apache-2.0
---
Model created for the Encode Hackathon Virtual Protocol Bounty. The model is a fine-tuned version of distilgpt2 designed to output level schematics for a platformer when given the prompt "LevelSchematic:".
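A minimal generation sketch (the sampling parameters are illustrative assumptions):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Alistair-R/EncodeHackathonLevelGen")
model = AutoModelForCausalLM.from_pretrained("Alistair-R/EncodeHackathonLevelGen")

inputs = tokenizer("LevelSchematic:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 models have no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```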
For further details on the project, please go [here](https://github.com/lauraharkins/Hackathon). |
taoki/gemma-2b-it-qlora-amenokaku-code | taoki | 2024-03-10T09:50:26Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"feature-extraction",
"text-generation-inference",
"unsloth",
"trl",
"ja",
"dataset:kunishou/amenokaku-code-instruct",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-it-bnb-4bit",
"license:other",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-10T03:07:09Z | ---
language:
- ja
license: other
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- gemma
datasets:
- kunishou/amenokaku-code-instruct
license_name: gemma
base_model: unsloth/gemma-2b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** taoki
- **License:** gemma
- **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit
# Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained(
"taoki/gemma-2b-it-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
"taoki/gemma-2b-it-qlora-amenokaku-code"
)
if torch.cuda.is_available():
model = model.to("cuda")
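# The Japanese prompt below asks: "Output the writing styles of Murasaki Shikibu and Sei Shonagon as JSON."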
prompt="""<start_of_turn>user
紫式部と清少納言の作風をjsonで出力してください。
<end_of_turn>
<start_of_turn>model
"""
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**input_ids,
max_new_tokens=512,
do_sample=True,
top_p=0.95,
temperature=0.1,
repetition_penalty=1.0,
)
print(tokenizer.decode(outputs[0]))
```
# Output
````
<bos><start_of_turn>user
紫式部と清少納言の作風をjsonで出力してください。<end_of_turn>
<start_of_turn>model
```json
{
"紫式部": {
"style": "紫式部",
"name": "紫式部",
"description": "紫式部の作風"
},
"清少納言": {
"style": "清少納言",
"name": "清少納言",
"description": "清少納言の作風"
}
}
```<eos>
````
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
afaji/fresh-8-layer-medmcqa-distill-of-fresh-8-layer-gpqa | afaji | 2024-03-10T09:42:54Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T09:41:35Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-8-layer-medmcqa-distill-of-fresh-8-layer-gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-8-layer-medmcqa-distill-of-fresh-8-layer-gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 17.1123
- Accuracy: 0.5455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 22.9570 | 0.2273 |
| No log | 2.0 | 126 | 21.2108 | 0.3636 |
| No log | 3.0 | 189 | 26.1433 | 0.4040 |
| No log | 4.0 | 252 | 24.1795 | 0.3838 |
| No log | 5.0 | 315 | 17.9657 | 0.4747 |
| No log | 6.0 | 378 | 20.0576 | 0.5354 |
| No log | 7.0 | 441 | 17.5133 | 0.5 |
| 10.1769 | 8.0 | 504 | 22.3248 | 0.5101 |
| 10.1769 | 9.0 | 567 | 20.7352 | 0.4848 |
| 10.1769 | 10.0 | 630 | 22.9071 | 0.4596 |
| 10.1769 | 11.0 | 693 | 17.8100 | 0.4899 |
| 10.1769 | 12.0 | 756 | 17.9827 | 0.5202 |
| 10.1769 | 13.0 | 819 | 19.2382 | 0.5 |
| 10.1769 | 14.0 | 882 | 18.8849 | 0.4949 |
| 10.1769 | 15.0 | 945 | 17.6397 | 0.5202 |
| 2.2143 | 16.0 | 1008 | 19.0081 | 0.5101 |
| 2.2143 | 17.0 | 1071 | 17.8718 | 0.5152 |
| 2.2143 | 18.0 | 1134 | 17.5239 | 0.5303 |
| 2.2143 | 19.0 | 1197 | 17.1123 | 0.5455 |
| 2.2143 | 20.0 | 1260 | 17.7756 | 0.5404 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Samuael/amhat5-small | Samuael | 2024-03-10T09:42:06Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T08:45:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Benevolent/PerfectHands | Benevolent | 2024-03-10T09:30:13Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-cascade",
"base_model:adapter:stabilityai/stable-cascade",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-03-10T09:14:56Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0 \0s\0c\0o\0r\0e\0_\09\0,\0 \0s\0c\0o\0r\0e\0_\08\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\07\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\06\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\05\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\04\0_\0u\0p\0,\0 \0s\0o\0u\0r\0c\0e\0_\0a\0n\0i\0m\0e\0,\0 \0p\0e\0r\0f\0e\0c\0t\0 \0b\0e\0a\0u\0t\0i\0f\0u\0l\0 \0f\0a\0c\0e\0,\0 \0s\0t\0r\0o\0n\0g\0 \0f\0a\0c\0e\0,\0 \0(\0(\0s\0t\0r\0o\0n\0g\0 \0t\0h\0i\0c\0k\0 \0b\0o\0d\0y\0,\0 \0i\0c\0e\0 \0g\0i\0a\0n\0t\0,\0 \0l\0o\0n\0g\0 \0w\0h\0i\0t\0e\0 \0h\0a\0i\0r\0,\0 \0b\0l\0u\0e\0 \0s\0k\0i\0n\0,\0 \0s\0e\0x\0y\0 \0l\0e\0a\0t\0h\0e\0r\0 \0a\0r\0m\0o\0r\0,\0 \0t\0h\0i\0c\0k\0 \0t\0h\0i\0g\0h\0s\0,\0 \0p\0i\0e\0r\0c\0e\0d\0 \0n\0i\0p\0p\0l\0e\0s\0,\0 \0b\0a\0t\0t\0l\0e\0 \0a\0x\0e\0)\0)\0,\0 \0(\0b\0i\0g\0 \0p\0u\0s\0s\0y\0,\0 \0w\0h\0i\0t\0e\0 \0p\0u\0b\0i\0c\0 \0h\0a\0i\0r\0,\0 \0h\0a\0p\0p\0y\0 \0t\0r\0a\0i\0l\0)\0,\0 \0h\0a\0i\0r\0y\0 \0b\0o\0d\0y\0,\0 \0s\0n\0o\0w\0 \0s\0t\0o\0r\0m\0,\0 \0m\0o\0u\0n\0t\0a\0i\0n\0 \0t\0o\0p\0"
output:
url: >-
images/B01A48105298502D2C7E3F41DDED247E42F3563763692F9451356B9698C52A6F.jpeg
base_model: stabilityai/stable-cascade
instance_prompt: null
license: apache-2.0
---
# HandsV2
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/PerfectHands/tree/main) them in the Files & versions tab.
|
DisgustingOzil/Mistral-MCQ-Model | DisgustingOzil | 2024-03-10T09:29:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T09:27:04Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ZainAli60/bart | ZainAli60 | 2024-03-10T09:27:36Z | 191 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T09:26:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
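As an alternative to after-the-fact estimation with the calculator above, the open-source `codecarbon` package can log emissions while training runs. This is a minimal sketch of that substitute approach, not something used for this model.

```python
from codecarbon import EmissionsTracker

# Measures local energy use during the tracked span and converts it to kg CO2eq.
tracker = EmissionsTracker()
tracker.start()
# ... run training here ...
emissions_kg = tracker.stop()
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```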
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HachiML/myBit-Llama2-jp-127M-test-4 | HachiML | 2024-03-10T09:26:39Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T08:59:46Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-test-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-test-4
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.6247
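A minimal inference sketch follows, assuming the checkpoint loads through the standard `AutoModelForCausalLM` API (the `llama` tag suggests it does). Given the high final evaluation loss above, generations may be of low quality.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HachiML/myBit-Llama2-jp-127M-test-4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Japanese prompt, since this appears to be a Japanese (jp) pretraining run.
inputs = tokenizer("こんにちは、", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```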
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
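For readers who want to reproduce the setup, the hyperparameters above map roughly onto the `TrainingArguments` below. This is a reconstruction from the card, not the actual training script, and the `output_dir` is an assumption.

```python
from transformers import TrainingArguments

# Rough reconstruction of the reported run configuration. Adam betas and
# epsilon match the transformers defaults, so they are not set explicitly.
args = TrainingArguments(
    output_dir="myBit-Llama2-jp-127M-test-4",  # assumed, not stated in the card
    learning_rate=8.4e-5,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=42,
    lr_scheduler_type="polynomial",
    warmup_steps=250,
    num_train_epochs=1,
)
```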
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.6724 | 0.04 | 100 | 8.7189 |
| 7.811 | 0.07 | 200 | 6.9856 |
| 6.7931 | 0.11 | 300 | 6.5599 |
| 6.4108 | 0.15 | 400 | 6.1841 |
| 6.1428 | 0.18 | 500 | 5.9554 |
| 5.8814 | 0.22 | 600 | 5.7176 |
| 5.6803 | 0.26 | 700 | 5.5171 |
| 5.5181 | 0.29 | 800 | 5.4037 |
| 5.4115 | 0.33 | 900 | 5.3197 |
| 5.3497 | 0.37 | 1000 | 5.2965 |
| 5.3629 | 0.4 | 1100 | 5.3632 |
| 5.6291 | 0.44 | 1200 | 5.9554 |
| 6.9173 | 0.47 | 1300 | 8.0749 |
| 9.1158 | 0.51 | 1400 | 9.8847 |
| 10.2012 | 0.55 | 1500 | 10.3942 |
| 10.4725 | 0.58 | 1600 | 10.5218 |
| 10.5453 | 0.62 | 1700 | 10.5627 |
| 10.5752 | 0.66 | 1800 | 10.5838 |
| 10.5915 | 0.69 | 1900 | 10.5969 |
| 10.6018 | 0.73 | 2000 | 10.6053 |
| 10.6091 | 0.77 | 2100 | 10.6115 |
| 10.6141 | 0.8 | 2200 | 10.6156 |
| 10.6175 | 0.84 | 2300 | 10.6186 |
| 10.6203 | 0.88 | 2400 | 10.6212 |
| 10.6225 | 0.91 | 2500 | 10.6225 |
| 10.6238 | 0.95 | 2600 | 10.6240 |
| 10.625 | 0.99 | 2700 | 10.6247 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|