modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-28 18:26:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 477 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-28 18:24:32) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3_l5_v50 | KingKazma | 2023-08-10T16:38:29Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:38:25Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e6_s6789_v3_l5_v100 | KingKazma | 2023-08-10T16:32:24Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:32:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
hostingfuze/virtualizor | hostingfuze | 2023-08-10T16:31:09Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-06T05:33:05Z | # License Virtualizor
These files are for educational and learning purposes only.
First, get the script and make it executable:
```bash
curl -L -o /root/preinstalled.sh https://raw.githubusercontent.com/tactu2023/license/main/preinstalled.sh --silent
chmod +x /root/preinstalled.sh
```
Then run it:
```sh
/root/preinstalled.sh
```
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e6_s6789_v3_l5_v50 | KingKazma | 2023-08-10T16:30:48Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:30:45Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
RohaanKhanCentric/llama2-qlora-finetunined-instruct-human-assistant-prompt | RohaanKhanCentric | 2023-08-10T16:30:15Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:30:06Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
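As a rough illustration, the config above corresponds to a `BitsAndBytesConfig` like the sketch below. The base model ID is an assumption (the card does not state it; the repo name suggests a Llama-2 7B base), and the adapter is attached with PEFT:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
adapter_id = "RohaanKhanCentric/llama2-qlora-finetunined-instruct-human-assistant-prompt"

# 4-bit NF4 quantization matching the config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```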
### Framework versions
- PEFT 0.5.0.dev0
|
MRNH/rl_course_vizdoom_health_gathering_supreme | MRNH | 2023-08-10T16:28:34Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T16:28:26Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.04 +/- 5.20
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MRNH/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# assumes the standard Sample-Factory ViZDoom example scripts (sf_examples.vizdoom)
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# assumes the standard Sample-Factory ViZDoom example scripts (sf_examples.vizdoom)
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
arhamk/vizdoom_health_gathering_supreme | arhamk | 2023-08-10T16:23:56Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T16:23:46Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.84 +/- 3.77
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r arhamk/vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# assumes the standard Sample-Factory ViZDoom example scripts (sf_examples.vizdoom)
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# assumes the standard Sample-Factory ViZDoom example scripts (sf_examples.vizdoom)
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
renatostrianese/a2c-PandaReachDense-v3 | renatostrianese | 2023-08-10T16:12:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T14:45:25Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("renatostrianese/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_l6_r4_manual | KingKazma | 2023-08-10T16:09:53Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:09:51Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_l4_r4_manual | KingKazma | 2023-08-10T16:07:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:07:02Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ivanzidov/izidov_dreambooth | ivanzidov | 2023-08-10T16:02:37Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-08-10T16:02:35Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a izidov
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e2_s6789_v3_l5_v50 | KingKazma | 2023-08-10T16:00:08Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:00:04Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
patonw/tqc-PandaPickAndPlace-v3 | patonw | 2023-08-10T15:55:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T07:33:18Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -6.30 +/- 1.79
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **TQC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
import gymnasium as gym
import panda_gym  # noqa: F401  (registers the PandaPickAndPlace-v3 environment)
import wandb
from wandb.integration.sb3 import WandbCallback
from sb3_contrib import TQC  # TQC lives in sb3-contrib, not stable_baselines3
from stable_baselines3 import HerReplayBuffer

# Create the environment
env_id = "PandaPickAndPlace-v3"
env = gym.make(env_id)

# W&B run used for logging (project name is an assumption)
wandb_run = wandb.init(project=env_id, sync_tensorboard=True)

# Build the TQC agent with a HER replay buffer
model = TQC(policy="MultiInputPolicy",
            env=env,
            batch_size=2048,
            gamma=0.95,
            learning_rate=1e-4,
            train_freq=64,
            gradient_steps=64,
            tau=0.05,
            replay_buffer_class=HerReplayBuffer,
            replay_buffer_kwargs=dict(
                n_sampled_goal=4,
                goal_selection_strategy="future",
            ),
            policy_kwargs=dict(
                net_arch=[512, 512, 512],
                n_critics=2,
            ),
            tensorboard_log=f"runs/{wandb_run.id}",
            )

# Train with W&B logging, then close the run
model.learn(1_000_000, progress_bar=True, callback=WandbCallback(verbose=2))
wandb_run.finish()
```
Weights & Biases charts: https://wandb.ai/patonw/PandaPickAndPlace-v3/runs/w7lzlwnx/workspace?workspace=user-patonw |
danielavornic/q-FrozenLake-v1-4x4-noSlippery | danielavornic | 2023-08-10T15:51:58Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T15:51:56Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Deep RL course notebook (it downloads and unpickles the Q-table dict)
model = load_from_hub(repo_id="danielavornic/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
austenjs/ppo-LunarLander-v2 | austenjs | 2023-08-10T15:47:07Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T15:46:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.23 +/- 19.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("austenjs/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3_l5_v100 | KingKazma | 2023-08-10T15:47:05Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:47:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Aesir12/lora-trained-xl-colab | Aesir12 | 2023-08-10T15:45:52Z | 7 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-08-10T14:24:22Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Aesir12/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
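A minimal inference sketch, assuming a recent `diffusers` version with `load_lora_weights` support; the prompt is illustrative:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Use the same fp16-fixed VAE that was used for training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Aesir12/lora-trained-xl-colab")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```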
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e0_s6789_v3_l5_v50 | KingKazma | 2023-08-10T15:44:48Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:44:45Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
LovenOO/distilBERT_without_preprocessing | LovenOO | 2023-08-10T15:44:23Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T17:29:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LovenOO/distilBERT_without_preprocessing
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LovenOO/distilBERT_without_preprocessing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1466
- Validation Loss: 0.3625
- Train Precision: 0.8491
- Train Recall: 0.8642
- Train F1: 0.8544
- Train Accuracy: 0.8906
- Epoch: 5
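A minimal TensorFlow inference sketch (the label mapping is not documented here, so the output is raw class probabilities):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "LovenOO/distilBERT_without_preprocessing"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example text to classify", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```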
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2565, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.8177 | 0.4723 | 0.8407 | 0.7879 | 0.7948 | 0.8575 | 0 |
| 0.3642 | 0.3777 | 0.8666 | 0.8315 | 0.8465 | 0.8847 | 1 |
| 0.2734 | 0.3804 | 0.8466 | 0.8563 | 0.8471 | 0.8872 | 2 |
| 0.2020 | 0.3704 | 0.8526 | 0.8663 | 0.8551 | 0.8896 | 3 |
| 0.1638 | 0.3625 | 0.8491 | 0.8642 | 0.8544 | 0.8906 | 4 |
| 0.1466 | 0.3625 | 0.8491 | 0.8642 | 0.8544 | 0.8906 | 5 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.13.0
- Datasets 2.14.2
- Tokenizers 0.11.0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e0_s6789_v3_l5_v100 | KingKazma | 2023-08-10T15:38:01Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:38:00Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e-1_s6789_v3_l5_v100 | KingKazma | 2023-08-10T15:28:56Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:26:07Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Waterminer/UpcaleModels | Waterminer | 2023-08-10T15:27:36Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-10T15:23:17Z | This is a model library for
[SD-DatasetProcessor](https://github.com/waterminer/SD-DatasetProcessor) |
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e9_s6789_v3_l5_v100 | KingKazma | 2023-08-10T15:27:01Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:27:00Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3_l5_v20 | KingKazma | 2023-08-10T15:12:18Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:12:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e9_s6789_v3_l5_v50 | KingKazma | 2023-08-10T15:08:49Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:08:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3_l5_v100 | KingKazma | 2023-08-10T15:08:48Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:08:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e9_s6789_v3_l5_v100 | KingKazma | 2023-08-10T15:07:49Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:07:44Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
YusufAhmed58231/my-great-gpt2-i-think-it-makes-novels | YusufAhmed58231 | 2023-08-10T15:03:14Z | 147 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-10T14:51:48Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my-great-gpt2-i-think-it-makes-novels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-great-gpt2-i-think-it-makes-novels
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3466
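A minimal generation sketch (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="YusufAhmed58231/my-great-gpt2-i-think-it-makes-novels")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```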
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001171
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.2 | 150 | 2.3466 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e8_s6789_v3_l5_v50 | KingKazma | 2023-08-10T15:01:31Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:01:29Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e8_s6789_v3_l5_v100 | KingKazma | 2023-08-10T15:00:50Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T15:00:44Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
alkalinevk/testrep | alkalinevk | 2023-08-10T14:59:42Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-08-10T14:59:40Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks car
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e5_s6789_v3_l5_v20 | KingKazma | 2023-08-10T14:57:40Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:57:39Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e7_s6789_v3_l5_v50 | KingKazma | 2023-08-10T14:54:14Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:54:12Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e7_s6789_v3_l5_v100 | KingKazma | 2023-08-10T14:53:49Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:53:44Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e5_s6789_v3_l5_v100 | KingKazma | 2023-08-10T14:50:35Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:50:34Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ggml-q4 | openthaigpt | 2023-08-10T14:43:22Z | 0 | 1 | transformers | [
"transformers",
"openthaigpt",
"llama",
"text-generation",
"th",
"en",
"dataset:kobkrit/rd-taxqa",
"dataset:iapp_wiki_qa_squad",
"dataset:Thaweewat/alpaca-cleaned-52k-th",
"dataset:Thaweewat/instruction-wild-52k-th",
"dataset:Thaweewat/databricks-dolly-15k-th",
"dataset:Thaweewat/hc3-24k-th",
"dataset:Thaweewat/gpteacher-20k-th",
"dataset:Thaweewat/onet-m6-social",
"dataset:Thaweewat/alpaca-finance-43k-th",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-10T14:18:37Z | ---
license: apache-2.0
datasets:
- kobkrit/rd-taxqa
- iapp_wiki_qa_squad
- Thaweewat/alpaca-cleaned-52k-th
- Thaweewat/instruction-wild-52k-th
- Thaweewat/databricks-dolly-15k-th
- Thaweewat/hc3-24k-th
- Thaweewat/gpteacher-20k-th
- Thaweewat/onet-m6-social
- Thaweewat/alpaca-finance-43k-th
language:
- th
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- openthaigpt
- llama
---
# 🇹🇭 OpenThaiGPT 1.0.0-alpha
<a href="https://openthaigpt.aieat.or.th/"><img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Fb8eiMDaqiEQL6ahbAY0h%2Fimage.png?alt=media&token=6fce78fd-2cca-4c0a-9648-bd5518e644ce" width="200px"></a>
OpenThaiGPT Version 1.0.0-alpha is the first Thai implementation of a 7B-parameter LLaMA v2 Chat model, fine-tuned to follow Thai-translated instructions, and it makes use of the Hugging Face LLaMA implementation.
# ---- Quantized 4 bit GGML of OpenThaiGPT 1.0.0-alpha ----
## Upgrade from OpenThaiGPT 0.1.0-beta
- Uses Facebook's LLaMA v2 7B Chat as the base model, which is pretrained on over 2 trillion tokens.
- Context length is upgraded from 2,048 tokens to 4,096 tokens.
- Allows research and commercial use.
## Pretrain Model
- [https://huggingface.co/meta-llama/Llama-2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat)
## Support
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
## License
**Source Code**: License Apache Software License 2.0.<br>
**Weight**: Research and **Commercial uses**.<br>
## Code and Weight
**Colab Demo**: https://colab.research.google.com/drive/1kDQidCtY9lDpk49i7P3JjLAcJM04lawu?usp=sharing<br>
**Finetune Code**: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta<br>
**Inference Code**: https://github.com/OpenThaiGPT/openthaigpt<br>
**Weight (Lora Adapter)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat<br>
**Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ckpt-hf<br>
**Weight (GGML)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ggml<br>
**Weight (Quantized 4bit GGML)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ggml-q4
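A rough loading sketch for the quantized GGML weight above, using `llama-cpp-python` (2023-era builds read GGML; newer llama.cpp releases expect GGUF). The file name is hypothetical; check the repo's file list:
```python
from llama_cpp import Llama

llm = Llama(model_path="openthaigpt-1.0.0-alpha-7b-chat-ggml-q4.bin", n_ctx=4096)  # hypothetical file name
output = llm("สวัสดีครับ ช่วยแนะนำตัวหน่อย", max_tokens=128)
print(output["choices"][0]["text"])
```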
## Sponsors
Pantip.com, ThaiSC<br>
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2FiWjRxBQgo0HUDcpZKf6A%2Fimage.png?alt=media&token=4fef4517-0b4d-46d6-a5e3-25c30c8137a6" width="100px">
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Ft96uNUI71mAFwkXUtxQt%2Fimage.png?alt=media&token=f8057c0c-5c5f-41ac-bb4b-ad02ee3d4dc2" width="100px">
### Powered by
OpenThaiGPT Volunteers, Artificial Intelligence Entrepreneur Association of Thailand (AIEAT), and Artificial Intelligence Association of Thailand (AIAT)
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2F6yWPXxdoW76a4UBsM8lw%2Fimage.png?alt=media&token=1006ee8e-5327-4bc0-b9a9-a02e93b0c032" width="100px">
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2FBwsmSovEIhW9AEOlHTFU%2Fimage.png?alt=media&token=5b550289-e9e2-44b3-bb8f-d3057d74f247" width="100px">
### Authors
* Kobkrit Viriyayudhakorn ([email protected])
* Sumeth Yuenyong ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])
<i>Disclaimer: Provided responses are not guaranteed.</i> |
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e4_s6789_v3_l5_v100 | KingKazma | 2023-08-10T14:41:29Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:16:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e5_s6789_v3_l5_v100 | KingKazma | 2023-08-10T14:39:51Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:39:46Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
deepsdh99/llama2-qlora-finetunined-french | deepsdh99 | 2023-08-10T14:32:56Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:32:51Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e4_s6789_v3_l5_v50 | KingKazma | 2023-08-10T14:32:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:32:19Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3_l5_v20 | KingKazma | 2023-08-10T14:28:25Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:28:24Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e3_s6789_v3_l5_v100 | KingKazma | 2023-08-10T14:25:52Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:25:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e0_s6789_v3_l5_v20 | KingKazma | 2023-08-10T14:21:06Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:21:05Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e2_s6789_v3_l5_v50 | KingKazma | 2023-08-10T14:17:47Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:17:44Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3_l5_v100 | KingKazma | 2023-08-10T14:14:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T12:50:13Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e-1_s6789_v3_l5_v20 | KingKazma | 2023-08-10T14:13:47Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:13:45Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
tommilyjones/bertweet-base-finetuned-hateful-meme | tommilyjones | 2023-08-10T14:11:03Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-10T13:53:11Z | ---
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertweet-base-finetuned-hateful-meme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-hateful-meme
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6262
- Accuracy: 0.532
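A minimal inference sketch (the input text is illustrative and the label names are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tommilyjones/bertweet-base-finetuned-hateful-meme")
print(classifier("example meme caption text"))
```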
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5865 | 1.0 | 532 | 0.7576 | 0.564 |
| 0.5203 | 2.0 | 1064 | 0.8139 | 0.562 |
| 0.4746 | 3.0 | 1596 | 0.9082 | 0.566 |
| 0.4377 | 4.0 | 2128 | 1.0089 | 0.538 |
| 0.3858 | 5.0 | 2660 | 0.9339 | 0.558 |
| 0.3561 | 6.0 | 3192 | 1.0688 | 0.54 |
| 0.3292 | 7.0 | 3724 | 1.4158 | 0.532 |
| 0.3009 | 8.0 | 4256 | 1.3316 | 0.54 |
| 0.2831 | 9.0 | 4788 | 1.5418 | 0.532 |
| 0.269 | 10.0 | 5320 | 1.6262 | 0.532 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e1_s6789_v3_l5_v50 | KingKazma | 2023-08-10T14:10:29Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:10:27Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Yntec/Toonify2 | Yntec | 2023-08-10T14:06:21Z | 527 | 7 | diffusers | [
"diffusers",
"safetensors",
"anime",
"comic",
"art",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"BetterThanNothing",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-10T13:46:20Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- comic
- art
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- BetterThanNothing
---
# Toonify
Preview and prompt:


sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED EYES, Futuristic city of tokyo japan, Magazine ad, iconic, 1943, sharp focus, 4k. (Sweaty). visible comic art by ROSSDRAWS and Clay Mann and kyoani
Original page:
https://civitai.com/models/36281
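A minimal inference sketch with `diffusers`, using a shortened version of the preview prompt:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Toonify2", torch_dtype=torch.float16).to("cuda")
prompt = "sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED EYES, Futuristic city of tokyo japan"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("toonify_preview.png")
```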
|
ClaraG/PPO_LunarLander-v2 | ClaraG | 2023-08-10T14:05:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T14:05:26Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.09 +/- 14.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("ClaraG/PPO_LunarLander-v2", "PPO_LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Cyrema/Slimpit | Cyrema | 2023-08-10T14:04:09Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2023-07-26T14:03:46Z | ---
language:
- en
---
# LLaMa-7b The Pit Project/Slimpit.
## Lora(s) Details
* **Backbone Model**: [LLaMA](https://github.com/facebookresearch/llama/tree/llama_v1)
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: This lora is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform).
## Datasets Details
- Scraped posts of a particular subject within an image board.
- The dataset was heavily augmented with various types of filtering to improve coherence and relevance to the origin and our goals.
- For our Slimpit model, it contains 30,116 entries.
### Prompt Template
The model was not trained in an instructional or chat-style format. Please ensure your inference program does not inject anything beyond your sole input when inferencing; simply type whatever comes to mind and the model will attempt to complete it.
## Hardware and Software
* **Hardware**: We used 4 Nvidia RTX 4090 GPU-hours to train our LoRA.
* **Training Factors**: We created this LoRA using the [HuggingFace trainer](https://huggingface.co/docs/transformers/main_classes/trainer)
## Training details
- We used a rank of 128 and an alpha of 256.
- Our learning rate was 3e-4 with 10 warmup steps and a cosine-with-restarts scheduler, trained for 2 epochs.
- Our micro-batch size was 24 with gradient accumulation of 3, for an effective batch size of 72.
## Limitations
It is strongly recommended not to deploy this model in a real-world environment unless its behavior is well understood and explicit, strict limitations on the scope, impact, and duration of the deployment are enforced. |
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e0_s6789_v3_l5_v50 | KingKazma | 2023-08-10T14:03:11Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:03:09Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e9_s6789_v3_l5_v20 | KingKazma | 2023-08-10T14:00:42Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:00:39Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e-1_s6789_v3_l5_v100 | KingKazma | 2023-08-10T13:57:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:57:50Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Roy61/textual_inversion_H3D_numVector5 | Roy61 | 2023-08-10T13:56:36Z | 9 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-10T06:34:24Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Roy61/textual_inversion_H3D_numVector5
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.




|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e-1_s6789_v3_l5_v50 | KingKazma | 2023-08-10T13:56:02Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:56:00Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
stoyky/dqn-SpaceInvaders-v4 | stoyky | 2023-08-10T13:47:48Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T13:47:08Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 679.50 +/- 178.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga stoyky -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga stoyky -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga stoyky
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
RogerB/marian-finetuned-multidataset-kin-to-en | RogerB | 2023-08-10T13:46:05Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-rw-en",
"base_model:finetune:Helsinki-NLP/opus-mt-rw-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-08-10T09:20:46Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-rw-en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-multidataset-kin-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-multidataset-kin-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-rw-en](https://huggingface.co/Helsinki-NLP/opus-mt-rw-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7550
- Bleu: 36.2717
## Model Description
The model has been fine-tuned to perform machine translation from Kinyarwanda to English.
## Intended Uses & Limitations
The primary intended use of this model is for research purposes.
## Training and Evaluation Data
The model was fine-tuned using a combination of datasets from the following sources:
- [Digital Umuganda](https://huggingface.co/datasets/DigitalUmuganda/kinyarwanda-english-machine-translation-dataset/tree/main)
- [Masakhane](https://huggingface.co/datasets/masakhane/mafand/viewer/en-kin/validation)
- [Muennighoff](https://huggingface.co/datasets/Muennighoff/flores200)
For the training of the machine translation model, the dataset underwent the following preprocessing steps:
- Text was converted to lowercase
- Digits were removed
The combined dataset was divided into training and validation sets, with a split of 90% for training and 10% for validation.
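For reference, a minimal inference sketch (the example sentence is illustrative; lowercase the input and strip digits to match the training preprocessing):
```python
from transformers import pipeline

translator = pipeline("translation", model="RogerB/marian-finetuned-multidataset-kin-to-en")
text = "muraho, amakuru yawe?"  # illustrative Kinyarwanda input
print(translator(text)[0]["translation_text"])
```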
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e9_s6789_v3_l5_v20 | KingKazma | 2023-08-10T13:44:29Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:44:28Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e8_s6789_v3_l5_v50 | KingKazma | 2023-08-10T13:39:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:39:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
caffeinatedwoof/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | caffeinatedwoof | 2023-08-10T13:37:54Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-08-10T06:24:15Z | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6104
- Accuracy: 0.89
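A minimal inference sketch (the audio path is a placeholder; any local music clip works):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="caffeinatedwoof/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan")
print(classifier("path/to/song.wav", top_k=3))
```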
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.687 | 1.0 | 450 | 0.6688 | 0.76 |
| 1.393 | 2.0 | 900 | 0.5216 | 0.88 |
| 0.024 | 3.0 | 1350 | 0.5718 | 0.85 |
| 0.0004 | 4.0 | 1800 | 0.6104 | 0.89 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e5_s6789_v3_l5_v20 | KingKazma | 2023-08-10T13:32:50Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:32:49Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
goodpinokio/big5_model | goodpinokio | 2023-08-10T13:30:31Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:30:30Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
osca/oscaxl | osca | 2023-08-10T13:29:21Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-08-10T13:29:19Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a ogdc person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e4_s6789_v3_l5_v20 | KingKazma | 2023-08-10T13:25:53Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:25:52Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e6_s6789_v3_l5_v50 | KingKazma | 2023-08-10T13:24:13Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:24:10Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
RIOLITE/products_matching_aumet_fine_tune_2023-08-10 | RIOLITE | 2023-08-10T13:22:26Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-08-10T13:18:49Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
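Putting the pieces above together, a sketch of the training setup; the base model and the training pairs are assumptions (the card only reports a 384-dimensional output, which matches MiniLM-class models):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed base model
train_examples = [InputExample(texts=["product name", "matching product name"])]  # illustrative pairs
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=10000,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```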
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
RIOLITE/products_matching_aumet_scratch_2023-08-10 | RIOLITE | 2023-08-10T13:22:08Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-08-10T13:18:27Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
tdjey33/my_awesome_qa_model | tdjey33 | 2023-08-10T13:19:07Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-02T14:33:00Z | . https://tdjey.github.io/petyalox/ |
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e3_s6789_v3_l5_v20 | KingKazma | 2023-08-10T13:18:57Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:18:55Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
morell23/warglmr | morell23 | 2023-08-10T13:16:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-10T13:14:31Z | ---
license: creativeml-openrail-m
---
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e5_s6789_v3_l5_v50 | KingKazma | 2023-08-10T13:16:39Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:16:36Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e5_s6789_v3_l5_v20 | KingKazma | 2023-08-10T13:15:10Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:15:09Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
OldCrazyCoder/dqn-SpaceInvadersNoFrameskip-v4 | OldCrazyCoder | 2023-08-10T13:13:08Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T13:12:35Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 450.00 +/- 146.08
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga OldCrazyCoder -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga OldCrazyCoder -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga OldCrazyCoder
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
pssubitha/llama2-qlora-finetunev01-QA | pssubitha | 2023-08-10T13:10:04Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:09:59Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch of the equivalent `BitsAndBytesConfig` follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
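A minimal sketch of the same settings expressed with the `transformers` `BitsAndBytesConfig` API; the base checkpoint id is a placeholder, since the card does not name it here:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder base model id for the QLoRA fine-tune.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```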
### Framework versions
- PEFT 0.4.0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e4_s6789_v3_l5_v50 | KingKazma | 2023-08-10T13:09:06Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T13:09:02Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e3_s6789_v3_l5_v20 | KingKazma | 2023-08-10T13:00:32Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T19:25:10Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
fp16-guy/Counterfeit-V3.0_fp16_cleaned | fp16-guy | 2023-08-10T12:58:25Z | 0 | 1 | null | [
"text-to-image",
"region:us"
] | text-to-image | 2023-07-26T11:48:01Z | ---
pipeline_tag: text-to-image
---
Counterfeit, but fp16/cleaned - smaller size, same result.
========
///
**[**original checkpoint link**](https://civitai.com/models/4468?modelVersionId=125050)**
*(all rights to the model belong to rqdwdw)*
---
*[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/counterfeit%2001.png) *(1.99gb version)*
*[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/counterfeit%2002%20no%20vae.png) *(1.83gb version - no vae)*
*[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/counterfeit%2030%20inp%2001%2020230810125843-111-CounterfeitV30_v30_fp16-Euler%20a-6.png) *(1.99gb inpainting version)*
*[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/counterfeit%2030%20inp%2002%2020230810130201-111-CounterfeitV30_v30_fp16-Euler%20a-6.png) *(1.83gb inpainting version - no vae)* |
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e0_s6789_v3_l5_v20 | KingKazma | 2023-08-10T12:58:05Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T12:58:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3_l5_v20 | KingKazma | 2023-08-10T12:45:54Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T19:11:20Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e0_s6789_v3_l5_v50 | KingKazma | 2023-08-10T12:38:49Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T12:38:46Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e-1_s6789_v3_l5_v20 | KingKazma | 2023-08-10T12:31:16Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T18:57:36Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e-1_s6789_v3_l5_v50 | KingKazma | 2023-08-10T12:31:14Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T12:31:10Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
DrishtiSharma/distilhubert-finetuned-gtzan-bs-4 | DrishtiSharma | 2023-08-10T12:00:09Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-08-10T10:09:57Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-bs-4
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-bs-4
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6851
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
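A minimal inference sketch, assuming the standard `transformers` audio-classification pipeline and a local audio file:
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="DrishtiSharma/distilhubert-finetuned-gtzan-bs-4",
)

# GTZAN clips are 30-second music excerpts; any readable audio file works here.
print(classifier("example_clip.wav"))
```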
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8665 | 1.0 | 225 | 1.7962 | 0.45 |
| 1.1445 | 2.0 | 450 | 1.1084 | 0.68 |
| 0.9474 | 3.0 | 675 | 0.8338 | 0.73 |
| 0.8286 | 4.0 | 900 | 0.7530 | 0.76 |
| 0.2336 | 5.0 | 1125 | 0.5369 | 0.84 |
| 0.2092 | 6.0 | 1350 | 0.5608 | 0.86 |
| 0.2092 | 7.0 | 1575 | 0.5390 | 0.88 |
| 0.04 | 8.0 | 1800 | 0.5567 | 0.88 |
| 0.0046 | 9.0 | 2025 | 0.5736 | 0.86 |
| 0.0029 | 10.0 | 2250 | 0.6236 | 0.86 |
| 0.0035 | 11.0 | 2475 | 0.8139 | 0.85 |
| 0.0018 | 12.0 | 2700 | 0.5752 | 0.9 |
| 0.0016 | 13.0 | 2925 | 0.6745 | 0.85 |
| 0.0016 | 14.0 | 3150 | 0.6959 | 0.85 |
| 0.0014 | 15.0 | 3375 | 0.6851 | 0.86 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AljoSt/q-FrozenLake-v1-4x4-noSlippery | AljoSt | 2023-08-10T11:59:29Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T11:59:26Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AljoSt/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wildgrape14/distilbert-base-uncased-finetuned-emotion | wildgrape14 | 2023-08-10T11:57:57Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-10T11:57:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249069634242804
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
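A minimal inference sketch, assuming the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wildgrape14/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label and its score.
print(classifier("I am thrilled about the results of this experiment!"))
```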
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8142 | 1.0 | 250 | 0.3171 | 0.9095 | 0.9082 |
| 0.2524 | 2.0 | 500 | 0.2187 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
morell23/mchmsme | morell23 | 2023-08-10T11:55:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-10T11:53:29Z | ---
license: creativeml-openrail-m
---
|
polejowska/detr-r50-cd45rb-8ah-4l-corrected | polejowska | 2023-08-10T11:53:34Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2023-08-09T13:30:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-4l-corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-4l-corrected
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8325
## Model description
More information needed
## Intended uses & limitations
More information needed
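A minimal inference sketch, assuming the fine-tuned weights and a matching image processor can both be loaded from this repository:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo_id = "polejowska/detr-r50-cd45rb-8ah-4l-corrected"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = DetrForObjectDetection.from_pretrained(repo_id)

# Placeholder input image; the CD45RB data itself is not distributed with the card.
image = Image.open("example_image.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.5 confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
print(results["scores"], results["labels"], results["boxes"])
```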
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.0619 | 1.0 | 4606 | 2.1934 |
| 2.6954 | 2.0 | 9212 | 2.0490 |
| 2.5695 | 3.0 | 13818 | 1.9843 |
| 2.518 | 4.0 | 18424 | 1.9648 |
| 2.4652 | 5.0 | 23030 | 1.9354 |
| 2.4235 | 6.0 | 27636 | 1.9127 |
| 2.3947 | 7.0 | 32242 | 1.8715 |
| 2.369 | 8.0 | 36848 | 1.8564 |
| 2.3542 | 9.0 | 41454 | 1.8511 |
| 2.3407 | 10.0 | 46060 | 1.8325 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Tombarz/Therapist_AI_fine_tuned_80_precent | Tombarz | 2023-08-10T11:45:46Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T11:29:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Pietro995/bloomz-560m_PROMPT_TUNING_CAUSAL_LMPROVA | Pietro995 | 2023-08-10T11:43:45Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T11:43:42Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
shre-db/marian-finetuned-kde4-en-to-fr | shre-db | 2023-08-10T11:10:00Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-08-10T09:19:16Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.88529894542656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8853
## Model description
More information needed
## Intended uses & limitations
More information needed
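A minimal inference sketch, assuming the standard `transformers` translation pipeline:
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="shre-db/marian-finetuned-kde4-en-to-fr",
)

# English to French, following the fine-tuning direction.
print(translator("Default to expanded threads"))
```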
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
esantiago/llama2-qlora-finetunned-french | esantiago | 2023-08-10T11:08:45Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T11:08:38Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
eskalofi/rabianur | eskalofi | 2023-08-10T11:05:43Z | 0 | 0 | null | [
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-08-10T11:04:16Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### rabianur Dreambooth model trained by eskalofi with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
morell23/kaelakovalskia | morell23 | 2023-08-10T11:02:17Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-10T11:01:48Z | ---
license: creativeml-openrail-m
---
|
skshreyas714/lora-trained-xl-colab | skshreyas714 | 2023-08-10T11:01:49Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-08-10T08:58:39Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - skshreyas714/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
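A minimal inference sketch, assuming the `diffusers` LoRA-loading API and a CUDA device:
```python
import torch
from diffusers import DiffusionPipeline

# Base SDXL pipeline in fp16, then attach the LoRA adaptation weights from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("skshreyas714/lora-trained-xl-colab")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```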
|
abdelhamidmalki/q-FrozenLake-v1-4x4-noSlippery | abdelhamidmalki | 2023-08-10T10:57:17Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T10:57:14Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="abdelhamidmalki/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ui-chope/distilbert-base-uncased-finetuned-ner | ui-chope | 2023-08-10T10:56:37Z | 482 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-31T03:21:56Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1660
- Precision: 0.9701
- Recall: 0.9679
- F1: 0.9690
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
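A minimal inference sketch, assuming the standard `transformers` token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ui-chope/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Book a table for two at Marina Bay Sands this Friday at 7pm."))
```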
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0091 | 1.0 | 5372 | 0.1034 | 0.9693 | 0.9649 | 0.9671 | 0.9858 |
| 0.0052 | 2.0 | 10744 | 0.1362 | 0.9715 | 0.9679 | 0.9697 | 0.9868 |
| 0.0064 | 3.0 | 16116 | 0.1415 | 0.9715 | 0.9657 | 0.9686 | 0.9844 |
| 0.0026 | 4.0 | 21488 | 0.1629 | 0.9709 | 0.9701 | 0.9705 | 0.9870 |
| 0.0034 | 5.0 | 26860 | 0.1345 | 0.9737 | 0.9687 | 0.9712 | 0.9851 |
| 0.0019 | 6.0 | 32232 | 0.1297 | 0.9700 | 0.9649 | 0.9675 | 0.9841 |
| 0.0031 | 7.0 | 37604 | 0.1543 | 0.9716 | 0.9701 | 0.9709 | 0.9868 |
| 0.0021 | 8.0 | 42976 | 0.0605 | 0.9782 | 0.9716 | 0.9749 | 0.9903 |
| 0.0023 | 9.0 | 48348 | 0.1506 | 0.9731 | 0.9701 | 0.9716 | 0.9877 |
| 0.0021 | 10.0 | 53720 | 0.1714 | 0.9693 | 0.9672 | 0.9682 | 0.9860 |
| 0.0015 | 11.0 | 59092 | 0.1660 | 0.9701 | 0.9679 | 0.9690 | 0.9863 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shajahan123/my-pet-cat | shajahan123 | 2023-08-10T10:52:52Z | 0 | 0 | null | [
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-08-10T10:49:39Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by shajahan123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET91
Sample pictures of this concept:
|
StofEzz/whisper-tiny-fr | StofEzz | 2023-08-10T10:49:32Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-10T06:33:39Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-fr
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8198
- Wer: 0.8502
## Model description
More information needed
## Intended uses & limitations
More information needed
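A minimal inference sketch, assuming the standard `transformers` automatic-speech-recognition pipeline and a local French audio file:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="StofEzz/whisper-tiny-fr",
)

# Accepts a file path or a NumPy array of audio samples.
print(asr("exemple_audio.wav")["text"])
```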
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6223 | 1.0 | 250 | 0.7567 | 0.7225 |
| 0.475 | 2.0 | 500 | 0.6213 | 0.5461 |
| 0.2938 | 3.0 | 750 | 0.5860 | 0.5383 |
| 0.1613 | 4.0 | 1000 | 0.5903 | 0.4384 |
| 0.1026 | 5.0 | 1250 | 0.5992 | 0.4451 |
| 0.0615 | 6.0 | 1500 | 0.6322 | 0.5383 |
| 0.0422 | 7.0 | 1750 | 0.6398 | 0.4373 |
| 0.019 | 8.0 | 2000 | 0.6682 | 0.5239 |
| 0.0125 | 9.0 | 2250 | 0.6980 | 0.6681 |
| 0.0069 | 10.0 | 2500 | 0.7335 | 0.8679 |
| 0.0039 | 11.0 | 2750 | 0.7354 | 0.6238 |
| 0.0026 | 12.0 | 3000 | 0.7458 | 0.6315 |
| 0.0021 | 13.0 | 3250 | 0.7599 | 0.6715 |
| 0.0018 | 14.0 | 3500 | 0.7682 | 0.7103 |
| 0.0015 | 15.0 | 3750 | 0.7750 | 0.7081 |
| 0.0013 | 16.0 | 4000 | 0.7846 | 0.7125 |
| 0.0012 | 17.0 | 4250 | 0.7897 | 0.7114 |
| 0.001 | 18.0 | 4500 | 0.7962 | 0.9345 |
| 0.0009 | 19.0 | 4750 | 0.8001 | 0.7170 |
| 0.0009 | 20.0 | 5000 | 0.8074 | 0.8335 |
| 0.0008 | 21.0 | 5250 | 0.8107 | 0.8424 |
| 0.0007 | 22.0 | 5500 | 0.8152 | 0.8402 |
| 0.0007 | 23.0 | 5750 | 0.8181 | 0.8446 |
| 0.0007 | 24.0 | 6000 | 0.8187 | 0.8479 |
| 0.0007 | 25.0 | 6250 | 0.8198 | 0.8502 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
GrazittiInteractive/llama-2-13b | GrazittiInteractive | 2023-08-10T10:49:14Z | 8 | 1 | transformers | [
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"dataset:meta-llama/Llama-2-13b",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-08-01T07:39:26Z | ---
inference: false
language:
- en
pipeline_tag: text-generation
datasets:
- meta-llama/Llama-2-13b
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_type: llama
license: other
---
# Meta's Llama 2 13B GGML
A 4-bit GGML-format quantized version of the base model Llama-2-13b, taken from https://huggingface.co/meta-llama/Llama-2-13b and reduced from 24.2 GB to 7.37 GB.
These files are GGML format model files for [Meta's Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| ggml-model-q4_0.bin | q4_0 | 4 | 6.85 GB| 9.118 GB | Original quant method, 4-bit. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
We used LangChain together with `llama-cpp-python` (the Python bindings for llama.cpp); adjust the settings below for your own needs.
To use this Llama-2-13b model from Python code with LangChain, first make sure you have `langchain` and `llama-cpp-python` installed:
```
pip install langchain llama-cpp-python
```
```
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Stream generated tokens to stdout as they are produced
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# Make sure the model path is correct for your system!
llm = LlamaCpp(
    model_path="/Users/rlm/Desktop/Code/llama/llama-2-13b-ggml/ggml-model-q4_0.bin",
    input={"temperature": 0.75, "max_length": 2000, "top_p": 1},
    callback_manager=callback_manager,
    verbose=True,
)
```
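The resulting `llm` object can then be called directly on a prompt string, for example:
```
llm("Q: Name the planets in the solar system. A: ")
```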
# Original model card: Meta's Llama 2 13B
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
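A minimal sketch of that single-turn layout (the authoritative template is the `chat_completion` reference code linked above; `BOS`/`EOS` are added as special tokens by the tokenizer rather than as literal text):
```python
# Single-turn Llama-2-Chat prompt layout with a system prompt.
system_prompt = "You are a helpful, respectful and honest assistant."
user_message = "What is the capital of France?"

prompt = (
    "[INST] <<SYS>>\n"
    f"{system_prompt}\n"
    "<</SYS>>\n\n"
    f"{user_message} [/INST]"
)
print(prompt)
```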
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|