modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
szymonrucinski/good-mood | szymonrucinski | 2023-08-09T21:09:47Z | 0 | 0 | null | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | null | 2023-08-09T16:17:39Z | ---
license: cc-by-nc-sa-3.0
---
|
badokorach/bert-finetuned-squad-7 | badokorach | 2023-08-09T21:00:46Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:badokorach/bert-finetuned-squad-5",
"base_model:finetune:badokorach/bert-finetuned-squad-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-09T16:23:33Z | ---
license: apache-2.0
base_model: badokorach/bert-finetuned-squad-5
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/bert-finetuned-squad-7
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/bert-finetuned-squad-7
This model is a fine-tuned version of [badokorach/bert-finetuned-squad-5](https://huggingface.co/badokorach/bert-finetuned-squad-5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0011
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 1950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
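For reference, a roughly equivalent optimizer and precision policy can be re-created with the `transformers` Keras utilities. This is a sketch based on the configuration dump above, not the original training script:
```python
import tensorflow as tf
from transformers import create_optimizer

# training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay with a linear (power=1.0) polynomial decay from 4e-5 to 0 over 1950 steps
optimizer, lr_schedule = create_optimizer(
    init_lr=4e-05,
    num_train_steps=1950,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```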
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.0673 | 0 |
| 0.1201 | 1 |
| 0.0502 | 2 |
| 0.0209 | 3 |
| 0.0278 | 4 |
| 0.0358 | 5 |
| 0.0268 | 6 |
| 0.0258 | 7 |
| 0.0212 | 8 |
| 0.0247 | 9 |
| 0.0104 | 10 |
| 0.0101 | 11 |
| 0.0033 | 12 |
| 0.0044 | 13 |
| 0.0185 | 14 |
| 0.0051 | 15 |
| 0.0011 | 16 |
| 0.0043 | 17 |
| 0.0022 | 18 |
| 0.0026 | 19 |
| 0.0019 | 20 |
| 0.0012 | 21 |
| 0.0013 | 22 |
| 0.0009 | 23 |
| 0.0008 | 24 |
| 0.0007 | 25 |
| 0.0016 | 26 |
| 0.0006 | 27 |
| 0.0006 | 28 |
| 0.0011 | 29 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
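A minimal usage sketch with the `transformers` question-answering pipeline (the repository ships TensorFlow weights, so TensorFlow should be installed; the question/context pair is only an illustration):
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
qa = pipeline("question-answering", model="badokorach/bert-finetuned-squad-7")

result = qa(
    question="What was the model fine-tuned from?",
    context="badokorach/bert-finetuned-squad-7 is a fine-tuned version of badokorach/bert-finetuned-squad-5.",
)
print(result["answer"], result["score"])
```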
|
chronopt-research/vietnamese-gpt2-base | chronopt-research | 2023-08-09T20:58:46Z | 147 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"vi",
"dataset:duongttr/vi-dataset-for-pretrain",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-09T20:29:15Z | ---
license: apache-2.0
datasets:
- duongttr/vi-dataset-for-pretrain
language:
- vi
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: Hôm nay tôi rất vui vì
- text: Hoàng Sa, Trường Sa là của Việt
model-index:
- name: chronopt-research/vietnamese-gpt2-base
results:
- task:
type: text-generation
metrics:
- type: perplexity
value: 51.35
verified: true
---
# Vietnamese `gpt2-base`
<!-- Provide a quick summary of what the model is/does. -->
This is a `gpt2-base` model pretrained for Vietnamese with a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model Description
GPT-2 (*originally*) is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from the text. More precisely, it was trained to guess the next word in sentences.
This is the **base version** of GPT-2, with 137M parameters.
Other pretrained versions are available here: [gpt2-medium](https://huggingface.co/chronopt-research/vietnamese-gpt2-medium), [gpt2-large]()
## Dataset used for pretraining
This is a combination of multiple Vietnamese datasets for pretraining CLMs such as GPT, GPT-2, etc.
The dataset consists of:
- [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
- [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
- [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
- [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)
You can find the combined version here: [duongttr/vi-dataset-for-pretrain](https://huggingface.co/datasets/duongttr/vi-dataset-for-pretrain)
## Hyperparameters & Results
We trained the model for ~100k steps with `lr=1e-4`, `bs=2560` (`single_batch_size=32` * `num_core=8` * `grad_cum=10`), and `optimizer=adamw` on a TPU VM v3-8 from the [TRC Program](https://sites.research.google/trc/about/). Training took around **1 day**.
|Model|Eval Loss|Eval Perplexity|
|---|---|---|
|**gpt2-base**|**3.939**|**51.35**|
|gpt2-medium|2.8676|17.5948|
|gpt2-large|-|-|
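A minimal generation sketch with the `transformers` pipeline (assuming a recent `transformers` release; the prompt is taken from the widget examples above):
```python
from transformers import pipeline

# Load the Vietnamese GPT-2 base checkpoint as a text-generation pipeline
generator = pipeline("text-generation", model="chronopt-research/vietnamese-gpt2-base")

# Continue one of the widget prompts
outputs = generator("Hôm nay tôi rất vui vì", max_new_tokens=40, do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```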
## Contacts
Feel free to contact us via: [email]()
|
Jbrophy/falcon-7B-Instruct-Romance | Jbrophy | 2023-08-09T20:58:15Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-08T00:39:51Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
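A loading sketch that mirrors the quantization config above. This assumes the base model is `tiiuae/falcon-7b-instruct` (the card does not name it) and that `bitsandbytes` plus a CUDA GPU are available:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Re-create the 4-bit NF4 quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "tiiuae/falcon-7b-instruct"  # assumed base model, not stated in this card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base, "Jbrophy/falcon-7B-Instruct-Romance")
```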
|
chronopt-research/vietnamese-gpt2-medium | chronopt-research | 2023-08-09T20:54:47Z | 146 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"vi",
"dataset:duongttr/vi-dataset-for-pretrain",
"doi:10.57967/hf/3874",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-06T11:34:08Z | ---
license: apache-2.0
datasets:
- duongttr/vi-dataset-for-pretrain
language:
- vi
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: Việt Nam là quốc gia có
- text: Hoàng Sa, Trường Sa là của
model-index:
- name: chronopt-research/vietnamese-gpt2-medium
results:
- task:
type: text-generation
metrics:
- type: perplexity
value: 17.5948
verified: true
---
# Vietnamese `gpt2-medium`
<!-- Provide a quick summary of what the model is/does. -->
This is a `gpt2-medium` model pretrained for Vietnamese with a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model Description
GPT-2 (*originally*) is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from the text. More precisely, it was trained to guess the next word in sentences.
This is the **medium version** of GPT-2, with 380M parameters.
Other pretrained versions are available here: [gpt2-base](https://huggingface.co/chronopt-research/vietnamese-gpt2-base), [gpt2-large]()
## Dataset used for pretraining
This is a combination of multiple Vietnamese datasets for pretraining CLMs such as GPT, GPT-2, etc.
The dataset consists of:
- [`vietgpt/covid_19_news_vi`](https://huggingface.co/datasets/vietgpt/covid_19_news_vi)
- [`hieunguyen1053/binhvq-news-corpus`](https://huggingface.co/datasets/hieunguyen1053/binhvq-news-corpus)
- [`oscar (unshuffled_deduplicated_vi)`](https://huggingface.co/datasets/oscar)
- [`vietgpt/wikipedia_vi`](https://huggingface.co/datasets/vietgpt/wikipedia_vi)
You can find the combined version here: [duongttr/vi-dataset-for-pretrain](https://huggingface.co/datasets/duongttr/vi-dataset-for-pretrain)
## Hyperparameters & Results
We trained the model for ~100k steps with `lr=1e-4`, `bs=1920`, and `optimizer=adamw` on a TPU VM v3-8 from the [TRC Program](https://sites.research.google/trc/about/). Training took around **2.5 days**.
|Model|Eval Loss|Eval Perplexity|
|---|---|---|
|gpt2-base|3.939|51.35|
|**gpt2-medium**|**2.8676**|**17.5948**|
|gpt2-large|-|-|
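As with the base model, a minimal generation sketch with the `transformers` pipeline (prompt taken from the widget examples above):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="chronopt-research/vietnamese-gpt2-medium")
print(generator("Việt Nam là quốc gia có", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```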
## Contacts
Feel free to contact us via: [email]() |
GhifSmile/distilbert-base-uncased-DSC-new | GhifSmile | 2023-08-09T20:49:02Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T19:25:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: distilbert-base-uncased-DSC-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-DSC-new
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1017
- Accuracy: 0.9902
- Precision: 0.9910
- Recall: 0.9909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 0.4743 | 1.0 | 618 | 0.1856 | 0.9633 | 0.9672 | 0.9647 |
| 0.0946 | 2.0 | 1236 | 0.1577 | 0.9707 | 0.9749 | 0.9733 |
| 0.0851 | 3.0 | 1854 | 0.1081 | 0.9853 | 0.9869 | 0.9858 |
| 0.0633 | 4.0 | 2472 | 0.1449 | 0.9841 | 0.9851 | 0.9837 |
| 0.0258 | 5.0 | 3090 | 0.1155 | 0.9829 | 0.9838 | 0.9829 |
| 0.022 | 6.0 | 3708 | 0.1089 | 0.9890 | 0.9899 | 0.9897 |
| 0.0147 | 7.0 | 4326 | 0.1092 | 0.9878 | 0.9885 | 0.9875 |
| 0.0043 | 8.0 | 4944 | 0.1017 | 0.9902 | 0.9910 | 0.9909 |
| 0.0041 | 9.0 | 5562 | 0.1033 | 0.9878 | 0.9885 | 0.9874 |
| 0.0012 | 10.0 | 6180 | 0.1093 | 0.9878 | 0.9885 | 0.9874 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
EgilKarlsen/DistilRoBERTa_Thunderbird-Anomaly_Baseline | EgilKarlsen | 2023-08-09T20:45:29Z | 107 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T20:25:00Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DistilRoBERTa_Thunderbird-Anomaly_Baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilRoBERTa_Thunderbird-Anomaly_Baseline
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0031
- Accuracy: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0958 | 1.0 | 1094 | 0.0623 | 0.9846 |
| 0.0514 | 2.0 | 2188 | 0.0340 | 0.9846 |
| 0.0261 | 3.0 | 3282 | 0.0168 | 0.9896 |
| 0.0147 | 4.0 | 4376 | 0.0095 | 1.0 |
| 0.01 | 5.0 | 5470 | 0.0061 | 1.0 |
| 0.0071 | 6.0 | 6564 | 0.0042 | 1.0 |
| 0.0058 | 7.0 | 7658 | 0.0031 | 1.0 |
| 0.0046 | 8.0 | 8752 | 0.0025 | 1.0 |
| 0.0043 | 9.0 | 9846 | 0.0022 | 1.0 |
| 0.0038 | 10.0 | 10940 | 0.0021 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ederdt2023/Eder_Duenas | ederdt2023 | 2023-08-09T20:43:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T20:43:13Z | ---
license: creativeml-openrail-m
---
|
raptz/autotrain-rstt_fullsumm-81171141667 | raptz | 2023-08-09T20:34:42Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:raptz/autotrain-data-rstt_fullsumm",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-08-09T20:30:49Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- raptz/autotrain-data-rstt_fullsumm
co2_eq_emissions:
emissions: 1.5473924434284785
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 81171141667
- CO2 Emissions (in grams): 1.5474
## Validation Metrics
- Loss: 0.650
- Rouge1: 68.031
- Rouge2: 53.314
- RougeL: 59.901
- RougeLsum: 61.660
- Gen Len: 61.707
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/raptz/autotrain-rstt_fullsumm-81171141667
```
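The same call can be made from Python with `requests` (a sketch; the response shape assumes the standard summarization output of the hosted Inference API):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/raptz/autotrain-rstt_fullsumm-81171141667"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

def summarize(text: str) -> str:
    # POST the raw text and return the generated summary
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    response.raise_for_status()
    return response.json()[0]["summary_text"]

print(summarize("I love AutoTrain"))
```
|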
TotoLefo/Sheirlou500Epoch | TotoLefo | 2023-08-09T20:33:56Z | 0 | 0 | null | [
"AI VOICE",
"fr",
"region:us"
] | null | 2023-08-09T20:31:07Z | ---
language:
- fr
tags:
- AI VOICE
---
# Model Card for Model ID
- **Developed by:** TOTO
|
jannikseus/aspect_extraction_laptop_reviews | jannikseus | 2023-08-09T20:30:32Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-06T20:55:25Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: aspect_extraction_laptop_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aspect_extraction_laptop_reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1003
- Precision: 0.7872
- Recall: 0.7817
- F1: 0.7845
- Accuracy: 0.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 362 | 0.0854 | 0.7070 | 0.7817 | 0.7425 | 0.9675 |
| 0.1121 | 2.0 | 724 | 0.0937 | 0.7466 | 0.7676 | 0.7569 | 0.9696 |
| 0.0383 | 3.0 | 1086 | 0.0959 | 0.7622 | 0.7676 | 0.7649 | 0.9714 |
| 0.0383 | 4.0 | 1448 | 0.1003 | 0.7872 | 0.7817 | 0.7845 | 0.9732 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
FredericProtat/dqn-SpaceInvadersNoFrameskip-v4 | FredericProtat | 2023-08-09T20:24:42Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T20:24:06Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 691.00 +/- 253.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga FredericProtat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga FredericProtat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga FredericProtat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
JabrilJacobs/poca-SoccerTwos | JabrilJacobs | 2023-08-09T20:13:59Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-08-09T20:11:14Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JabrilJacobs/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Prabna/sd-class-butterflies-32-1 | Prabna | 2023-08-09T20:11:43Z | 31 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-08-09T20:11:30Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Prabna/sd-class-butterflies-32-1')
image = pipeline().images[0]
image
```
|
Pixel390/BOY | Pixel390 | 2023-08-09T20:11:24Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-08-09T19:20:44Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a uxz boy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Pixel390/BOY
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a uxz boy" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
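A minimal inference sketch with `diffusers` (assuming `diffusers>=0.17`, which provides `load_lora_weights`, and that the adapter is stored in the standard `pytorch_lora_weights` format produced by the DreamBooth LoRA training script):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the LoRA adaptation weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Pixel390/BOY")

# Use the instance prompt the adapter was trained on
image = pipe("a uxz boy", num_inference_steps=30).images[0]
image.save("uxz_boy.png")
```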
|
asenella/mhd_config_1_MMVAE_beta_5_scale_True_seed_0 | asenella | 2023-08-09T20:06:44Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-09T20:06:32Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Josrf/a2c-PandaReachDense-v3 | Josrf | 2023-08-09T20:03:20Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T19:57:18Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
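A minimal loading sketch in the meantime (the checkpoint filename is assumed to follow the usual `<algo>-<env>.zip` naming used by `package_to_hub`, and `gymnasium` plus `panda-gym` must be installed):
```python
import gymnasium as gym
import panda_gym  # noqa: F401  # registers the PandaReachDense-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from this repository (assumed filename)
checkpoint = load_from_hub(repo_id="Josrf/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```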
|
BauyrjanQ/whisper-kk-speech2ner-b16-ms2k-1500-s-cl | BauyrjanQ | 2023-08-09T19:55:00Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-09T06:05:26Z | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-kk-speech2ner-b16-ms2k-1500-s-cl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kk-speech2ner-b16-ms2k-1500-s-cl
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3907
- Wer: 264.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2596 | 0.34 | 1500 | 0.3907 | 264.8997 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cjohlmacher/ppo-SnowballTarget | cjohlmacher | 2023-08-09T19:47:19Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-08-09T19:45:03Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: cjohlmacher/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
StofEzz/mascir_frwav2vec2-large-xlsr-53 | StofEzz | 2023-08-09T19:43:27Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-09T17:37:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: mascir_frwav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mascir_frwav2vec2-large-xlsr-53
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4708
- Wer: 0.3789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.3318 | 2.0 | 500 | 3.0088 | 0.9856 |
| 1.5916 | 4.0 | 1000 | 0.7746 | 0.6411 |
| 0.3961 | 6.0 | 1500 | 0.5238 | 0.5211 |
| 0.2205 | 8.0 | 2000 | 0.5014 | 0.4733 |
| 0.1401 | 10.0 | 2500 | 0.5166 | 0.4878 |
| 0.1147 | 12.0 | 3000 | 0.5058 | 0.4333 |
| 0.0938 | 14.0 | 3500 | 0.4635 | 0.4233 |
| 0.0788 | 16.0 | 4000 | 0.4997 | 0.4144 |
| 0.0645 | 18.0 | 4500 | 0.4840 | 0.4122 |
| 0.0534 | 20.0 | 5000 | 0.4789 | 0.4022 |
| 0.0437 | 22.0 | 5500 | 0.4785 | 0.3978 |
| 0.041 | 24.0 | 6000 | 0.4708 | 0.3789 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
HG7/ReQLoRA_all8 | HG7 | 2023-08-09T19:34:28Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T19:34:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
ElcKeT/bert-sst2-finetuned-peft | ElcKeT | 2023-08-09T19:21:37Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T19:20:27Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Den4ikAI/ruBert_base_intent_detection | Den4ikAI | 2023-08-09T19:18:27Z | 140 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-10T11:10:35Z | ---
license: mit
widget:
- text: Сколько будет 2+2?
- text: Который час?
- text: Вруби свет
- text: Сколько баллов пробки?
language:
- ru
pipeline_tag: text-classification
---
A model based on ruBert-base for intent detection (Russian).
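A minimal usage sketch with the `transformers` pipeline (the example queries come from the widget above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Den4ikAI/ruBert_base_intent_detection")
print(classifier("Который час?"))  # "What time is it?"
print(classifier("Вруби свет"))    # "Turn on the light"
```
|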
huggingnft-app/milady | huggingnft-app | 2023-08-09T19:17:48Z | 2 | 0 | transformers | [
"transformers",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"unconditional-image-generation",
"dataset:huggingnft/milady",
"license:mit",
"endpoints_compatible",
"region:us"
] | unconditional-image-generation | 2023-08-09T19:17:21Z | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
- unconditional-image-generation
datasets:
- huggingnft/milady
license: mit
---
# Hugging NFT: milady
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/milady).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/milady).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/milady).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
|
HG7/ReQLoRA_GUD8 | HG7 | 2023-08-09T19:04:43Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T19:04:35Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
dfomin/Reinforce-1 | dfomin | 2023-08-09T18:51:39Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T18:51:29Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cto-algo-huggingface/EternityRing | cto-algo-huggingface | 2023-08-09T18:42:30Z | 24 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-09T18:40:55Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### eternity_ring on Stable Diffusion via Dreambooth
#### model by cto-algo-huggingface
This is the Stable Diffusion model fine-tuned on the eternity_ring concept, taught to Stable Diffusion with DreamBooth.
It can be used by modifying the `instance_prompt`: **<eternity_ring> jewellery**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
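For a quick local test, a minimal `diffusers` sketch (assuming a CUDA GPU; the prompt uses the `instance_prompt` above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cto-algo-huggingface/EternityRing", torch_dtype=torch.float16
).to("cuda")

image = pipe("<eternity_ring> jewellery").images[0]
image.save("eternity_ring.png")
```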
Here are the images used for training this concept:







|
stoyky/ppo-Huggy | stoyky | 2023-08-09T18:40:35Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-08-09T18:40:27Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: stoyky/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AEJaspan/ppo-LunarLander-v2 | AEJaspan | 2023-08-09T18:37:08Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T18:36:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.39 +/- 20.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
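A minimal loading sketch in the meantime (the checkpoint filename is assumed to follow the usual `<algo>-<env>.zip` naming, and `gymnasium[box2d]` must be installed for LunarLander):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repository (assumed filename) and run one episode
checkpoint = load_from_hub(repo_id="AEJaspan/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```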
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e5_s6789_v3_l5_v100 | KingKazma | 2023-08-09T18:36:20Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T18:36:19Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e7_s6789_v3_l5_v50 | KingKazma | 2023-08-09T18:31:49Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T18:31:48Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e3_s6789_v3_l5_v100 | KingKazma | 2023-08-09T18:22:28Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T18:22:26Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e4_s6789_v3_l5_v50 | KingKazma | 2023-08-09T18:11:32Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T18:11:31Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s6789_v3_l5_v100 | KingKazma | 2023-08-09T18:08:35Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T18:08:34Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Norod78/sdxl-BrainSlug-dreambooth | Norod78 | 2023-08-09T18:08:34Z | 58 | 2 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"autotrain",
"en",
"dataset:Norod78/BrainSlug-blip-captions-1024",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-08-09T17:10:00Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a brain slug
tags:
- text-to-image
- diffusers
- lora
- autotrain
widget:
- text: photo of a brain slug enjoying a nice sunny day on the beach
- text: photo of a brain slug attached to Snoop Doggs head
- text: >-
photo of a shocked old granny with a gooey (brain slug attached to her
head), Very detailed, clean, high quality, sharp image
- text: >-
photo of a brain slug attacking the head of an anime girl, cartoon style,
high quality
datasets:
- Norod78/BrainSlug-blip-captions-1024
inference: true
language:
- en
---
# DreamBooth trained by AutoTrain
The text encoder was not trained.
# Trigger words
Use "photo of a brain slug" / "brain slug" and etc
# Examples
photo of a brain slug enjoying a nice sunny day on the beach

photo of a shocked old granny with a gooey (brain slug attached to her
head), Very detailed, clean, high quality, sharp image
,_Very_detailed,_clean,_high_quality,_sharp_image,_Dave_Dorman-generated_image.jpg) |
iampraveenvemula/lora-trained-xl-colab | iampraveenvemula | 2023-08-09T18:06:47Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-08-09T16:51:25Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - iampraveenvemula/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
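A minimal inference sketch (assuming `diffusers>=0.19` with SDXL LoRA support; the fp16-safe VAE mirrors the one used for training):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Use the same fp16-safe VAE that was used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("iampraveenvemula/lora-trained-xl-colab")

image = pipe("a photo of sks dog", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```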
|
IAyoub/finetuning-bert-sentiment-reviews-2 | IAyoub | 2023-08-09T18:06:32Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T14:21:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-bert-sentiment-reviews-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-bert-sentiment-reviews-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Accuracy: 0.9308
- F1: 0.8368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.01 | 10 | 0.6716 | 0.7463 | 0.2849 |
| No log | 0.03 | 20 | 0.5789 | 0.7463 | 0.2849 |
| No log | 0.04 | 30 | 0.4971 | 0.7788 | 0.3849 |
| No log | 0.06 | 40 | 0.4298 | 0.8672 | 0.5506 |
| No log | 0.07 | 50 | 0.3837 | 0.8794 | 0.5686 |
| No log | 0.09 | 60 | 0.3481 | 0.8802 | 0.5672 |
| No log | 0.1 | 70 | 0.3680 | 0.8757 | 0.5604 |
| No log | 0.12 | 80 | 0.3259 | 0.8854 | 0.5736 |
| No log | 0.13 | 90 | 0.3179 | 0.8854 | 0.5727 |
| No log | 0.15 | 100 | 0.3306 | 0.8891 | 0.6295 |
| No log | 0.16 | 110 | 0.3253 | 0.8894 | 0.6692 |
| No log | 0.18 | 120 | 0.3041 | 0.9024 | 0.7285 |
| No log | 0.19 | 130 | 0.2997 | 0.9068 | 0.7426 |
| No log | 0.21 | 140 | 0.2881 | 0.9057 | 0.7434 |
| No log | 0.22 | 150 | 0.2892 | 0.9094 | 0.7587 |
| No log | 0.24 | 160 | 0.2771 | 0.9149 | 0.7801 |
| No log | 0.25 | 170 | 0.2779 | 0.9135 | 0.7782 |
| No log | 0.27 | 180 | 0.2992 | 0.9109 | 0.7720 |
| No log | 0.28 | 190 | 0.2809 | 0.9083 | 0.7622 |
| No log | 0.3 | 200 | 0.2636 | 0.9146 | 0.7680 |
| No log | 0.31 | 210 | 0.3381 | 0.9079 | 0.7694 |
| No log | 0.33 | 220 | 0.2661 | 0.9197 | 0.7858 |
| No log | 0.34 | 230 | 0.3377 | 0.8854 | 0.7582 |
| No log | 0.36 | 240 | 0.2614 | 0.9190 | 0.7881 |
| No log | 0.37 | 250 | 0.2459 | 0.9264 | 0.7981 |
| No log | 0.38 | 260 | 0.2490 | 0.9246 | 0.7934 |
| No log | 0.4 | 270 | 0.2475 | 0.9197 | 0.7876 |
| No log | 0.41 | 280 | 0.2648 | 0.9161 | 0.7840 |
| No log | 0.43 | 290 | 0.2533 | 0.9249 | 0.8010 |
| No log | 0.44 | 300 | 0.2446 | 0.9234 | 0.8067 |
| No log | 0.46 | 310 | 0.2271 | 0.9260 | 0.8114 |
| No log | 0.47 | 320 | 0.2219 | 0.9246 | 0.8211 |
| No log | 0.49 | 330 | 0.2269 | 0.9320 | 0.8306 |
| No log | 0.5 | 340 | 0.2276 | 0.9264 | 0.8219 |
| No log | 0.52 | 350 | 0.2835 | 0.9201 | 0.7994 |
| No log | 0.53 | 360 | 0.2787 | 0.9231 | 0.8029 |
| No log | 0.55 | 370 | 0.2317 | 0.9301 | 0.8275 |
| No log | 0.56 | 380 | 0.2502 | 0.9131 | 0.8076 |
| No log | 0.58 | 390 | 0.2254 | 0.9294 | 0.8321 |
| No log | 0.59 | 400 | 0.2066 | 0.9312 | 0.8215 |
| No log | 0.61 | 410 | 0.2013 | 0.9342 | 0.8391 |
| No log | 0.62 | 420 | 0.2295 | 0.9260 | 0.8279 |
| No log | 0.64 | 430 | 0.2100 | 0.9338 | 0.8428 |
| No log | 0.65 | 440 | 0.2129 | 0.9316 | 0.8297 |
| No log | 0.67 | 450 | 0.2135 | 0.9327 | 0.8203 |
| No log | 0.68 | 460 | 0.2681 | 0.9212 | 0.8028 |
| No log | 0.7 | 470 | 0.2178 | 0.9320 | 0.8312 |
| No log | 0.71 | 480 | 0.1999 | 0.9342 | 0.8321 |
| No log | 0.72 | 490 | 0.2172 | 0.9305 | 0.8334 |
| 0.2988 | 0.74 | 500 | 0.2086 | 0.9308 | 0.8368 |
| 0.2988 | 0.75 | 510 | 0.2052 | 0.9342 | 0.8430 |
| 0.2988 | 0.77 | 520 | 0.2111 | 0.9331 | 0.8333 |
| 0.2988 | 0.78 | 530 | 0.2279 | 0.9327 | 0.8250 |
| 0.2988 | 0.8 | 540 | 0.2361 | 0.9271 | 0.8164 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Phaaarus/QLoRA_replica_16rank_QKadap | Phaaarus | 2023-08-09T18:02:31Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T18:02:03Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s6789_v3_l5_v50 | KingKazma | 2023-08-09T17:51:16Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:51:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e0_s6789_v3_l5_v50 | KingKazma | 2023-08-09T17:44:30Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:44:29Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
rashmi035/wav2vec2-large-mms-1b-hindi_2-colab | rashmi035 | 2023-08-09T17:44:01Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-fl102",
"base_model:finetune:facebook/mms-1b-fl102",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-01T18:01:18Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-fl102
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-hindi_2-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-hindi_2-colab
This model is a fine-tuned version of [facebook/mms-1b-fl102](https://huggingface.co/facebook/mms-1b-fl102) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1619
- Wer: 0.9015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.5635 | 0.02 | 20 | 7.4070 | 0.9985 |
| 13.6122 | 0.04 | 40 | 14.5202 | 1.0 |
| 10.4272 | 0.06 | 60 | 8.7994 | 1.5440 |
| 8.1195 | 0.08 | 80 | 10.3713 | 1.0 |
| 9.9347 | 0.1 | 100 | 7.1064 | 1.0 |
| 5.752 | 0.12 | 120 | 5.6953 | 1.0 |
| 5.1715 | 0.14 | 140 | 5.0103 | 1.0 |
| 6.3111 | 0.15 | 160 | 4.6935 | 1.0 |
| 4.4929 | 0.17 | 180 | 5.4670 | 1.0263 |
| 6.038 | 0.19 | 200 | 5.6732 | 1.3148 |
| 4.1732 | 0.21 | 220 | 4.2880 | 1.0015 |
| 3.8954 | 0.23 | 240 | 4.3895 | 1.0 |
| 3.9351 | 0.25 | 260 | 3.7766 | 1.0 |
| 3.6591 | 0.27 | 280 | 3.7521 | 1.0 |
| 3.6009 | 0.29 | 300 | 3.8260 | 1.0 |
| 3.5822 | 0.31 | 320 | 3.5655 | 1.0 |
| 3.5705 | 0.33 | 340 | 3.6623 | 1.0 |
| 3.6825 | 0.35 | 360 | 3.5988 | 1.0 |
| 3.5239 | 0.37 | 380 | 3.5307 | 1.0 |
| 3.558 | 0.39 | 400 | 3.5847 | 1.0 |
| 3.4658 | 0.41 | 420 | 3.4300 | 1.0 |
| 3.4045 | 0.43 | 440 | 3.5261 | 1.0 |
| 3.4564 | 0.44 | 460 | 3.4799 | 1.0 |
| 3.4403 | 0.46 | 480 | 3.4126 | 1.0 |
| 3.4733 | 0.48 | 500 | 3.5358 | 1.0 |
| 3.445 | 0.5 | 520 | 3.3526 | 1.0 |
| 3.4155 | 0.52 | 540 | 3.3508 | 1.0 |
| 3.412 | 0.54 | 560 | 3.3205 | 1.0 |
| 3.2547 | 0.56 | 580 | 3.3143 | 1.0 |
| 3.2652 | 0.58 | 600 | 3.3057 | 1.0 |
| 3.1801 | 0.6 | 620 | 3.2361 | 1.0 |
| 3.2835 | 0.62 | 640 | 3.3567 | 1.0 |
| 3.3545 | 0.64 | 660 | 3.2300 | 1.0 |
| 3.1898 | 0.66 | 680 | 3.1771 | 1.0 |
| 3.1109 | 0.68 | 700 | 3.3033 | 1.0 |
| 3.1631 | 0.7 | 720 | 3.0177 | 0.9997 |
| 3.0386 | 0.71 | 740 | 3.0339 | 0.9997 |
| 3.074 | 0.73 | 760 | 3.0702 | 1.0 |
| 2.8598 | 0.75 | 780 | 2.8458 | 1.0 |
| 2.8116 | 0.77 | 800 | 2.9836 | 0.9995 |
| 2.8086 | 0.79 | 820 | 2.5641 | 1.0 |
| 2.6645 | 0.81 | 840 | 2.6182 | 1.0 |
| 2.7035 | 0.83 | 860 | 2.5176 | 0.9995 |
| 2.4736 | 0.85 | 880 | 2.3965 | 0.9995 |
| 2.6259 | 0.87 | 900 | 2.5697 | 1.0 |
| 2.44 | 0.89 | 920 | 2.3085 | 1.0 |
| 2.22 | 0.91 | 940 | 2.1551 | 0.9997 |
| 2.5394 | 0.93 | 960 | 2.1955 | 1.0 |
| 2.1734 | 0.95 | 980 | 2.1015 | 1.0 |
| 2.407 | 0.97 | 1000 | 2.3892 | 1.0 |
| 2.1967 | 0.99 | 1020 | 1.9439 | 0.9943 |
| 2.1704 | 1.0 | 1040 | 1.9236 | 0.9827 |
| 1.9929 | 1.02 | 1060 | 1.9353 | 0.9964 |
| 2.1652 | 1.04 | 1080 | 2.1551 | 0.9899 |
| 2.003 | 1.06 | 1100 | 1.9230 | 0.9820 |
| 2.0048 | 1.08 | 1120 | 1.9293 | 0.9869 |
| 2.1665 | 1.1 | 1140 | 1.8845 | 0.9990 |
| 1.8297 | 1.12 | 1160 | 1.7173 | 0.9866 |
| 1.8388 | 1.14 | 1180 | 1.8550 | 0.9871 |
| 1.8399 | 1.16 | 1200 | 1.7772 | 0.9789 |
| 1.7256 | 1.18 | 1220 | 1.7840 | 0.9863 |
| 2.0516 | 1.2 | 1240 | 1.7693 | 0.9520 |
| 1.8014 | 1.22 | 1260 | 1.6744 | 0.9814 |
| 1.8244 | 1.24 | 1280 | 1.6614 | 0.9907 |
| 1.8233 | 1.26 | 1300 | 1.5975 | 0.9948 |
| 1.6977 | 1.28 | 1320 | 1.5738 | 0.9874 |
| 1.9592 | 1.29 | 1340 | 1.5922 | 0.9897 |
| 1.6181 | 1.31 | 1360 | 1.4764 | 0.9626 |
| 1.6739 | 1.33 | 1380 | 1.5381 | 0.9928 |
| 1.6855 | 1.35 | 1400 | 1.4613 | 0.9410 |
| 1.5535 | 1.37 | 1420 | 1.4878 | 0.9348 |
| 1.7467 | 1.39 | 1440 | 1.6077 | 0.9618 |
| 1.6744 | 1.41 | 1460 | 1.4419 | 0.9727 |
| 1.6115 | 1.43 | 1480 | 1.6700 | 0.9379 |
| 1.7357 | 1.45 | 1500 | 1.5228 | 0.9964 |
| 1.7096 | 1.47 | 1520 | 1.4350 | 0.9611 |
| 1.7402 | 1.49 | 1540 | 1.4351 | 0.9567 |
| 1.4819 | 1.51 | 1560 | 1.4062 | 0.9727 |
| 1.6863 | 1.53 | 1580 | 1.4908 | 0.9889 |
| 1.5539 | 1.55 | 1600 | 1.4099 | 0.9827 |
| 1.5733 | 1.57 | 1620 | 1.4508 | 0.9209 |
| 1.7331 | 1.58 | 1640 | 1.3913 | 0.9755 |
| 1.4361 | 1.6 | 1660 | 1.3525 | 0.9237 |
| 1.4806 | 1.62 | 1680 | 1.3748 | 0.9557 |
| 1.5834 | 1.64 | 1700 | 1.3428 | 0.9386 |
| 1.4226 | 1.66 | 1720 | 1.2990 | 0.9523 |
| 1.6159 | 1.68 | 1740 | 1.3351 | 0.9428 |
| 1.4486 | 1.7 | 1760 | 1.2982 | 0.9276 |
| 1.3682 | 1.72 | 1780 | 1.3810 | 0.9312 |
| 1.3828 | 1.74 | 1800 | 1.2621 | 0.9242 |
| 1.4604 | 1.76 | 1820 | 1.2883 | 0.9051 |
| 1.4368 | 1.78 | 1840 | 1.2462 | 0.9191 |
| 1.3652 | 1.8 | 1860 | 1.2544 | 0.8935 |
| 1.4347 | 1.82 | 1880 | 1.2682 | 0.9185 |
| 1.4109 | 1.84 | 1900 | 1.2385 | 0.8966 |
| 1.251 | 1.86 | 1920 | 1.2293 | 0.9015 |
| 1.4793 | 1.87 | 1940 | 1.2410 | 0.9075 |
| 1.2481 | 1.89 | 1960 | 1.1916 | 0.9134 |
| 1.2951 | 1.91 | 1980 | 1.2061 | 0.8891 |
| 1.3724 | 1.93 | 2000 | 1.1730 | 0.9381 |
| 1.3093 | 1.95 | 2020 | 1.1763 | 0.8951 |
| 1.3305 | 1.97 | 2040 | 1.1709 | 0.9028 |
| 1.3152 | 1.99 | 2060 | 1.1619 | 0.9015 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
papepipopu/q-FrozenLake-v1-4x4-noSlippery-course | papepipopu | 2023-08-09T17:41:08Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T17:41:05Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-course
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# load_from_hub is the helper defined in the Deep RL Course notebook; it downloads and unpickles the saved Q-table dictionary
model = load_from_hub(repo_id="papepipopu/q-FrozenLake-v1-4x4-noSlippery-course", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e9_s6789_v3_l5_v100 | KingKazma | 2023-08-09T17:38:03Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:38:02Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
emptor/distilgender-es-2M | emptor | 2023-08-09T17:34:13Z | 1,110 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"es",
"dataset:ittailup/issste",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T01:29:28Z | ---
license: apache-2.0
datasets:
- ittailup/issste
language:
- es
metrics:
- accuracy: 0.9951
widget:
- text: AGATA
- text: GABRIEL
---
## Model Card
### Overview
This model card provides details about a trained model, its training process, and evaluation metrics. This information ensures transparency and assists users in understanding the model's performance and behavior.
### Training Details
- **Training Epochs**: The model was trained for 2 epochs.
- **Training Steps**: The model underwent 1856 training steps.
- **Training Runtime**: The model's training runtime was approximately 2680.184 seconds.
- **Training Speed**: The model trained at a rate of 0.692 steps per second and processed approximately 1417.813 samples per second.
- **Learning Rate**: The learning rate during training was approximately 0.0000095905.
- **Training Loss**: The average training loss recorded was approximately 0.0184, with a specific loss value of 0.023423514232553285.
### Evaluation Details
- **Evaluation Loss**: The model achieved an evaluation loss of 0.017659155651926994.
- **Evaluation Runtime**: The evaluation process took approximately 23.8414 seconds.
- **Evaluation Speed**: The model was evaluated at a rate of 2.055 steps per second, processing approximately 4194.378 samples per second.
### Performance Metrics
- **Accuracy**: The model achieved an accuracy of 0.9951 during evaluation.
- **Precision**: The precision of the model is approximately 0.9957234121187588.
- **Recall**: The model's recall is approximately 0.9956533216014078.
- **F1-Score**: The F1-Score for the model is approximately 0.995688365626595.
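### Example Usage
A minimal sketch using the `transformers` pipeline with the two widget examples from this card; the label names returned are whatever the model config defines.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="emptor/distilgender-es-2M")

# The widget examples from this card (uppercase Spanish given names).
print(classifier(["AGATA", "GABRIEL"]))
```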
|
cyriac880/dog | cyriac880 | 2023-08-09T17:29:51Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-09T17:17:29Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### DOG Dreambooth model trained by cyriac880 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET294
Sample pictures of this concept:
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l5_v100 | KingKazma | 2023-08-09T17:29:26Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:29:25Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e9_s6789_v3_l5_v20 | KingKazma | 2023-08-09T17:22:43Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:22:42Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
alokedeep/distilbert-base-uncased-finetuned-emotion | alokedeep | 2023-08-09T17:18:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T13:50:58Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9265400264321207
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2135
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8263 | 1.0 | 250 | 0.3211 | 0.9035 | 0.9024 |
| 0.2495 | 2.0 | 500 | 0.2135 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e8_s6789_v3_l5_v20 | KingKazma | 2023-08-09T17:16:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:16:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Norod78/sd15-bender-lora | Norod78 | 2023-08-09T17:15:57Z | 6 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"dataset:Norod78/bender-blip2-captions-512",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-14T08:25:16Z | ---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A photo of bender
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
datasets:
- Norod78/bender-blip2-captions-512
inference: true
widget:
- text: >-
A picture of a a cute little bender working as a pokemon trainer
- text: >-
A picture of Godzilla as bender, Very detailed, clean, high quality, sharp image
- text: A picture of bender
- text: A photo of Bender rocking out on stage, shredding a guitar with sparks flying in the air. robot, reflective metal
---
LoRA for generating images of Bender, the robot from Futurama
A diffusers version of [this model](https://civitai.com/models/85775/bender-lora)
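Below is a minimal, hedged sketch of using this LoRA with diffusers; it assumes a recent diffusers release with `load_lora_weights` support, and the dtype/device settings are only an example.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Norod78/sd15-bender-lora")  # LoRA weights from this repo

image = pipe("A photo of Bender working as a pokemon trainer", num_inference_steps=30).images[0]
image.save("bender.png")
```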
Make sure to include the word "Bender" in your prompt |
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e7_s6789_v3_l5_v20 | KingKazma | 2023-08-09T17:09:25Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:09:24Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e5_s6789_v3_l5_v100 | KingKazma | 2023-08-09T17:03:39Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:03:38Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e6_s6789_v3_l5_v20 | KingKazma | 2023-08-09T17:02:46Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:02:45Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
psychodoge/llama2-qlora-finetunined-friendchathinglish | psychodoge | 2023-08-09T17:00:07Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T17:00:01Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e5_s6789_v3_l5_v20 | KingKazma | 2023-08-09T16:56:07Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:56:06Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3_l5_v100 | KingKazma | 2023-08-09T16:55:03Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:55:02Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e4_s6789_v3_l5_v20 | KingKazma | 2023-08-09T16:49:28Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:49:27Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
scarlett623/wav2vec2-large-xlsr53-zh-cn-subset-colab | scarlett623 | 2023-08-09T16:46:39Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-09T03:52:42Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr53-zh-cn-subset-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: zh-CN
split: test[:20%]
args: zh-CN
metrics:
- name: Wer
type: wer
value: 0.9394977168949772
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr53-zh-cn-subset-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3992
- Wer: 0.9395
- Cer: 0.3184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 13
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 26
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log | 1.9 | 400 | 33.6533 | 1.0 | 1.0 |
| 70.5767 | 3.81 | 800 | 6.8140 | 1.0 | 1.0 |
| 7.1379 | 5.71 | 1200 | 6.5163 | 1.0 | 1.0 |
| 6.4771 | 7.62 | 1600 | 6.4602 | 1.0 | 1.0 |
| 6.3627 | 9.52 | 2000 | 6.3406 | 1.0 | 0.9700 |
| 6.3627 | 11.43 | 2400 | 6.1021 | 1.0 | 0.9678 |
| 6.1201 | 13.33 | 2800 | 5.1523 | 1.0 | 0.8385 |
| 5.3531 | 15.24 | 3200 | 4.2224 | 1.0 | 0.7084 |
| 4.1733 | 17.14 | 3600 | 3.6981 | 1.0 | 0.6332 |
| 3.5472 | 19.05 | 4000 | 3.2708 | 0.9994 | 0.5827 |
| 3.5472 | 20.95 | 4400 | 2.9629 | 0.9989 | 0.5510 |
| 3.0668 | 22.86 | 4800 | 2.7122 | 0.9943 | 0.5165 |
| 2.7248 | 24.76 | 5200 | 2.5171 | 0.9914 | 0.4976 |
| 2.4609 | 26.67 | 5600 | 2.3538 | 0.9897 | 0.4759 |
| 2.2323 | 28.57 | 6000 | 2.2112 | 0.9874 | 0.4555 |
| 2.2323 | 30.48 | 6400 | 2.0850 | 0.9834 | 0.4370 |
| 2.0438 | 32.38 | 6800 | 1.9982 | 0.9806 | 0.4261 |
| 1.8837 | 34.29 | 7200 | 1.9179 | 0.9766 | 0.4137 |
| 1.7646 | 36.19 | 7600 | 1.8278 | 0.9766 | 0.4030 |
| 1.6469 | 38.1 | 8000 | 1.7627 | 0.9755 | 0.3937 |
| 1.6469 | 40.0 | 8400 | 1.7063 | 0.9709 | 0.3853 |
| 1.5422 | 41.9 | 8800 | 1.6649 | 0.9663 | 0.3787 |
| 1.4561 | 43.81 | 9200 | 1.6336 | 0.9697 | 0.3714 |
| 1.3842 | 45.71 | 9600 | 1.5943 | 0.9606 | 0.3647 |
| 1.3164 | 47.62 | 10000 | 1.5681 | 0.9669 | 0.3621 |
| 1.3164 | 49.52 | 10400 | 1.5535 | 0.9600 | 0.3582 |
| 1.2654 | 51.43 | 10800 | 1.5354 | 0.9538 | 0.3544 |
| 1.2186 | 53.33 | 11200 | 1.5003 | 0.9555 | 0.3482 |
| 1.1781 | 55.24 | 11600 | 1.4979 | 0.9572 | 0.3473 |
| 1.1344 | 57.14 | 12000 | 1.4820 | 0.9549 | 0.3453 |
| 1.1344 | 59.05 | 12400 | 1.4707 | 0.9509 | 0.3396 |
| 1.0965 | 60.95 | 12800 | 1.4657 | 0.9509 | 0.3384 |
| 1.0637 | 62.86 | 13200 | 1.4610 | 0.9509 | 0.3371 |
| 1.0306 | 64.76 | 13600 | 1.4461 | 0.9509 | 0.3361 |
| 1.0014 | 66.67 | 14000 | 1.4437 | 0.9503 | 0.3328 |
| 1.0014 | 68.57 | 14400 | 1.4334 | 0.9463 | 0.3304 |
| 0.9758 | 70.48 | 14800 | 1.4267 | 0.9429 | 0.3295 |
| 0.9486 | 72.38 | 15200 | 1.4250 | 0.9469 | 0.3269 |
| 0.933 | 74.29 | 15600 | 1.4214 | 0.9441 | 0.3273 |
| 0.9121 | 76.19 | 16000 | 1.4161 | 0.9441 | 0.3267 |
| 0.9121 | 78.1 | 16400 | 1.4137 | 0.9446 | 0.3268 |
| 0.9001 | 80.0 | 16800 | 1.4216 | 0.9446 | 0.3253 |
| 0.8789 | 81.9 | 17200 | 1.4164 | 0.9435 | 0.3264 |
| 0.8659 | 83.81 | 17600 | 1.3996 | 0.9424 | 0.3216 |
| 0.8471 | 85.71 | 18000 | 1.4079 | 0.9458 | 0.3226 |
| 0.8471 | 87.62 | 18400 | 1.4042 | 0.9412 | 0.3214 |
| 0.8387 | 89.52 | 18800 | 1.4073 | 0.9424 | 0.3214 |
| 0.8299 | 91.43 | 19200 | 1.4005 | 0.9418 | 0.3192 |
| 0.8257 | 93.33 | 19600 | 1.4040 | 0.9406 | 0.3200 |
| 0.813 | 95.24 | 20000 | 1.4012 | 0.9412 | 0.3184 |
| 0.813 | 97.14 | 20400 | 1.4011 | 0.9389 | 0.3183 |
| 0.8062 | 99.05 | 20800 | 1.3992 | 0.9395 | 0.3184 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s6789_v3_l5_v100 | KingKazma | 2023-08-09T16:37:52Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:37:51Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e2_s6789_v3_l5_v20 | KingKazma | 2023-08-09T16:36:10Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:36:09Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s6789_v3_l5_v20 | KingKazma | 2023-08-09T16:29:31Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:29:30Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3_l5_v100 | KingKazma | 2023-08-09T16:29:16Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:29:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
avurity/layoutlmv3-finetuned-wildreceipt | avurity | 2023-08-09T16:24:08Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:wildreceipt",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-05T16:29:09Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- wildreceipt
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-wildreceipt
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wildreceipt
type: wildreceipt
config: WildReceipt
split: test
args: WildReceipt
metrics:
- name: Precision
type: precision
value: 0.8738394320043692
- name: Recall
type: recall
value: 0.88093599449415
- name: F1
type: f1
value: 0.8773733634930428
- name: Accuracy
type: accuracy
value: 0.9245552383044147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-wildreceipt
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the wildreceipt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3068
- Precision: 0.8738
- Recall: 0.8809
- F1: 0.8774
- Accuracy: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.32 | 100 | 1.3498 | 0.6130 | 0.3126 | 0.4140 | 0.6742 |
| No log | 0.63 | 200 | 0.8939 | 0.6665 | 0.5317 | 0.5915 | 0.7815 |
| No log | 0.95 | 300 | 0.7159 | 0.7311 | 0.6425 | 0.6840 | 0.8161 |
| No log | 1.26 | 400 | 0.5901 | 0.7554 | 0.6690 | 0.7095 | 0.8405 |
| 1.0677 | 1.58 | 500 | 0.5263 | 0.7632 | 0.7232 | 0.7427 | 0.8578 |
| 1.0677 | 1.89 | 600 | 0.4759 | 0.7871 | 0.7777 | 0.7824 | 0.8774 |
| 1.0677 | 2.21 | 700 | 0.4299 | 0.8054 | 0.8070 | 0.8062 | 0.8890 |
| 1.0677 | 2.52 | 800 | 0.4165 | 0.8064 | 0.8311 | 0.8185 | 0.8937 |
| 1.0677 | 2.84 | 900 | 0.3845 | 0.8344 | 0.8300 | 0.8322 | 0.9005 |
| 0.4267 | 3.15 | 1000 | 0.3540 | 0.8433 | 0.8318 | 0.8375 | 0.9056 |
| 0.4267 | 3.47 | 1100 | 0.3429 | 0.8362 | 0.8540 | 0.8450 | 0.9086 |
| 0.4267 | 3.79 | 1200 | 0.3274 | 0.8451 | 0.8545 | 0.8498 | 0.9105 |
| 0.4267 | 4.1 | 1300 | 0.3433 | 0.8397 | 0.8535 | 0.8466 | 0.9092 |
| 0.4267 | 4.42 | 1400 | 0.3181 | 0.8514 | 0.8604 | 0.8559 | 0.9154 |
| 0.2869 | 4.73 | 1500 | 0.3191 | 0.8472 | 0.8637 | 0.8554 | 0.9129 |
| 0.2869 | 5.05 | 1600 | 0.3128 | 0.8613 | 0.8658 | 0.8635 | 0.9182 |
| 0.2869 | 5.36 | 1700 | 0.3121 | 0.8622 | 0.8695 | 0.8658 | 0.9182 |
| 0.2869 | 5.68 | 1800 | 0.3230 | 0.8473 | 0.8661 | 0.8566 | 0.9140 |
| 0.2869 | 5.99 | 1900 | 0.2986 | 0.8729 | 0.8633 | 0.8681 | 0.9209 |
| 0.2134 | 6.31 | 2000 | 0.3032 | 0.8555 | 0.8694 | 0.8624 | 0.9169 |
| 0.2134 | 6.62 | 2100 | 0.3056 | 0.8705 | 0.8710 | 0.8708 | 0.9220 |
| 0.2134 | 6.94 | 2200 | 0.3122 | 0.8630 | 0.8790 | 0.8709 | 0.9217 |
| 0.2134 | 7.26 | 2300 | 0.3047 | 0.8692 | 0.8778 | 0.8734 | 0.9215 |
| 0.2134 | 7.57 | 2400 | 0.3103 | 0.8701 | 0.8780 | 0.8741 | 0.9225 |
| 0.1661 | 7.89 | 2500 | 0.3080 | 0.8712 | 0.8787 | 0.8749 | 0.9226 |
| 0.1661 | 8.2 | 2600 | 0.3011 | 0.8653 | 0.8834 | 0.8743 | 0.9236 |
| 0.1661 | 8.52 | 2700 | 0.3034 | 0.8735 | 0.8798 | 0.8766 | 0.9247 |
| 0.1661 | 8.83 | 2800 | 0.3054 | 0.8698 | 0.8793 | 0.8745 | 0.9238 |
| 0.1661 | 9.15 | 2900 | 0.3105 | 0.8697 | 0.8812 | 0.8754 | 0.9237 |
| 0.1415 | 9.46 | 3000 | 0.3068 | 0.8738 | 0.8809 | 0.8774 | 0.9246 |
| 0.1415 | 9.78 | 3100 | 0.3086 | 0.8730 | 0.8793 | 0.8761 | 0.9229 |
| 0.1415 | 10.09 | 3200 | 0.3013 | 0.8755 | 0.8830 | 0.8792 | 0.9256 |
| 0.1415 | 10.41 | 3300 | 0.3107 | 0.8692 | 0.8815 | 0.8753 | 0.9241 |
| 0.1415 | 10.73 | 3400 | 0.3073 | 0.8759 | 0.8794 | 0.8777 | 0.9261 |
| 0.1239 | 11.04 | 3500 | 0.3109 | 0.8727 | 0.8819 | 0.8773 | 0.9253 |
| 0.1239 | 11.36 | 3600 | 0.3124 | 0.8723 | 0.8790 | 0.8756 | 0.9243 |
| 0.1239 | 11.67 | 3700 | 0.3171 | 0.8724 | 0.8805 | 0.8764 | 0.9241 |
| 0.1239 | 11.99 | 3800 | 0.3081 | 0.8739 | 0.8804 | 0.8771 | 0.9254 |
| 0.1239 | 12.3 | 3900 | 0.3095 | 0.8735 | 0.8798 | 0.8766 | 0.9254 |
| 0.1106 | 12.62 | 4000 | 0.3094 | 0.8740 | 0.8796 | 0.8768 | 0.9254 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e0_s6789_v3_l5_v20 | KingKazma | 2023-08-09T16:22:51Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:22:50Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s6789_v3_l5_v100 | KingKazma | 2023-08-09T16:20:40Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:20:39Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
kaoyer/pokemon-lora | kaoyer | 2023-08-09T16:17:44Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-08-09T13:49:50Z |
---
license: creativeml-openrail-m
base_model: /root/autodl-fs/pre_trained_models/runwayml-stable-diffusion-v1-5/runwayml-stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - kaoyer/pokemon-lora
These are LoRA adaptation weights for /root/autodl-fs/pre_trained_models/runwayml-stable-diffusion-v1-5/runwayml-stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e-1_s6789_v3_l5_v20 | KingKazma | 2023-08-09T16:16:11Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:16:10Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
MarioNapoli/DynamicWav2Vec_TEST_9 | MarioNapoli | 2023-08-09T16:09:04Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_1_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-03T14:29:32Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_1_0
model-index:
- name: DynamicWav2Vec_TEST_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DynamicWav2Vec_TEST_9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_1_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e9_s6789_v3_l5_v20 | KingKazma | 2023-08-09T16:05:14Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:05:14Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
foilfoilfoil/cheesegulag3.5 | foilfoilfoil | 2023-08-09T16:04:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:04:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l5_v50 | KingKazma | 2023-08-09T16:03:48Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T16:03:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3_l5_v50 | KingKazma | 2023-08-09T15:56:16Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:56:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
adon81/bert-finetuned-fishing-NER | adon81 | 2023-08-09T15:48:12Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:adon81/bert-finetuned-ner",
"base_model:finetune:adon81/bert-finetuned-ner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-09T13:13:46Z | ---
license: apache-2.0
base_model: adon81/bert-finetuned-ner
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-fishing-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-fishing-NER
This model is a fine-tuned version of [adon81/bert-finetuned-ner](https://huggingface.co/adon81/bert-finetuned-ner) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300000000000000000000000000000000
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Shafaet02/bert-fine-tuned-cola | Shafaet02 | 2023-08-09T15:48:02Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T08:59:17Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: Shafaet02/bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Shafaet02/bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2831
- Validation Loss: 0.4311
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4914 | 0.4282 | 0 |
| 0.2831 | 0.4311 | 1 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.11.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Francesco-A/bert-finetuned-ner | Francesco-A | 2023-08-09T15:45:53Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-09T15:29:35Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9323631552836117
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.940528818083243
- name: Accuracy
type: accuracy
value: 0.9861217401542356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9324
- Recall: 0.9488
- F1: 0.9405
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0774 | 1.0 | 1756 | 0.0764 | 0.9146 | 0.9337 | 0.9241 | 0.9802 |
| 0.0394 | 2.0 | 3512 | 0.0554 | 0.9265 | 0.9483 | 0.9373 | 0.9860 |
| 0.0261 | 3.0 | 5268 | 0.0592 | 0.9324 | 0.9488 | 0.9405 | 0.9861 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l5_v20 | KingKazma | 2023-08-09T15:44:09Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:44:08Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
mbueno/llama2-qlora-finetunined-french | mbueno | 2023-08-09T15:40:36Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:40:28Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e5_s6789_v3_l5_v20 | KingKazma | 2023-08-09T15:37:07Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:37:06Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Ripo-2007/dreambooth_alfonso | Ripo-2007 | 2023-08-09T15:32:17Z | 4 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-08-09T13:35:48Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: alfonsoaraco
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
santiagotoso/ppo-LunarLander-v2 | santiagotoso | 2023-08-09T15:27:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T13:24:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 232.20 +/- 76.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
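A possible sketch of loading and evaluating the agent with `huggingface_sb3`; the checkpoint filename below is an assumption and should be adjusted to the file actually stored in this repo.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repo's file list.
checkpoint = load_from_hub(repo_id="santiagotoso/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```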
|
murodbek/uzroberta-panx-uz | murodbek | 2023-08-09T15:27:23Z | 167 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-04-13T09:47:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: uzroberta-panx-uz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uzroberta-panx-uz
This model is a fine-tuned version of [rifkat/uztext-3Gb-BPE-Roberta](https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- F1: 0.9175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0515 | 1.0 | 150 | 0.1373 | 0.9141 |
| 0.0415 | 2.0 | 300 | 0.1268 | 0.9194 |
| 0.0101 | 3.0 | 450 | 0.1225 | 0.9416 |
| 0.0038 | 4.0 | 600 | 0.1426 | 0.9353 |
| 0.0004 | 5.0 | 750 | 0.1458 | 0.9320 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
Meohong/Dialect-Polyglot-12.8b-QLoRA | Meohong | 2023-08-09T15:26:17Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:26:09Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
felixshier/osc-01-bert-finetuned | felixshier | 2023-08-09T15:24:55Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T13:35:56Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: osc-01-bert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# osc-01-bert-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3193
- Validation Loss: 0.7572
- Train Precision: 0.6026
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 110, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Epoch |
|:----------:|:---------------:|:---------------:|:-----:|
| 0.6873 | 0.6937 | 0.5147 | 0 |
| 0.6544 | 0.6854 | 0.5 | 1 |
| 0.6127 | 0.7071 | 0.5242 | 2 |
| 0.5651 | 0.6813 | 0.5591 | 3 |
| 0.5015 | 0.7012 | 0.5747 | 4 |
| 0.4006 | 0.7292 | 0.5882 | 5 |
| 0.3193 | 0.7572 | 0.6026 | 6 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
felixshier/csc-01-bert-finetuned | felixshier | 2023-08-09T15:24:52Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T13:35:35Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: csc-01-bert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# csc-01-bert-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4789
- Validation Loss: 0.7231
- Train Precision: 0.6429
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 70, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Epoch |
|:----------:|:---------------:|:---------------:|:-----:|
| 0.7100 | 0.7421 | 0.0 | 0 |
| 0.6764 | 0.6861 | 0.625 | 1 |
| 0.6311 | 0.6838 | 0.5862 | 2 |
| 0.5909 | 0.7072 | 0.6286 | 3 |
| 0.5413 | 0.7504 | 0.6667 | 4 |
| 0.4789 | 0.7231 | 0.6429 | 5 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3_l5_v20 | KingKazma | 2023-08-09T15:23:02Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:23:01Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/vit-base_rvl-cdip_r2_32 | jordyvl | 2023-08-09T15:18:05Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-08-08T08:10:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip_r2_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip_r2_32
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6372
- Accuracy: 0.8985
- Brier Loss: 0.1792
- Nll: 1.1736
- F1 Micro: 0.8985
- F1 Macro: 0.8987
- Ece: 0.0847
- Aurc: 0.0201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.1647 | 1.0 | 3334 | 0.4024 | 0.8887 | 0.1682 | 1.2086 | 0.8887 | 0.8891 | 0.0457 | 0.0178 |
| 0.1418 | 2.0 | 6668 | 0.4075 | 0.8941 | 0.1646 | 1.2066 | 0.8941 | 0.8942 | 0.0522 | 0.0177 |
| 0.0989 | 3.0 | 10002 | 0.4409 | 0.8932 | 0.1690 | 1.1966 | 0.8932 | 0.8932 | 0.0647 | 0.0175 |
| 0.0614 | 4.0 | 13336 | 0.4781 | 0.8944 | 0.1730 | 1.2083 | 0.8944 | 0.8951 | 0.0694 | 0.0181 |
| 0.0392 | 5.0 | 16670 | 0.5329 | 0.8959 | 0.1761 | 1.1777 | 0.8959 | 0.8958 | 0.0776 | 0.0187 |
| 0.0231 | 6.0 | 20004 | 0.5714 | 0.8957 | 0.1799 | 1.2083 | 0.8957 | 0.8958 | 0.0813 | 0.0198 |
| 0.0126 | 7.0 | 23338 | 0.6002 | 0.8966 | 0.1802 | 1.1732 | 0.8966 | 0.8972 | 0.0839 | 0.0197 |
| 0.0079 | 8.0 | 26672 | 0.6193 | 0.8984 | 0.1789 | 1.1849 | 0.8984 | 0.8985 | 0.0833 | 0.0200 |
| 0.0049 | 9.0 | 30006 | 0.6333 | 0.8976 | 0.1798 | 1.1906 | 0.8976 | 0.8978 | 0.0851 | 0.0205 |
| 0.0034 | 10.0 | 33340 | 0.6372 | 0.8985 | 0.1792 | 1.1736 | 0.8985 | 0.8987 | 0.0847 | 0.0201 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s6789_v3_l5_v20 | KingKazma | 2023-08-09T15:16:00Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:15:59Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
imvladikon/alephbertgimmel_parashoot | imvladikon | 2023-08-09T15:10:27Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"he",
"dataset:imvladikon/parashoot",
"base_model:imvladikon/alephbertgimmel-base-512",
"base_model:finetune:imvladikon/alephbertgimmel-base-512",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-02T07:44:16Z | ---
base_model: imvladikon/alephbertgimmel-base-512
tags:
- generated_from_trainer
datasets:
- imvladikon/parashoot
model-index:
- name: alephbertgimmel_parashoot
results: []
language:
- he
metrics:
- f1
- exact_match
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alephbertgimmel_parashoot
This model is a fine-tuned version of [imvladikon/alephbertgimmel-base-512](https://huggingface.co/imvladikon/alephbertgimmel-base-512) on the [imvladikon/parashoot](https://huggingface.co/datasets/imvladikon/parashoot) dataset.
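A minimal question-answering pipeline sketch; the question and context placeholders should be replaced with Hebrew text, matching the parashoot data.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="imvladikon/alephbertgimmel_parashoot")

# Replace the placeholders with Hebrew text, as in the parashoot dataset.
result = qa(question="<Hebrew question>", context="<Hebrew paragraph>")
print(result["answer"], result["score"])
```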
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
```
***** predict metrics *****
predict_samples = 1102
test_exact_match = 27.7073
test_f1 = 51.787
test_runtime = 0:00:32.05
test_samples_per_second = 34.383
test_steps_per_second = 4.306
```
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3 |
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3_l5_v20 | KingKazma | 2023-08-09T15:08:58Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:08:56Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s6789_v3_l5_v50 | KingKazma | 2023-08-09T15:03:33Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-09T15:03:32Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Cheetor1996/Efanatika_aku_no_onna_kanbu | Cheetor1996 | 2023-08-09T15:02:56Z | 0 | 0 | null | [
"art",
"en",
"license:cc-by-2.0",
"region:us"
] | null | 2023-08-09T15:00:15Z | ---
license: cc-by-2.0
language:
- en
tags:
- art
---
**Efanatika** from **Aku no onna kanbu**
- Trained with Anime (final-full-pruned) model.
- Recommended LoRA weights: 0.7+
- Recommended LoRA weight blocks: ALL, MIDD, OUTD, and OUTALL
- **Activation tag**: *efanatika*, use with pink hair, long hair, very long hair, colored skin, blue skin, yellow eyes, colored sclera, and black sclera. |
jcy204/cold_model2 | jcy204 | 2023-08-09T15:02:39Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-09T14:57:29Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: jcy204/cold_model2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jcy204/cold_model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3582
- Validation Loss: 0.6678
- Train Accuracy: 0.7477
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1545, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7779 | 0.6213 | 0.7392 | 0 |
| 0.5323 | 0.6326 | 0.7315 | 1 |
| 0.3582 | 0.6678 | 0.7477 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
leonard-pak/q-FrozenLake-v1-4x4-noSlippery | leonard-pak | 2023-08-09T14:59:17Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T14:58:08Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="leonard-pak/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/ToukaLora-15 | LarryAIDraw | 2023-08-09T14:58:48Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T14:39:49Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/125271/touka-kirishima-tokyo-ghoul-lora |
LarryAIDraw/MiaChristoph-10 | LarryAIDraw | 2023-08-09T14:58:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T14:39:26Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/124748/mia-christoph-tenpuru |
LarryAIDraw/GirlsFrontlineAk12 | LarryAIDraw | 2023-08-09T14:58:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T14:39:04Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/76960/ak-12-quiet-azure-girls-frontline |
gsaivinay/Llama-2-7b-Chat-GPTQ | gsaivinay | 2023-08-09T14:57:09Z | 26 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-18T19:21:58Z | ---
language:
- en
license: other
inference: true
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Meta's Llama 2 7b Chat GPTQ
## * Duplicated from TheBloke *
These files are GPTQ model files for [Meta's Llama 2 7b Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
## Prompt template: Llama-2-Chat
```
System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: {prompt}
Assistant:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 3.90 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-7b-Chat-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-7b-Chat-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Llama-2-7b-Chat-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: {prompt}
Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
# Original model card: Meta's Llama 2 7b Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The biggest model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
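For illustration, a hedged sketch of how a single-turn prompt could be assembled with those tags (the authoritative layout is the `chat_completion` reference linked above; the tag strings here follow the published format but should be checked against that code):
```python
# Illustrative only: single-turn Llama-2-Chat prompt layout.
# BOS/EOS tokens are normally added by the tokenizer, not embedded in the string.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_msg: str, user_msg: str) -> str:
    return f"{B_INST} {B_SYS}{system_msg}{E_SYS}{user_msg.strip()} {E_INST}"

print(build_prompt("You are a helpful assistant.", "Tell me about AI"))
```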
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
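As a rough back-of-envelope check (not part of the original card), the implied emission factor for the 7B run can be derived from the table values alone:
```python
# Illustrative arithmetic only; Meta's exact methodology (PUE, grid mix) is not restated here.
gpu_hours = 184_320            # Llama 2 7B
power_kw = 0.400               # 400 W peak per GPU
energy_mwh = gpu_hours * power_kw / 1000    # ~73.7 MWh
emissions_t = 31.22            # tCO2eq from the table
print(f"~{emissions_t / energy_mwh:.2f} tCO2eq per MWh")  # ~0.42
```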
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
liadraz/q-FrozenLake-v1-4x4-noSlippery | liadraz | 2023-08-09T14:54:50Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T14:54:46Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # assumed: the course examples use Gymnasium

# load_from_hub is the Deep RL Course helper that downloads the pickled Q-table dictionary
model = load_from_hub(repo_id="liadraz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
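As a follow-up, a short hedged sketch of a greedy rollout with the loaded Q-table (the `qtable` key name is assumed from the Deep RL Course helper and should be checked against the downloaded pickle):
```python
import numpy as np

qtable = np.array(model["qtable"])  # assumed key name; verify against the pickle contents
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))   # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```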
|
broAleks13/stablecode-completion-alpha-3b-4k | broAleks13 | 2023-08-09T14:49:26Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-09T14:42:38Z | ---
license: apache-2.0
---
stabilityai/stablecode-completion-alpha-3b-4k
|
twbrandon7/rl-course-unit3-dqn-SpaceInvadersNoFrameskip-v4 | twbrandon7 | 2023-08-09T14:45:26Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-09T14:44:48Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 604.00 +/- 212.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga twbrandon7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga twbrandon7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga twbrandon7
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
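Beyond the RL Zoo scripts, the agent can also be loaded directly with Stable-Baselines3. This is a minimal sketch; the zip path is an assumption based on where `rl_zoo3.load_from_hub` places files with the commands above:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed path; adjust to wherever the downloaded .zip actually lives.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the evaluation env with the Atari wrappers and 4-frame stacking used in training.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```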
|