| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
mekaneeky/backup-lora-multi-27-4-440_repo | mekaneeky | 2023-10-19T13:50:34Z | 0 | 0 | null | ["pytorch", "region:us"] | null | 2023-10-17T15:46:11Z |
{
  "epoch": 1.18,
  "eval_BLEU_ach": 26.2891,
  "eval_BLEU_lgg": 30.2572,
  "eval_BLEU_lug": 39.035,
  "eval_BLEU_mean": 31.3648,
  "eval_BLEU_nyn": 30.2598,
  "eval_BLEU_teo": 30.9829,
  "eval_loss": 2.526149034500122,
  "eval_runtime": 250.5702,
  "eval_samples_per_second": 9.977,
  "eval_steps_per_second": 1.249,
  "step": 400
}
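As a sanity check on these numbers, `eval_BLEU_mean` is just the unweighted average of the five per-language BLEU scores. A minimal sketch (the helper name is hypothetical, not part of the training code):

```python
# Per-language BLEU scores copied from the evaluation log above.
bleu_scores = {
    "ach": 26.2891,
    "lgg": 30.2572,
    "lug": 39.035,
    "nyn": 30.2598,
    "teo": 30.9829,
}

def bleu_mean(scores: dict) -> float:
    """Unweighted average of per-language BLEU, rounded like the log."""
    return round(sum(scores.values()) / len(scores), 4)

print(bleu_mean(bleu_scores))  # 31.3648, matching eval_BLEU_mean
```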
|
yanka9/dqn-SpaceInvadersNoFrameskip-v4 | yanka9 | 2023-10-19T13:47:34Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-10-19T13:03:53Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 312.50 +/- 34.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yanka9 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yanka9 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yanka9
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
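For context on how `exploration_fraction` and `exploration_final_eps` interact: SB3's DQN anneals epsilon linearly from 1.0 to the final value over the first `exploration_fraction * n_timesteps` steps, then holds it constant. A minimal sketch of that schedule (an illustration, not SB3's actual code):

```python
EXPLORATION_FRACTION = 0.1
EXPLORATION_FINAL_EPS = 0.01
N_TIMESTEPS = 1_000_000

def epsilon(step: int, initial_eps: float = 1.0) -> float:
    """Linearly annealed epsilon, constant after the exploration phase."""
    end_step = EXPLORATION_FRACTION * N_TIMESTEPS  # 100k steps here
    if step >= end_step:
        return EXPLORATION_FINAL_EPS
    frac = step / end_step
    return initial_eps + frac * (EXPLORATION_FINAL_EPS - initial_eps)

print(epsilon(0))        # 1.0
print(epsilon(50_000))   # ~0.505, halfway through the ramp
print(epsilon(200_000))  # 0.01, past the exploration phase
```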
|
SrimathiE21ALR044/chatai | SrimathiE21ALR044 | 2023-10-19T13:47:32Z | 0 | 0 | null | ["region:us"] | null | 2023-10-19T13:42:15Z |
---
title: Chatbot
emoji: 🔥
colorFrom: pink
colorTo: green
sdk: gradio
sdk_version: 3.50.0
app_file: app.py
pinned: false
---
|
mekaneeky/mbart-lora-1682519228 | mekaneeky | 2023-10-19T13:44:53Z | 0 | 0 | null | ["region:us"] | null | 2023-10-17T16:06:50Z |
620: {
  "epoch": 1.82,
  "eval_BLEU_ach": 26.736,
  "eval_BLEU_lgg": 30.4377,
  "eval_BLEU_lug": 39.0714,
  "eval_BLEU_mean": 31.5653,
  "eval_BLEU_nyn": 30.3798,
  "eval_BLEU_teo": 31.2015,
  "eval_loss": 2.5229475498199463,
  "eval_runtime": 265.8843,
  "eval_samples_per_second": 9.403,
  "eval_steps_per_second": 1.177,
  "step": 620
}
710: {
  "epoch": 2.09,
  "eval_BLEU_ach": 26.9132,
  "eval_BLEU_lgg": 30.5758,
  "eval_BLEU_lug": 39.466,
  "eval_BLEU_mean": 31.6847,
  "eval_BLEU_nyn": 30.612,
  "eval_BLEU_teo": 30.8565,
  "eval_loss": 2.5260353088378906,
  "eval_runtime": 271.1154,
  "eval_samples_per_second": 9.221,
  "eval_steps_per_second": 1.154,
  "step": 710
}
720: {
  "epoch": 2.12,
  "eval_BLEU_ach": 26.624,
  "eval_BLEU_lgg": 30.0305,
  "eval_BLEU_lug": 39.5048,
  "eval_BLEU_mean": 31.4963,
  "eval_BLEU_nyn": 30.4588,
  "eval_BLEU_teo": 30.8634,
  "eval_loss": 2.5240414142608643,
  "eval_runtime": 269.5843,
  "eval_samples_per_second": 9.274,
  "eval_steps_per_second": 1.161,
  "step": 720
}
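With several checkpoints logged, selecting the best one by `eval_BLEU_mean` is a one-liner. A sketch (the dict mirrors the checkpoint log above; the helper name is hypothetical):

```python
# eval_BLEU_mean per logged checkpoint, copied from the log above.
checkpoints = {620: 31.5653, 710: 31.6847, 720: 31.4963}

def best_checkpoint(scores: dict) -> int:
    """Return the step whose eval_BLEU_mean is highest."""
    return max(scores, key=scores.get)

print(best_checkpoint(checkpoints))  # 710
```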
|
msmccor100/my_awesome_eli5_mlm_model | msmccor100 | 2023-10-19T13:44:30Z | 3 | 0 | transformers | ["transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2023-10-19T13:39:59Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.254 | 1.0 | 1150 | 2.0411 |
| 2.1582 | 2.0 | 2300 | 2.0088 |
| 2.1274 | 3.0 | 3450 | 1.9967 |
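Since the validation metric is a natural-log cross-entropy loss, an approximate perplexity can be read off as exp(loss). A quick sketch for the losses reported above:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity implied by a natural-log cross-entropy loss."""
    return math.exp(cross_entropy_loss)

print(round(perplexity(2.0233), 2))  # 7.56, from the reported eval loss
print(round(perplexity(1.9967), 2))  # 7.36, from the final-epoch validation loss
```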
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.14.1
|
lljllll2219/uk-mt5-base-xlsum-v2 | lljllll2219 | 2023-10-19T13:44:24Z | 8 | 0 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "dataset:xlsum", "base_model:kravchenko/uk-mt5-base", "base_model:finetune:kravchenko/uk-mt5-base", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | summarization | 2023-10-18T22:42:05Z |
---
base_model: kravchenko/uk-mt5-base
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: uk-mt5-base-xlsum-v2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
config: ukrainian
split: validation
args: ukrainian
metrics:
- name: Rouge1
type: rouge
value: 4.4311
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uk-mt5-base-xlsum-v2
This model is a fine-tuned version of [kravchenko/uk-mt5-base](https://huggingface.co/kravchenko/uk-mt5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0401
- Rouge1: 4.4311
- Rouge2: 0.8944
- Rougel: 4.4294
- Rougelsum: 4.4527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.2519 | 1.0 | 2000 | 2.0993 | 4.1141 | 0.5944 | 4.1014 | 4.11 |
| 2.5587 | 2.0 | 4000 | 2.0428 | 4.5015 | 0.6167 | 4.4863 | 4.518 |
| 2.3299 | 3.0 | 6000 | 2.0175 | 4.4642 | 1.0833 | 4.4528 | 4.5167 |
| 2.1543 | 4.0 | 8000 | 2.0183 | 4.3294 | 0.9444 | 4.3408 | 4.3611 |
| 2.0276 | 5.0 | 10000 | 2.0039 | 4.6694 | 0.9444 | 4.6264 | 4.6527 |
| 1.9119 | 6.0 | 12000 | 2.0139 | 4.9447 | 1.0675 | 4.8908 | 4.9633 |
| 1.8305 | 7.0 | 14000 | 2.0134 | 4.9385 | 1.1595 | 4.8774 | 4.9294 |
| 1.7669 | 8.0 | 16000 | 2.0253 | 4.2697 | 0.9667 | 4.2524 | 4.3167 |
| 1.7141 | 9.0 | 18000 | 2.0354 | 4.4527 | 0.9 | 4.448 | 4.4941 |
| 1.681 | 10.0 | 20000 | 2.0401 | 4.4311 | 0.8944 | 4.4294 | 4.4527 |
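The `linear` scheduler above decays the learning rate from 5.6e-5 toward zero over the run; with 2000 steps per epoch and 10 epochs that is 20000 steps total. A sketch of that schedule assuming no warmup (an illustration, not the transformers implementation itself):

```python
BASE_LR = 5.6e-5
TOTAL_STEPS = 20_000  # 10 epochs x 2000 steps per epoch

def linear_lr(step: int) -> float:
    """Linearly decayed learning rate, clamped at zero."""
    return BASE_LR * max(0.0, 1.0 - step / TOTAL_STEPS)

print(linear_lr(0))       # 5.6e-05 at the start
print(linear_lr(10_000))  # half of the base LR at the midpoint
print(linear_lr(20_000))  # 0.0 at the end of training
```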
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
JohnCalimoso/animalbreedidentifierapi | JohnCalimoso | 2023-10-19T13:41:17Z | 0 | 0 | null | ["region:us"] | null | 2023-10-19T13:37:12Z |
---
title: Animalbreedidentificationversion1.5
emoji: 📊
colorFrom: gray
colorTo: purple
sdk: streamlit
sdk_version: 1.25.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
bellagio-ai/t2i-vietnam-pictures-2k | bellagio-ai | 2023-10-19T13:38:43Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2023-10-19T12:07:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: bellagio-ai/t2i-vietnam-pictures
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - bellagio-ai/t2i-vietnam-pictures-2k
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **bellagio-ai/t2i-vietnam-pictures** dataset (no example prompt or generated images were recorded in this card).
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
espnet/kohei0209_ted3_asr2_e_branchformer1_raw_wavlm_large_21_km1000_bpe_rm2000_bpe_ts500_sp | espnet | 2023-10-19T13:37:36Z | 0 | 0 | espnet | ["espnet", "audio", "automatic-speech-recognition", "en", "dataset:tedlium3", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2023-10-19T13:02:54Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- tedlium3
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/kohei0209_ted3_asr2_e_branchformer1_raw_wavlm_large_21_km1000_bpe_rm2000_bpe_ts500_sp`
This model was trained by kohei0209 using the tedlium3 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 7bcdab47ff7f47e55d52061e55db4128913f32b6
pip install -e .
cd egs2/tedlium3/asr2
./run.sh --skip_data_prep false --skip_train true --download_model espnet/kohei0209_ted3_asr2_e_branchformer1_raw_wavlm_large_21_km1000_bpe_rm2000_bpe_ts500_sp
```
<!-- Generated by -bash -->
# RESULTS
## Environments
- date: `Thu Oct 19 22:11:12 JST 2023`
- python version: `3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]`
- espnet version: `espnet 202308`
- pytorch version: `pytorch 1.13.1`
- Git hash: `7bcdab47ff7f47e55d52061e55db4128913f32b6`
- Commit date: `Thu Aug 31 20:42:18 2023 +0900`
## exp/asr_train_discrete_asr_e_branchformer1_raw_wavlm_large_21_km1000_bpe_rm2000_bpe_ts500_sp/
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|1155|27500|94.6|3.4|2.0|3.5|8.9|79.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|1155|145066|97.4|0.9|1.7|4.2|6.7|79.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|1155|54206|96.1|2.2|1.7|3.8|7.7|79.0|
## exp/asr_train_discrete_asr_e_branchformer1_raw_wavlm_large_21_km1000_bpe_rm2000_bpe_ts500_sp/decode_asr_model_valid.acc.ave
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|507|17783|94.2|3.7|2.2|3.2|9.0|84.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|507|95429|97.2|0.9|1.9|3.6|6.3|84.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|507|36002|95.8|2.3|1.9|3.2|7.4|84.8|
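In these tables the Err column is simply the sum of the substitution, deletion, and insertion rates (Err = Sub + Del + Ins). A quick check against the test-set WER row above:

```python
def error_rate(sub: float, dele: float, ins: float) -> float:
    """Error rate as the sum of substitution, deletion, and insertion rates."""
    return round(sub + dele + ins, 1)

# Test-set WER row from the table above: Sub=3.4, Del=2.0, Ins=3.5.
print(error_rate(3.4, 2.0, 3.5))  # 8.9, matching the Err column
```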
## ASR config
<details><summary>expand</summary>
```yaml
config: ./conf/tuning/train_discrete_asr_e_branchformer1_1gpu_bins80M.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/asr_train_discrete_asr_e_branchformer1_1gpu_bins80M_raw_wavlm_large_21_km1000_bpe_rm2000_bpe_ts500_sp
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 500
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 80000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_bpe500_sp/train/src_text_shape.bpe
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_bpe500_sp/valid/text_shape.bpe
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_bpe500_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 150
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/train_sp/text.rm.wavlm_large_21_km1000
- src_text
- text
- - dump/raw/train_sp/text.ts.en
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/text.ts.en
- text
- text
- - dump/raw/dev/text.rm.wavlm_large_21_km1000
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 10000
token_list:
- <blank>
- <unk>
- "[unk]"
- ▁
- s
- ▁the
- t
- ▁and
- e
- ▁a
- ▁to
- d
- ▁of
- "'"
- n
- ing
- ▁in
- ▁that
- re
- ▁i
- c
- o
- u
- ▁we
- y
- a
- ed
- ▁it
- ▁you
- i
- m
- ▁is
- er
- p
- g
- w
- al
- ▁this
- ▁so
- f
- le
- b
- ar
- ▁f
- k
- ▁c
- r
- in
- or
- ▁for
- ▁be
- ve
- ▁was
- te
- th
- ▁do
- es
- ly
- ▁they
- ro
- ▁are
- ▁with
- ▁have
- an
- v
- ch
- ▁on
- se
- lo
- ▁but
- en
- ri
- li
- ▁what
- it
- ic
- ▁can
- l
- ur
- ce
- ent
- ▁me
- ▁b
- ▁ma
- ▁he
- ra
- ▁de
- ll
- at
- ▁about
- ▁one
- ▁not
- ne
- ▁all
- ▁my
- ter
- el
- il
- ▁there
- "on"
- ad
- ▁mo
- ol
- ation
- nd
- ▁like
- ▁people
- po
- ▁at
- ▁us
- us
- ▁g
- ci
- ▁our
- h
- pe
- ▁as
- ▁from
- vi
- ▁if
- as
- ▁ex
- ▁con
- ▁an
- ver
- ▁out
- ▁just
- un
- ▁see
- la
- ▁di
- ▁when
- ▁now
- ▁p
- ha
- ▁who
- ck
- ▁these
- ▁because
- ▁or
- ▁know
- ion
- ir
- ▁co
- ▁up
- ▁pa
- ment
- ▁think
- ge
- ▁how
- ide
- ▁by
- ul
- ity
- ▁go
- ▁get
- ▁ho
- ive
- ▁very
- ate
- ng
- ▁no
- ▁had
- ac
- ▁bo
- ry
- ▁more
- ▁them
- ▁some
- mi
- ▁time
- ▁your
- me
- ▁going
- op
- am
- per
- et
- ▁would
- ru
- ure
- ti
- ist
- ▁their
- x
- ▁were
- ▁look
- ▁pro
- ▁which
- ▁work
- tion
- est
- ty
- im
- z
- ta
- ▁want
- ▁two
- age
- ▁really
- om
- ma
- ers
- ting
- ▁world
- co
- ▁way
- ▁don
- wa
- hi
- tra
- ▁la
- ▁here
- able
- lu
- ▁other
- mo
- ies
- ▁has
- ▁could
- j
- ▁make
- ally
- ▁sta
- ten
- ▁will
- ▁un
- ig
- ▁where
- ▁into
- ke
- ▁than
- ▁comp
- ▁actually
- tic
- sh
- ▁did
- tor
- fa
- ical
- ▁she
- ▁years
- ▁say
- one
- ted
- ▁things
- ph
- ▁new
- ▁pre
- ▁any
- ▁thousand
- ▁been
- ▁inter
- ▁his
- ▁com
- ▁need
- nce
- ▁right
- ▁take
- ▁even
- ▁over
- ▁start
- ▁hundred
- min
- ▁sp
- ▁those
- ▁car
- ▁then
- mp
- ap
- ▁first
- les
- ize
- ▁every
- ba
- ▁something
- ▁well
- ard
- ▁str
- ▁back
- und
- ia
- pl
- ki
- ho
- ▁call
- ▁most
- ▁also
- bi
- ▁thing
- ▁life
- um
- ▁said
- ▁kind
- ▁lot
- ▁much
- va
- ▁ra
- ▁little
- ▁dr
- ▁got
- ▁come
- ful
- ▁talk
- ▁part
- ▁day
- ant
- ction
- ▁happen
- ▁only
- ▁many
- ▁wo
- pri
- ▁her
- ▁br
- qui
- ▁mean
- ▁three
- iv
- ▁different
- ugh
- ain
- ▁human
- ance
- ▁change
- ▁let
- ▁real
- ▁show
- ▁good
- ▁around
- ▁through
- ▁jo
- bu
- ▁down
- ight
- ga
- ▁why
- ▁live
- ff
- ▁tell
- ▁put
- ▁idea
- port
- ▁same
- ▁give
- ated
- ish
- ible
- ▁though
- ious
- ▁problem
- ▁five
- par
- ▁fact
- ▁cha
- ition
- ▁year
- ▁big
- ▁plan
- ▁great
- ▁find
- ▁four
- ▁app
- ▁after
- ▁system
- ▁place
- ▁em
- ▁build
- ▁percent
- ▁again
- ▁point
- ▁learn
- ▁own
- ▁long
- ▁made
- ▁today
- ▁nine
- ities
- ▁gene
- ▁six
- ▁question
- light
- ▁should
- ▁came
- ▁feel
- ▁turn
- ▁person
- ▁end
- ▁hu
- ▁design
- ▁help
- ▁brain
- ▁last
- ▁create
- ▁important
- ▁before
- ▁high
- ▁never
- ▁trans
- ▁another
- ▁him
- ▁eight
- ▁might
- ▁understand
- ▁power
- ▁better
- q
- ▁found
- ▁play
- ▁twenty
- ▁still
- ▁school
- ▁each
- ▁seven
- ▁together
- ▁few
- ▁hand
- ▁example
- que
- ▁next
- ▁million
- ▁story
- ▁women
- ▁under
- ▁number
- ▁course
- ▁water
- ▁ago
- ▁grow
- ▁between
- ▁develop
- ▁america
- ▁sort
- ▁technology
- ▁believe
- ▁second
- ▁small
- ▁maybe
- ▁become
- press
- ▁health
- ▁space
- ▁word
- ▁hard
- ▁children
- ▁organ
- ▁always
- ▁country
- ▁reason
- ▁experience
- ▁large
- ▁everything
- ▁friend
- ▁project
- ▁computer
- ▁fifty
- ▁money
- ▁information
- graph
- ▁walk
- ization
- ▁africa
- ▁picture
- ▁process
- ▁teach
- ▁enough
- ▁elect
- ▁thirty
- "0"
- "1"
- "2"
- "9"
- "3"
- "5"
- "8"
- "4"
- "7"
- "6"
- "&"
- +
- "#"
- "@"
- "*"
- \
- ^
- R
- _
- "-"
- "%"
- "="
- $
- M
- ā
- "]"
- E
- U
- A
- G
- "["
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- 侕
- 价
- 倆
- 侄
- 两
- 並
- 丄
- 丠
- 儰
- 予
- 何
- 傾
- 佛
- 佼
- 佯
- 傈
- 両
- 兢
- 仪
- 佦
- 侉
- 俖
- 丟
- 俈
- 偛
- 冤
- 冇
- 丸
- 侣
- 傉
- 侷
- 働
- 侞
- 侸
- 侻
- 僴
- 仉
- 亞
- 凋
- 典
- 兪
- 俪
- 仈
- 価
- 俙
- 偵
- 偗
- 冺
- 准
- 僺
- 乃
- 仕
- 倌
- 偄
- 冮
- 俫
- 乓
- 乿
- 僼
- 产
- 倩
- 倄
- 乷
- 主
- 乐
- 冉
- 仳
- 傌
- 佣
- 倰
- 併
- 丘
- 傔
- 倢
- 倬
- 乙
- 俐
- 凌
- 侠僽
- 倜
- 像
- 儯
- 仔
- 侜
- 乬
- 令
- 兘
- 兟
- 佰
- 且
- 備
- 倣
- 俜
- 伆
- 亅
- 侫
- 仠
- 偞
- 傏伒
- 兆
- 倛
- 儬
- 俠
- 倲
- 傹
- 伀
- 僘
- 俕
- 倗
- 兲
- 专
- 六
- 儜
- 儈
- 丏
- 偎
- 伓
- 使偍
- 儤
- 冽
- 久
- 侀
- 仑
- 仮
- 俏
- 偾
- 假
- ▁丞仃
- 佶
- 你
- 侮
- 兒
- 俲
- 伧
- 亵
- 佪
- 俼
- 亼侭
- 倩俠
- 伾
- 傧
- 侽
- 买
- 乡
- 乽
- 僓
- 伴
- 侨
- 中
- 仝
- 乖
- 乣
- 伜
- 傡
- 业
- 乩
- 儚
- 公
- 儏
- 乘
- 佲
- 乯
- 仴
- 僡
- 丧
- 估
- 乵
- 倕
- 丷
- 佤
- 儊
- 冐
- 亙
- 僙
- 傒
- 乳
- 侈
- 仏
- 丗
- 侍凢
- 俹
- 偹
- 佸
- 儑
- 伊
- 傇
- 俫兢
- 佷
- 倂
- 兀
- 伥
- 倸
- 今
- 位
- 乒
- 冉僙
- 傛
- 乏
- 倨
- 兝
- 兴
- 佱
- 乍
- 丼
- 兂
- 僽
- 亰
- 仂
- 交
- 傏
- 侙亚
- 冯
- 偿
- 倳
- 丐元
- 們
- 丑
- 乶
- 偲
- 乓偭
- 临
- 僛
- 偉
- 凁
- 凂
- 丐
- 俯
- 冚
- 侳
- 亀僺
- 亭
- 伦
- 乇
- 傰
- 今兞
- 傑
- 偏
- 互
- 俒
- 乲
- 乖佂
- 冬
- 佡
- 了
- 亲
- 仐
- 丒
- 冁
- 凣
- 依
- 僆
- 億
- 凄
- 偼
- 俗
- 偽
- 佸僽
- 倿
- 亣
- 俄
- 丙
- 作
- 丈
- 况
- 侂
- 凍
- 个
- 乢
- 侷侮
- 亥
- 儫
- 似
- 佟
- 乊
- 丧俏
- 儱
- 乤
- 仡
- 偭
- 傁
- 佨
- 侀偧
- 俥
- 乾
- 仗
- 人
- 儋
- 兿七
- 丁
- 児乷
- 侕偲
- 兲俚
- 万
- 係伜
- 冩
- 儂仠
- 三
- 偕亃
- 倡
- 儝
- 位佂
- 仛元
- 侏
- 丑佧
- 习典
- 习
- 凖
- 佌冤
- 伻
- 像元
- 凡
- 之
- 侯
- 僸
- 丹侕
- 仚
- 兦
- 兮
- 俺
- 傐
- 佌
- 冓
- 亝
- 也
- 冔
- 侠
- 傓偉侸
- 傂
- 兞
- 元
- 僁
- 伃
- 保
- 傡仧
- 亀
- 份
- 侰乪
- 佈
- 仦
- 亠
- 倠
- 偓
- 亻偗
- 倛僺
- 丆
- 乕傝
- 丣
- 儌
- 亻
- 僞
- 仡丕
- 乩佧
- 丹仉
- 伅
- 凉
- 佳
- 僬
- 乔
- 代儏
- 再
- 俹俜
- 乀
- 佧
- 儕
- 僟
- 俴
- 儼
- 侑乥
- 共
- 乙伧
- 僀
- 儂
- 优
- 仇乪
- 凑
- 保亜
- 傎
- 俘
- 住
- 傝
- 倇
- 仛
- 儮偫
- 亴
- 俔
- 丅
- 亼
- 佀
- 値
- 俕伅
- 亃
- 乚
- 几
- 丢乷
- 侭
- 乴
- 儺
- 伝
- 丱
- 凒
- 乏倐
- 倵
- 亻丷
- 伉
- 亱
- 俚
- 乺
- 众
- 买交
- 侚
- 僄
- 乸仛
- 冨
- 俴傾
- 仆
- 丫
- 仨僞
- 俉佊丨
- 儛
- 全
- 傚乥
- 倊倂
- 儧
- 乸仛元
- 侑
- 俎
- 伖
- 倖乪
- 佐
- 仹
- 亪享串
- 做
- ▁儹侊偁
- 俼侸
- 乮
- 侔
- 凙休
- 伶
- 偂
- 企
- 円
- 作傒
- 仧
- 会
- 乏倐佈
- 俽
- 佂
- 兮偲
- 儙
- 兙伈
- 京
- 亳
- 体
- 俢
- 偧
- 儮
- 书
- 儧丙
- 仓
- 侗
- 俅亂
- 傆
- 伕
- 仱
- 佛仴
- 亖
- 乤亊丕
- 倹
- 丳仍
- 入
- 僅
- 候
- 儍伤
- 冞
- 他
- 凓
- 世
- 佮
- 亿
- 偫
- 僸僁
- 傚儑
- 僯
- 串
- 佄
- 內
- 七
- 価乒
- 丁似
- 仔乬
- 倓侘
- 仅丳仍
- 侇亍儤
- 乤倛
- 傺
- 仍
- 倭
- 傺么俽
- 儸
- 乁
- 乶儛
- 乊傰
- 冶
- 侾冘乹
- ▁仞丽
- 冂
- 凝
- 么
- 佾
- 净
- 僼仗
- 佬亥
- 仂備
- 兑
- 偉侸
- 乂
- 东
- 丈僆
- 佗
- 児
- 儫冎
- 乞偄
- 偗乔
- 俅
- 俹倄
- 佻
- 儝伖
- 亝佘
- 凝伃
- 仌
- 丢
- 兆亭
- 兪倣
- 冎
- 亯
- 丱冮
- 丁似亃
- 丒儋
- 凈
- 伣
- 儷
- 俱
- 仐傅
- 乒僓
- 俅佟亂
- 亇
- 以
- 乕傝亥
- 仴乮
- 冂侀
- 仲
- 佣准
- 乲侕乲
- 仨
- 倫
- 冧
- 侯儭亟
- 俀
- 偣
- 亂
- 二佱
- 伃仉
- 使
- 九
- 兔侃
- 係
- 世亲
- 亼倂
- 丐仛
- 僤
- 伭
- 兿
- 儢
- 侯份
- 倲傰
- 侹
- 会佾
- 俿
- 伏
- 兎以
- 俯儸
- 乨
- 偂備
- 佺
- 傒乥
- 之凄
- 亍儤
- 仅仍
- 住主
- 儱偓
- 伉冬
- 丒傇
- 俕傡
- 兜偀佖
- 乚傐
- 伌
- 争
- 偠乆任
- 俓
- 什
- 偦
- 偟
- 丳
- 儆
- 伡
- 仸
- 傼
- 傱
- 入僷
- 偕之凄
- 侳体
- 價
- 俻
- 償
- 侑傒
- 佴
- 也偮
- 佻俥
- 億仪
- 倦亸伏
- 仱伌
- 佸伥
- 傰位
- 乔冁
- 儠
- 仆仗
- 兔侃俜
- 不
- 亵冨
- 仇
- 丹
- 公企
- 亣儭亟
- 亀冪
- 乑儀
- 丨
- 佟乛
- 倅
- 俗倞偀佖
- 倿偽
- 冋
- 准凡
- 倊
- 倦倄
- 伊凙
- 会偖
- 僷
- 傚
- 亰侏
- 丝
- 乵偼
- 乎仓
- 偍
- 冭
- 僅了
- 傾兟
- 俗倞偀
- 傁儥
- 侯亟
- 仉偏
- 仺
- 冃冤
- 佷亗
- 冋倲
- 乎
- 丯俎
- 便五偤
- 万伾
- 佬
- 俇
- 佈倰
- 做傅
- 兼凈
- 乛
- 傧倗
- 僄亻偗
- 件
- 伶俇
- 债
- 冬仔
- 侕乲
- 兵倾
- 份亮
- 佛兞
- 乊兓
- 侤
- 僵偩
- 偛兀
- 偕之
- 伊丛
- 丶
- 僢
- 么俽
- 乍争
- 兼
- 乱仟俋
- 丝亄
- 俨侌
- 丌
- 兂你
- 似亃
- 冘乹
- 偙像元
- 亍
- 俭
- 兜
- 兹優
- 倈傳
- 兎
- ▁仞侊偁
- 傰仌
- 俍
- 僱
- 侇
- 丶佰
- 傏俥
- 侓凒係伜
- 乸俄
- 伈
- 佊
- 僅亠
- 乻
- 兕
- 侜丈
- 仒
- 儃
- 储
- 乔丣
- 凁兆
- 倘佘
- 下
- 偕
- 亁
- 亙俙
- 僬伯
- 傓
- 傚傒
- 儨
- 丼仑
- 凝亄
- 乪
- 來
- 互僋侞
- 二
- 倘
- ▁儹侊
- 俗倞佖
- 伩僻义
- 佉
- 仅丳
- 佟僋侞
- 侵
- 僮
- 侠佸
- 企临
- 丷侷
- 仕丶乛
- 佒
- 乨伎
- 倣冤
- 乸
- 亘
- 乽冂
- 倶
- 乓儺
- 佟佰
- 仏僳
- 偺
- 冪
- 億侄
- 俀仝
- ▁仞侊丽
- 东兩
- 交买
- 乏倐倰
- 伙
- 佬亥丝亄
- 俬兴
- 丿偺
- 仴儛
- ▁仞侊
- 冡
- 倧
- 倖
- 偑
- 一
- 例儀
- 丝亄丹
- 佴仲
- 兜乁佖
- 偤
- 僳
- 倇兴
- 倿六
- 傧倎
- 仺兗
- 侗傁
- 佅
- 兂佪
- 世僉
- 凙
- 丬優
- 侹亪享串
- 偙俄
- 倞
- 冑
- 侁九
- 倌兀
- 凝亄伃
- 侼
- 伒
- 冃
- 上侏
- 上催
- 丰
- 仐傅冮
- 僄亻
- 侗俀
- 举
- 偺佬
- 偒
- 倉
- 兔
- 冿
- 傜
- 体六
- 倦伷与伏
- 丯
- 交俫
- 俻冥乫
- 兜乁
- 僄丷
- 億丄
- 亗
- 偏偕似亃
- 佖
- 再儱
- 偁
- 军
- 偏伌
- 丑丏
- 倳共
- 侎
- 侕丹
- 军举
- 佒伎
- 乸元
- 伤
- 冦
- 倞偀佖
- 俰
- 与
- 乱仟俋亻
- 乃俍
- 乬伧
- 企仝
- 凋乩佧
- 例
- 代
- 亢
- 些
- 儿
- 侙俄
- 傅
- 儷伄
- 互僋
- 佻亱
- 伧儧丙
- 兂你佪
- 冣傕丩
- 冚佀亜冩
- 丸侉
- 儘
- 冄
- 俪俈
- 仡亊丕
- 僂俎
- 侓
- 佺僣俨侌
- 京佻亱
- 佀亜冩
- 倔
- 侻倐佈
- 冂偽偧
- 傌亻偗
- 仢兄
- 倲乖
- 兺井
- 凊
- 凓丛
- 內俜
- 健
- 世僟
- 儖
- 倦与伏
- 傝亥
- 僻义
- 凋傎
- 侙
- 丕
- 令估
- 六亀
- 價佳
- 佦兼凈
- 乄僳
- 兛
- 偹冤
- 丶兵倾
- 偖
- 侴傱
- 俬
- 仾
- 冇乡
- 僓佷
- 佷亖
- 仅
- 仿
- 仯
- 僃儏
- 侷乔
- 儯万伾
- 儸丄
- 伬
- 亏倱
- 倒傋
- 備乡
- 丮
- 侯份亮
- 侅
- 侬
- 傓偉
- 俁
- 便五
- 也偠乆任
- 凋佧
- 冒
- 丿
- 伲上
- 仴冧
- 侴
- 偓儚
- 佟僋乛
- 克
- 俉
- 佺僣侌
- 凌业
- 儼佾
- 侰
- 偙
- 佞
- 佛乒
- 位仌
- 伴冨
- 仱伌似亃
- 丟丑
- 倦
- 乔侫
- 凁儆
- 僡俅亂
- 僉
- 乮入
- 丯代儏
- 佮儀
- 俛
- 中偽
- 仺冐
- 仕丶
- 亸
- 偩
- 倪
- 兩
- 仉丁
- 僸倎
- 例儀偫
- 兢価
- 俯冂
- 三丿
- 伧儧
- 僙丄
- 偏偕
- ▁丞
- 亲乡
- 乓儺冯
- 侯儭
- 冲
- 余
- 丈仉
- 亲倸
- 侹僒
- 凢
- 催
- 僵
- 倌乡
- 伦儠
- 丒互
- 亊
- 佪仠
- 凖仏僳
- 丈侕
- 伝冯
- 乽儸
- 傧倎僁
- 亦
- 丬優乂
- 侇亍
- 作乥
- 僙丠
- 俲俈
- 丸买
- 偛乡
- 丬
- 傇冣傕丩
- 兓
- 儞
- 亝便五偤
- 俲伴
- 兀偛
- 倒傤
- 僛亐伎
- 伩僻
- 乖仌
- 侂保
- 僃
- 侾
- 体倿
- 傊
- 三丿偺
- 事內
- 侈况
- 兢俫
- 倍倲
- 乳儆
- 冞亣
- 傁临
- 公企临
- 停
- 俕傡仧
- 傤
- 凊佐
- 事侃
- 佦兼凈仔
- 侙亚亣儭亟
- 乕
- 兙
- 儭亟
- 冨伴冨
- 偕似亃
- 儬丧俏
- 仙
- 儖倜
- 伷与伏
- 冉僙丠
- 佺俨侌
- 乙伧仮
- 凓丛倰
- 倈傳兦
- 兓冩
- 低
- 伧仮
- 冠
- 倎
- 兿况
- 亙僟
- 亪
- 也乻
- 凈仔
- 儷伄兩
- 倦亸与伏
- 侏催
- 兡传侧
- 作傒乥
- 保傰
- 儅
- 凊偫
- 丒互僋乛
- 倹亚
- 倞偀
- 侕仓
- 儲
- 亄
- 公企仝
- 凋値
- 仩
- 億丠
- 丰伅
- 凁亭
- 买佯
- 優
- 俑丗
- 俑
- 亸与伏
- 假兮
- 兼凈仔
- 偿下
- 偮
- 冂侀偧
- 倛俁
- 侫入
- 佣众
- 信
- 乞
- 伲俴
- 乔倹
- 业俫
- 儷偄
- 乍争俔
- 佊丨
- 侮乃
- 偙俄亚
- 凣丵
- 儯万
- 乜
- 仜
- 傥
- 冞侯
- 傂兎
- 不們
- 乏倐佈倰
- 今伩僻义
- 于儴
- 冣修兗
- 侁
- 俬偒
- 兞亞
- 冐么俽
- 亵俈亵
- 俣
- 冴傆俽
- 伨
- 争俔
- 傃儲
- 伺僷
- 乔仺兗
- 僬伍
- 儥
- 上
- 下凑
- 儕丿偺僉
- 凍丿偺僉
- 俐俖
- 侂保傰
- 冞亣儭亟
- 伄
- 似之
- 佶俈
- 僔
- 丅儌
- 丐仛元
- 伩
- 乎乀
- 伅乽
- 倍
- 丶乛
- 三丿偺僉
- 仆両
- 伔凚働
- ▁仞侊偁再
- 偏偕亃
- 俏乚
- 儺冯
- 傌亻丷
- 侖
- 倌偛
- 亚
- 亸伏
- 俆
- 亜
- 凣伹
- 侫佾
- 佛今
- 僲
- 傱乂
- 凞
- 丩
- 倐
- 兹
- 乥
- 乶乮
- 什丱
- 傕
- 似业
- 亞伅
- 乩偙
- 佘
- 也儗
- 俄亚
- 入偠乆任
- 准両
- 乀仓
- 俫两
- 侌
- 佰倽
- 傧倎俱
- 佭停
- 伣冮
- 俨
- 住仍
- 侙俄亚
- 佣二佱
- 傞
- 伊丛倰
- 临僛
- 享串
- 偛侉
- 兾
- 丟丑佧
- 伉冬仔
- 价仴
- 僷丌
- 偭丏
- 僄偗
- 佛仨僞
- 仁
- 伃丹
- 仡党
- 儜乣
- 仄
- 价偲
- 乯傛
- 傣
- 侹僒仸
- 兯
- 偺僉
- 乔仺
- 凁兆亭
- 亪享
- 例儀儮偫
- 傰佂
- 僼体
- 亣儭
- 冽况
- 仉丈仉
- 僛亐
- 兗
- 伾僅
- 亷
- 伺丌
- 伷
- ▁仞侊偁侕
- 免
- 为
- 傌亻
- 佷伮仳
- 僬俶
- 乄侭
- 伓乾
- 亟
- 休
- 侢
- 傦丢
- 冫
- 傁儥伓
- 伧倬
- 乕傝亥丝亄
- 冨丘
- 乫
- 俪俈俪
- 侘
- 倽
- 仂備乡
- 仡亊佈
- 京佻
- 乕亥丝亄
- 倞佖
- 丹仱
- 丈仉丈
- 亁凑
- 僊丢乷
- 傦
- 丳主
- 为兴
- 冣修丩
- 伔凚
- 僈
- 僬伯冉
- 傋
- 付偉
- 伲
- 佽
- 傪
- 乇俻冥乫
- 仗儸
- 为偒
- 伢
- 仝俏
- 亽
- 乽仗
- 儚佯儚
- 働俠
- 付
- 偓傔
- 冎仍
- 乵乔
- 佷僐
- 佋
- 倳侣
- 丈凌
- 们
- 令估令估
- 佟僋
- 侾倚乹
- 俘亸伏
- 傁俀
- 傝亥丝亄
- 冣傕
- 仡党倡
- 兴丟
- 典侉
- 再倕
- 伍
- 亀傣
- 凓倰
- 侪佲
- 义
- 僛傋
- 俌
- 亀兪
- 伉乯
- 俤
- 傐傗亢
- 僷侨
- 万伾僅
- 冬乬
- 克傉
- 佯交
- 俑不
- 傷
- 凋偙
- 习乶
- 佷亿
- 便
- 兪俼侸
- 儸侄
- 三丿偺佬
- 丄倨
- 佫
- 侕兮
- 先
- 倀
- 凖仏乿
- 仒乙
- 侁九兩
- 倌仈
- 伮
- 偿下仦世
- 估佯估
- 俑不們
- 乤亊
- 仕互僋侞
- 侠伥
- 僩
- 会偖入
- 內兩
- 不亳
- 偐
- 丹仱侕
- 侨儗
- 乄
- 僖
- 乑
- 兌
- 使住
- 丐丷
- 乶冧
- 児伤
- 偨
- 僼准
- 侓乜僖
- 偖佳僷
- 傖
- 丁侕
- 不伣
- 乙伧儧
- 傅冮
- 儠偞
- 兹優乂
- 乇俻乼凎
- 偙像仕
- 倠令倠
- 伝亁
- 俞
- 冢
- 倅不們
- 凇
- 偾久
- 俍伎
- 亰克
- 儭
- 凘
- 丆倸
- 偖入
- 侑儑
- 佟丶
- 仕互僋乛
- 偋
- 兺
- 傺冴傆俽
- 侗儥
- 冁你
- 党
- 体偽
- 傁儥乚
- 僋
- 伲上催
- 佣二
- 仰
- 倻
- 俴上
- 兔事侃
- 六亀僺
- 乄仏乿
- 保亜位
- 兊
- 倬偿倵
- 会入
- 傼俅亂
- 僋乛
- 严
- 偏似亃
- 催仂
- 佭停們
- 乃俍伎
- 佭們
- 万伾僅了
- 傉佸
- 做傅僵偩
- 児丢乷
- 冇凝伃
- 倩俠乾
- 争両俔
- 偽偧
- 仼丢
- 僒乭
- 亜冩
- 乵凍
- 佪僺
- 凝产
- 乙伧冽
- 俬偒兴
- 凝亄丹
- 乊傰位
- 冣傄
- 偏似
- 乤党倡
- 兲侯
- 也伙
- 倗丙
- 仐做傅
- 僅亠偂
- 专偾
- 佝
- 乌
- 僪
- 买交买
- 佟丶乛
- 乃俢
- 儐
- 伌之凄
- 佀亜
- 俒佂
- 仼
- 倦亸
- 僊丢
- 僊
- 冣
- 以偼
- 习企
- 冉倗
- 俊
- ▁儹丽乎
- 佺僣俨
- 仡丕倰
- 倛凅
- 体倿偽
- 倗俔
- 佾偠乆任
- 儏丰
- 丬亇侴傱乂
- 亡
- 凖乄僳
- 凖乄
- 僣
- 倅乯
- 倒
- 儕丿偺
- 冸
- 偆
- 事东
- 俔伛
- 仡亊
- 傟
- 冉両
- 偿下凑
- 冷
- 乤亊佈
- 乗但侸僡
- 中偽偧
- 丅儝伖
- 乎倕
- 偸
- ▁儹丽侕
- 儼偖
- 丝产
- 儠亠
- 伝亁凑
- 乼凎
- 冇凝
- 佅丟
- 兜偀
- 乩丏
- 兎以偼
- 亷僺
- 傓偉偎
- 俩俛
- 下仦
- 伤偎
- 佷伮
- 冕
- 俔伛侃
- 佶俲
- 儠亠偂
- 偕似凄
- 儢傂兎以
- 儕偼
- 争丄俔
- 估令估
- 佳伺
- 佅义
- 予伉乯
- 俑丗不
- 丼京
- 乊兓冩
- 僼兆
- 儮俚
- 丑偙
- 僟亲
- 俻乼冥乫
- 佻伒
- 丫况
- 佶俈亵
- 乄仏
- 乄傆僳
- 仱伌之凄
- 侜伃
- 俯僙
- 倯
- 佲依
- 僕
- 兞伩僻义
- 俫两兢
- 伉儰
- 冉僙丄
- 僬伍傾
- 傴佲
- 互丶乛
- 倥
- 侃俜
- 仯亰克
- 伩佅义
- 佑
- 侹享串
- 倎僁
- 俬兴丟
- 儕凍
- 俶
- 产丁
- 儛僓
- 伶亍儤
- 写丱
- 僐
- 伉冬乬
- 令估令
- 佷乌
- 儩伿
- 俦
- 内
- 仭
- 冴
- 乡佶
- 偕业
- 伾僅了
- 傗亢
- 乕傝亥丝亄丹
- 凣丵係伜
- 傇冣修兗
- 偁儱
- 侓凒僖
- 军亷僺
- 儡
- 儸丠凓丛
- 亼仏侭
- 例儀儮
- 丵
- 乗但僡俅亂为
- ▁仞仃
- 伎
- 侻倐
- 佣凡
- 丫傰
- 全凜
- 乤党
- 偃
- 儣
- 也乆任
- 倓
- 乒僬俶
- 养
- 写
- 凐
- 伺
- 亾
- 兪俼亰
- 买佯交
- 凁倖乪
- 乨伎丑佧
- 倈
- 冰
- 侠佸僽
- 僃俇
- 傎佾
- 倩俠傐
- 冃冤乓偭
- 伩僻义凋
- 冏
- 丮丗
- 会偖佳僷
- 丣仪
- 佺俨儿
- 僵伣
- 僻
- 仆丏
- 俢义
- 丣僙
- 伺侨
- 乤丕
- 凗
- 偕似
- 傎入
- 凤
- 僄亻丷
- 傺冴么俽
- 乙伧儧丙
- 侹仸
- 俗伔凚働
- 佶俈佶
- 偪
- 佭
- 云
- 佢
- 伩伢
- 偿下仦
- 儩
- 乩佧丏
- 军举倡
- 伱
- 侁倜
- ▁儹侊偁儱
- 侑傒乥
- 俷
- 伦儠亠
- 伪
- 冴么俽
- 侠佸伥
- 僵伣偩
- 倛侩
- 伩伢儯万伾
- 侓乜係伜
- 凌偕业
- 下仦世
- 冉乤
- 丽乎
- 军亷
- 侾债乹
- 佣准凡
- 互僋乛
- 偀
- 侪
- 使偍兜乁佖
- 僧丢
- 乃佅
- 伈仔
- 凤冬
- 僇
- 作佸伥
- 侩
- 僒
- 僢児乷
- 允
- 乹
- 儥乚
- 军举倡儤
- 乽冂偽偧
- 伂冔
- 儫冎佱
- 傇傕丩
- 也侐
- 事东兩
- 亓
- 僣俨侌
- 冥乫
- 仫佲傛
- 兰
- 偏偕之凄
- 冎佱
- 伄兩
- 儸丠凓
- 冚佀兓冩
- 三偼
- 军举佈
- 俅互亂
- 使仍
- 俳
- 于儴乍争
- 兇
- 也伭
- 丬亇侴乂
- 傻
- 亐伎
- 倈兦
- 凖乄仏乿
- 修兗
- 冣修
- 價佳伺僷
- 住六
- 倃倜
- 児儍伤
- 亐
- 佴仲偩
- 偺佬亥丝亄
- 乱仟
- 佷亿仳
- 乔冁你
- 仅丳主
- 儶
- 低傆僳
- 傀
- 低傆僳乿
- 僆侕僆
- 儍
- 侗傁儥乚
- 仌仺
- 僛冋倲
- 侚冤
- 傗亢俜
- 儈乴
- 仟俋
- 俘伷与伏
- 停們
- 伇
- 俹倄兝
- 儼偖入
- 冥
- 凃
- 兹亇侴乂
- 儗
- 冇凝产
- 乳亁凑
- 儖侁倜
- 佞伎
- 亵俈亵冨
- 全凜倜
- 債伈
- 僢児
- 亀傣倜
- 介
- 兹亇侴傱乂
- 伙仳
- 册
- 傥丢
- 做傅僵
- 儩不
- 今兞伩僻义
- 伬冋
- 乄傆乿
- 事
- 亹
- 亐俍
- 伧両儧
- 侳仗
- 丒儋互僋乛
- 俵
- 僛儅
- 乞儷偄
- 仇亂
- 偱
- 具
- 促
- 凖低傆僳乿
- 乕亥
- 三丿偺僉倸
- 伺僷丌
- ▁仞侊冄
- 假兮偲
- 佃
- 偘
- 僥
- 乕傝冇丝亄
- 伌似凄
- 入伺
- 丛
- 仫
- 伸
- 佚
- 傮
- 佁
- 凅
- 乭
- 僨
- 从
- 伵
- 俾
- 五
- 修
- 偊
- 佩
- 傿
- 充
- 亶
- 关
- 偈
- 儽
- 偬
- 傯
- 侍
- 侶
- 享
- 凛
- 偡
- 侃
- 伔
- 僾
- 倏
- ▁
- 偌
- 伛
- 傽
- 伫
- 侒
- 井
- 偶
- 僂
- 倝
- 儔
- 伐
- 凜
- 侐
- 俟
- 傸
- 倃
- 僧
- 亩
- 僜
- 伯
- 乼
- 傢
- 八
- 兠
- 傄
- 决
- 傩
- 亮
- 僰
- 儁
- 伟
- 伂
- 傳
- 侱
- 亨
- 偠
- 农
- 俸
- 乆
- 兤
- 值
- 凟
- 凥
- 儎
- 僫
- 倴
- 兵
- 冾
- 兏
- 倁
- 傗
- 兖
- 冘
- 個
- 凕
- 儵
- 丽
- 倮
- 侥
- 儒
- 但
- 偢
- 供
- 仢
- 仵
- 儻
- 傲
- 偷
- 僎
- 偳
- 傘
- 儀
- 冀
- 傠
- 减
- 冝
- 侟
- 伿
- 冗
- 倷
- 偯
- 乧
- 伹
- 冻
- 儦
- 兽
- 任
- 冹
- 俩
- 俋
- 兄
- 侦
- 儓
- 僚
- 其
- 冟
- 伞
- 冊
- 冈
- 凔
- 倚
- 傭
- 儉
- 冖
- 仟
- 傍
- 倾
- 僠
- 儇
- 伽
- 俧
- 僦
- 佔
- 冱
- 侧
- 処
- 僗
- 僑
- 傶
- 僭
- 佥
- 偰
- 丂
- 債
- 凧
- 侺
- 傴
- 儾
- 伳
- 偅
- 偻
- 凎
- 僶
- 乱
- 儳
- 偝
- 乗
- 傫
- 佇
- 仃
- 兣
- 侲
- 僌
- 侊
- 亏
- 側
- 凚
- 兡
- 僝
- 傃
- 传
- 于
- 光
- 伋
- 佹
- 冼
- 借
- 倱
- 偔
- 儴
- 丞
- 儹
- 仞
- 僿
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram500_ts_en/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram2000_rm_wavlm_large_21_km1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
tokenizer_encode_conf: null
src_tokenizer_encode_conf:
enable_sampling: true
alpha: 0.4
nbest_size: -1
frontend: embed
frontend_conf:
embed_dim: 512
positional_dropout_rate: 0.1
specaug: specaug
specaug_conf:
apply_time_warp: false
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: false
freq_mask_width_range:
- 0
- 10
num_freq_mask: 0
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 10
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 1024
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv1d2
layer_drop_rate: 0.0
linear_units: 1024
positionwise_layer_type: linear
use_ffn: true
macaron_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
layer_drop_rate: 0.0
model: discrete_asr
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
share_decoder_input_output_embed: false
share_encoder_decoder_input_embed: false
extract_feats_in_collect_stats: false
required:
- output_dir
- src_token_list
- token_list
version: "202308"
distributed: false
```
</details>
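The `warmuplr` scheduler in the config above is, under ESPnet's usual definition, an inverse-square-root (Noam-style) schedule: the learning rate ramps up linearly for `warmup_steps` steps and then decays as step^-0.5, peaking at the configured `lr` exactly at the end of warmup. A sketch under that reading (an illustration, not ESPnet's code):

```python
BASE_LR = 0.001
WARMUP_STEPS = 10_000

def warmup_lr(step: int) -> float:
    """Noam-style warmup: linear ramp, then inverse-sqrt decay."""
    step = max(step, 1)  # avoid division by zero at step 0
    return BASE_LR * WARMUP_STEPS ** 0.5 * min(step ** -0.5,
                                               step * WARMUP_STEPS ** -1.5)

print(warmup_lr(10_000))  # peak, ~0.001 at the end of warmup
print(warmup_lr(40_000) / warmup_lr(10_000))  # decayed to half the peak
```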
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kyujinpy/Korean-OpenOrca-13B
|
kyujinpy
| 2023-10-19T13:30:00Z | 2,252 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/OpenOrca-KO",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-08T19:07:11Z |
---
language:
- ko
datasets:
- kyujinpy/OpenOrca-KO
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커 (Marker Inc.)**
**The license is `cc-by-nc-sa-4.0`.**
# **🐳Korean-OpenOrca-13B🐳**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Repo Link**
Github Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
I use [OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO).
It was produced by translating [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) with DeepL.
I used an A100 40GB GPU on Colab for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Korean-OpenOrca-13B(ours🐳) | 47.85 | 43.09 | 54.13 | 40.24 | 45.22 | 56.57 |
| [KoT-Platypus2-13B](https://huggingface.co/kyujinpy/KoT-platypus2-13B) | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 |
| [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
> Compare with Top 4 SOTA models. (update: 10/09)
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Korean-OpenOrca-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
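Building on the loading snippet above, a minimal generation sketch — the helper name, prompt, and generation settings are illustrative assumptions, not the author's recommended configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate(prompt: str,
             repo: str = "kyujinpy/Korean-OpenOrca-13B",
             max_new_tokens: int = 128) -> str:
    """Load the model (fp16, auto device map) and decode one completion."""
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, return_dict=True, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```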
---
|
kyujinpy/KO-Platypus2-13B
|
kyujinpy
| 2023-10-19T13:29:09Z | 2,282 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-20T12:55:06Z |
---
language:
- en
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커 (Marker Inc.)**
**The license is `cc-by-nc-sa-4.0`.**
# **Ko-Platypus2-13B**

## Model Details
**More detail repo(Github): [KO-Platypus](https://github.com/Marker-Inc-Korea/KO-Platypus)**
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KO-Platypus2-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
I use [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus).
It is a high-quality Korean translation of the [open-platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.
I used an A100 40GB GPU on Colab for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KO-Platypus2-13B(ours) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
| [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |
> Compare with Top 4 SOTA models. (update: 10/06)
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/KO-Platypus2-13B"
ko_platypus = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)
ko_platypus_tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [kyujinpy/KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B)
---
|
kyujinpy/KO-Platypus2-7B-ex
|
kyujinpy
| 2023-10-19T13:27:22Z | 2,241 | 23 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"arxiv:2308.07317",
"arxiv:2307.09288",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T18:25:00Z |
---
language:
- en
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커 (Marker Inc.)**
**The license is `cc-by-nc-sa-4.0`.**
# **Ko-Platypus2-7B-EX**
**More detail repo(Github): [KO-Platypus](https://github.com/Marker-Inc-Korea/KO-Platypus)**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KO-Platypus2-7B-ex is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model**
[Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
**Training Dataset**
I use [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus).
It is a high-quality Korean translation of the [open-platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.
I used an A100 40GB GPU on Colab for training.
**Vocab Expansion**
| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Platypus2 | 32000 | Sentencepiece BPE |
| **Expanded KO-Platypus-ex** | 46336 | Sentencepiece BPE. Added Korean vocab and merges |
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**
| Model | Tokens |
| --- | --- |
| Platypus2-7b | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요', '.']` |
| KO-Platypus2-7b-ex | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.']` |
**Tokenizing "Platypus: Quick, Cheap, and Powerful Refinement of LLMs"**
| Model | Tokens |
| --- | --- |
| Platypus2-7b | `['▁Plat', 'yp', 'us', ':', '▁Quick', ',', '▁Che', 'ap', ',', '▁and', '▁Power', 'ful', '▁Re', 'fin', 'ement', '▁of', '▁L', 'LM', 's']` |
| KO-Platypus2-7b-ex | `['▁Plat', 'yp', 'us', ':', '▁Quick', ',', '▁Che', 'ap', ',', '▁and', '▁Power', 'ful', '▁Re', 'fin', 'ement', '▁of', '▁L', 'LM', 's']` |
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot)
> Question Answering (QA)
### COPA (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.5594 | 0.5913 | 0.5863 | 0.5916 |
| **KO-platypus2-7B-EX(ours)** | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
*Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B
> Natural Language Inference (NLI; 자연어 추론 평가)
### HellaSwag (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.4097 | 0.4258 | 0.4358 | 0.4271 |
| **KO-platypus2-7B-EX(ours)** | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
*Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B
> Question Answering (QA)
### BoolQ (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.3419 | 0.6024 | 0.5630 | 0.5461 |
| **KO-platypus2-7B-EX(ours)** | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
*Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B
> Classification
### SentiNeg (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.4098 | 0.7388 | 0.7558 | 0.8129 |
| **KO-platypus2-7B-EX(ours)** | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
*Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/KO-Platypus2-7B-ex"
ko_platypus = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
ko_platypus_tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
---
> Below is the original model card of the Platypus2-13B model.
# Platypus2-13B
Platypus2-13B is an instruction fine-tuned model based on the LLaMA2-13B transformer architecture.

### Benchmark Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 56.70 |
| ARC (25-shot) | 61.26 |
| HellaSwag (10-shot) | 82.56 |
| TruthfulQA (0-shot) | 44.86 |
| Avg. | 61.35 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus2-13B** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
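The template above can be filled programmatically; a minimal sketch (the helper name and example instruction are my own):

```python
# Alpaca-style template from the Platypus2 model card.
PROMPT_TEMPLATE = """### Instruction:

{instruction}

### Response:
"""

def build_prompt(instruction: str) -> str:
    """Insert the user's instruction into the template."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("List three properties of the platypus.")
```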
### Training Dataset
`garage-bAInd/Platypus2-13B` was trained using the STEM and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
### Training Procedure
`garage-bAInd/Platypus2-13B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus2](https://github.com/arielnlee/Platypus) GitHub repo.
### Reproducing Evaluation Results
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# change to repo directory
cd lm-evaluation-harness
# install
pip install -e .
```
Each task was evaluated on 1 A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/truthfulqa_0shot.json --device cuda
```
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
      author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov},
      year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
|
mekaneeky/mbart-acholi-peft-1681480767
|
mekaneeky
| 2023-10-19T13:25:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-17T16:01:03Z |
{
    "epoch": 4.57,
    "eval_BLEU_ach": 26.3774,
    "eval_BLEU_mean": 26.3774,
    "eval_loss": 2.625023365020752,
    "eval_runtime": 48.3807,
    "eval_samples_per_second": 10.335,
    "eval_steps_per_second": 1.302,
    "step": 400
}
|
hung200504/bert-26
|
hung200504
| 2023-10-19T13:22:13Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:deepset/bert-base-cased-squad2",
"base_model:finetune:deepset/bert-base-cased-squad2",
"license:cc-by-4.0",
"region:us"
] | null | 2023-10-19T13:22:11Z |
---
license: cc-by-4.0
base_model: deepset/bert-base-cased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-26
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1774 | 0.18 | 5 | 7.2847 |
| 6.9676 | 0.36 | 10 | 7.1874 |
| 6.9639 | 0.54 | 15 | 7.1658 |
| 6.821 | 0.71 | 20 | 7.1371 |
| 6.7658 | 0.89 | 25 | 7.1082 |
| 6.7752 | 1.07 | 30 | 7.0762 |
| 6.7502 | 1.25 | 35 | 7.0440 |
| 6.7292 | 1.43 | 40 | 7.0080 |
| 6.6645 | 1.61 | 45 | 6.9767 |
| 6.6253 | 1.79 | 50 | 6.9528 |
| 6.6661 | 1.96 | 55 | 6.9324 |
| 6.6808 | 2.14 | 60 | 6.9133 |
| 6.692 | 2.32 | 65 | 6.8985 |
| 6.5486 | 2.5 | 70 | 6.8864 |
| 6.5355 | 2.68 | 75 | 6.8789 |
| 6.5888 | 2.86 | 80 | 6.8756 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mekaneeky/mbart-adalora-1682441687
|
mekaneeky
| 2023-10-19T13:19:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-17T16:03:50Z |
{
    "epoch": 0.98,
    "eval_BLEU_ach": 26.1247,
    "eval_BLEU_lgg": 30.1068,
    "eval_BLEU_lug": 38.8466,
    "eval_BLEU_mean": 31.0935,
    "eval_BLEU_nyn": 29.8222,
    "eval_BLEU_teo": 30.5671,
    "eval_loss": 2.5344443321228027,
    "eval_runtime": 257.6935,
    "eval_samples_per_second": 9.701,
    "eval_steps_per_second": 1.215,
    "step": 320
}
|
kowalsky/Reinforce-PixelCopter
|
kowalsky
| 2023-10-19T13:16:12Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T13:16:08Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.90 +/- 27.14
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mekaneeky/mbart-lugbara-peft-1681896060
|
mekaneeky
| 2023-10-19T13:13:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-17T16:29:38Z |
{
"epoch": 4.97,
"eval_BLEU_lgg": 29.0003,
"eval_BLEU_mean": 29.0003,
"eval_loss": 2.5605862140655518,
"eval_runtime": 50.1969,
"eval_samples_per_second": 9.961,
"eval_steps_per_second": 1.255,
"step": 180
}
|
pfunk/BreakoutNoFrameskip-v4-DQN-seed3
|
pfunk
| 2023-10-19T13:12:42Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T13:12:35Z |
---
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 3.97 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **BreakoutNoFrameskip-v4**
This is a trained model of a DQPN_freq agent playing BreakoutNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN]"
python -m cleanrl_utils.enjoy --exp-name DQN --env-id BreakoutNoFrameskip-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/BreakoutNoFrameskip-v4-DQN-seed3/raw/main/dqpn_freq_atari.py
curl -OL https://huggingface.co/pfunk/BreakoutNoFrameskip-v4-DQN-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/BreakoutNoFrameskip-v4-DQN-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQN --target-network-frequency 1000 --seed 3 --env-id BreakoutNoFrameskip-v4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq_atari.py',
'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'double_learning': False,
'end_e': 0.05,
'env_id': 'BreakoutNoFrameskip-v4',
'exp_name': 'DQN',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 10000,
'max_gradient_norm': inf,
'policy_network_frequency': 5000,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
kaitchup/OPT-350M-RM-DSChat
|
kaitchup
| 2023-10-19T13:12:37Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"opt",
"text-generation",
"en",
"dataset:Dahoas/rm-static",
"dataset:Dahoas/synthetic-instruct-gptj-pairwise",
"dataset:Anthropic/hh-rlhf",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-14T15:09:05Z |
---
license: cc-by-nc-sa-4.0
datasets:
- Dahoas/rm-static
- Dahoas/synthetic-instruct-gptj-pairwise
- Anthropic/hh-rlhf
language:
- en
---
# Model Card for Model ID
This model is a reward model for RLHF, fine-tuned using DeepSpeed Chat.
It is based on OPT-350M.
## Model Details
### Model Description
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Reward model
- **Language(s) (NLP):** English
- **License:** cc-by-nc-sa-4.0
- **Finetuned from model:** [facebook/opt-350m](https://huggingface.co/facebook/opt-350m)
### Model Sources
The model has been trained with the procedure described in this article:
[Train Instruct LLMs On Your GPU with DeepSpeed Chat — Step #2: Training a Reward Model](https://kaitchup.substack.com/p/train-instruct-llms-on-your-gpu-with-1e1)
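As a rough illustration of how such a reward model is typically queried — note that DeepSpeed Chat saves its own reward-model head, so the loading path below (a sequence-classification wrapper with a single output) is an assumption; the DeepSpeed Chat `RewardModel` class may be required instead:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def score(prompt: str, response: str,
          repo: str = "kaitchup/OPT-350M-RM-DSChat") -> float:
    """Return a scalar preference score for a prompt/response pair (sketch)."""
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForSequenceClassification.from_pretrained(repo, num_labels=1)
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()
```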
|
MananSantoki/BT-MODEL-V1-alpha
|
MananSantoki
| 2023-10-19T13:11:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T12:03:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
hung200504/bert-25
|
hung200504
| 2023-10-19T13:00:11Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:deepset/bert-base-cased-squad2",
"base_model:finetune:deepset/bert-base-cased-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-19T12:59:41Z |
---
license: cc-by-4.0
base_model: deepset/bert-base-cased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-25
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.2284 | 0.18 | 5 | 12.3262 |
| 10.9876 | 0.36 | 10 | 12.2748 |
| 11.1442 | 0.54 | 15 | 12.2245 |
| 10.9113 | 0.71 | 20 | 12.1755 |
| 10.8104 | 0.89 | 25 | 12.1267 |
| 10.6362 | 1.07 | 30 | 12.0793 |
| 10.8187 | 1.25 | 35 | 12.0330 |
| 10.7052 | 1.43 | 40 | 11.9875 |
| 10.6594 | 1.61 | 45 | 11.9432 |
| 10.6863 | 1.79 | 50 | 11.8997 |
| 10.7858 | 1.96 | 55 | 11.8569 |
| 10.626 | 2.14 | 60 | 11.8158 |
| 10.4246 | 2.32 | 65 | 11.7756 |
| 10.3939 | 2.5 | 70 | 11.7359 |
| 10.7641 | 2.68 | 75 | 11.6970 |
| 10.341 | 2.86 | 80 | 11.6597 |
| 10.3492 | 3.04 | 85 | 11.6228 |
| 10.797 | 3.21 | 90 | 11.5867 |
| 10.3496 | 3.39 | 95 | 11.5514 |
| 10.1967 | 3.57 | 100 | 11.5177 |
| 10.4702 | 3.75 | 105 | 11.4843 |
| 10.3715 | 3.93 | 110 | 11.4521 |
| 10.1039 | 4.11 | 115 | 11.4213 |
| 10.1126 | 4.29 | 120 | 11.3915 |
| 9.9939 | 4.46 | 125 | 11.3625 |
| 10.1773 | 4.64 | 130 | 11.3342 |
| 10.062 | 4.82 | 135 | 11.3068 |
| 10.2641 | 5.0 | 140 | 11.2806 |
| 10.2323 | 5.18 | 145 | 11.2554 |
| 10.037 | 5.36 | 150 | 11.2309 |
| 10.0938 | 5.54 | 155 | 11.2069 |
| 9.8816 | 5.71 | 160 | 11.1845 |
| 10.124 | 5.89 | 165 | 11.1625 |
| 9.873 | 6.07 | 170 | 11.1416 |
| 9.7348 | 6.25 | 175 | 11.1220 |
| 9.9028 | 6.43 | 180 | 11.1028 |
| 9.997 | 6.61 | 185 | 11.0846 |
| 9.9333 | 6.79 | 190 | 11.0676 |
| 9.9954 | 6.96 | 195 | 11.0511 |
| 10.311 | 7.14 | 200 | 11.0356 |
| 9.7617 | 7.32 | 205 | 11.0213 |
| 10.0068 | 7.5 | 210 | 11.0075 |
| 9.6182 | 7.68 | 215 | 10.9949 |
| 9.7642 | 7.86 | 220 | 10.9835 |
| 9.8524 | 8.04 | 225 | 10.9728 |
| 9.7615 | 8.21 | 230 | 10.9630 |
| 9.7559 | 8.39 | 235 | 10.9542 |
| 9.5819 | 8.57 | 240 | 10.9461 |
| 9.5843 | 8.75 | 245 | 10.9392 |
| 10.05 | 8.93 | 250 | 10.9331 |
| 10.0722 | 9.11 | 255 | 10.9276 |
| 9.665 | 9.29 | 260 | 10.9233 |
| 9.7631 | 9.46 | 265 | 10.9197 |
| 9.7963 | 9.64 | 270 | 10.9172 |
| 9.9692 | 9.82 | 275 | 10.9155 |
| 9.885 | 10.0 | 280 | 10.9150 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
tony4194/ditilbert-spamEmail
|
tony4194
| 2023-10-19T12:55:34Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-17T04:41:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ditilbert-spamEmail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ditilbert-spamEmail
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [enron_spam dataset](https://huggingface.co/datasets/SetFit/enron_spam).
It achieves the following results on the evaluation set:
- Loss: 0.0462
- Accuracy: 0.9925
## Model description
When calling the API, label 0 denotes a ham (legitimate) message and label 1 denotes spam.
## Intended uses & limitations
This model is used for spam email detection powered by distilbert and sequence classification.
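A minimal usage sketch based on the label mapping above — the raw pipeline labels `LABEL_0`/`LABEL_1` and the example text are assumptions:

```python
from transformers import pipeline

LABELS = {"LABEL_0": "ham", "LABEL_1": "spam"}  # 0 = ham, 1 = spam per the card

def readable(prediction: dict) -> str:
    """Map a raw pipeline prediction to a human-readable label."""
    return LABELS.get(prediction["label"], prediction["label"])

def classify(text: str) -> str:
    """Run the fine-tuned classifier on one message and return 'ham' or 'spam'."""
    clf = pipeline("text-classification", model="tony4194/ditilbert-spamEmail")
    return readable(clf(text)[0])
```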
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0307 | 1.0 | 1983 | 0.0561 | 0.989 |
| 0.007 | 2.0 | 3966 | 0.0462 | 0.9925 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
andersonalmada/icc2024
|
andersonalmada
| 2023-10-19T12:54:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-19T12:52:36Z |
Dataset for the experiment with OTel at ICC 2024.
|
matamedar1/Andre_Dussollier_RVC
|
matamedar1
| 2023-10-19T12:41:33Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-10-19T12:37:53Z |
---
license: openrail
---
- v2, 40k
- rmvpe
- 210 Epochs
- French voice
- Dataset: 17 min
|
TheBloke/Falcon-180B-Chat-GGUF
|
TheBloke
| 2023-10-19T12:33:48Z | 35 | 124 |
transformers
|
[
"transformers",
"falcon",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2205.14135",
"arxiv:2306.01116",
"base_model:tiiuae/falcon-180B-chat",
"base_model:finetune:tiiuae/falcon-180B-chat",
"license:unknown",
"region:us"
] | null | 2023-09-07T07:40:26Z |
---
base_model: tiiuae/falcon-180B-chat
datasets:
- tiiuae/falcon-refinedweb
inference: false
language:
- en
- de
- es
- fr
license: unknown
model_creator: Technology Innovation Institute
model_name: Falcon 180B Chat
model_type: falcon
prompt_template: 'User: {prompt}
Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Falcon 180B Chat - GGUF
- Model creator: [Technology Innovation Institute](https://huggingface.co/tiiuae)
- Original model: [Falcon 180B Chat](https://huggingface.co/tiiuae/falcon-180B-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Technology Innovation Institute's Falcon 180B Chat](https://huggingface.co/tiiuae/falcon-180B-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Falcon-180B-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF)
* [Technology Innovation Institute's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/tiiuae/falcon-180B-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Falcon
```
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| falcon-180b-chat.Q2_K.gguf | Q2_K | 2 | 73.97 GB| 76.47 GB | smallest, significant quality loss - not recommended for most purposes |
| falcon-180b-chat.Q3_K_S.gguf | Q3_K_S | 3 | 77.77 GB| 80.27 GB | very small, high quality loss |
| falcon-180b-chat.Q3_K_M.gguf | Q3_K_M | 3 | 85.18 GB| 87.68 GB | very small, high quality loss |
| falcon-180b-chat.Q3_K_L.gguf | Q3_K_L | 3 | 91.99 GB| 94.49 GB | small, substantial quality loss |
| falcon-180b-chat.Q4_0.gguf | Q4_0 | 4 | 101.48 GB| 103.98 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| falcon-180b-chat.Q4_K_S.gguf | Q4_K_S | 4 | 101.48 GB| 103.98 GB | small, greater quality loss |
| falcon-180b-chat.Q4_K_M.gguf | Q4_K_M | 4 | 108.48 GB| 110.98 GB | medium, balanced quality - recommended |
| falcon-180b-chat.Q5_0.gguf | Q5_0 | 5 | 123.80 GB| 126.30 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| falcon-180b-chat.Q5_K_S.gguf | Q5_K_S | 5 | 123.80 GB| 126.30 GB | large, low quality loss - recommended |
| falcon-180b-chat.Q5_K_M.gguf | Q5_K_M | 5 | 130.99 GB| 133.49 GB | large, very low quality loss - recommended |
| falcon-180b-chat.Q6_K.gguf | Q6_K | 6 | 147.52 GB| 150.02 GB | very large, extremely low quality loss |
| falcon-180b-chat.Q8_0.gguf | Q8_0 | 8 | 190.76 GB| 193.26 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
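The "Max RAM required" column in the table is just the file size plus a fixed overhead of about 2.5 GB for context and compute buffers when nothing is offloaded. A rough sketch of that estimate — the 2.5 GB constant is read off the table above, not an official llama.cpp figure:

```python
# Estimate RAM needed to run a GGUF file fully in CPU memory.
# Assumption: the ~2.5 GB overhead is inferred from the table above,
# not an official figure; offloading layers to GPU reduces the total.
OVERHEAD_GB = 2.5

def max_ram_gb(file_size_gb: float, offloaded_fraction: float = 0.0) -> float:
    """RAM = resident model bytes + fixed runtime overhead."""
    resident = file_size_gb * (1.0 - offloaded_fraction)
    return round(resident + OVERHEAD_GB, 2)

print(max_ram_gb(108.48))  # Q4_K_M, no offload -> 110.98
print(max_ram_gb(73.97))   # Q2_K,   no offload -> 76.47
```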
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `falcon-180b-chat.Q6_K.gguf-split-a`
* `falcon-180b-chat.Q6_K.gguf-split-b`
### q8_0
Please download:
* `falcon-180b-chat.Q8_0.gguf-split-a`
* `falcon-180b-chat.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat falcon-180b-chat.Q6_K.gguf-split-* > falcon-180b-chat.Q6_K.gguf && rm falcon-180b-chat.Q6_K.gguf-split-*
cat falcon-180b-chat.Q8_0.gguf-split-* > falcon-180b-chat.Q8_0.gguf && rm falcon-180b-chat.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B falcon-180b-chat.Q6_K.gguf-split-a + falcon-180b-chat.Q6_K.gguf-split-b falcon-180b-chat.Q6_K.gguf
del falcon-180b-chat.Q6_K.gguf-split-a falcon-180b-chat.Q6_K.gguf-split-b
COPY /B falcon-180b-chat.Q8_0.gguf-split-a + falcon-180b-chat.Q8_0.gguf-split-b falcon-180b-chat.Q8_0.gguf
del falcon-180b-chat.Q8_0.gguf-split-a falcon-180b-chat.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Falcon-180B-Chat-GGUF and below it, a specific filename to download, such as: falcon-180b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Falcon-180B-Chat-GGUF falcon-180b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Falcon-180B-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Falcon-180B-Chat-GGUF falcon-180b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m falcon-180b-chat.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "User: {prompt}\nAssistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Falcon-180B-Chat-GGUF", model_file="falcon-180b-chat.Q4_K_M.gguf", model_type="falcon", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Technology Innovation Institute's Falcon 180B Chat
# 🚀 Falcon-180B-Chat
**Falcon-180B-Chat is a 180B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B) and finetuned on a mixture of [Ultrachat](https://huggingface.co/datasets/stingning/ultrachat), [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) and [Airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1). It is made available under the [Falcon-180B TII License](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/LICENSE.txt) and [Acceptable Use Policy](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/ACCEPTABLE_USE_POLICY.txt).**
*Paper coming soon* 😊
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://hf.co/blog/falcon-180b) or this [one](https://huggingface.co/blog/falcon) from the release of the 40B!
Note that since the 180B is larger than what can easily be handled with `transformers`+`accelerate`, we recommend using [Text Generation Inference](https://github.com/huggingface/text-generation-inference).
You will need **at least 400GB of memory** to swiftly run inference with Falcon-180B.
## Why use Falcon-180B-chat?
* ✨ **You are looking for a ready-to-use chat/instruct model based on [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B).**
* **It is the best open-access model currently available, and one of the best models overall.** Falcon-180B outperforms [LLaMA-2](https://huggingface.co/meta-llama/Llama-2-70b-hf), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive license allowing for commercial use**.
💬 **This is a Chat model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-180B](https://huggingface.co/tiiuae/falcon-180b).
💸 **Looking for a smaller, less expensive model?** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) and [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) are Falcon-180B-Chat's little brothers!
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
# Model Card for Falcon-180B-Chat
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** [Falcon-180B TII License](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/LICENSE.txt) and [Acceptable Use Policy](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/ACCEPTABLE_USE_POLICY.txt).
### Model Source
- **Paper:** *coming soon*.
## Uses
See the [acceptable use policy](https://huggingface.co/tiiuae/falcon-180B-chat/blob/main/ACCEPTABLE_USE_POLICY.txt).
### Direct Use
Falcon-180B-Chat has been finetuned on a chat dataset.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-180B-Chat is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpus representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-180B-Chat to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-180b-chat"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
**Falcon-180B-Chat is based on [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B).**
### Training Data
Falcon-180B-Chat is finetuned on a mixture of [Ultrachat](https://huggingface.co/datasets/stingning/ultrachat), [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) and [Airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1).
The data was tokenized with the Falcon tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-180B-Chat is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with two layer norms.
For multiquery, we are using an internal variant which uses independent keys and values per tensor parallel degree.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 80 | |
| `d_model` | 14848 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-180B-Chat was trained on AWS SageMaker, on up to 4,096 A100 40GB GPUs in P4d instances.
#### Software
Falcon-180B-Chat was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon,
title={The Falcon Series of Language Models: Towards Open Frontier Models},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## Contact
[email protected]
<!-- original-model-card end -->
|
malibanekg/llama-2-hotel-reservations
|
malibanekg
| 2023-10-19T12:32:32Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2023-08-30T22:55:33Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
hung200504/bert-23
|
hung200504
| 2023-10-19T12:12:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:deepset/bert-base-cased-squad2",
"base_model:finetune:deepset/bert-base-cased-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-19T12:11:58Z |
---
license: cc-by-4.0
base_model: deepset/bert-base-cased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-23
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.9511 | 0.09 | 5 | 10.5588 |
| 8.6122 | 0.18 | 10 | 8.0465 |
| 6.3959 | 0.27 | 15 | 6.5185 |
| 5.5714 | 0.36 | 20 | 5.9355 |
| 5.2088 | 0.45 | 25 | 5.8452 |
| 5.0174 | 0.55 | 30 | 5.9581 |
| 4.3863 | 0.64 | 35 | 6.1063 |
| 4.2079 | 0.73 | 40 | 6.1976 |
| 4.5909 | 0.82 | 45 | 5.8724 |
| 4.2584 | 0.91 | 50 | 5.5712 |
| 4.2042 | 1.0 | 55 | 5.4376 |
| 3.7625 | 1.09 | 60 | 5.4613 |
| 3.5759 | 1.18 | 65 | 5.5305 |
| 3.6831 | 1.27 | 70 | 5.5329 |
| 3.7596 | 1.36 | 75 | 5.5254 |
| 3.6216 | 1.45 | 80 | 5.5825 |
| 3.769 | 1.55 | 85 | 5.6090 |
| 3.5107 | 1.64 | 90 | 5.6351 |
| 3.3485 | 1.73 | 95 | 5.6501 |
| 3.4216 | 1.82 | 100 | 5.6611 |
| 3.3527 | 1.91 | 105 | 5.7240 |
| 3.2204 | 2.0 | 110 | 5.8332 |
| 2.9853 | 2.09 | 115 | 5.8772 |
| 3.207 | 2.18 | 120 | 5.8846 |
| 3.4566 | 2.27 | 125 | 5.8788 |
| 3.1248 | 2.36 | 130 | 5.8898 |
| 3.0917 | 2.45 | 135 | 5.9108 |
| 3.1331 | 2.55 | 140 | 5.9545 |
| 2.9234 | 2.64 | 145 | 5.9664 |
| 3.0005 | 2.73 | 150 | 5.9582 |
| 3.4196 | 2.82 | 155 | 5.9526 |
| 3.2783 | 2.91 | 160 | 5.9486 |
| 3.1719 | 3.0 | 165 | 5.9468 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
kacper-cierzniewski/daigram_detr_r50_albumentations
|
kacper-cierzniewski
| 2023-10-19T12:10:02Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:bpmn-shapes",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-13T13:04:11Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- bpmn-shapes
model-index:
- name: daigram_detr_r50_albumentations
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daigram_detr_r50_albumentations
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the bpmn-shapes dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8163 | 2.63 | 50 | 3.0660 |
| 2.9036 | 5.26 | 100 | 2.8878 |
| 2.7516 | 7.89 | 150 | 2.8043 |
| 2.6278 | 10.53 | 200 | 2.6820 |
| 2.4806 | 13.16 | 250 | 2.5676 |
| 2.3781 | 15.79 | 300 | 2.4282 |
| 2.253 | 18.42 | 350 | 2.3161 |
| 2.1405 | 21.05 | 400 | 2.1735 |
| 2.0263 | 23.68 | 450 | 2.0909 |
| 1.9732 | 26.32 | 500 | 2.0120 |
| 1.8647 | 28.95 | 550 | 1.9260 |
| 1.7793 | 31.58 | 600 | 1.8655 |
| 1.7706 | 34.21 | 650 | 1.8166 |
| 1.6792 | 36.84 | 700 | 1.7325 |
| 1.5654 | 39.47 | 750 | 1.7061 |
| 1.5802 | 42.11 | 800 | 1.6463 |
| 1.5053 | 44.74 | 850 | 1.5985 |
| 1.4858 | 47.37 | 900 | 1.6060 |
| 1.4186 | 50.0 | 950 | 1.5563 |
| 1.4391 | 52.63 | 1000 | 1.5219 |
| 1.3938 | 55.26 | 1050 | 1.4995 |
| 1.3734 | 57.89 | 1100 | 1.4661 |
| 1.3379 | 60.53 | 1150 | 1.4451 |
| 1.341 | 63.16 | 1200 | 1.4854 |
| 1.3647 | 65.79 | 1250 | 1.4509 |
| 1.3198 | 68.42 | 1300 | 1.4116 |
| 1.3054 | 71.05 | 1350 | 1.3821 |
| 1.2945 | 73.68 | 1400 | 1.3952 |
| 1.2899 | 76.32 | 1450 | 1.3868 |
| 1.2533 | 78.95 | 1500 | 1.3580 |
| 1.2655 | 81.58 | 1550 | 1.3374 |
| 1.2649 | 84.21 | 1600 | 1.3451 |
| 1.2286 | 86.84 | 1650 | 1.2973 |
| 1.2497 | 89.47 | 1700 | 1.3322 |
| 1.2456 | 92.11 | 1750 | 1.3289 |
| 1.2234 | 94.74 | 1800 | 1.3080 |
| 1.1695 | 97.37 | 1850 | 1.3218 |
| 1.2265 | 100.0 | 1900 | 1.3280 |
| 1.1899 | 102.63 | 1950 | 1.2834 |
| 1.1914 | 105.26 | 2000 | 1.2931 |
| 1.1698 | 107.89 | 2050 | 1.3176 |
| 1.177 | 110.53 | 2100 | 1.2896 |
| 1.1625 | 113.16 | 2150 | 1.2936 |
| 1.1626 | 115.79 | 2200 | 1.2614 |
| 1.1698 | 118.42 | 2250 | 1.2545 |
| 1.1703 | 121.05 | 2300 | 1.2398 |
| 1.1659 | 123.68 | 2350 | 1.2254 |
| 1.1734 | 126.32 | 2400 | 1.2489 |
| 1.1234 | 128.95 | 2450 | 1.2072 |
| 1.1464 | 131.58 | 2500 | 1.1707 |
| 1.1268 | 134.21 | 2550 | 1.1971 |
| 1.1511 | 136.84 | 2600 | 1.2247 |
| 1.1234 | 139.47 | 2650 | 1.1921 |
| 1.0923 | 142.11 | 2700 | 1.1751 |
| 1.1267 | 144.74 | 2750 | 1.1905 |
| 1.1021 | 147.37 | 2800 | 1.1885 |
| 1.1075 | 150.0 | 2850 | 1.1780 |
| 1.1116 | 152.63 | 2900 | 1.1666 |
| 1.0987 | 155.26 | 2950 | 1.1694 |
| 1.0974 | 157.89 | 3000 | 1.1931 |
| 1.0867 | 160.53 | 3050 | 1.1461 |
| 1.1076 | 163.16 | 3100 | 1.1501 |
| 1.0912 | 165.79 | 3150 | 1.1611 |
| 1.0671 | 168.42 | 3200 | 1.1718 |
| 1.0981 | 171.05 | 3250 | 1.1961 |
| 1.0602 | 173.68 | 3300 | 1.1786 |
| 1.0305 | 176.32 | 3350 | 1.1640 |
| 1.0647 | 178.95 | 3400 | 1.1416 |
| 1.0628 | 181.58 | 3450 | 1.1296 |
| 1.0856 | 184.21 | 3500 | 1.1140 |
| 1.0626 | 186.84 | 3550 | 1.1214 |
| 1.0782 | 189.47 | 3600 | 1.1449 |
| 1.0601 | 192.11 | 3650 | 1.1441 |
| 1.0906 | 194.74 | 3700 | 1.1396 |
| 1.0376 | 197.37 | 3750 | 1.1271 |
| 1.0625 | 200.0 | 3800 | 1.1397 |
| 1.057 | 202.63 | 3850 | 1.1121 |
| 1.0448 | 205.26 | 3900 | 1.1376 |
| 1.0747 | 207.89 | 3950 | 1.1475 |
| 1.0605 | 210.53 | 4000 | 1.0916 |
| 1.0344 | 213.16 | 4050 | 1.1001 |
| 1.0443 | 215.79 | 4100 | 1.0976 |
| 1.0202 | 218.42 | 4150 | 1.1240 |
| 1.078 | 221.05 | 4200 | 1.1024 |
| 1.0251 | 223.68 | 4250 | 1.0793 |
| 1.0353 | 226.32 | 4300 | 1.1153 |
| 1.0047 | 228.95 | 4350 | 1.0972 |
| 1.0143 | 231.58 | 4400 | 1.0948 |
| 1.0172 | 234.21 | 4450 | 1.1265 |
| 1.0299 | 236.84 | 4500 | 1.1038 |
| 0.9968 | 239.47 | 4550 | 1.0901 |
| 1.0233 | 242.11 | 4600 | 1.0945 |
| 0.9943 | 244.74 | 4650 | 1.0918 |
| 1.0321 | 247.37 | 4700 | 1.1270 |
| 1.0113 | 250.0 | 4750 | 1.1060 |
| 1.0229 | 252.63 | 4800 | 1.0859 |
| 0.9945 | 255.26 | 4850 | 1.0875 |
| 1.0073 | 257.89 | 4900 | 1.0976 |
| 1.0096 | 260.53 | 4950 | 1.0933 |
| 1.0 | 263.16 | 5000 | 1.0821 |
| 1.0326 | 265.79 | 5050 | 1.0747 |
| 0.997 | 268.42 | 5100 | 1.0931 |
| 1.0056 | 271.05 | 5150 | 1.0853 |
| 0.9858 | 273.68 | 5200 | 1.0945 |
| 1.0005 | 276.32 | 5250 | 1.0669 |
| 1.0217 | 278.95 | 5300 | 1.0497 |
| 0.9777 | 281.58 | 5350 | 1.0672 |
| 0.9888 | 284.21 | 5400 | 1.0844 |
| 0.9662 | 286.84 | 5450 | 1.0524 |
| 1.0029 | 289.47 | 5500 | 1.0519 |
| 0.984 | 292.11 | 5550 | 1.0538 |
| 0.9724 | 294.74 | 5600 | 1.0524 |
| 0.991 | 297.37 | 5650 | 1.0553 |
| 0.9936 | 300.0 | 5700 | 1.0601 |
| 0.9817 | 302.63 | 5750 | 1.0524 |
| 0.9868 | 305.26 | 5800 | 1.0644 |
| 0.9982 | 307.89 | 5850 | 1.0523 |
| 0.9814 | 310.53 | 5900 | 1.0611 |
| 0.9761 | 313.16 | 5950 | 1.0505 |
| 0.9507 | 315.79 | 6000 | 1.0361 |
| 0.9786 | 318.42 | 6050 | 1.0275 |
| 0.9684 | 321.05 | 6100 | 1.0292 |
| 0.9759 | 323.68 | 6150 | 1.0529 |
| 0.9442 | 326.32 | 6200 | 1.0689 |
| 0.9653 | 328.95 | 6250 | 1.0696 |
| 0.9579 | 331.58 | 6300 | 1.0572 |
| 1.0016 | 334.21 | 6350 | 1.0660 |
| 0.9462 | 336.84 | 6400 | 1.0525 |
| 0.9596 | 339.47 | 6450 | 1.0505 |
| 0.9655 | 342.11 | 6500 | 1.0514 |
| 0.9713 | 344.74 | 6550 | 1.0616 |
| 0.952 | 347.37 | 6600 | 1.0497 |
| 0.9433 | 350.0 | 6650 | 1.0389 |
| 0.9619 | 352.63 | 6700 | 1.0404 |
| 0.9594 | 355.26 | 6750 | 1.0332 |
| 0.9586 | 357.89 | 6800 | 1.0323 |
| 0.9582 | 360.53 | 6850 | 1.0294 |
| 0.9437 | 363.16 | 6900 | 1.0329 |
| 0.9585 | 365.79 | 6950 | 1.0361 |
| 0.9661 | 368.42 | 7000 | 1.0428 |
| 0.9603 | 371.05 | 7050 | 1.0299 |
| 0.9619 | 373.68 | 7100 | 1.0416 |
| 0.9766 | 376.32 | 7150 | 1.0471 |
| 0.9547 | 378.95 | 7200 | 1.0498 |
| 0.967 | 381.58 | 7250 | 1.0318 |
| 0.9463 | 384.21 | 7300 | 1.0238 |
| 0.9531 | 386.84 | 7350 | 1.0329 |
| 0.9342 | 389.47 | 7400 | 1.0354 |
| 0.939 | 392.11 | 7450 | 1.0312 |
| 0.9635 | 394.74 | 7500 | 1.0325 |
| 0.9261 | 397.37 | 7550 | 1.0245 |
| 0.962 | 400.0 | 7600 | 1.0381 |
| 0.9385 | 402.63 | 7650 | 1.0243 |
| 0.9422 | 405.26 | 7700 | 1.0235 |
| 0.9285 | 407.89 | 7750 | 1.0286 |
| 0.9598 | 410.53 | 7800 | 1.0353 |
| 0.9529 | 413.16 | 7850 | 1.0361 |
| 0.928 | 415.79 | 7900 | 1.0316 |
| 0.935 | 418.42 | 7950 | 1.0263 |
| 0.9456 | 421.05 | 8000 | 1.0368 |
| 0.9387 | 423.68 | 8050 | 1.0440 |
| 0.9321 | 426.32 | 8100 | 1.0440 |
| 0.9236 | 428.95 | 8150 | 1.0394 |
| 0.9448 | 431.58 | 8200 | 1.0467 |
| 0.9151 | 434.21 | 8250 | 1.0516 |
| 0.9373 | 436.84 | 8300 | 1.0383 |
| 0.9577 | 439.47 | 8350 | 1.0190 |
| 0.9199 | 442.11 | 8400 | 1.0215 |
| 0.9321 | 444.74 | 8450 | 1.0184 |
| 0.9387 | 447.37 | 8500 | 1.0236 |
| 0.9382 | 450.0 | 8550 | 1.0259 |
| 0.9391 | 452.63 | 8600 | 1.0282 |
| 0.9392 | 455.26 | 8650 | 1.0193 |
| 0.9438 | 457.89 | 8700 | 1.0124 |
| 0.9398 | 460.53 | 8750 | 1.0060 |
| 0.9246 | 463.16 | 8800 | 1.0140 |
| 0.9383 | 465.79 | 8850 | 1.0145 |
| 0.9267 | 468.42 | 8900 | 1.0122 |
| 0.9253 | 471.05 | 8950 | 1.0144 |
| 0.9238 | 473.68 | 9000 | 1.0065 |
| 0.9082 | 476.32 | 9050 | 1.0136 |
| 0.9287 | 478.95 | 9100 | 1.0120 |
| 0.9161 | 481.58 | 9150 | 1.0120 |
| 0.9093 | 484.21 | 9200 | 1.0128 |
| 0.9264 | 486.84 | 9250 | 1.0125 |
| 0.9487 | 489.47 | 9300 | 1.0131 |
| 0.9398 | 492.11 | 9350 | 1.0101 |
| 0.9039 | 494.74 | 9400 | 1.0090 |
| 0.908 | 497.37 | 9450 | 1.0097 |
| 0.944 | 500.0 | 9500 | 1.0088 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
juierror/q-FrozenLake-v1-4x4-noSlippery
|
juierror
| 2023-10-19T12:06:40Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T12:06:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="juierror/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Akash24/my_awesome_model
|
Akash24
| 2023-10-19T12:02:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-19T02:20:37Z |
---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6853
- Accuracy: 0.8047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9181 | 1.0 | 1647 | 0.8468 | 0.7701 |
| 0.7073 | 2.0 | 3294 | 0.6853 | 0.8047 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
dlhw/setFit-fewShot
|
dlhw
| 2023-10-19T11:59:15Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-10-19T11:58:54Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dlhw/setFit-fewShot
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dlhw/setFit-fewShot")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
DenysKlypkin/LunarLander
|
DenysKlypkin
| 2023-10-19T11:48:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T11:48:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.87 +/- 15.32
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ricecake/Codellama-Pygmalion-LoRA-Test
|
ricecake
| 2023-10-19T11:45:06Z | 4 | 1 |
peft
|
[
"peft",
"dataset:PygmalionAI/PIPPA",
"dataset:usernamedesu/pyg_dataset_markdown",
"base_model:codellama/CodeLlama-34b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-34b-Instruct-hf",
"region:us"
] | null | 2023-09-01T06:53:32Z |
---
library_name: peft
datasets:
- PygmalionAI/PIPPA
- usernamedesu/pyg_dataset_markdown
base_model: codellama/CodeLlama-34b-Instruct-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
slava-medvedev/ppo-Huggy
|
slava-medvedev
| 2023-10-19T11:43:43Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-19T11:43:28Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: slava-medvedev/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
krishna-shinde/reinforce-model
|
krishna-shinde
| 2023-10-19T11:43:13Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T11:41:57Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-model
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sdinger/vit-base-patch16-224-in21k-finetuned-lora-food101
|
sdinger
| 2023-10-19T11:34:18Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-10-19T11:28:18Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
krishna-shinde/dqn-SpaceInvadersNoFrameskip-v4
|
krishna-shinde
| 2023-10-19T11:28:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T07:29:22Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 544.50 +/- 150.90
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga krishna-shinde -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga krishna-shinde -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga krishna-shinde
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
bpd1997/falcon7binstruct_mentalhealthmodel_oct23
|
bpd1997
| 2023-10-19T11:04:44Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:finetune:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] | null | 2023-10-16T08:38:21Z |
---
license: apache-2.0
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
tags:
- generated_from_trainer
model-index:
- name: falcon7binstruct_mentalhealthmodel_oct23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7binstruct_mentalhealthmodel_oct23
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 180
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
phbruce/haversine-distance
|
phbruce
| 2023-10-19T11:02:16Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-10-19T11:02:16Z |
---
license: mit
---
# Haversine Distance
Consider the Earth as a perfect sphere. If we wanted to measure the distance between two points on this sphere, we would simply draw a straight line between the coordinates φ1λ1 and φ2λ2, right?
Actually, not exactly. When calculating this distance, we must account for the central angle θ subtended at the sphere's center by the radii to the two points; ignoring it would amount to treating the Earth as a flat surface. Recognizing this complexity, mathematicians developed the Haversine Formula over the years, based on astronomical observations. The formula calculates the shortest distance between two points on a sphere using its radius. Nowadays, it is widely used in navigation and geolocation applications, as it allows for precise calculations of distances on the Earth's surface.

> φ1λ1 represents the latitude and longitude of the first coordinate, and φ2λ2 represents the latitude and longitude of the second coordinate.
To understand the concept of Haversine, visualize a sphere and a triangle inscribed in it, whose vertex is located at the center of the sphere (O) and whose base connects the points φ1λ1 and φ2λ2 on the surface of the sphere. From O, draw a segment that extends to touch the surface of the sphere at a specific angle, known as the central angle (θ). The distance between the center of the sphere (O) and the point where this segment touches the surface is related to the Versine function.
The Haversine is an essential tool for calculating the orthodromic distance, which is the shortest distance between two points on the surface of a sphere. This distance is determined between the points φ1λ1 and φ2λ2 and is expressed using the trigonometric sine function. Essentially, haversine(θ) is half of versine(θ). The reason for using the Haversine instead of the Versine is that it provides more accurate calculations for short distances: when the distance between φ1λ1 and φ2λ2 is very small, cos(θ) approaches 1, so computing versine(θ) = 1 − cos(θ) loses precision to floating-point cancellation. The Haversine, evaluated directly as sin²(θ/2), remains accurate for such small distances.
> Essentially, the Haversine is half of a Versine.
Given this explanation, the central angle (θ) is given by the relationship of the distance between the two points on the sphere divided by the radius of the sphere:
```python
def central_angle(d, r):
"""
Calculates the central angle of a circle given the distance and
radius.
Args:
d (float): Distance
r (float): Radius
Returns:
float: Central angle
"""
return d / r
```
Where:
- d is the distance between the two points on the sphere;
- r is the radius of the sphere.
You can see in Image 1 that the value of versine(θ) is 1-cos(θ) on the trigonometric circle, and this equates to 2sin^2(θ/2) using the trigonometric identities. As previously mentioned, the haversine(θ) will be exactly half of this (sin^2(θ/2)):
```python
import math
def hav(θ):
"""
Calculates the haversine of an angle.
Args:
θ (float): Angle in radians
Returns:
float: Haversine of angle.
"""
return pow(math.sin(θ / 2), 2)
```
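A quick numerical check (an illustrative snippet, not part of the derivation) confirms the identity haversine(θ) = (1 − cos θ)/2, i.e. half the versine, at a few sample angles:

```python
import math

def hav(θ):
    """Haversine of an angle in radians: sin^2(θ/2)."""
    return pow(math.sin(θ / 2), 2)

# Verify hav(θ) == (1 - cos(θ)) / 2 at a few sample angles.
for θ in (0.0, 0.5, 1.0, math.pi / 2, math.pi):
    half_versine = (1 - math.cos(θ)) / 2
    assert math.isclose(hav(θ), half_versine, abs_tol=1e-12)
```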
As we need the haversine(θ) in a spherical system, it will be necessary to bring in the coordinates of the points φ1λ1 and φ2λ2 and calculate the haversine of the central difference between the two geographic points:
```python
import math

def haversine_difference(φ1, φ2, λ1, λ2):
"""
Calculates the haversine of the central difference between
two geographic points.
Args:
φ1 (float): Latitude of first point in radians.
φ2 (float): Latitude of second point in radians.
λ1 (float): Longitude of first point in radians.
λ2 (float): Longitude of second point in radians.
Returns:
float: Haversine of the central difference.
"""
return hav(φ2 - φ1) + (math.cos(φ1) * math.cos(φ2) * hav(λ2 - λ1))
```
Where:
- φ1 and φ2 are the latitudes of each point in radians;
- λ1 and λ2 are the longitudes of each point in radians.
Now that we have the main tools to calculate the central angle and the haversine of the difference between the two geographic points, we can compute the distance between the two coordinates using the inverse sine function (arcsin):
```python
import math
def haversine(φ1, φ2, λ1, λ2, rad=6371):
"""
Calculates the distance between two points on the Earth's
surface given their latitude and longitude in degrees.
Args:
φ1 (float): Latitude of first point in degrees.
φ2 (float): Latitude of second point in degrees.
λ1 (float): Longitude of first point in degrees.
λ2 (float): Longitude of second point in degrees.
rad (int): Radius of the Earth in the desired units (default is 6371 km).
Returns:
float: Distance between the two points in the units corresponding to the provided Earth's radius.
"""
φ1, φ2 = math.radians(φ1), math.radians(φ2)
λ1, λ2 = math.radians(λ1), math.radians(λ2)
central_angle_hav = haversine_difference(φ1, φ2, λ1, λ2)
return 2 * rad * math.asin(math.sqrt(central_angle_hav))
```
Where:
- φ1 and φ2 are the latitudes of each point in radians;
- λ1 and λ2 are the longitudes of each point in radians;
- rad is the radius of the perfect sphere.
Note:
Notice that we are converting the angles of the coordinates into radians, as we are computing trigonometric functions in the radian system.
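Putting the pieces together, here is a self-contained sketch that estimates the great-circle distance between two real-world coordinates. Paris and London are used purely as illustrative inputs, and their coordinates below are approximate:

```python
import math

def hav(θ):
    """Haversine of an angle in radians."""
    return math.sin(θ / 2) ** 2

def haversine_difference(φ1, φ2, λ1, λ2):
    """Haversine of the central angle between two points (radians)."""
    return hav(φ2 - φ1) + math.cos(φ1) * math.cos(φ2) * hav(λ2 - λ1)

def haversine(φ1, φ2, λ1, λ2, rad=6371):
    """Great-circle distance between two lat/lon pairs given in degrees."""
    φ1, φ2 = math.radians(φ1), math.radians(φ2)
    λ1, λ2 = math.radians(λ1), math.radians(λ2)
    return 2 * rad * math.asin(math.sqrt(haversine_difference(φ1, φ2, λ1, λ2)))

# Approximate coordinates: Paris (48.86°N, 2.35°E) and London (51.51°N, 0.13°W).
d = haversine(48.8566, 51.5074, 2.3522, -0.1278)
print(f"Paris–London: {d:.1f} km")  # on the order of 340 km
```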
## Real-world Applications
The ability to accurately calculate the distance between two points on the Earth's surface has various uses, such as: GPS navigation, logistics and transport, aviation, geological studies, environmental research, tourism, and more.
## Some Considerations
As many of us know, Earth does not have the exact shape of a perfect sphere; it more closely resembles an oblate spheroid, with significant variations due to terrain, gravity, among other factors. The "Haversine distance" method provides us with an approximation of the real distance between two points on the Earth's surface and is sufficiently accurate for many applications that don't require extreme precision. If we need more accurate calculations, we can turn to other methods and formulas, like the Vincenty formula, and in even more specific situations, the Earth Gravitational Model (EGM).
|
TheBloke/MistralLite-7B-GGUF
|
TheBloke
| 2023-10-19T10:58:58Z | 500 | 40 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"base_model:amazon/MistralLite",
"base_model:quantized:amazon/MistralLite",
"license:apache-2.0",
"region:us"
] | null | 2023-10-19T10:55:29Z |
---
base_model: amazon/MistralLite
inference: false
license: apache-2.0
model_creator: Amazon Web Services
model_name: MistralLite 7B
model_type: mistral
prompt_template: '<|prompter|>{prompt}</s><|assistant|>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MistralLite 7B - GGUF
- Model creator: [Amazon Web Services](https://huggingface.co/amazon)
- Original model: [MistralLite 7B](https://huggingface.co/amazon/MistralLite)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Amazon Web Services's MistralLite 7B](https://huggingface.co/amazon/MistralLite).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MistralLite-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MistralLite-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MistralLite-7B-GGUF)
* [Amazon Web Services's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/amazon/MistralLite)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Amazon
```
<|prompter|>{prompt}</s><|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistrallite.Q2_K.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistrallite.Q3_K_S.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [mistrallite.Q3_K_M.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistrallite.Q3_K_L.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistrallite.Q4_0.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistrallite.Q4_K_S.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistrallite.Q4_K_M.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistrallite.Q5_0.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistrallite.Q5_K_S.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistrallite.Q5_K_M.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistrallite.Q6_K.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistrallite.Q8_0.gguf](https://huggingface.co/TheBloke/MistralLite-7B-GGUF/blob/main/mistrallite.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
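For a rough sense of how offloading shifts the numbers: every "Max RAM required" figure in the table is the file size plus about 2.5 GB of runtime overhead, and offloading layers moves roughly a proportional share of the file from RAM to VRAM. A back-of-the-envelope sketch (the 2.5 GB constant is read off this table and the 32-layer count is Mistral-7B's; neither is a llama.cpp guarantee):

```python
def est_ram_gb(file_size_gb, layers_offloaded=0, total_layers=32, overhead_gb=2.5):
    """Rough RAM estimate for a GGUF model with partial GPU offloading."""
    cpu_fraction = 1 - layers_offloaded / total_layers
    return round(file_size_gb * cpu_fraction + overhead_gb, 2)

print(est_ram_gb(4.37))      # Q4_K_M fully on CPU -> 6.87, matching the table
print(est_ram_gb(4.37, 32))  # all 32 layers offloaded -> 2.5 (overhead only)
```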
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MistralLite-7B-GGUF and below it, a specific filename to download, such as: mistrallite.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MistralLite-7B-GGUF mistrallite.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MistralLite-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MistralLite-7B-GGUF mistrallite.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mistrallite.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>{prompt}</s><|assistant|>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MistralLite-7B-GGUF", model_file="mistrallite.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Amazon Web Services's MistralLite 7B
# MistralLite Model
MistralLite is a fine-tuned [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) language model with enhanced capabilities for processing long context (up to 32K tokens). By utilizing an adapted Rotary Embedding and sliding window during fine-tuning, MistralLite **performs significantly better on several long-context retrieval and answering tasks** while keeping the simple structure of the original model. MistralLite is useful for applications such as long-context line and topic retrieval, summarization, and question answering. It can be deployed on a single AWS `g5.2x` instance with a SageMaker [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) endpoint, making it suitable for applications that require high performance in resource-constrained environments. You can also serve MistralLite directly using TGI Docker containers, and it supports other serving options such as [vLLM](https://github.com/vllm-project/vllm). In Python, you can use MistralLite with the [HuggingFace transformers](https://huggingface.co/docs/transformers/index) and [FlashAttention-2](https://github.com/Dao-AILab/flash-attention) libraries.
MistralLite is similar to [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), and their similarities and differences are summarized below:
|Model|Fine-tuned on long contexts| Max context length| RotaryEmbedding adaptation| Sliding Window Size|
|----------|-------------:|------------:|-----------:|-----------:|
| Mistral-7B-Instruct-v0.1 | up to 8K tokens | 32K | rope_theta = 10000 | 4096 |
| MistralLite | up to 16K tokens | 32K | **rope_theta = 1000000** | **16384** |
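The `rope_theta` change is key to the longer context: in rotary embeddings, each dimension pair rotates at frequency `theta ** (-2i / d)`, so a larger base slows the low-frequency pairs and keeps distant positions distinguishable. A minimal sketch of the standard formula (assuming Mistral's head dimension of 128; this is illustrative, not MistralLite's actual implementation):

```python
# Standard RoPE inverse frequencies: inv_freq[i] = theta ** (-2i / head_dim).
def rope_inv_freq(theta, head_dim=128):
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

base = rope_inv_freq(10_000)     # Mistral-7B-Instruct-v0.1
lite = rope_inv_freq(1_000_000)  # MistralLite
# The slowest dimension pair rotates far more slowly with the larger base,
# stretching the usable positional range.
print(base[-1], lite[-1])
```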
## Motivation of Developing MistralLite
Since the release of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), the model has become increasingly popular because of its strong performance
on a wide range of benchmarks. But most of those benchmarks are evaluated on `short context`, and little has been investigated about its performance on long-context tasks.
We then evaluated `Mistral-7B-Instruct-v0.1` against benchmarks specifically designed to assess the capabilities of LLMs in handling longer contexts.
Although the model's performance was fairly competitive on contexts shorter than 4096 tokens,
there were some limitations at longer context lengths. Motivated by improving its performance on longer contexts, we fine-tuned the Mistral 7B model and produced `MistralLite`. The model managed to `significantly boost the performance of long context handling` over Mistral-7B-Instruct-v0.1. The detailed `long context evaluation results` are as below:
1. [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
|Model Name|Input length| Input length | Input length| Input length| Input length|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|
| | 2851 | 5568 | 8313 | 11044 | 13780 |
| Mistral-7B-Instruct-v0.1 | 100% | 50% | 2% | 0% | 0% |
| MistralLite | **100%** | **100%** | **100%** | **100%** | **98%** |
2. [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results)
|Model Name|Input length| Input length | Input length| Input length| Input length|Input length|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:|
| | 3818 | 5661 | 7505 | 9354 | 11188 | 12657 |
| Mistral-7B-Instruct-v0.1 | **98%** | 62% | 42% | 42% | 32% | 30% |
| MistralLite | **98%** | **92%** | **88%** | **76%** | **70%** | **60%** |
3. [Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101)
|Model Name|Input length| Input length | Input length| Input length|
|----------|-------------:|-------------:|------------:|-----------:|
| | 3264| 5396 |8329 | 10197 |
| Mistral-7B-Instruct-v0.1 | **100%** | 50% | 20% | 30% |
| MistralLite | **100%** | **100%** | **100%** | **100%** |
4. [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/)
|Model Name| Test set Accuracy | Hard subset Accuracy|
|----------|-------------:|-------------:|
| Mistral-7B-Instruct-v0.1 | 44.3% | 39.7% |
| MistralLite | **64.4%** | **56.2%** |
### Example test on long context of 13400 tokens
- Context is from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/)
- Question: *please tell me how does pgvector help with Generative AI and give me some examples.*
- Answer from MistralLite:
```code
pgvector is an open-source extension for PostgreSQL supported by Amazon Aurora PostgreSQL-Compatible Edition.
You can use pgvector to store, search, index, and query billions of embeddings that are generated from machine learning (ML) and artificial intelligence (AI) models in your database, such as those from Amazon Bedrock (limited preview) or Amazon SageMaker. A vector embedding is a numerical representation that represents the semantic meaning of content such as text, images, and video.
With pgvector, you can query embeddings in your Aurora PostgreSQL database to perform efficient semantic similarity searches of these data types, represented as vectors, combined with other tabular data in Aurora. This enables the use of generative AI and other AI/ML systems for new types of applications such as personalized recommendations based on similar text descriptions or images, candidate match based on interview notes, customer service next best action recommendations based on successful transcripts or chat session dialogs, and more.
```
## Model Details
- **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac)
- **Model type:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Language:** English
- **Finetuned from weights:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Finetuned on data:**
- [SLidingEncoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled)
- [(Long) Natural Questions (NQ)](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections#multi-passage-qa-from-natural-questions)
- [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- **Supported Serving Framework:**
- [Text-Generation-Inference 1.1.0](https://github.com/huggingface/text-generation-inference/tree/v1.1.0)
- [vLLM](https://github.com/vllm-project/vllm)
- [HuggingFace transformers](https://huggingface.co/docs/transformers/index)
- [HuggingFace Text Generation Inference (TGI) container on SageMaker](https://github.com/awslabs/llm-hosting-container)
- **Model License:** Apache 2.0
- **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues)
- **Inference Code** [Github Repo](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/)
## How to Use MistralLite from Python Code (HuggingFace transformers) ##
**Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/huggingface-transformers/example_usage.ipynb).
### Install the necessary packages
Requires: [transformers](https://pypi.org/project/transformers/) 4.34.0 or later, [flash-attn](https://pypi.org/project/flash-attn/) 2.3.1.post1 or later,
and [accelerate](https://pypi.org/project/accelerate/) 0.23.0 or later.
```shell
pip install transformers==4.34.0
pip install flash-attn==2.3.1.post1 --no-build-isolation
pip install accelerate==0.23.0
```
### You can then try the following example code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch
model_id = "amazon/MistralLite"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
torch_dtype=torch.bfloat16,
use_flash_attention_2=True,
device_map="auto",)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>"
sequences = pipeline(
prompt,
max_new_tokens=400,
do_sample=False,
return_full_text=False,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"{seq['generated_text']}")
```
**Important** - Use the prompt template below for MistralLite:
```
<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>
```
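A tiny helper keeps the template from drifting between call sites (illustrative only; the special-token strings are exactly those shown above):

```python
def build_mistrallite_prompt(user_message: str) -> str:
    """Wrap a user message in MistralLite's prompt template."""
    return f"<|prompter|>{user_message}</s><|assistant|>"

print(build_mistrallite_prompt(
    "What are the main challenges to support a long context for LLM?"
))
```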
## How to Serve MistralLite on TGI ##
**Important:**
- For an end-to-end example Jupyter notebook using the native TGI container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi/example_usage.ipynb).
- If the **input context length is greater than 12K tokens**, it is recommended to use a custom TGI container; please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/tgi-custom/example_usage.ipynb).
### Start TGI server ###
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
docker run -d --gpus all --shm-size 1g -p 443:80 -v $(pwd)/models:/data ghcr.io/huggingface/text-generation-inference:1.1.0 \
--model-id amazon/MistralLite \
--max-input-length 16000 \
--max-total-tokens 16384 \
--max-batch-prefill-tokens 16384 \
--trust-remote-code
```
### Perform Inference ###
Example Python code for inference with TGI (requires `text_generation` 0.6.1 or later):
```shell
pip install text_generation==0.6.1
```
```python
from text_generation import Client
SERVER_PORT = 443
SERVER_HOST = "localhost"
SERVER_URL = f"{SERVER_HOST}:{SERVER_PORT}"
tgi_client = Client(f"http://{SERVER_URL}", timeout=60)
def invoke_tgi(prompt,
random_seed=1,
max_new_tokens=400,
print_stream=True,
assist_role=True):
if (assist_role):
prompt = f"<|prompter|>{prompt}</s><|assistant|>"
output = ""
for response in tgi_client.generate_stream(
prompt,
do_sample=False,
max_new_tokens=max_new_tokens,
return_full_text=False,
#temperature=None,
#truncate=None,
#seed=random_seed,
#typical_p=0.2,
):
if hasattr(response, "token"):
if not response.token.special:
snippet = response.token.text
output += snippet
if (print_stream):
print(snippet, end='', flush=True)
return output
prompt = "What are the main challenges to support a long context for LLM?"
result = invoke_tgi(prompt)
```
**Important** - When using MistralLite for inference for the first time, it may require a brief 'warm-up' period that can take tens of seconds. However, subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and should not affect the overall performance of the system once the initialisation period has been completed.
## How to Deploy MistralLite on Amazon SageMaker ##
**Important:**
- For an end-to-end example Jupyter notebook using the SageMaker built-in container, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi/example_usage.ipynb).
- If the **input context length is greater than 12K tokens**, it is recommended to use a custom Docker container; please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/sagemaker-tgi-custom/example_usage.ipynb).
### Install the necessary packages
Requires: [sagemaker](https://pypi.org/project/sagemaker/) 2.192.1 or later.
```shell
pip install sagemaker==2.192.1
```
### Deploy the Model as A SageMaker Endpoint ###
To deploy MistralLite on a SageMaker endpoint, please follow the example code below.
```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
import time
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()
image_uri = get_huggingface_llm_image_uri(
backend="huggingface", # or lmi
region=region,
version="1.1.0"
)
model_name = "MistralLite-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
hub = {
'HF_MODEL_ID':'amazon/MistralLite',
'HF_TASK':'text-generation',
'SM_NUM_GPUS':'1',
"MAX_INPUT_LENGTH": '16000',
"MAX_TOTAL_TOKENS": '16384',
"MAX_BATCH_PREFILL_TOKENS": '16384',
"MAX_BATCH_TOTAL_TOKENS": '16384',
}
model = HuggingFaceModel(
name=model_name,
env=hub,
role=role,
image_uri=image_uri
)
predictor = model.deploy(
initial_instance_count=1,
instance_type="ml.g5.2xlarge",
endpoint_name=model_name,
)
```
### Perform Inference ###
To call the endpoint, please follow the example code below:
```python
input_data = {
"inputs": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
"parameters": {
"do_sample": False,
"max_new_tokens": 400,
"return_full_text": False,
#"typical_p": 0.2,
#"temperature":None,
#"truncate":None,
#"seed": 1,
}
}
result = predictor.predict(input_data)[0]["generated_text"]
print(result)
```
or via [boto3](https://pypi.org/project/boto3/), as shown in the example below:
```python
import boto3
import json
def call_endpoint(client, prompt, endpoint_name, parameters):
payload = {"inputs": prompt,
"parameters": parameters}
response = client.invoke_endpoint(EndpointName=endpoint_name,
Body=json.dumps(payload),
ContentType="application/json")
output = json.loads(response["Body"].read().decode())
result = output[0]["generated_text"]
return result
client = boto3.client("sagemaker-runtime")
parameters = {
"do_sample": False,
"max_new_tokens": 400,
"return_full_text": False,
#"typical_p": 0.2,
#"temperature":None,
#"truncate":None,
#"seed": 1,
}
endpoint_name = predictor.endpoint_name
prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>"
result = call_endpoint(client, prompt, endpoint_name, parameters)
print(result)
```
## How to Serve MistralLite on vLLM ##
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
**Important** - For an end-to-end example Jupyter notebook, please refer to [this link](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/MistralLite/vllm/example_usage.ipynb).
### Using vLLM as a server ###
When using vLLM as a server, pass the `--model amazon/MistralLite` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model amazon/MistralLite
```
### Using vLLM in Python Code ###
When using vLLM from Python code, Please see the example code as below:
```python
from vllm import LLM, SamplingParams
prompts = [
"<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
]
sampling_params = SamplingParams(temperature=0, max_tokens=100)
llm = LLM(model="amazon/MistralLite",)
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
## Limitations ##
Before using the MistralLite model, it is important to perform your own independent assessment, and take measures to ensure that your use would comply with your own specific quality control practices and standards, and that your use would comply with the local rules, laws, regulations, licenses and terms that apply to you, and your content.
<!-- original-model-card end -->
|
digiplay/mecha_musume_vivid_soft
|
digiplay
| 2023-10-19T10:38:30Z | 343 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-06T09:20:24Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/28628?modelVersionId=34346
Sample image :


Original Author's DEMO images :



|
fenil25/cat-toy
|
fenil25
| 2023-10-19T10:36:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-19T10:34:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Cat toy on Stable Diffusion via Dreambooth
#### model by fenil25
This is the Stable Diffusion model fine-tuned on the Cat toy concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<cat-toy> toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




|
Zz7708602/cs
|
Zz7708602
| 2023-10-19T10:21:23Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"art",
"aa",
"dataset:lmsys/lmsys-chat-1m",
"license:apache-2.0",
"region:us"
] | null | 2023-10-19T10:20:07Z |
---
license: apache-2.0
datasets:
- lmsys/lmsys-chat-1m
language:
- aa
metrics:
- accuracy
library_name: adapter-transformers
tags:
- art
---
|
asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GGUF
|
asyafiqe
| 2023-10-19T10:18:27Z | 214 | 7 | null |
[
"gguf",
"en",
"id",
"dataset:asyafiqe/orca_mini_v1_indonesia",
"arxiv:2307.09288",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-09-09T03:21:35Z |
---
inference: false
license: cc-by-nc-sa-4.0
datasets:
- asyafiqe/orca_mini_v1_indonesia
language:
- en
- id
---
# 🦚Merak-7B-v3-Mini-Orca-GGUF🐳
<p align="center">
<img src="https://i.imgur.com/39sQd3h.png" alt="Merak Orca" width="300" height="300"/>
</p>
## Description
These files are GGUF format model files for [Merak-7B-v3-Mini-Orca](https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
Here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
## Prompt template:
You can use [Vicuna 1.1](https://github.com/oobabooga/text-generation-webui/blob/main/instruction-templates/Vicuna-v1.1.yaml)
format for Ooobabooga's text generation webui.
```
SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang.
USER: <prompt> (without the <>)
ASSISTANT:
```
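Outside the web UI, the same template can be assembled with a small helper (illustrative only; the strings mirror the template shown above):

```python
SYSTEM = ("SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. "
          "Anda harus memberikan jawaban yang rinci dan panjang.")

def build_vicuna_prompt(user_message: str) -> str:
    """Assemble a Vicuna 1.1-style prompt for Merak-7B-v3-Mini-Orca."""
    return f"{SYSTEM}\nUSER: {user_message}\nASSISTANT:"

print(build_vicuna_prompt("Apa itu kecerdasan buatan?"))
```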
## Compatibility
These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## Provided files
| Name | Quant method | Bits | Use case |
| ---- | ---- | ---- | ---- |
| Merak-7B-v3-Mini-Orca-Indo.gguf | Q2_K | 2 | smallest, significant quality loss - not recommended for most purposes |
| Merak-7B-v3-Mini-Orca-Indo.q3_K_S.gguf | Q3_K_S | 3 | very small, high quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q3_K_M.gguf | Q3_K_M | 3 | very small, high quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q3_K_L.gguf | Q3_K_L | 3 | small, substantial quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q4_0.gguf| Q4_0 | 4 | legacy; small, very high quality loss - prefer using Q3_K_M |
| Merak-7B-v3-Mini-Orca-Indo.q4_K_S.gguf| Q4_K_S | 4 | small, greater quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q4_K_M.gguf | Q4_K_M | 4 | medium, balanced quality - recommended |
| Merak-7B-v3-Mini-Orca-Indo.q5_0.gguf | Q5_0 | 5 | legacy; medium, balanced quality - prefer using Q4_K_M |
| Merak-7B-v3-Mini-Orca-Indo.q5_K_S.gguf | Q5_K_S | 5 | large, low quality loss - recommended |
| Merak-7B-v3-Mini-Orca-Indo.q5_K_M.gguf | Q5_K_M | 5 | large, very low quality loss - recommended |
| Merak-7B-v3-Mini-Orca-Indo.q6_K.gguf| Q6_K | 6 | very large, extremely low quality loss |
| Merak-7B-v3-Mini-Orca-Indo.q8_0.gguf| Q8_0 | 8 | very large, extremely low quality loss - not recommended |
**Note**: RAM requirements assume no GPU offloading. If layers are offloaded to the GPU, this reduces RAM usage and uses VRAM instead.
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
# Original model card: 🦚Merak-7B-v3-Mini-Orca🐳
**Merak-7B-v3-Mini-Orca** is Ichsan2895's [Merak-7B-v3](https://huggingface.co/Ichsan2895/Merak-7B-v3) fine-tuned
on a Bahasa Indonesia translation of psmathur's [orca_mini_v1_dataset](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset).
**Quantized** versions are available:
GPTQ: https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GPTQ
GGML/GGUF: I will try to make this version once GGUF merge is stable.
Start chatting with Merak Mini Orca using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo")
model = AutoModelForCausalLM.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo", torch_dtype=torch.float16, device_map="auto")

system_prompt = "SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus menghasilkan jawaban yang rinci dan panjang.\n"
message = "Buatlah rencana untuk mengurangi penggunaan listrik di rumah."
prompt = f"{system_prompt}USER: {message}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.1, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Prompt format
You can use [Vicuna 1.1](https://github.com/oobabooga/text-generation-webui/blob/main/instruction-templates/Vicuna-v1.1.yaml)
format for Ooobabooga's text generation webui.
```
SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang.
USER: <prompt> (without the <>)
ASSISTANT:
```
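A tiny helper that assembles this template programmatically (a sketch; the system message is the one used elsewhere in this card):

```python
def build_vicuna_prompt(system: str, user: str) -> str:
    """Assemble a Vicuna-1.1-style prompt as described above."""
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

prompt = build_vicuna_prompt(
    "Anda adalah asisten AI. Anda akan diberi tugas. "
    "Anda harus memberikan jawaban yang rinci dan panjang.",
    "Buatlah rencana untuk mengurangi penggunaan listrik di rumah.",
)
print(prompt)
```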
## Training details
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Merak-7B-v3-Mini-Orca was instruction fine-tuned on 2 x 3090-24GB for 6 hours. [LoRA](https://github.com/microsoft/LoRA), [DeepSpeed ZeRO-2](https://github.com/microsoft/DeepSpeed), and [FlashAttention](https://github.com/Dao-AILab/flash-attention) were implemented during training using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
| Hyperparameter | Value |
| ------ | ------ |
| learning rate | 0.0004 |
| batch size | 16 |
| microbatch size | 2 |
| warmup steps | 100 |
| epochs | 2 |
| weight decay | 0.0 |
| lr scheduler | cosine |
| lora alpha | 16 |
| lora rank | 16 |
| lora dropout | 0.05 |
| lora target modules | q_proj, v_proj, k_proj, o_proj |
| cutoff length | 4096 |
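The cosine schedule with warmup listed above can be sketched as follows. This is a hypothetical reconstruction: the exact decay endpoint depends on the trainer, and here it is assumed to decay to zero over the total step count.

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 4e-4, warmup: int = 100) -> float:
    """Linear warmup for `warmup` steps, then cosine decay to zero."""
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

print(cosine_lr(50, 800))   # halfway through warmup -> 2e-4
print(cosine_lr(100, 800))  # warmup done -> peak 4e-4
```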
#### Training loss
| Step | Train Loss |
| ------ | ------ |
| 1 | 0.9578 |
| 100 | 0.816 |
| 200 | 0.7819 |
| 300 | 0.7279 |
| 400 | 0.732 |
| 500 | 0.7139 |
| 600 | 0.6829 |
| 700 | 0.6641 |
| 800 | 0.6553 |
#### Limitations and bias
Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## Citation
```
@Paper{arXiv,
author = {Touvron, et al},
title = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
journal = {arXiv preprint arXiv:2307.09288},
year = {2023}
}
@misc{orca_mini_v3_70b,
author = {Pankaj Mathur},
title = {orca_mini_v3_70b: An Orca Style Llama2-70b model},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_70b}},
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
readme adapted from [TheBloke](https://huggingface.co/TheBloke)
|
mnavas/roberta-finetuned-WebClassification-v2-smalllinguaESv2
|
mnavas
| 2023-10-19T10:08:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-09T14:56:19Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-finetuned-WebClassification-v2-smalllinguaESv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-WebClassification-v2-smalllinguaESv2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3862
- Accuracy: 0.6909
- F1: 0.6909
- Precision: 0.6909
- Recall: 0.6909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 28 | 2.9841 | 0.2 | 0.2000 | 0.2 | 0.2 |
| No log | 2.0 | 56 | 2.8109 | 0.1636 | 0.1636 | 0.1636 | 0.1636 |
| No log | 3.0 | 84 | 2.5334 | 0.3455 | 0.3455 | 0.3455 | 0.3455 |
| No log | 4.0 | 112 | 2.1164 | 0.5273 | 0.5273 | 0.5273 | 0.5273 |
| No log | 5.0 | 140 | 1.9152 | 0.5818 | 0.5818 | 0.5818 | 0.5818 |
| No log | 6.0 | 168 | 1.6678 | 0.6182 | 0.6182 | 0.6182 | 0.6182 |
| No log | 7.0 | 196 | 1.5647 | 0.6545 | 0.6545 | 0.6545 | 0.6545 |
| No log | 8.0 | 224 | 1.4473 | 0.6727 | 0.6727 | 0.6727 | 0.6727 |
| No log | 9.0 | 252 | 1.3862 | 0.6909 | 0.6909 | 0.6909 | 0.6909 |
| No log | 10.0 | 280 | 1.3647 | 0.6909 | 0.6909 | 0.6909 | 0.6909 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
RIOLITE/products_matching_aumet_scratch_2023-10-19
|
RIOLITE
| 2023-10-19T10:06:48Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-19T10:06:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# RIOLITE/products_matching_aumet_scratch_2023-10-19
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('RIOLITE/products_matching_aumet_scratch_2023-10-19')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=RIOLITE/products_matching_aumet_scratch_2023-10-19)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
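The `Pooling` module above averages token embeddings, weighted by the attention mask (`pooling_mode_mean_tokens: True`). A dependency-free sketch of that operation:

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over non-padding positions (mask value 1)."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for emb, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i, v in enumerate(emb):
                sums[i] += v
    return [s / count for s in sums]

# Two real tokens; the padding token (mask 0) is ignored in the average.
print(mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0]))  # [2.0, 3.0]
```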
## Citing & Authors
<!--- Describe where people can find more information -->
|
hung200504/bert-large-22
|
hung200504
| 2023-10-19T09:56:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-19T09:55:29Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-large-22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-22
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.0375 | 0.09 | 5 | 6.0219 |
| 6.0415 | 0.18 | 10 | 6.0159 |
| 6.0408 | 0.27 | 15 | 6.0100 |
| 5.9961 | 0.36 | 20 | 6.0042 |
| 6.0668 | 0.45 | 25 | 5.9988 |
| 6.0106 | 0.55 | 30 | 5.9936 |
| 6.0434 | 0.64 | 35 | 5.9886 |
| 5.9975 | 0.73 | 40 | 5.9838 |
| 6.025 | 0.82 | 45 | 5.9792 |
| 5.9994 | 0.91 | 50 | 5.9746 |
| 5.9914 | 1.0 | 55 | 5.9704 |
| 5.9746 | 1.09 | 60 | 5.9667 |
| 5.9869 | 1.18 | 65 | 5.9629 |
| 5.9517 | 1.27 | 70 | 5.9593 |
| 5.9952 | 1.36 | 75 | 5.9562 |
| 5.9493 | 1.45 | 80 | 5.9531 |
| 5.9658 | 1.55 | 85 | 5.9503 |
| 5.9816 | 1.64 | 90 | 5.9480 |
| 5.9311 | 1.73 | 95 | 5.9456 |
| 5.9632 | 1.82 | 100 | 5.9433 |
| 5.9399 | 1.91 | 105 | 5.9413 |
| 5.935 | 2.0 | 110 | 5.9393 |
| 5.9181 | 2.09 | 115 | 5.9375 |
| 5.8885 | 2.18 | 120 | 5.9360 |
| 5.9249 | 2.27 | 125 | 5.9344 |
| 5.9314 | 2.36 | 130 | 5.9335 |
| 5.904 | 2.45 | 135 | 5.9321 |
| 5.9061 | 2.55 | 140 | 5.9313 |
| 5.932 | 2.64 | 145 | 5.9306 |
| 5.9327 | 2.73 | 150 | 5.9300 |
| 5.9391 | 2.82 | 155 | 5.9296 |
| 5.9608 | 2.91 | 160 | 5.9293 |
| 5.9388 | 3.0 | 165 | 5.9292 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
IlluminatiPudding/Taxi_v3
|
IlluminatiPudding
| 2023-10-19T09:55:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T09:55:39Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # assumption: gymnasium is installed; older setups use `import gym`
# `load_from_hub` is the helper from the Hugging Face Deep RL course notebooks;
# it downloads the pickle from the Hub and returns the saved model dict.
model = load_from_hub(repo_id="IlluminatiPudding/Taxi_v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
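Once the Q-table is loaded, acting is just a greedy argmax over the current state's row. A minimal, dependency-free sketch (the key under which the pickled `model` dict stores the table is an assumption; in the course notebooks it is `"qtable"`):

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for `state`."""
    row = qtable[state]
    return max(range(len(row)), key=lambda action: row[action])

# Toy 2-state, 3-action table: the best action is 2 in state 0 and 0 in state 1.
qtable = [[0.1, 0.5, 0.9],
          [0.7, 0.2, 0.3]]
print(greedy_action(qtable, 0))  # 2
print(greedy_action(qtable, 1))  # 0
```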
|
Against61/SNT_BOT
|
Against61
| 2023-10-19T09:07:23Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-10-18T11:50:35Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: SNT_BOT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SNT_BOT
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Waterfront/Llama-2-7b-chat-hf-social-media-captions-10k
|
Waterfront
| 2023-10-19T09:04:22Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-10-19T06:39:43Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: Llama-2-7b-chat-hf-social-media-captions-10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-chat-hf-social-media-captions-10k
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mnavas/roberta-finetuned-WebClassification-v2-smalllinguaMultiv2
|
mnavas
| 2023-10-19T09:03:44Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T18:32:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-finetuned-WebClassification-v2-smalllinguaMultiv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-WebClassification-v2-smalllinguaMultiv2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8644
- Accuracy: 0.8387
- F1: 0.8387
- Precision: 0.8387
- Recall: 0.8387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 95 | 2.3654 | 0.4409 | 0.4409 | 0.4409 | 0.4409 |
| No log | 2.0 | 190 | 1.8455 | 0.5269 | 0.5269 | 0.5269 | 0.5269 |
| No log | 3.0 | 285 | 1.4468 | 0.6344 | 0.6344 | 0.6344 | 0.6344 |
| No log | 4.0 | 380 | 1.1099 | 0.7419 | 0.7419 | 0.7419 | 0.7419 |
| No log | 5.0 | 475 | 1.0515 | 0.7634 | 0.7634 | 0.7634 | 0.7634 |
| 1.6355 | 6.0 | 570 | 0.9938 | 0.7312 | 0.7312 | 0.7312 | 0.7312 |
| 1.6355 | 7.0 | 665 | 0.8275 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
| 1.6355 | 8.0 | 760 | 0.8344 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
| 1.6355 | 9.0 | 855 | 0.8516 | 0.8065 | 0.8065 | 0.8065 | 0.8065 |
| 1.6355 | 10.0 | 950 | 0.8723 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
| 0.2827 | 11.0 | 1045 | 0.8644 | 0.8387 | 0.8387 | 0.8387 | 0.8387 |
| 0.2827 | 12.0 | 1140 | 0.9343 | 0.8065 | 0.8065 | 0.8065 | 0.8065 |
| 0.2827 | 13.0 | 1235 | 1.0181 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
| 0.2827 | 14.0 | 1330 | 1.0068 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
| 0.2827 | 15.0 | 1425 | 1.0085 | 0.8065 | 0.8065 | 0.8065 | 0.8065 |
| 0.0485 | 16.0 | 1520 | 1.0257 | 0.8280 | 0.8280 | 0.8280 | 0.8280 |
| 0.0485 | 17.0 | 1615 | 1.0305 | 0.8172 | 0.8172 | 0.8172 | 0.8172 |
| 0.0485 | 18.0 | 1710 | 1.0648 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
| 0.0485 | 19.0 | 1805 | 1.0677 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
| 0.0485 | 20.0 | 1900 | 1.0687 | 0.7957 | 0.7957 | 0.7957 | 0.7957 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
chrlie/q-FrozenLake-v1-4x4-noSlippery
|
chrlie
| 2023-10-19T09:01:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T09:01:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="chrlie/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aquinovo/llama-2-70b-dexter-4kdataset-3500epochs-adapter
|
aquinovo
| 2023-10-19T09:00:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T08:59:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
sileod/mdeberta-v3-base-tasksource-nli
|
sileod
| 2023-10-19T08:56:15Z | 145 | 15 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"mdeberta-v3-base",
"nli",
"natural-language-inference",
"multilingual",
"multitask",
"multi-task",
"pipeline",
"extreme-multi-task",
"extreme-mtl",
"tasksource",
"zero-shot",
"rlhf",
"zero-shot-classification",
"zh",
"ja",
"ar",
"ko",
"de",
"fr",
"es",
"pt",
"hi",
"id",
"it",
"tr",
"ru",
"bn",
"ur",
"mr",
"ta",
"vi",
"fa",
"pl",
"uk",
"nl",
"sv",
"he",
"sw",
"ps",
"dataset:xnli",
"dataset:metaeval/xnli",
"dataset:americas_nli",
"dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7",
"dataset:stsb_multi_mt",
"dataset:paws-x",
"dataset:miam",
"dataset:strombergnlp/x-stance",
"dataset:tyqiangz/multilingual-sentiments",
"dataset:metaeval/universal-joy",
"dataset:amazon_reviews_multi",
"dataset:cardiffnlp/tweet_sentiment_multilingual",
"dataset:strombergnlp/offenseval_2020",
"dataset:offenseval_dravidian",
"dataset:nedjmaou/MLMA_hate_speech",
"dataset:xglue",
"dataset:ylacombe/xsum_factuality",
"dataset:metaeval/x-fact",
"dataset:pasinit/xlwic",
"dataset:tasksource/oasst1_dense_flat",
"dataset:papluca/language-identification",
"dataset:wili_2018",
"dataset:exams",
"dataset:xcsr",
"dataset:xcopa",
"dataset:juletxara/xstory_cloze",
"dataset:Anthropic/hh-rlhf",
"dataset:universal_dependencies",
"dataset:tasksource/oasst1_pairwise_rlhf_reward",
"dataset:OpenAssistant/oasst1",
"arxiv:2301.05948",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-04-29T17:48:39Z |
---
license: apache-2.0
tags:
- mdeberta-v3-base
- text-classification
- nli
- natural-language-inference
- multilingual
- multitask
- multi-task
- pipeline
- extreme-multi-task
- extreme-mtl
- tasksource
- zero-shot
- rlhf
datasets:
- xnli
- metaeval/xnli
- americas_nli
- MoritzLaurer/multilingual-NLI-26lang-2mil7
- stsb_multi_mt
- paws-x
- miam
- strombergnlp/x-stance
- tyqiangz/multilingual-sentiments
- metaeval/universal-joy
- amazon_reviews_multi
- cardiffnlp/tweet_sentiment_multilingual
- strombergnlp/offenseval_2020
- offenseval_dravidian
- nedjmaou/MLMA_hate_speech
- xglue
- ylacombe/xsum_factuality
- metaeval/x-fact
- pasinit/xlwic
- tasksource/oasst1_dense_flat
- papluca/language-identification
- wili_2018
- exams
- xcsr
- xcopa
- juletxara/xstory_cloze
- Anthropic/hh-rlhf
- universal_dependencies
- tasksource/oasst1_pairwise_rlhf_reward
- OpenAssistant/oasst1
language:
- multilingual
- zh
- ja
- ar
- ko
- de
- fr
- es
- pt
- hi
- id
- it
- tr
- ru
- bn
- ur
- mr
- ta
- vi
- fa
- pl
- uk
- nl
- sv
- he
- sw
- ps
pipeline_tag: zero-shot-classification
---
# Model Card for mDeBERTa-v3-base-tasksource-nli
Multilingual [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) with 30k steps of multi-task training on [mtasksource](https://github.com/sileod/tasksource/blob/main/mtasks.md).
This model can be used as a stable starting point for further fine-tuning, directly as a zero-shot NLI model, or in a zero-shot classification pipeline.
In addition, you can use the provided [adapters](https://huggingface.co/sileod/mdeberta-v3-base-tasksource-adapters) to directly load a model for hundreds of tasks.
```python
!pip install tasknet tasksource -q
import tasknet as tn
pipe=tn.load_pipeline(
'sileod/mdeberta-v3-base-tasksource-nli',
'miam/dihana')
pipe(['si','como esta?'])
```
For more details, see [deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli), replacing tasksource with mtasksource.
# Software
https://github.com/sileod/tasksource/
https://github.com/sileod/tasknet/
# Contact and citation
For help integrating tasksource into your experiments, please contact [[email protected]](mailto:[email protected]).
For more details, refer to this [article:](https://arxiv.org/abs/2301.05948)
```bib
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
```
|
merve/emoji-dreambooth-trained-xl
|
merve
| 2023-10-19T08:51:55Z | 4 | 6 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-12T16:18:09Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a TOK emoji
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - merve/emoji-dreambooth-trained-xl
You can enter the prompt: "a TOK emoji as" and then add what you want, e.g. "a TOK emoji as baby yoda".
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a TOK emoji using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
KayabaEngine/ppo-Pyramids
|
KayabaEngine
| 2023-10-19T08:51:04Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-10-19T06:44:18Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: KayabaEngine/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
hung200504/bert-18
|
hung200504
| 2023-10-19T08:48:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:deepset/bert-base-cased-squad2",
"base_model:finetune:deepset/bert-base-cased-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-19T08:47:44Z |
---
license: cc-by-4.0
base_model: deepset/bert-base-cased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-18
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.2713 | 0.09 | 5 | 12.1606 |
| 11.199 | 0.18 | 10 | 11.9115 |
| 10.6074 | 0.27 | 15 | 11.6709 |
| 10.5475 | 0.36 | 20 | 11.4407 |
| 10.3761 | 0.45 | 25 | 11.2173 |
| 10.2166 | 0.55 | 30 | 11.0033 |
| 9.4143 | 0.64 | 35 | 10.7983 |
| 9.8307 | 0.73 | 40 | 10.6034 |
| 9.3026 | 0.82 | 45 | 10.4169 |
| 9.0636 | 0.91 | 50 | 10.2387 |
| 8.7689 | 1.0 | 55 | 10.0700 |
| 8.7969 | 1.09 | 60 | 9.9094 |
| 8.7596 | 1.18 | 65 | 9.7588 |
| 8.8433 | 1.27 | 70 | 9.6152 |
| 8.3576 | 1.36 | 75 | 9.4808 |
| 8.6226 | 1.45 | 80 | 9.3540 |
| 8.3176 | 1.55 | 85 | 9.2346 |
| 8.2174 | 1.64 | 90 | 9.1231 |
| 8.0514 | 1.73 | 95 | 9.0198 |
| 8.0813 | 1.82 | 100 | 8.9240 |
| 7.6971 | 1.91 | 105 | 8.8362 |
| 7.865 | 2.0 | 110 | 8.7562 |
| 7.7614 | 2.09 | 115 | 8.6834 |
| 7.6525 | 2.18 | 120 | 8.6179 |
| 7.7074 | 2.27 | 125 | 8.5593 |
| 7.7802 | 2.36 | 130 | 8.5073 |
| 7.4788 | 2.45 | 135 | 8.4625 |
| 7.6863 | 2.55 | 140 | 8.4245 |
| 7.3113 | 2.64 | 145 | 8.3934 |
| 7.6127 | 2.73 | 150 | 8.3692 |
| 7.471 | 2.82 | 155 | 8.3509 |
| 7.4979 | 2.91 | 160 | 8.3393 |
| 7.5977 | 3.0 | 165 | 8.3340 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yeye776/t5-large-finetuned
|
yeye776
| 2023-10-19T08:43:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-large",
"base_model:finetune:paust/pko-t5-large",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-19T08:40:11Z |
---
license: cc-by-4.0
base_model: paust/pko-t5-large
tags:
- generated_from_trainer
model-index:
- name: t5-large-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-finetuned
This model is a fine-tuned version of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 8
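The `total_train_batch_size` above follows from the per-device batch size and gradient accumulation; a minimal sketch of that arithmetic (assuming a single device, which the card does not state explicitly):

```python
# Values taken from the hyperparameter list above.
train_batch_size = 4            # per-device batch size
gradient_accumulation_steps = 8
num_devices = 1                 # assumption: single GPU

# Gradients are accumulated over 8 micro-batches before each optimizer step,
# so one optimizer step effectively sees 4 * 8 * 1 = 32 examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32
```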
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
DazeraXXXL/sd-class-butterflies-32
|
DazeraXXXL
| 2023-10-19T08:42:02Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-10-19T08:38:20Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('DazeraXXXL/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
hung200504/bert-17
|
hung200504
| 2023-10-19T08:40:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:deepset/bert-base-cased-squad2",
"base_model:finetune:deepset/bert-base-cased-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-19T08:40:02Z |
---
license: cc-by-4.0
base_model: deepset/bert-base-cased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-17
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.0352 | 0.09 | 5 | 11.3392 |
| 10.0155 | 0.18 | 10 | 10.1330 |
| 8.6139 | 0.27 | 15 | 9.0228 |
| 7.7654 | 0.36 | 20 | 8.0477 |
| 7.1161 | 0.45 | 25 | 7.2438 |
| 6.486 | 0.55 | 30 | 6.6691 |
| 5.9793 | 0.64 | 35 | 6.3524 |
| 5.8845 | 0.73 | 40 | 6.2251 |
| 5.8619 | 0.82 | 45 | 6.1625 |
| 5.7536 | 0.91 | 50 | 6.1058 |
| 5.6831 | 1.0 | 55 | 6.0479 |
| 5.5525 | 1.09 | 60 | 5.9939 |
| 5.4714 | 1.18 | 65 | 5.9510 |
| 5.4384 | 1.27 | 70 | 5.9123 |
| 5.4539 | 1.36 | 75 | 5.8817 |
| 5.4073 | 1.45 | 80 | 5.8593 |
| 5.4048 | 1.55 | 85 | 5.8395 |
| 5.2997 | 1.64 | 90 | 5.8225 |
| 5.2388 | 1.73 | 95 | 5.8099 |
| 5.2564 | 1.82 | 100 | 5.7986 |
| 5.1758 | 1.91 | 105 | 5.7872 |
| 5.1926 | 2.0 | 110 | 5.7800 |
| 4.9244 | 2.09 | 115 | 5.7747 |
| 5.0897 | 2.18 | 120 | 5.7689 |
| 5.2493 | 2.27 | 125 | 5.7610 |
| 5.0594 | 2.36 | 130 | 5.7541 |
| 5.0792 | 2.45 | 135 | 5.7485 |
| 4.9952 | 2.55 | 140 | 5.7455 |
| 4.8796 | 2.64 | 145 | 5.7436 |
| 4.9344 | 2.73 | 150 | 5.7418 |
| 5.2387 | 2.82 | 155 | 5.7402 |
| 5.0734 | 2.91 | 160 | 5.7385 |
| 5.0227 | 3.0 | 165 | 5.7381 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
LoneStriker/Euryale-1.3-L2-70B-6.0bpw-h6-exl2
|
LoneStriker
| 2023-10-19T08:37:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-19T08:32:46Z |
---
license: llama2
language:
- en
---

17th Attempt. Past 10 Failed, cost me >$200 lol.
Idea is an updated version of Euryale with ReMantik instead of the ties-merge between the original 3 models.
This is then mixed with a saucy model with a Mythomax-esque Ratio, and a certain experimental (self) LoRA applied to it.
Test Results: Works Well.
<br>NSFL and NSFW fine in roleplay context.
<br>slight censor with 0 context, zero issues in actual RP / ERP.
<br>Good Prose, Not Dumbed Down due to RP merges from testing.
<br> I have not encountered any repetition issues that some had with the original Euryale. Tell me if you do, though.
Prompt and System Format:
Most formats work well. I recommend Alpaca.
ST Settings used for Test:
Lightning 1.1 System Prompt + Shortwave(1.2 Temperature)
Support me [here](https://ko-fi.com/sao10k) :)
Quants done by TheBloke! Ty a lot to him.
https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ
https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF
https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-AWQ
|
KayabaEngine/a2c-PandaReachDense-v3
|
KayabaEngine
| 2023-10-19T08:36:55Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T08:31:27Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the default SB3 Hub export name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption (default SB3 Hub export name for this repo).
checkpoint = load_from_hub("KayabaEngine/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
barkely/ppo-LunarLander-v2
|
barkely
| 2023-10-19T08:34:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T08:34:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.50 +/- 26.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the default SB3 Hub export name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption (default SB3 Hub export name for this repo).
checkpoint = load_from_hub("barkely/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
coolcat0/my_awesome_eli5_mlm_model
|
coolcat0
| 2023-10-19T08:34:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-19T08:03:09Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7444 | 1.0 | 1137 | 2.0548 |
| 1.8523 | 2.0 | 2274 | 2.0412 |
| 1.9673 | 3.0 | 3411 | 1.9787 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yingq/my_awesome_eli5_mlm_model
|
yingq
| 2023-10-19T08:34:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-19T08:03:00Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7206 | 1.0 | 1142 | 2.1091 |
| 1.8323 | 2.0 | 2284 | 2.0063 |
| 1.9619 | 3.0 | 3426 | 2.0127 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
sseth/path-to-save-model-table
|
sseth
| 2023-10-19T08:32:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-19T07:25:12Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks table
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - sseth/path-to-save-model-table
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks table using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
MahdeenSky/MahdeenSkyRVC
|
MahdeenSky
| 2023-10-19T08:32:23Z | 0 | 6 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-24T00:04:27Z |
---
license: openrail
---
Currently Available Models:
1. Pippa Pipkin (Phase Connect)
2. Andrew Tate
3. Senko-san
4. Kiryu Coco (Hololive 4th Gen)
5. Kurokami Fubuki (Hololive Gamers)
6. VirtualHarby
7. Tanya Degurechaff (Tanya | VA: Aoi Yuuki) - Saga of Tanya the Evil
8. Ant Hime Ch.
9. Slardar (Dota 2)
10. Robert Downey Jr. (Iron Man)
11. Kazama Iroha (HoloX | Hololive 6th Gen)
12. Kirigaya Kazuto (Kirito | VA: Yoshitsugu Matsuoka ) - Sword Art Online
13. Literally a Horse
14. Maemi Tenma (Phase Connect)
15. Elira Pendora (Nijisanji EN | LazuLight)
16. Yuuki Asuna (Asuna | VA: Haruka Tomatsu) - Sword Art Online
17. Tokisaki Kurumi (VA: Asami Sanada) - Date A Live
18. Husher_X (Asmrtist)
19. Koseki Bijou (Hololive EN 3rd Gen Advent)
20. Shiori Novella (Hololive EN 3rd Gen Advent)
21. Nerissa Ravencroft (Hololive EN 3rd Gen Advent)
22. Nyana Banana (FlaVR)
|
riteshm06/zephyr-support-chatbot
|
riteshm06
| 2023-10-19T08:31:36Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2023-10-19T07:54:48Z |
---
license: mit
base_model: TheBloke/zephyr-7B-alpha-GPTQ
tags:
- generated_from_trainer
model-index:
- name: zephyr-support-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mmmichelle/my_awesome_eli5_mlm_model
|
mmmichelle
| 2023-10-19T08:30:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-19T08:03:19Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7469 | 1.0 | 1125 | 2.0577 |
| 1.8393 | 2.0 | 2250 | 2.0306 |
| 1.9959 | 3.0 | 3375 | 1.9771 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
wordcab/llama-natural-instructions-7b
|
wordcab
| 2023-10-19T08:29:53Z | 4 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"peft",
"en",
"dataset:Muennighoff/natural-instructions",
"arxiv:2106.09685",
"arxiv:2302.13971",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-07T07:17:51Z |
---
language:
- en
library_name: transformers
tags:
- peft
- llama
datasets:
- Muennighoff/natural-instructions
pipeline_tag: text-generation
base_model: decapoda-research/llama-7b-hf
---
# LoRA LLaMA Natural Instructions

This model is a fine-tuned version of [llama-7b](https://huggingface.co/decapoda-research/llama-7b-hf) from [Meta](https://huggingface.co/facebook),
on the [Natural Instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions) dataset from [AllenAI](https://huggingface.co/allenai),
using the [LoRA](https://arxiv.org/pdf/2106.09685.pdf) training technique.
⚠️ **This model is for Research purpose only (See the [license](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/LICENSE))**
## WandB Report
Click on the badge below to see the full report on Weights & Biases.
[](https://api.wandb.ai/links/chainyo-mleng/ia2mloow)
## Usage
### Installation
```bash
pip install loralib bitsandbytes datasets git+https://github.com/huggingface/peft.git git+https://github.com/huggingface/transformers.git sentencepiece
```
### Format of the input
The input should be a string of text with the following format:
```python
from typing import Union

prompt_template = {
    "prompt": "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n",
    "response": "### Response:"
}


def generate_prompt(
    definition: str,
    inputs: str,
    targets: Union[None, str] = None,
) -> str:
    """Generate a prompt from instruction and input."""
    res = prompt_template["prompt"].format(
        instruction=definition, input=inputs
    )
    if targets:
        res = f"{res}{targets}"
    return res


def get_response(output: str) -> str:
    """Get the response from the output."""
    return output.split(prompt_template["response"])[1].strip()
```
Feel free to use these utility functions to generate the prompt and to extract the response from the model output.
- `definition` is the instruction describing the task. It's generally a single sentence explaining the expected output and
the reasoning steps to follow.
- `inputs` is the input to the task. It can be a single sentence or a paragraph. It's the context used by the model to
generate the response to the task.
- `targets` is the expected output of the task. It's used for training the model. _It's not required for inference._
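For instance, the helpers can be exercised end-to-end without loading the model (they are redefined here so the snippet is self-contained; the template strings mirror the ones above):

```python
from typing import Union

prompt_template = {
    "prompt": "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n",
    "response": "### Response:",
}

def generate_prompt(definition: str, inputs: str, targets: Union[None, str] = None) -> str:
    res = prompt_template["prompt"].format(instruction=definition, input=inputs)
    if targets:
        res = f"{res}{targets}"
    return res

def get_response(output: str) -> str:
    return output.split(prompt_template["response"])[1].strip()

prompt = generate_prompt("Add the two numbers.", "3 and 4")
# The model's full decoded output is the prompt plus the generated text,
# so we simulate it by appending an answer after the response marker.
simulated_output = prompt + "7"
print(get_response(simulated_output))  # 7
```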
### Inference
You can load the model using only the adapters or load the full model with the adapters and the weights.
#### The tokenizer
```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("wordcab/llama-natural-instructions-7b")
tokenizer.padding_side = "left"
tokenizer.pad_token_id = 0
```
#### Load the model with the adapters
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    model,
    "wordcab/llama-natural-instructions-7b",
    torch_dtype=torch.float16,
    device_map={"": 0},
)
```
#### Load the full model
```python
model = LlamaForCausalLM.from_pretrained(
    "wordcab/llama-natural-instructions-7b",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
```
#### Evaluation mode
Don't forget to put the model in evaluation mode. If you are using PyTorch v2.0 or higher, also call the compile method.
```python
model.eval()
if torch.__version__ >= "2":
    model = torch.compile(model)
```
#### Generate the response
```python
from transformers import GenerationConfig

prompt = generate_prompt(
    "In this task, you have to analyze the full sentences and do reasoning and quick maths to find the correct answer.",
    "You are now a superbowl star. You are the quarterback of the team. Your team is down by 3 points. You are in the last 2 minutes of the game. The other team has a score of 28. What is the score of your team?",
)
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True, max_length=2048)
input_ids = inputs["input_ids"].to(model.device)

generation_config = GenerationConfig(
    temperature=0.2,
    top_p=0.75,
    top_k=40,
    num_beams=4,
)
with torch.no_grad():
    gen_outputs = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=50,
    )

s = gen_outputs.sequences[0]
output = tokenizer.decode(s, skip_special_tokens=True)
response = get_response(output)
print(response)
>>> 25
```
You can try with other prompts that are not maths related as well! :hugs:
## Benchmark
We benchmarked our model on the following tasks: [BoolQ](https://huggingface.co/datasets/boolq), [PIQA](https://huggingface.co/datasets/piqa), [WinoGrande](https://huggingface.co/datasets/winogrande), [OpenBookQA](https://huggingface.co/datasets/openbookqa).
| | BoolQ | PIQA | WinoGrande | OpenBookQA | Precision | Inference time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| Original LLaMA 7B | 76.5 | 79.8 | 70.1 | 57.2 | fp32 | 3 seconds |
| Original LLaMA 13B | 78.1 | 80.1 | 73 | 56.4 | fp32 | >5 seconds |
| LoRA LLaMA 7B | 63.9 | 51.3 | 48.9 | 31.4 | 8bit | 0.65 seconds |
| LoRA LLaMA 13B | 70 | 63.93 | 51.6 | 50.4 | 8bit | 1.2 seconds |
__Link to the 13B model:__ [wordcab/llama-natural-instructions-13b](https://huggingface.co/wordcab/llama-natural-instructions-13b)
Overall, our LoRA model is less performant than the original model from Meta when compared against the results reported in the [original paper](https://arxiv.org/pdf/2302.13971.pdf).
The performance degradation comes from loading the model in 8-bit and from using the adapters from the LoRA training.
Thanks to the 8-bit quantization, the model is 4 times faster than the original model and the results are still decent.
Some complex tasks, like WinoGrande and OpenBookQA, remain more difficult to solve with the adapters.
## Training Hardware
This model was trained on a single NVIDIA RTX 3090 GPU.
|
milinbhade1214/mt5-small-finetuned-arith
|
milinbhade1214
| 2023-10-19T08:26:11Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-10-19T07:07:40Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-arith
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-arith
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6651
- Rouge1: 90.0
- Rouge2: 70.4082
- Rougel: 85.3061
- Rougelsum: 85.102
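As a reminder of what the ROUGE-1 figure above measures, here is a simplified unigram-overlap F1 sketch (not the exact scorer used during training, which also applies stemming and tokenization rules):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1, the core idea behind ROUGE-1 (simplified: whitespace tokens, no stemming)."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    # Clipped overlap: each reference token can be matched at most as often as it occurs.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the answer is 42", "the answer is 42"))  # 1.0
```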
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 64
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 7 | 11.7623 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 14 | 11.0473 | 0.2041 | 0.0 | 0.2041 | 0.2041 |
| No log | 3.0 | 21 | 9.4965 | 0.4082 | 0.0 | 0.4082 | 0.4082 |
| No log | 4.0 | 28 | 8.3848 | 0.8673 | 0.0 | 0.8673 | 0.8673 |
| No log | 5.0 | 35 | 7.6170 | 1.7515 | 0.0 | 1.7114 | 1.6753 |
| No log | 6.0 | 42 | 7.0008 | 4.9101 | 0.0 | 4.9093 | 4.8585 |
| No log | 7.0 | 49 | 6.7836 | 8.0777 | 0.0 | 7.7956 | 7.9186 |
| 16.7453 | 8.0 | 56 | 6.6780 | 12.3572 | 0.0 | 12.1332 | 11.878 |
| 16.7453 | 9.0 | 63 | 5.2800 | 13.5863 | 0.1701 | 12.7907 | 12.8991 |
| 16.7453 | 10.0 | 70 | 4.4990 | 13.8751 | 0.1701 | 13.1962 | 13.1834 |
| 16.7453 | 11.0 | 77 | 4.3624 | 13.4276 | 0.1701 | 13.3009 | 13.2722 |
| 16.7453 | 12.0 | 84 | 4.1101 | 14.0537 | 0.3401 | 13.3534 | 13.354 |
| 16.7453 | 13.0 | 91 | 3.7171 | 14.2128 | 0.3401 | 13.4985 | 13.4888 |
| 16.7453 | 14.0 | 98 | 3.4322 | 13.9164 | 0.1701 | 13.3916 | 13.3625 |
| 16.7453 | 15.0 | 105 | 3.2408 | 13.931 | 0.3401 | 13.7998 | 13.7901 |
| 6.4188 | 16.0 | 112 | 3.0734 | 14.0816 | 0.3401 | 13.7901 | 13.7901 |
| 6.4188 | 17.0 | 119 | 2.9270 | 14.344 | 0.8242 | 14.1983 | 14.208 |
| 6.4188 | 18.0 | 126 | 2.7746 | 16.7178 | 2.4928 | 16.3946 | 16.4334 |
| 6.4188 | 19.0 | 133 | 2.6117 | 22.7164 | 7.4678 | 22.1643 | 22.1381 |
| 6.4188 | 20.0 | 140 | 2.4419 | 25.0641 | 9.4306 | 24.2861 | 24.2714 |
| 6.4188 | 21.0 | 147 | 2.2793 | 32.0373 | 13.6803 | 31.0317 | 30.8515 |
| 6.4188 | 22.0 | 154 | 2.0741 | 40.1666 | 21.0894 | 38.5458 | 38.4592 |
| 6.4188 | 23.0 | 161 | 1.8635 | 40.1133 | 21.1222 | 38.1971 | 38.1165 |
| 3.1581 | 24.0 | 168 | 1.6788 | 47.1732 | 25.3843 | 44.6854 | 44.6021 |
| 3.1581 | 25.0 | 175 | 1.5153 | 49.4894 | 27.0538 | 46.9745 | 46.8775 |
| 3.1581 | 26.0 | 182 | 1.3337 | 47.7463 | 25.9589 | 45.3779 | 45.2896 |
| 3.1581 | 27.0 | 189 | 1.1634 | 48.6608 | 26.067 | 46.293 | 46.1794 |
| 3.1581 | 28.0 | 196 | 1.0392 | 86.6181 | 65.5782 | 81.9242 | 81.8732 |
| 3.1581 | 29.0 | 203 | 0.9519 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 3.1581 | 30.0 | 210 | 0.8837 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 3.1581 | 31.0 | 217 | 0.8246 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 32.0 | 224 | 0.7630 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 33.0 | 231 | 0.7221 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 34.0 | 238 | 0.6957 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 35.0 | 245 | 0.6852 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 36.0 | 252 | 0.6734 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 37.0 | 259 | 0.6667 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 38.0 | 266 | 0.6670 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 2.0354 | 39.0 | 273 | 0.6684 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 40.0 | 280 | 0.6626 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 41.0 | 287 | 0.6621 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 42.0 | 294 | 0.6699 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 43.0 | 301 | 0.6751 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 44.0 | 308 | 0.6839 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 45.0 | 315 | 0.6987 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 46.0 | 322 | 0.7060 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.5363 | 47.0 | 329 | 0.7125 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.324 | 48.0 | 336 | 0.7103 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.324 | 49.0 | 343 | 0.7098 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.324 | 50.0 | 350 | 0.7088 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.324 | 51.0 | 357 | 0.7112 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.324 | 52.0 | 364 | 0.7094 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.324 | 53.0 | 371 | 0.7041 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.324 | 54.0 | 378 | 0.6939 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 55.0 | 385 | 0.6843 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 56.0 | 392 | 0.6791 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 57.0 | 399 | 0.6755 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 58.0 | 406 | 0.6715 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 59.0 | 413 | 0.6661 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 60.0 | 420 | 0.6639 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 61.0 | 427 | 0.6629 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.2374 | 62.0 | 434 | 0.6635 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.199 | 63.0 | 441 | 0.6646 | 90.0 | 70.4082 | 85.3061 | 85.102 |
| 1.199 | 64.0 | 448 | 0.6651 | 90.0 | 70.4082 | 85.3061 | 85.102 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
irsyadjazli/my_awesome_eli5_mlm_model
|
irsyadjazli
| 2023-10-19T08:23:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-19T08:03:01Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2353 | 1.0 | 1130 | 2.0551 |
| 2.1435 | 2.0 | 2260 | 2.0217 |
| 2.0999 | 3.0 | 3390 | 2.0117 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
confunius/my_awesome_eli5_mlm_model
|
confunius
| 2023-10-19T08:22:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-19T08:02:58Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2469 | 1.0 | 1136 | 2.0728 |
| 2.1803 | 2.0 | 2272 | 2.0424 |
| 2.1064 | 3.0 | 3408 | 1.9972 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Wangtheguy/my_awesome_eli5_mlm_model
|
Wangtheguy
| 2023-10-19T08:22:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-19T08:03:29Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2284 | 1.0 | 1146 | 2.0924 |
| 2.145 | 2.0 | 2292 | 2.0147 |
| 2.1011 | 3.0 | 3438 | 1.9970 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
WasuratS/wasurats_emotional_classification_model
|
WasuratS
| 2023-10-19T08:15:44Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"dataset:dair-ai/emotion",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-12T02:25:15Z |
---
datasets:
- dair-ai/emotion
language:
- en
pipeline_tag: text-classification
---
|
dhanilka/mistral-rep
|
dhanilka
| 2023-10-19T08:12:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"finetuned",
"conversational",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-18T20:46:44Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference:
parameters:
temperature: 0.7
---
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
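As a sanity check, the bracketing rules above can be sketched as a tiny helper. This is an illustration only — `build_prompt` is a hypothetical name, not part of the library, and in practice the tokenizer's chat template should be preferred:

```python
def build_prompt(turns):
    """Assemble a Mistral-Instruct prompt string by hand.

    `turns` is a list of (user, assistant) pairs; the assistant entry is
    None for the final, unanswered user turn. Only the very first
    instruction is preceded by the <s> begin-of-sentence marker.
    """
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f"{assistant}</s> "
    return prompt

print(build_prompt([("What is your favourite condiment?", None)]))
```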
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:

`pip install git+https://github.com/huggingface/transformers`

This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
andreydung/q-Taxi
|
andreydung
| 2023-10-19T08:11:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T08:11:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
import gymnasium as gym

model = load_from_hub(repo_id="andreydung/q-Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
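Once loaded, the Q-table is just a per-state array of action values, so the greedy policy is a one-liner. A minimal sketch — the `"qtable"` key mirrors the Deep RL course's pickle layout and is an assumption here:

```python
import numpy as np

def greedy_action(qtable, state):
    # act greedily: pick the action with the highest Q-value in this state
    return int(np.argmax(qtable[state]))
```

With the model above you would call `greedy_action(model["qtable"], state)` inside the usual `env.step` loop.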
|
abhishek23HF/NL2SQL_BLOOMZ-3B
|
abhishek23HF
| 2023-10-19T08:00:04Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:bigscience/bloomz-3b",
"base_model:adapter:bigscience/bloomz-3b",
"region:us"
] | null | 2023-10-11T15:14:35Z |
---
library_name: peft
base_model: bigscience/bloomz-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Royal/stable_diffusionv1-5
|
Royal
| 2023-10-19T07:58:39Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-19T07:27:38Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
        @InProceedings{Rombach_2022_CVPR,
            author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
            title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
            booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
            month     = {June},
            year      = {2022},
            pages     = {10684-10695}
        }
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
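In spirit this is a similarity-versus-threshold test in embedding space. The sketch below is illustrative only — the real checker lives in `safety_checker.py`, and the function and argument names here are invented for the example:

```python
import numpy as np

def concept_flags(image_emb, concept_embs, thresholds):
    # cosine similarity of one image embedding against each concept embedding
    sims = concept_embs @ image_emb / (
        np.linalg.norm(concept_embs, axis=1) * np.linalg.norm(image_emb)
    )
    # a concept is flagged when its similarity exceeds its tuned threshold
    return sims > thresholds
```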
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
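The shape bookkeeping above (downsampling factor f = 8, 4 latent channels) can be checked with a trivial helper — a sketch only, assuming integer division:

```python
def latent_shape(h, w, f=8, c=4):
    # an H x W x 3 image is encoded to H/f x W/f x c latents
    return (h // f, w // f, c)

print(latent_shape(512, 512))  # a 512x512 training image -> (64, 64, 4)
```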
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
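The learning-rate schedule can be written out directly — a sketch assuming linear warmup, which is the usual reading of "warmup to 0.0001 for 10,000 steps and then kept constant":

```python
def lr_at(step, peak=1e-4, warmup_steps=10_000):
    # linear warmup to `peak` over `warmup_steps`, then held constant
    return peak * min(step, warmup_steps) / warmup_steps
```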
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
glimmerz/xlmroberta-ner-multilingual
|
glimmerz
| 2023-10-19T07:57:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"en",
"de",
"dataset:tner/wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-05T13:22:50Z |
---
license: apache-2.0
datasets:
- tner/wikiann
language:
- en
- de
metrics:
- precision
- recall
- f1
library_name: transformers
pipeline_tag: token-classification
---
|
livingbox/model-test-oct-19
|
livingbox
| 2023-10-19T07:51:01Z | 0 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-19T07:47:17Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Model-test-oct-19 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
mte3000/rvc
|
mte3000
| 2023-10-19T07:48:54Z | 0 | 0 | null |
[
"music",
"rvc",
"svc",
"ko",
"license:mit",
"region:us"
] | null | 2023-10-19T07:46:08Z |
---
license: mit
language:
- ko
tags:
- music
- rvc
- svc
---
# RVC model
#### Ahn Cheol-soo
https://huggingface.co/mte3000/rvc/resolve/main/Ahnsinger.zip
|
LoneStriker/Euryale-1.3-L2-70B-5.0bpw-h6-exl2
|
LoneStriker
| 2023-10-19T07:48:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-19T07:45:54Z |
---
license: llama2
language:
- en
---

17th Attempt. Past 10 Failed, cost me >$200 lol.
The idea is an updated version of Euryale, with ReMantik in place of the ties-merge between the original three models.
This is then mixed with a saucy model with a Mythomax-esque Ratio, and a certain experimental (self) LoRA applied to it.
Test Results: Works Well.
<br>NSFL and NSFW are fine in a roleplay context.
<br>Slight censoring with zero context; zero issues in actual RP / ERP.
<br>Good prose, not dumbed down by the RP merges, from testing.
<br>I have not encountered the repetition issues some had with the original Euryale. Tell me if you do, though.
Prompt and System Format:
Most formats work well. I recommend Alpaca.
ST Settings used for Test:
Lightning 1.1 System Prompt + Shortwave(1.2 Temperature)
Support me [here](https://ko-fi.com/sao10k) :)
Quants done by TheBloke! Ty a lot to him.
https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ
https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF
https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-AWQ
|
wang1215/segformer-b3
|
wang1215
| 2023-10-19T07:27:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b3",
"base_model:finetune:nvidia/mit-b3",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-10-19T03:26:15Z |
---
license: other
base_model: nvidia/mit-b3
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b3
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7826
- Mean Iou: 0.3995
- Mean Accuracy: 0.4977
- Overall Accuracy: 0.8759
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.9069
- Accuracy Flat-sidewalk: 0.9471
- Accuracy Flat-crosswalk: 0.5043
- Accuracy Flat-cyclinglane: 0.8684
- Accuracy Flat-parkingdriveway: 0.5057
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.7351
- Accuracy Human-person: 0.8662
- Accuracy Human-rider: 0.2599
- Accuracy Vehicle-car: 0.9494
- Accuracy Vehicle-truck: 0.1607
- Accuracy Vehicle-bus: 0.0044
- Accuracy Vehicle-tramtrain: 0.1992
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.7913
- Accuracy Vehicle-caravan: 0.4628
- Accuracy Vehicle-cartrailer: 0.0106
- Accuracy Construction-building: 0.9117
- Accuracy Construction-door: 0.2679
- Accuracy Construction-wall: 0.6351
- Accuracy Construction-fenceguardrail: 0.5893
- Accuracy Construction-bridge: 0.5639
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.4246
- Accuracy Object-pole: 0.6323
- Accuracy Object-trafficsign: 0.4266
- Accuracy Object-trafficlight: 0.2431
- Accuracy Nature-vegetation: 0.9540
- Accuracy Nature-terrain: 0.8819
- Accuracy Sky: 0.9827
- Accuracy Void-ground: 0.0045
- Accuracy Void-dynamic: 0.2006
- Accuracy Void-static: 0.5328
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: 0.0
- Iou Flat-road: 0.7947
- Iou Flat-sidewalk: 0.8656
- Iou Flat-crosswalk: 0.4529
- Iou Flat-cyclinglane: 0.6876
- Iou Flat-parkingdriveway: 0.4461
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.5989
- Iou Human-person: 0.6127
- Iou Human-rider: 0.2346
- Iou Vehicle-car: 0.8877
- Iou Vehicle-truck: 0.0662
- Iou Vehicle-bus: 0.0044
- Iou Vehicle-tramtrain: 0.1985
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.5765
- Iou Vehicle-caravan: 0.1495
- Iou Vehicle-cartrailer: 0.0106
- Iou Construction-building: 0.8060
- Iou Construction-door: 0.2190
- Iou Construction-wall: 0.5015
- Iou Construction-fenceguardrail: 0.4923
- Iou Construction-bridge: 0.3467
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.3908
- Iou Object-pole: 0.4693
- Iou Object-trafficsign: 0.3698
- Iou Object-trafficlight: 0.2052
- Iou Nature-vegetation: 0.8832
- Iou Nature-terrain: 0.7906
- Iou Sky: 0.9519
- Iou Void-ground: 0.0038
- Iou Void-dynamic: 0.1774
- Iou Void-static: 0.3885
- Iou Void-unclear: 0.0
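The per-class IoU and accuracy figures above follow the standard semantic-segmentation definitions. A minimal sketch of how they are derived from a confusion matrix (the `per_class_metrics` helper and the toy matrix are illustrative, not part of this repository); classes absent from both ground truth and predictions come out as `nan`, which is why e.g. Construction-tunnel is reported as `nan`:

```python
def per_class_metrics(cm):
    """Per-class IoU and accuracy from a confusion matrix.

    cm[row][col]: rows are ground-truth classes, columns are predictions.
    A class with no ground-truth and no predicted pixels yields nan.
    """
    n = len(cm)
    ious, accs = [], []
    for c in range(n):
        tp = cm[c][c]
        fn = sum(cm[c]) - tp                       # ground-truth pixels missed
        fp = sum(cm[r][c] for r in range(n)) - tp  # pixels wrongly assigned to c
        denom = tp + fp + fn
        ious.append(tp / denom if denom else float("nan"))
        accs.append(tp / (tp + fn) if (tp + fn) else float("nan"))
    return ious, accs

# Toy 3-class example; the third class never occurs, so it reports nan.
toy_cm = [[8, 2, 0],
          [1, 9, 0],
          [0, 0, 0]]
ious, accs = per_class_metrics(toy_cm)
```

The reported Mean IoU / Mean Accuracy are then averages over the non-`nan` classes.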
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
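With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate simply decays from 6e-05 to zero over the run. A rough sketch of that schedule (the zero-warmup behavior and the 10,000-step total, read from the final row of the training log, are assumptions for illustration):

```python
BASE_LR = 6e-5        # learning_rate listed above
TOTAL_STEPS = 10_000  # final step in the training-results table (epoch 50)

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Linear decay of the learning rate to zero over the whole run,
    assuming no warmup steps are configured (none are listed above)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

Halfway through training (step 5,000) this gives 3e-05, and the rate reaches zero exactly at the last step.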
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 0.7275 | 2.5 | 500 | 0.5765 | 0.3050 | 0.3654 | 0.8441 | nan | 0.9561 | 0.9153 | 0.3719 | 0.7164 | 0.4360 | 0.0 | 0.3475 | 0.8270 | 0.0 | 0.9318 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6843 | 0.0 | 0.0 | 0.9160 | 0.0667 | 0.3893 | 0.6512 | 0.0 | nan | 0.0 | 0.5447 | 0.0525 | 0.0 | 0.9581 | 0.8185 | 0.9737 | 0.0 | 0.0262 | 0.4752 | 0.0 | nan | 0.7208 | 0.8407 | 0.3582 | 0.6393 | 0.3693 | 0.0 | 0.2705 | 0.5291 | 0.0 | 0.8548 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5158 | 0.0 | 0.0 | 0.7684 | 0.0638 | 0.3606 | 0.4620 | 0.0 | nan | 0.0 | 0.3805 | 0.0522 | 0.0 | 0.8572 | 0.7657 | 0.9225 | 0.0 | 0.0256 | 0.3078 | 0.0 |
| 0.3654 | 5.0 | 1000 | 0.5265 | 0.3531 | 0.4276 | 0.8622 | nan | 0.9116 | 0.9476 | 0.4986 | 0.8194 | 0.4632 | 0.0 | 0.5613 | 0.8672 | 0.1407 | 0.9399 | 0.2129 | 0.0 | 0.0 | 0.0 | 0.7640 | 0.0 | 0.0 | 0.8915 | 0.1468 | 0.5813 | 0.5719 | 0.0 | nan | 0.3336 | 0.5583 | 0.4068 | 0.0 | 0.9470 | 0.8512 | 0.9780 | 0.0004 | 0.1935 | 0.5228 | 0.0 | nan | 0.7904 | 0.8491 | 0.4455 | 0.6935 | 0.4013 | 0.0 | 0.4607 | 0.5435 | 0.1227 | 0.8663 | 0.1019 | 0.0 | 0.0 | 0.0 | 0.5464 | 0.0 | 0.0 | 0.7740 | 0.1198 | 0.4746 | 0.4587 | 0.0 | nan | 0.2403 | 0.4036 | 0.2949 | 0.0 | 0.8663 | 0.7614 | 0.9334 | 0.0003 | 0.1492 | 0.3544 | 0.0 |
| 0.2359 | 7.5 | 1500 | 0.5790 | 0.3584 | 0.4296 | 0.8649 | nan | 0.8646 | 0.9501 | 0.4466 | 0.8506 | 0.5513 | 0.0 | 0.7099 | 0.8317 | 0.2099 | 0.9442 | 0.2546 | 0.0 | 0.0 | 0.0 | 0.7862 | 0.0087 | 0.0 | 0.9079 | 0.1046 | 0.6479 | 0.5239 | 0.0 | nan | 0.1543 | 0.5674 | 0.3864 | 0.0 | 0.9461 | 0.8753 | 0.9776 | 0.0000 | 0.1974 | 0.4793 | 0.0 | nan | 0.7917 | 0.8414 | 0.4214 | 0.7012 | 0.4494 | 0.0 | 0.5333 | 0.5888 | 0.1897 | 0.8663 | 0.1009 | 0.0 | 0.0 | 0.0 | 0.5427 | 0.0042 | 0.0 | 0.7817 | 0.0945 | 0.4696 | 0.4312 | 0.0 | nan | 0.1477 | 0.4314 | 0.3152 | 0.0 | 0.8749 | 0.7719 | 0.9409 | 0.0000 | 0.1780 | 0.3593 | 0.0 |
| 0.1708 | 10.0 | 2000 | 0.6066 | 0.3684 | 0.4479 | 0.8666 | nan | 0.8819 | 0.9466 | 0.5609 | 0.8324 | 0.4835 | 0.0 | 0.7200 | 0.8575 | 0.1404 | 0.9422 | 0.2656 | 0.0 | 0.0590 | 0.0 | 0.7505 | 0.2619 | 0.0 | 0.8906 | 0.2203 | 0.6425 | 0.5323 | 0.0 | nan | 0.3455 | 0.5923 | 0.4085 | 0.0 | 0.9552 | 0.8844 | 0.9791 | 0.0024 | 0.0951 | 0.5293 | 0.0 | nan | 0.7930 | 0.8537 | 0.4562 | 0.6519 | 0.4311 | 0.0 | 0.5478 | 0.5960 | 0.1301 | 0.8728 | 0.1051 | 0.0 | 0.0590 | 0.0 | 0.5390 | 0.0916 | 0.0 | 0.7864 | 0.1779 | 0.4949 | 0.4467 | 0.0 | nan | 0.3107 | 0.4336 | 0.3289 | 0.0 | 0.8757 | 0.7694 | 0.9416 | 0.0019 | 0.0893 | 0.3741 | 0.0 |
| 0.1326 | 12.5 | 2500 | 0.5934 | 0.3969 | 0.4877 | 0.8753 | nan | 0.9227 | 0.9490 | 0.4762 | 0.8499 | 0.5255 | 0.0 | 0.6941 | 0.8115 | 0.3960 | 0.9430 | 0.3828 | 0.0266 | 0.0998 | 0.0 | 0.7963 | 0.6565 | 0.0003 | 0.8988 | 0.2880 | 0.6352 | 0.5442 | 0.2746 | nan | 0.3048 | 0.6133 | 0.4269 | 0.0 | 0.9483 | 0.9064 | 0.9838 | 0.0025 | 0.2297 | 0.5070 | 0.0 | nan | 0.8032 | 0.8631 | 0.4521 | 0.7678 | 0.4440 | 0.0 | 0.5593 | 0.6096 | 0.2994 | 0.8777 | 0.1308 | 0.0266 | 0.0998 | 0.0 | 0.5663 | 0.2016 | 0.0003 | 0.7931 | 0.1957 | 0.4838 | 0.4475 | 0.2094 | nan | 0.2849 | 0.4530 | 0.3424 | 0.0 | 0.8798 | 0.7777 | 0.9441 | 0.0023 | 0.1979 | 0.3857 | 0.0 |
| 0.1116 | 15.0 | 3000 | 0.6267 | 0.3978 | 0.4820 | 0.8734 | nan | 0.9155 | 0.9431 | 0.5145 | 0.8423 | 0.4973 | 0.0 | 0.7284 | 0.8513 | 0.3146 | 0.9492 | 0.0753 | 0.0 | 0.2565 | 0.0 | 0.7845 | 0.3853 | 0.0220 | 0.8998 | 0.2497 | 0.6306 | 0.5571 | 0.3186 | nan | 0.4912 | 0.5902 | 0.4795 | 0.0 | 0.9501 | 0.9004 | 0.9843 | 0.0038 | 0.2017 | 0.5677 | 0.0 | nan | 0.8037 | 0.8591 | 0.4665 | 0.7147 | 0.4311 | 0.0 | 0.5698 | 0.5996 | 0.2693 | 0.8800 | 0.0366 | 0.0 | 0.2554 | 0.0 | 0.5526 | 0.1200 | 0.0216 | 0.7935 | 0.1949 | 0.4853 | 0.4704 | 0.2116 | nan | 0.3980 | 0.4450 | 0.3691 | 0.0 | 0.8815 | 0.7857 | 0.9439 | 0.0031 | 0.1751 | 0.3899 | 0.0 |
| 0.098 | 17.5 | 3500 | 0.6334 | 0.3922 | 0.5006 | 0.8729 | nan | 0.8961 | 0.9419 | 0.5747 | 0.8862 | 0.4977 | 0.0 | 0.7428 | 0.8491 | 0.3477 | 0.9464 | 0.0952 | 0.0 | 0.2937 | 0.0 | 0.7908 | 0.7738 | 0.0 | 0.8934 | 0.2479 | 0.6445 | 0.6108 | 0.4273 | nan | 0.4435 | 0.6190 | 0.4308 | 0.0015 | 0.9486 | 0.9026 | 0.9818 | 0.0099 | 0.2216 | 0.4994 | 0.0 | 0.0 | 0.7961 | 0.8651 | 0.5005 | 0.6765 | 0.4413 | 0.0 | 0.5751 | 0.6176 | 0.2944 | 0.8811 | 0.0373 | 0.0 | 0.2919 | 0.0 | 0.5578 | 0.2307 | 0.0 | 0.7961 | 0.1835 | 0.4901 | 0.4814 | 0.2506 | nan | 0.3771 | 0.4560 | 0.3562 | 0.0015 | 0.8810 | 0.7806 | 0.9474 | 0.0077 | 0.1823 | 0.3784 | 0.0 |
| 0.0894 | 20.0 | 4000 | 0.6973 | 0.3988 | 0.4923 | 0.8722 | nan | 0.8952 | 0.9456 | 0.5309 | 0.8357 | 0.4777 | 0.0 | 0.7630 | 0.8291 | 0.2785 | 0.9467 | 0.1712 | 0.0047 | 0.1377 | 0.0 | 0.7854 | 0.8237 | 0.0 | 0.9282 | 0.1899 | 0.5904 | 0.6020 | 0.4761 | nan | 0.3323 | 0.6192 | 0.4047 | 0.1381 | 0.9522 | 0.8851 | 0.9767 | 0.0050 | 0.1916 | 0.5277 | 0.0 | nan | 0.8062 | 0.8543 | 0.4692 | 0.6675 | 0.4204 | 0.0 | 0.5710 | 0.6083 | 0.2366 | 0.8855 | 0.0677 | 0.0046 | 0.1374 | 0.0 | 0.5492 | 0.2453 | 0.0 | 0.7996 | 0.1571 | 0.4887 | 0.4847 | 0.2656 | nan | 0.3148 | 0.4670 | 0.3491 | 0.1264 | 0.8815 | 0.7829 | 0.9496 | 0.0043 | 0.1718 | 0.3936 | 0.0 |
| 0.0819 | 22.5 | 4500 | 0.6867 | 0.4098 | 0.5001 | 0.8778 | nan | 0.9344 | 0.9410 | 0.5690 | 0.8783 | 0.4856 | 0.0 | 0.7065 | 0.8495 | 0.2085 | 0.9415 | 0.1530 | 0.0018 | 0.2354 | 0.0 | 0.7829 | 0.7796 | 0.0 | 0.9044 | 0.2261 | 0.6171 | 0.6045 | 0.4780 | nan | 0.4156 | 0.6265 | 0.4288 | 0.1457 | 0.9563 | 0.8877 | 0.9804 | 0.0064 | 0.2136 | 0.5447 | 0.0 | nan | 0.8016 | 0.8702 | 0.4902 | 0.7597 | 0.4279 | 0.0 | 0.5780 | 0.6123 | 0.1998 | 0.8889 | 0.0577 | 0.0018 | 0.2348 | 0.0 | 0.5898 | 0.2436 | 0.0 | 0.7992 | 0.1842 | 0.4829 | 0.4918 | 0.2855 | nan | 0.3732 | 0.4658 | 0.3650 | 0.1297 | 0.8823 | 0.7837 | 0.9500 | 0.0053 | 0.1841 | 0.3839 | 0.0 |
| 0.0767 | 25.0 | 5000 | 0.7377 | 0.4096 | 0.5109 | 0.8720 | nan | 0.8599 | 0.9464 | 0.5724 | 0.9354 | 0.4838 | 0.0 | 0.7392 | 0.8475 | 0.2679 | 0.9530 | 0.2438 | 0.0 | 0.2405 | 0.0 | 0.7879 | 0.8364 | 0.0 | 0.9155 | 0.2107 | 0.5924 | 0.5901 | 0.5525 | nan | 0.3980 | 0.6229 | 0.4648 | 0.2165 | 0.9550 | 0.8865 | 0.9823 | 0.0047 | 0.1970 | 0.5557 | 0.0 | nan | 0.7881 | 0.8643 | 0.5042 | 0.6317 | 0.4280 | 0.0 | 0.5817 | 0.6075 | 0.2397 | 0.8857 | 0.1052 | 0.0 | 0.2384 | 0.0 | 0.5664 | 0.2501 | 0.0 | 0.8056 | 0.1813 | 0.4878 | 0.4863 | 0.2871 | nan | 0.3652 | 0.4725 | 0.3883 | 0.1660 | 0.8804 | 0.7905 | 0.9503 | 0.0040 | 0.1704 | 0.3891 | 0.0 |
| 0.0725 | 27.5 | 5500 | 0.7085 | 0.3977 | 0.5056 | 0.8782 | nan | 0.9177 | 0.9482 | 0.4916 | 0.8966 | 0.4989 | 0.0 | 0.7119 | 0.8469 | 0.2483 | 0.9512 | 0.2387 | 0.0440 | 0.1287 | 0.0 | 0.7947 | 0.8184 | 0.0 | 0.9152 | 0.2257 | 0.6472 | 0.5963 | 0.5426 | nan | 0.3951 | 0.6422 | 0.4369 | 0.2195 | 0.9499 | 0.8824 | 0.9821 | 0.0036 | 0.1824 | 0.5266 | 0.0 | 0.0 | 0.8109 | 0.8638 | 0.4498 | 0.7314 | 0.4437 | 0.0 | 0.5797 | 0.6047 | 0.2215 | 0.8861 | 0.0855 | 0.0430 | 0.1284 | 0.0 | 0.5657 | 0.2395 | 0.0 | 0.8058 | 0.1939 | 0.5113 | 0.4913 | 0.2943 | nan | 0.3732 | 0.4773 | 0.3770 | 0.1643 | 0.8836 | 0.7864 | 0.9509 | 0.0029 | 0.1639 | 0.3905 | 0.0 |
| 0.0685 | 30.0 | 6000 | 0.7388 | 0.4115 | 0.5051 | 0.8738 | nan | 0.9135 | 0.9420 | 0.5290 | 0.8405 | 0.4909 | 0.0 | 0.7408 | 0.8566 | 0.3161 | 0.9461 | 0.1138 | 0.0003 | 0.1616 | 0.0 | 0.8061 | 0.7486 | 0.0001 | 0.9074 | 0.2986 | 0.6418 | 0.5669 | 0.4769 | nan | 0.4607 | 0.6454 | 0.4717 | 0.2320 | 0.9531 | 0.8849 | 0.9802 | 0.0037 | 0.1983 | 0.5417 | 0.0 | nan | 0.7911 | 0.8647 | 0.4671 | 0.6651 | 0.4361 | 0.0 | 0.5848 | 0.6127 | 0.2642 | 0.8885 | 0.0453 | 0.0003 | 0.1613 | 0.0 | 0.5455 | 0.2421 | 0.0001 | 0.8022 | 0.2382 | 0.4975 | 0.4741 | 0.3279 | nan | 0.4050 | 0.4789 | 0.3937 | 0.1921 | 0.8825 | 0.7873 | 0.9516 | 0.0032 | 0.1809 | 0.3970 | 0.0 |
| 0.0654 | 32.5 | 6500 | 0.7246 | 0.4128 | 0.5034 | 0.8789 | nan | 0.9247 | 0.9424 | 0.5865 | 0.8579 | 0.5105 | 0.0 | 0.7409 | 0.8799 | 0.2449 | 0.9462 | 0.0922 | 0.0 | 0.1728 | 0.0 | 0.7762 | 0.7085 | 0.0 | 0.9151 | 0.2459 | 0.6278 | 0.6088 | 0.5426 | nan | 0.4260 | 0.6444 | 0.4471 | 0.2230 | 0.9530 | 0.8839 | 0.9833 | 0.0040 | 0.1978 | 0.5251 | 0.0 | nan | 0.8010 | 0.8705 | 0.5132 | 0.7193 | 0.4466 | 0.0 | 0.5906 | 0.5971 | 0.2204 | 0.8884 | 0.0419 | 0.0 | 0.1724 | 0.0 | 0.5623 | 0.2184 | 0.0 | 0.8044 | 0.2015 | 0.5037 | 0.4964 | 0.3206 | nan | 0.4032 | 0.4828 | 0.3859 | 0.1802 | 0.8828 | 0.7909 | 0.9505 | 0.0033 | 0.1792 | 0.3959 | 0.0 |
| 0.0629 | 35.0 | 7000 | 0.7655 | 0.4168 | 0.5105 | 0.8741 | nan | 0.8961 | 0.9470 | 0.5214 | 0.8906 | 0.4982 | 0.0 | 0.7542 | 0.8631 | 0.2754 | 0.9512 | 0.1882 | 0.0015 | 0.3457 | 0.0 | 0.7778 | 0.6418 | 0.0144 | 0.8908 | 0.2816 | 0.6612 | 0.5910 | 0.5330 | nan | 0.4434 | 0.6305 | 0.4273 | 0.2421 | 0.9516 | 0.8805 | 0.9821 | 0.0036 | 0.2172 | 0.5444 | 0.0 | nan | 0.7981 | 0.8672 | 0.4665 | 0.6765 | 0.4364 | 0.0 | 0.5934 | 0.6114 | 0.2489 | 0.8877 | 0.0831 | 0.0013 | 0.3436 | 0.0 | 0.5668 | 0.2017 | 0.0140 | 0.7928 | 0.2283 | 0.4708 | 0.4904 | 0.3458 | nan | 0.4011 | 0.4722 | 0.3699 | 0.1843 | 0.8836 | 0.7898 | 0.9516 | 0.0030 | 0.1851 | 0.3897 | 0.0 |
| 0.0607 | 37.5 | 7500 | 0.7668 | 0.4180 | 0.5139 | 0.8751 | nan | 0.8948 | 0.9480 | 0.5612 | 0.8579 | 0.4903 | 0.0 | 0.7432 | 0.8676 | 0.2619 | 0.9495 | 0.1718 | 0.0165 | 0.3359 | 0.0010 | 0.7738 | 0.7077 | 0.0304 | 0.9104 | 0.2826 | 0.6353 | 0.6045 | 0.5609 | nan | 0.4406 | 0.6293 | 0.4355 | 0.2376 | 0.9511 | 0.8940 | 0.9818 | 0.0033 | 0.2277 | 0.5530 | 0.0 | nan | 0.7933 | 0.8676 | 0.4914 | 0.6562 | 0.4327 | 0.0 | 0.5956 | 0.6059 | 0.2348 | 0.8875 | 0.0739 | 0.0161 | 0.3343 | 0.0010 | 0.5696 | 0.2086 | 0.0295 | 0.8084 | 0.2268 | 0.5014 | 0.4962 | 0.3297 | nan | 0.3948 | 0.4702 | 0.3754 | 0.1918 | 0.8836 | 0.7857 | 0.9519 | 0.0029 | 0.1886 | 0.3900 | 0.0 |
| 0.0582 | 40.0 | 8000 | 0.7562 | 0.4049 | 0.5074 | 0.8780 | nan | 0.9204 | 0.9463 | 0.5052 | 0.8643 | 0.5082 | 0.0 | 0.7360 | 0.8650 | 0.2462 | 0.9501 | 0.1893 | 0.0024 | 0.2403 | 0.0 | 0.7814 | 0.6631 | 0.0242 | 0.9169 | 0.2821 | 0.6356 | 0.5984 | 0.5609 | nan | 0.4218 | 0.6299 | 0.4414 | 0.2421 | 0.9504 | 0.8835 | 0.9796 | 0.0043 | 0.2138 | 0.5395 | 0.0 | 0.0 | 0.8004 | 0.8683 | 0.4553 | 0.7073 | 0.4478 | 0.0 | 0.6007 | 0.6132 | 0.2291 | 0.8882 | 0.0839 | 0.0024 | 0.2391 | 0.0 | 0.5777 | 0.2020 | 0.0237 | 0.8060 | 0.2276 | 0.5058 | 0.4990 | 0.3415 | nan | 0.3905 | 0.4699 | 0.3815 | 0.1952 | 0.8837 | 0.7908 | 0.9522 | 0.0036 | 0.1849 | 0.3941 | 0.0 |
| 0.0565 | 42.5 | 8500 | 0.7834 | 0.4004 | 0.5024 | 0.8762 | nan | 0.8960 | 0.9474 | 0.5430 | 0.8894 | 0.4937 | 0.0 | 0.7492 | 0.8696 | 0.2727 | 0.9482 | 0.1505 | 0.0006 | 0.1868 | 0.0 | 0.7945 | 0.5042 | 0.0051 | 0.9155 | 0.2834 | 0.6369 | 0.5958 | 0.5811 | nan | 0.4136 | 0.6419 | 0.4457 | 0.2481 | 0.9510 | 0.8887 | 0.9822 | 0.0041 | 0.2054 | 0.5358 | 0.0 | 0.0 | 0.7955 | 0.8676 | 0.4613 | 0.6877 | 0.4390 | 0.0 | 0.6023 | 0.6092 | 0.2503 | 0.8879 | 0.0626 | 0.0006 | 0.1860 | 0.0 | 0.5802 | 0.1628 | 0.0051 | 0.8061 | 0.2307 | 0.5003 | 0.4961 | 0.3290 | nan | 0.3820 | 0.4752 | 0.3819 | 0.2070 | 0.8838 | 0.7914 | 0.9520 | 0.0034 | 0.1813 | 0.3944 | 0.0 |
| 0.0562 | 45.0 | 9000 | 0.7812 | 0.4015 | 0.5008 | 0.8772 | nan | 0.9064 | 0.9466 | 0.5058 | 0.8872 | 0.5059 | 0.0 | 0.7482 | 0.8642 | 0.2957 | 0.9489 | 0.1494 | 0.0059 | 0.1674 | 0.0003 | 0.8079 | 0.4484 | 0.0122 | 0.9134 | 0.2785 | 0.6303 | 0.6007 | 0.5723 | nan | 0.4337 | 0.6286 | 0.4291 | 0.2541 | 0.9521 | 0.8902 | 0.9821 | 0.0054 | 0.2097 | 0.5444 | 0.0 | 0.0 | 0.7954 | 0.8685 | 0.4597 | 0.7046 | 0.4471 | 0.0 | 0.6024 | 0.6174 | 0.2611 | 0.8881 | 0.0630 | 0.0057 | 0.1667 | 0.0003 | 0.5844 | 0.1441 | 0.0120 | 0.8059 | 0.2279 | 0.5021 | 0.4963 | 0.3439 | nan | 0.3981 | 0.4694 | 0.3714 | 0.2134 | 0.8841 | 0.7883 | 0.9522 | 0.0044 | 0.1836 | 0.3913 | 0.0 |
| 0.0547 | 47.5 | 9500 | 0.7899 | 0.3997 | 0.4971 | 0.8759 | nan | 0.9053 | 0.9472 | 0.4999 | 0.8752 | 0.5002 | 0.0 | 0.7334 | 0.8557 | 0.2947 | 0.9505 | 0.1326 | 0.0 | 0.1843 | 0.0 | 0.8065 | 0.3995 | 0.0184 | 0.9146 | 0.2650 | 0.6301 | 0.6056 | 0.5749 | nan | 0.4294 | 0.6299 | 0.4450 | 0.2461 | 0.9515 | 0.8854 | 0.9825 | 0.0044 | 0.2045 | 0.5311 | 0.0000 | 0.0 | 0.7939 | 0.8655 | 0.4530 | 0.6865 | 0.4427 | 0.0 | 0.5983 | 0.6206 | 0.2592 | 0.8881 | 0.0597 | 0.0 | 0.1837 | 0.0 | 0.5769 | 0.1272 | 0.0183 | 0.8055 | 0.2174 | 0.5004 | 0.4960 | 0.3457 | nan | 0.3926 | 0.4724 | 0.3800 | 0.2072 | 0.8841 | 0.7912 | 0.9522 | 0.0037 | 0.1789 | 0.3895 | 0.0000 |
| 0.0543 | 50.0 | 10000 | 0.7826 | 0.3995 | 0.4977 | 0.8759 | nan | 0.9069 | 0.9471 | 0.5043 | 0.8684 | 0.5057 | 0.0 | 0.7351 | 0.8662 | 0.2599 | 0.9494 | 0.1607 | 0.0044 | 0.1992 | 0.0 | 0.7913 | 0.4628 | 0.0106 | 0.9117 | 0.2679 | 0.6351 | 0.5893 | 0.5639 | nan | 0.4246 | 0.6323 | 0.4266 | 0.2431 | 0.9540 | 0.8819 | 0.9827 | 0.0045 | 0.2006 | 0.5328 | 0.0 | 0.0 | 0.7947 | 0.8656 | 0.4529 | 0.6876 | 0.4461 | 0.0 | 0.5989 | 0.6127 | 0.2346 | 0.8877 | 0.0662 | 0.0044 | 0.1985 | 0.0 | 0.5765 | 0.1495 | 0.0106 | 0.8060 | 0.2190 | 0.5015 | 0.4923 | 0.3467 | nan | 0.3908 | 0.4693 | 0.3698 | 0.2052 | 0.8832 | 0.7906 | 0.9519 | 0.0038 | 0.1774 | 0.3885 | 0.0 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Mridul/bert-hate-speech
|
Mridul
| 2023-10-19T07:18:00Z | 0 | 0 |
transformers
|
[
"transformers",
"text-classification",
"en",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-19T06:27:36Z |
---
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
BERT for text classification, trained on a hate speech dataset.
|
hjhwang/mld-test-model
|
hjhwang
| 2023-10-19T07:14:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-19T07:09:06Z |
# MLD model test
A model retrained on the revised HumanML3D data, for testing purposes.
|
krishna-shinde/q-FrozenLake-v1-4x4-noSlippery
|
krishna-shinde
| 2023-10-19T07:13:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-19T07:13:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="krishna-shinde/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
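Once the Q-table is loaded, acting greedily just means picking the highest-valued action for the current state. A toy sketch (the `greedy_action` helper and the table values below are illustrative only; check the actual pickle layout before relying on it):

```python
# Hypothetical sketch: greedy action selection from a tabular Q-function.
def greedy_action(qtable, state):
    """Return the index of the highest-valued action in the given state."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Illustrative 2-state, 4-action table (not the trained values).
toy_qtable = [
    [0.1, 0.9, 0.0, 0.2],  # state 0 -> action 1
    [0.5, 0.4, 0.3, 0.2],  # state 1 -> action 0
]
```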
|
mirodavide/Mistral-neurips_dm
|
mirodavide
| 2023-10-19T06:59:03Z | 17 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-10-16T22:01:10Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Taekyoon/llama2-org-ko-7b
|
Taekyoon
| 2023-10-19T06:48:34Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-19T06:48:34Z |
---
license: cc-by-nc-sa-4.0
---
|
lorahub/flan_t5_xl-ropes_prompt_bottom_no_hint
|
lorahub
| 2023-10-19T06:46:25Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:46:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-amazon_polarity_user_satisfied
|
lorahub
| 2023-10-19T06:45:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:45:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
MMeow223/Bird_Species_200
|
MMeow223
| 2023-10-19T06:45:31Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-10-19T03:58:46Z |
---
license: mit
---
This Bird_Species_200 model was built for COS30082 Applied Machine Learning, Assignment 1.
|
lorahub/flan_t5_xl-definite_pronoun_resolution
|
lorahub
| 2023-10-19T06:45:28Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:45:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-wiki_bio_key_content
|
lorahub
| 2023-10-19T06:45:06Z | 14 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:44:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-trivia_qa_rc
|
lorahub
| 2023-10-19T06:44:48Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:44:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-super_glue_multirc
|
lorahub
| 2023-10-19T06:44:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:44:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-dbpedia_14_given_a_choice_of_categories_
|
lorahub
| 2023-10-19T06:42:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:42:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-sciq_Direct_Question_Closed_Book_
|
lorahub
| 2023-10-19T06:42:22Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:42:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-ropes_background_new_situation_answer
|
lorahub
| 2023-10-19T06:42:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:41:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
lorahub/flan_t5_xl-wiki_qa_Jeopardy_style
|
lorahub
| 2023-10-19T06:41:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-19T06:41:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|