modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-16 00:42:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 522 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-16 00:42:16) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
sontn122/tmp_trainer | sontn122 | 2023-09-09T12:20:41Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2023-09-09T12:17:50Z | ---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
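For orientation, the listed hyperparameters map roughly onto a `transformers` `TrainingArguments` setup like the sketch below (the output directory and any settings not listed above are placeholders, not details of the original run):
```python
from transformers import TrainingArguments

# Hedged sketch: only reproduces the hyperparameters listed above.
# "tmp_trainer" as output_dir is an assumption, not taken from the card.
training_args = TrainingArguments(
    output_dir="tmp_trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # default optimizer settings, so no extra arguments are needed here.
)
```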
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Thamer/resnet-fine_tuned | Thamer | 2023-09-09T12:16:21Z | 259 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:Falah/Alzheimer_MRI",
"base_model:microsoft/resnet-34",
"base_model:finetune:microsoft/resnet-34",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-08-11T23:18:43Z | ---
license: apache-2.0
base_model: microsoft/resnet-34
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-fine_tuned
results: []
datasets:
- Falah/Alzheimer_MRI
library_name: transformers
pipeline_tag: image-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-fine_tuned
This model is a fine-tuned version of [microsoft/resnet-34](https://huggingface.co/microsoft/resnet-34) on the Falah/Alzheimer_MRI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1983
- Accuracy: 0.9219
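A minimal inference sketch using the `transformers` image-classification pipeline (the image path is a placeholder; the label names come from the Falah/Alzheimer_MRI dataset and are not listed in this card):
```python
from transformers import pipeline

# Hedged sketch: classify a local MRI image with the fine-tuned checkpoint.
# "mri_example.jpg" is a hypothetical file name.
classifier = pipeline("image-classification", model="Thamer/resnet-fine_tuned")
predictions = classifier("mri_example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dictionaries
```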
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9041 | 1.0 | 80 | 0.9659 | 0.5352 |
| 0.8743 | 2.0 | 160 | 0.9348 | 0.5797 |
| 0.7723 | 3.0 | 240 | 0.7793 | 0.6594 |
| 0.6864 | 4.0 | 320 | 0.6799 | 0.7031 |
| 0.5347 | 5.0 | 400 | 0.5596 | 0.7703 |
| 0.4282 | 6.0 | 480 | 0.5078 | 0.7766 |
| 0.4315 | 7.0 | 560 | 0.5455 | 0.7680 |
| 0.3747 | 8.0 | 640 | 0.4203 | 0.8266 |
| 0.2977 | 9.0 | 720 | 0.3926 | 0.8469 |
| 0.2252 | 10.0 | 800 | 0.3024 | 0.8742 |
| 0.2675 | 11.0 | 880 | 0.2731 | 0.8906 |
| 0.2136 | 12.0 | 960 | 0.3045 | 0.875 |
| 0.1998 | 13.0 | 1040 | 0.2370 | 0.9 |
| 0.2406 | 14.0 | 1120 | 0.2387 | 0.9086 |
| 0.1873 | 15.0 | 1200 | 0.1983 | 0.9219 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3 |
moniem/finetuning-sentiment-model-3000-samples | moniem | 2023-09-09T11:42:08Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-09T11:35:48Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8646864686468646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.8633
- F1: 0.8647
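A minimal usage sketch with the `transformers` text-classification pipeline (the review text is invented; label names such as LABEL_0/LABEL_1 are the Trainer defaults unless the author configured a custom id2label mapping):
```python
from transformers import pipeline

# Hedged sketch: score one movie review with the fine-tuned DistilBERT classifier.
classifier = pipeline(
    "text-classification",
    model="moniem/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly good and kept me hooked."))
```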
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Prot10/vit-base-patch16-224-for-pre_evaluation | Prot10 | 2023-09-09T11:30:17Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-08-29T17:34:40Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-for-pre_evaluation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-for-pre_evaluation
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6048
- Accuracy: 0.3929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5774 | 0.98 | 16 | 1.5109 | 0.3022 |
| 1.4794 | 1.97 | 32 | 1.4942 | 0.3242 |
| 1.4536 | 2.95 | 48 | 1.4943 | 0.3187 |
| 1.421 | 4.0 | 65 | 1.4247 | 0.3407 |
| 1.3882 | 4.98 | 81 | 1.4944 | 0.3462 |
| 1.3579 | 5.97 | 97 | 1.4180 | 0.3571 |
| 1.2838 | 6.95 | 113 | 1.4693 | 0.3681 |
| 1.2695 | 8.0 | 130 | 1.4359 | 0.3434 |
| 1.2016 | 8.98 | 146 | 1.4656 | 0.3599 |
| 1.2087 | 9.97 | 162 | 1.4550 | 0.3379 |
| 1.206 | 10.95 | 178 | 1.5056 | 0.3516 |
| 1.1236 | 12.0 | 195 | 1.5003 | 0.3434 |
| 1.0534 | 12.98 | 211 | 1.5193 | 0.3269 |
| 1.0024 | 13.97 | 227 | 1.4890 | 0.3681 |
| 0.9767 | 14.95 | 243 | 1.5628 | 0.3434 |
| 0.9201 | 16.0 | 260 | 1.6306 | 0.3516 |
| 0.9136 | 16.98 | 276 | 1.5715 | 0.3626 |
| 0.8566 | 17.97 | 292 | 1.5966 | 0.3654 |
| 0.8273 | 18.95 | 308 | 1.6048 | 0.3929 |
| 0.7825 | 20.0 | 325 | 1.6175 | 0.3846 |
| 0.736 | 20.98 | 341 | 1.6526 | 0.3929 |
| 0.7008 | 21.97 | 357 | 1.6563 | 0.3736 |
| 0.6714 | 22.95 | 373 | 1.7319 | 0.3901 |
| 0.7039 | 24.0 | 390 | 1.6866 | 0.3929 |
| 0.628 | 24.98 | 406 | 1.7023 | 0.3791 |
| 0.6182 | 25.97 | 422 | 1.7301 | 0.3901 |
| 0.5957 | 26.95 | 438 | 1.7157 | 0.3846 |
| 0.5973 | 28.0 | 455 | 1.7478 | 0.3709 |
| 0.5655 | 28.98 | 471 | 1.7377 | 0.3736 |
| 0.5631 | 29.54 | 480 | 1.7374 | 0.3736 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
felixshier/ac-01-bert-finetuned | felixshier | 2023-09-09T11:25:10Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-08-15T23:32:39Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: ac-01-bert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ac-01-bert-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1172
- Validation Loss: 0.5493
- Train F1: 0.8137
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4030, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
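The serialized optimizer dictionary above corresponds roughly to the following Keras setup (a reconstruction for readability, not the author's original training script):
```python
import tensorflow as tf

# Hedged sketch: rebuild the optimizer described by the serialized config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=4030,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```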
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.5556 | 0.4472 | 0.7965 | 0 |
| 0.3877 | 0.4268 | 0.8107 | 1 |
| 0.2931 | 0.4459 | 0.8165 | 2 |
| 0.1734 | 0.5071 | 0.8223 | 3 |
| 0.1172 | 0.5493 | 0.8137 | 4 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sd-dreambooth-library/tatar-style | sd-dreambooth-library | 2023-09-09T11:18:50Z | 33 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-09T11:15:48Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### tatar style on Stable Diffusion via Dreambooth
#### model by nailmarsel
This is the Stable Diffusion model fine-tuned on the tatar style concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **tatar_style**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
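For a quick local test, a minimal `diffusers` sketch (the prompt is only an illustrative example that includes the instance token):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: load the DreamBooth concept and generate one image.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/tatar-style", torch_dtype=torch.float16
).to("cuda")
image = pipe("a village scene in tatar_style").images[0]
image.save("tatar_style_example.png")
```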
Here are the images used for training this concept:













|
xszhou/CartPole-v1 | xszhou | 2023-09-09T11:17:27Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-09T11:17:17Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ingeol/llama_qlora_test0 | ingeol | 2023-09-09T11:16:05Z | 0 | 0 | peft | [
"peft",
"pytorch",
"region:us"
]
| null | 2023-09-09T10:44:13Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Anya2099/llama2-qlora-finetunined-french | Anya2099 | 2023-09-09T11:09:59Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-09T11:09:52Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
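When loading a base model for QLoRA training, the config above corresponds roughly to the following `BitsAndBytesConfig` (a sketch; the base model is not named in this card, so the checkpoint below is only a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hedged sketch: the 4-bit NF4 quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# "meta-llama/Llama-2-7b-hf" is a placeholder base model, not taken from the card.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```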
### Framework versions
- PEFT 0.6.0.dev0
|
Kyrmasch/mDeBERTa-v3-base-SQuAD2-kaz | Kyrmasch | 2023-09-09T11:08:17Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"question-answering",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-05T05:58:16Z | Base: timpal0l/mdeberta-v3-base-squad2 |
BobaStr/emails_bert_gpu | BobaStr | 2023-09-09T11:01:10Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-09T10:45:51Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: BobaStr/emails_bert_gpu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BobaStr/emails_bert_gpu
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1924
- Validation Loss: 0.2284
- Train Accuracy: 0.9346
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17315, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2907 | 0.2394 | 0.9266 | 0 |
| 0.2191 | 0.2311 | 0.9319 | 1 |
| 0.1924 | 0.2284 | 0.9346 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
shadowsbuiltin/lora-trained-xl | shadowsbuiltin | 2023-09-09T10:48:43Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-09T10:14:55Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - shadowsbuiltin/lora-trained-xl
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
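A minimal loading sketch with `diffusers` (the prompt mirrors the instance prompt above; using the fp16-fix VAE and float16 weights is an assumption for GPU inference):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Hedged sketch: SDXL base + this LoRA, with the fp16-fix VAE mentioned above.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("shadowsbuiltin/lora-trained-xl")
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog_lora.png")
```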
|
haouarin/jais-13b-chat-8bits | haouarin | 2023-09-09T10:45:56Z | 6 | 3 | transformers | [
"transformers",
"pytorch",
"jais",
"text-generation",
"custom_code",
"autotrain_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2023-09-08T12:19:30Z | Demo google colab : https://colab.research.google.com/drive/13rz5tGDdHc3fTah8qT9rmOKdIg1ylqcD?usp=sharing |
RVC-RU/glad-valakas-ru | RVC-RU | 2023-09-09T10:38:53Z | 0 | 8 | null | [
"license:mit",
"region:us"
]
| null | 2023-09-09T06:47:31Z | ---
license: mit
---
# Russian-language model of the streamer GLAD VALAKAS
###### By nekoanime :)
##### - The model was trained for 350 epochs. The D and G files are the standard ones.
##### - The dataset is included in the files; feel free to keep training and refining the model to perfection if you want.
## Model tests (contains profanity)
### Direct download links for the audio below
[Real-time voice recording 1](https://cdn.discordapp.com/attachments/650365898678468647/1149966845969969192/valakas_1.mp3)
[Real-time voice recording 2](https://cdn.discordapp.com/attachments/650365898678468647/1149966846326493246/valakas_2.mp3)
|
badokorach/bert-base-multilingual-cased-finetuned-luganda-qa | badokorach | 2023-09-09T10:31:09Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-09T09:09:27Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-luganda-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-luganda-qa
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3748 | 1.0 | 2215 | 0.1817 |
| 0.0707 | 2.0 | 4430 | 0.0123 |
| 0.0141 | 3.0 | 6645 | 0.0007 |
| 0.0045 | 4.0 | 8860 | 0.0002 |
| 0.0005 | 5.0 | 11075 | 0.0000 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
davanstrien/detr_beyond_words | davanstrien | 2023-09-09T10:30:30Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"license:mit",
"endpoints_compatible",
"region:us"
]
| object-detection | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- object-detection
widget:
- src: https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/19.jpg
example_title: page
- src: https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/65.jpg
example_title: page2
---
# detr_beyond_words (WIP)
[facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) fine-tuned on [Beyond Words](https://github.com/LibraryOfCongress/newspaper-navigator/tree/master/beyond_words_data).
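A minimal inference sketch using the `transformers` object-detection pipeline (the example image is one of the widget pages listed in the metadata above; label names depend on the Beyond Words annotation scheme):
```python
from transformers import pipeline

# Hedged sketch: run the fine-tuned DETR checkpoint on one newspaper page image.
detector = pipeline("object-detection", model="davanstrien/detr_beyond_words")
results = detector(
    "https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/19.jpg"
)
for detection in results:
    print(detection["label"], round(detection["score"], 3), detection["box"])
```
|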
camenduru/ffmpeg-cuda | camenduru | 2023-09-09T10:17:18Z | 0 | 1 | null | [
"region:us"
]
| null | 2023-09-09T10:16:55Z | FFmpeg README
=============
FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.
## Libraries
* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides means to alter decoded audio and video through a directed graph of connected filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
## Tools
* [ffmpeg](https://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](https://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](https://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.
## Documentation
The offline documentation is available in the **doc/** directory.
The online documentation is available in the main [website](https://ffmpeg.org)
and in the [wiki](https://trac.ffmpeg.org).
### Examples
Coding examples are available in the **doc/examples** directory.
## License
FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.
## Contributing
Patches should be submitted to the ffmpeg-devel mailing list using
`git format-patch` or `git send-email`. Github pull requests should be
avoided because they are not part of our review process and will be ignored.
|
antikpatel128/OUTPUT_DIR | antikpatel128 | 2023-09-09T09:54:33Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-09-08T14:21:44Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# Slider SDXL - LoRA

## SDXL ONLY
- weight: **0 to 5.0**
- positive: **more realistic**
- negative: **less realistic, cartoon, painting, etc**

I noticed the more bizarre your prompt gets, the more SDXL wants to turn it into a cartoon. This helps give you the ability to adjust the level of realism in a photo. All images were generated without refiner. I refuse.

If you like my work, I am not asking for coffee, but a kind review is always appreciated.
## Image examples for the model:




|
abeiler/goatV10-QLORA | abeiler | 2023-09-09T09:33:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-07T15:54:21Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: goatV10-QLORA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goatV10-QLORA
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4692 | 0.16 | 200 | 0.4549 |
| 0.4234 | 0.31 | 400 | 0.4144 |
| 0.3943 | 0.47 | 600 | 0.4011 |
| 0.4079 | 0.63 | 800 | 0.3922 |
| 0.4171 | 0.79 | 1000 | 0.3877 |
| 0.3983 | 0.94 | 1200 | 0.3861 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bhuvaneshwari/worktual_vectone_cai | Bhuvaneshwari | 2023-09-09T09:27:48Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-09T09:13:35Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
StefanoCaloni/Cartpole | StefanoCaloni | 2023-09-09T09:21:50Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-09T09:21:39Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CzarnyRycerz/poca-SoccerTwos | CzarnyRycerz | 2023-09-09T09:16:53Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-09T09:07:29Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: CzarnyRycerz/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
GregoRio123/itys | GregoRio123 | 2023-09-09T09:15:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-06-15T09:36:15Z | ---
license: creativeml-openrail-m
---
|
om-ashish-soni/bert-base-cased | om-ashish-soni | 2023-09-09T08:58:40Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-09T08:47:03Z | ---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9245192191975353
- name: Recall
type: recall
value: 0.9319212946114467
- name: F1
type: f1
value: 0.9282054999758349
- name: Accuracy
type: accuracy
value: 0.9332577853652794
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased
This model was trained from scratch on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3045
- Precision: 0.9245
- Recall: 0.9319
- F1: 0.9282
- Accuracy: 0.9333
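A minimal inference sketch with the `transformers` token-classification pipeline (entity labels follow the conll2003 scheme, e.g. PER/ORG/LOC/MISC; the sentence is an invented example):
```python
from transformers import pipeline

# Hedged sketch: run the fine-tuned NER model on one sentence.
ner = pipeline(
    "token-classification",
    model="om-ashish-soni/bert-base-cased",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```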
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2707 | 1.0 | 1756 | 0.3120 | 0.9171 | 0.9263 | 0.9217 | 0.9267 |
| 0.1829 | 2.0 | 3512 | 0.2928 | 0.9189 | 0.9295 | 0.9242 | 0.9299 |
| 0.1411 | 3.0 | 5268 | 0.3045 | 0.9245 | 0.9319 | 0.9282 | 0.9333 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Sunny98/Taxi-v3 | Sunny98 | 2023-09-09T08:58:01Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-09T08:57:59Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # assumption: the gymnasium API used by the Deep RL Course notebooks

# `load_from_hub` is the helper defined in the Deep RL Course (Unit 2) notebook.
model = load_from_hub(repo_id="Sunny98/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
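Building on the snippet above, a greedy rollout could look like this sketch (the `"qtable"` key and the gymnasium-style 5-tuple `step` API are assumptions based on the Deep RL Course template, not details stated in this card):
```python
import numpy as np

# Hedged sketch: act greedily with the loaded Q-table for one episode.
q_table = model["qtable"]  # key name assumed from the course's model dictionary
state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(q_table[state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
print("Episode reward:", total_reward)
```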
|
OpenMOSS/moss-vits-onnx-model | OpenMOSS | 2023-09-09T08:55:34Z | 0 | 1 | null | [
"onnx",
"zh",
"region:us"
]
| null | 2023-09-09T08:46:08Z | ---
language:
- zh
---
# MOSS voice VITS model (900 epochs)
A pretrained model obtained by fine-tuning VITS on the original MOSS voice lines extracted from the films The Wandering Earth 1 and The Wandering Earth 2, converted to an ONNX model.
**All models and their derivatives provided on this page are prohibited from commercial use!**
**Please bear all consequences of using these models yourself!**
|
KobanBanan/ruRoberta-large_ner | KobanBanan | 2023-09-09T08:41:56Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:ai-forever/ruRoberta-large",
"base_model:finetune:ai-forever/ruRoberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-08T14:34:59Z | ---
base_model: ai-forever/ruRoberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ruRoberta-large_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruRoberta-large_ner
This model is a fine-tuned version of [ai-forever/ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1853
- Precision: 0.7273
- Recall: 0.8
- F1: 0.7619
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.4171 | 0.5833 | 0.7 | 0.6364 | 0.8067 |
| No log | 2.0 | 30 | 0.2306 | 0.6765 | 0.7667 | 0.7188 | 0.9 |
| No log | 3.0 | 45 | 0.1853 | 0.7273 | 0.8 | 0.7619 | 0.9333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0.dev20230621+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Venkatesh4342/pegasus-samsum | Venkatesh4342 | 2023-09-09T07:39:35Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-07T14:22:01Z | ---
base_model: google/pegasus-large
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: pegasus-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 0.4659
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4091
- Rouge1: 0.4659
- Rouge2: 0.2345
- Rougel: 0.3946
- Rougelsum: 0.3951
- Gen Len: 17.7467
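A minimal usage sketch with the `transformers` summarization pipeline (the dialogue is an invented example in the samsum style):
```python
from transformers import pipeline

# Hedged sketch: summarize a short chat dialogue with the fine-tuned model.
summarizer = pipeline("summarization", model="Venkatesh4342/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```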
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.8025 | 0.27 | 500 | 1.4403 | 0.4466 | 0.2101 | 0.3832 | 0.3841 | 21.64 |
| 1.5936 | 0.54 | 1000 | 1.3766 | 0.4786 | 0.2374 | 0.4017 | 0.4013 | 21.24 |
| 1.5926 | 0.81 | 1500 | 1.3910 | 0.5118 | 0.2643 | 0.4282 | 0.4286 | 20.2267 |
| 1.5067 | 1.09 | 2000 | 1.4028 | 0.4982 | 0.261 | 0.4155 | 0.4157 | 20.4267 |
| 1.5712 | 1.36 | 2500 | 1.4236 | 0.4712 | 0.234 | 0.3964 | 0.3969 | 17.0 |
| 1.6177 | 1.63 | 3000 | 1.4151 | 0.4768 | 0.2382 | 0.4019 | 0.4022 | 16.28 |
| 1.6289 | 1.9 | 3500 | 1.4112 | 0.4744 | 0.2346 | 0.402 | 0.4033 | 17.0267 |
| 1.6326 | 2.17 | 4000 | 1.4096 | 0.4682 | 0.234 | 0.3985 | 0.3994 | 17.1333 |
| 1.5929 | 2.44 | 4500 | 1.4093 | 0.4637 | 0.2342 | 0.3939 | 0.3942 | 17.16 |
| 1.4351 | 2.72 | 5000 | 1.4090 | 0.4684 | 0.2346 | 0.3953 | 0.3955 | 17.8133 |
| 1.6445 | 2.99 | 5500 | 1.4091 | 0.4659 | 0.2345 | 0.3946 | 0.3951 | 17.7467 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
syp1229/whisper-small-Young | syp1229 | 2023-09-09T07:22:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-09T03:27:32Z | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Small ko-Yfreq-E - syp1229
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ko-Yfreq-E - syp1229
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aihub Y E dialogue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3217
- Cer: 0.0749
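A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

# Hedged sketch: transcribe a Korean speech clip with the fine-tuned Whisper model.
asr = pipeline("automatic-speech-recognition", model="syp1229/whisper-small-Young")
print(asr("sample_korean_dialogue.wav")["text"])
```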
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1927 | 0.3 | 100 | 0.3277 | 0.0937 |
| 0.1915 | 0.59 | 200 | 0.3208 | 0.0843 |
| 0.135 | 0.89 | 300 | 0.3242 | 0.0940 |
| 0.062 | 1.19 | 400 | 0.3134 | 0.0819 |
| 0.0512 | 1.48 | 500 | 0.3234 | 0.0827 |
| 0.036 | 1.78 | 600 | 0.3145 | 0.0811 |
| 0.0217 | 2.07 | 700 | 0.3208 | 0.0807 |
| 0.0148 | 2.37 | 800 | 0.3228 | 0.0785 |
| 0.0359 | 2.67 | 900 | 0.3162 | 0.0789 |
| 0.0256 | 2.96 | 1000 | 0.3219 | 0.0784 |
| 0.0054 | 3.26 | 1100 | 0.3224 | 0.0770 |
| 0.0087 | 3.56 | 1200 | 0.3202 | 0.0748 |
| 0.0045 | 3.85 | 1300 | 0.3191 | 0.0755 |
| 0.0095 | 4.15 | 1400 | 0.3165 | 0.0739 |
| 0.0043 | 4.44 | 1500 | 0.3189 | 0.0738 |
| 0.0024 | 4.74 | 1600 | 0.3217 | 0.0749 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
922-CA/natsuki-lm-lora-tests | 922-CA | 2023-09-09T07:14:46Z | 0 | 0 | null | [
"license:llama2",
"region:us"
]
| null | 2023-09-07T08:39:45Z | ---
license: llama2
---
For best results, use "Player" and "Natsuki" like so:
\nPlayer: (prompt)\nNatsuki:
# l2-7b-natsuki-v0.1 (09/07/2023)
* Fine-tuned on Natsuki dialogue from DDLC (dataset of ~800 items augmented by [MythoMax-l2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) to turn into multi-turn chat dialogue)
* From chat LLaMA-2-7b
* Lora of [l2-7b-natsuki-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-natsuki-ddlc-v0.1)
# l2-7b-natsuki-v0.1-Kv2 (09/08/2023)
* Fine-tuned on Natsuki dialogue from DDLC (dataset of ~800 items augmented by [MythoMax-l2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) to turn into multi-turn chat dialogue)
* From [Kimiko-LLaMA-2-7b](https://huggingface.co/johnwick123forevr/Llama2-chat-kimiko-Sharded-2gb)
* Lora of [l2-7b-natsuki-ddlc-v0.1-Kv2](https://huggingface.co/922-CA/l2-7b-natsuki-ddlc-v0.1-Kv2)
|
Jedida/sentence_sentiments_analysis_bert | Jedida | 2023-09-09T07:14:06Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-08T11:28:13Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: sentence_sentiments_analysis_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence_sentiments_analysis_bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2690
- F1-score: 0.9132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3365 | 1.0 | 2500 | 0.3218 | 0.9066 |
| 0.2477 | 2.0 | 5000 | 0.2690 | 0.9132 |
| 0.1417 | 3.0 | 7500 | 0.3876 | 0.9178 |
| 0.0645 | 4.0 | 10000 | 0.4436 | 0.9216 |
| 0.0304 | 5.0 | 12500 | 0.5194 | 0.9208 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
asyafiqe/Merak-7B-v3-Mini-Orca-Indo | asyafiqe | 2023-09-09T07:00:02Z | 13 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"id",
"dataset:asyafiqe/orca_mini_v1_indonesia",
"arxiv:2307.09288",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-08-26T08:36:51Z | ---
inference: false
license: cc-by-nc-sa-4.0
datasets:
- asyafiqe/orca_mini_v1_indonesia
language:
- en
- id
---
# 🦚Merak-7B-v3-Mini-Orca🐳
<p align="center">
<img src="https://i.imgur.com/39sQd3h.png" alt="Merak Orca" width="300" height="300"/>
</p>
**Merak-7B-v3-Mini-Orca** is Ichsan2895's [Merak-7B-v3](https://huggingface.co/Ichsan2895/Merak-7B-v3) fine-tuned
on a Bahasa Indonesia translation of psmathur's [orca_mini_v1_dataset](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset).
## Usage
This model fits on a 16GB VRAM GPU (a Google Colab T4 will do); by using BitsandBytes it can run on a 6GB VRAM GPU.
[](https://colab.research.google.com/drive/11xmPcRNirGwZcpgmNPNpUioJUG4PQBuh)
**Quantized** versions are available:
GPTQ: https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GPTQ
GGML/GGUF: I will try to make this version once GGUF merge is stable.
Start chatting with Merak Mini Orca using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo")
model = AutoModelForCausalLM.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo", torch_dtype=torch.float16, device_map="auto")
system_prompt = "SYSTEM: 'Anda adalah asisten AI. Anda akan diberi tugas. Anda harus menghasilkan jawaban yang rinci dan panjang.\n"
message = "Buatlah rencana untuk mengurangi penggunaan listrik di rumah."
prompt = f"{system_prompt}USER: {message}\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.1, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Prompt format
You can use [Vicuna 1.1](https://github.com/oobabooga/text-generation-webui/blob/main/instruction-templates/Vicuna-v1.1.yaml)
format for Ooobabooga's text generation webui.
```
SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang.
USER: <prompt> (without the <>)
ASSISTANT:
```
## Training details
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Merak-7B-v3-Mini-Orca was instruction fine-tuned on 2 x 3090-24GB for 6 hours. [LoRA](https://github.com/microsoft/LoRA), [DeepSpeed ZeRO-2](https://github.com/microsoft/DeepSpeed), and [FlashAttention](https://github.com/Dao-AILab/flash-attention) were implemented during training using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
Hyperparameter | value |
| ------ | ------ |
learning rate | 0.0004 |
batch size | 16 |
microbatch size | 2 |
warmup step | 100 |
epochs | 2 |
weight decay | 0.0 |
lr scheduler | cosine |
lora alpha | 16 |
lora rank | 16 |
lora dropout | 0.05 |
lora target modules | q_proj, v_proj, k_proj, o_proj |
cutoff length | 4096 |
#### Training loss
Step |Train Loss |
| ------ | ------ |
1 |0.9578 |
100 |0.816 |
200 |0.7819 |
300 |0.7279 |
400 |0.732 |
500 |0.7139 |
600 |0.6829 |
700 |0.6641 |
800 |0.6553 |
#### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## Citation
```
@Paper{arXiv,
author = {Touvron, et al},
title = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
journal = {arXiv preprint arXiv:2307.09288},
year = {2023}
}
@misc{orca_mini_v3_70b,
author = {Pankaj Mathur},
title = {orca_mini_v3_70b: An Orca Style Llama2-70b model},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_70b}},
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
``` |
AdiOO7/SalesKRA | AdiOO7 | 2023-09-09T06:59:42Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-09T06:59:33Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Dorgus/horse_model | Dorgus | 2023-09-09T06:50:17Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stablediffusionapi/bb95-furry-mix",
"base_model:finetune:stablediffusionapi/bb95-furry-mix",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-09T03:44:22Z |
---
license: creativeml-openrail-m
base_model: stablediffusionapi/bb95-furry-mix
instance_prompt: handsome sks anthro horse with black and white fur
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Dorgus/horse_model
This is a dreambooth model derived from stablediffusionapi/bb95-furry-mix. The weights were trained on handsome sks anthro horse with black and white fur using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
sk0032/coqui-tts-model-adam | sk0032 | 2023-09-09T06:43:08Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-08T12:29:19Z | Epochs- 11,276
GLOBAL_STEP: 1248150 |
shenshan/chinese-alpaca-2-gguf | shenshan | 2023-09-09T06:42:50Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-08T08:36:30Z | ---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- text-generation-inference
---
# Chinese-Alpaca-2 7B & 13B
Quantized by [llama.cpp](https://github.com/ggerganov/llama.cpp)
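A minimal local-inference sketch using the `llama-cpp-python` bindings (the GGUF filename and the plain prompt below are placeholders; check the repository's file list and the upstream project's chat template for the real ones):
```python
from llama_cpp import Llama

# Hedged sketch: run one of the quantized GGUF files from this repo locally.
# "chinese-alpaca-2-7b.Q4_K_M.gguf" is a hypothetical filename.
llm = Llama(model_path="chinese-alpaca-2-7b.Q4_K_M.gguf", n_ctx=2048)
output = llm("Please briefly introduce the history of Beijing.", max_tokens=128)
print(output["choices"][0]["text"])
```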
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for details. |
922-CA/l2-7b-yuri-ddlc-v0.1-gguf | 922-CA | 2023-09-09T06:33:08Z | 3 | 0 | null | [
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-08T13:00:10Z | ---
license: llama2
---
GGUFs of [l2-7b-yuri-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-yuri-ddlc-v0.1). (Primarily tested and run with Koboldcpp v1.41+).
QLora (hf and GGML) [here](https://huggingface.co/922-CA/yuri-lm-lora-tests/tree/main/l2-7b-yuri-v0.1). |
thainq107/flan-t5-small-amazon-reviews-multi | thainq107 | 2023-09-09T06:33:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-09T05:46:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-small-amazon-reviews-multi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-amazon-reviews-multi
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4648
- Accuracy: 0.598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4927 | 1.0 | 6250 | 0.4850 | 0.5824 |
| 0.4756 | 2.0 | 12500 | 0.4799 | 0.5892 |
| 0.4679 | 3.0 | 18750 | 0.4756 | 0.591 |
| 0.4568 | 4.0 | 25000 | 0.4780 | 0.594 |
| 0.4586 | 5.0 | 31250 | 0.4769 | 0.5942 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1
- Datasets 2.9.0
- Tokenizers 0.13.3
|
foduucom/pan-card-detection | foduucom | 2023-09-09T06:31:44Z | 1 | 5 | null | [
"pancard",
"object detection",
"yolov8",
"pancard object-detection",
"Identification Document detection",
"PAN Number",
"Personal Identification",
"Indian ID Card",
"Tax Document",
"Financial Document",
"Government ID",
"Indian PAN Card",
"Legal Document",
"Taxpayer Information",
"PAN Card Holder",
"PAN Card Image",
"ID Verification",
"object-detection",
"en",
"model-index",
"region:us"
]
| object-detection | 2023-08-18T05:33:16Z | ---
tags:
- pancard
- object detection
- yolov8
- pancard object-detection
- Identification Document detection
- PAN Number
- Personal Identification
- Indian ID Card
- Tax Document
- Financial Document
- Government ID
- Indian PAN Card
- Legal Document
- Taxpayer Information
- PAN Card Holder
- PAN Card Image
- ID Verification
model-index:
- name: foduucom/pan-card-detection
results:
- task:
type: object-detection
metrics:
- type: precision
value: 0.72196
name: [email protected](box)
language:
- en
metrics:
- accuracy
pipeline_tag: object-detection
---
<div align="center">
<img width="640" alt="foduucom/pan-card-detection" src="https://huggingface.co/foduucom/pan-card-detection/resolve/main/PAN-Card-Detection.jpg">
</div>
# Model Overview
The PANCard-Detect model is a yolov8 object detection model trained to detect and locate PAN (Permanent Account Number) cards in images. It is built upon the ultralytics library and fine-tuned using a dataset of annotated PAN card images.
## Intended Use
The model is intended to be used for detecting details such as Name, Father's Name, DOB, and PAN Number on PAN cards in images. It can be incorporated into applications that require automated detection and extraction of PAN card information from images.
## Performance
The model has been evaluated on a held-out test dataset and achieved the following performance metrics:
- Average Precision (AP): 0.90
- Precision: 0.92
- Recall: 0.89
- F1 Score: 0.89
Please note that the actual performance may vary based on the input data distribution and quality.
### Recommendations
Users should be informed about the model's limitations and potential biases. Further testing and validation are advised for specific use cases to evaluate its performance accurately.
## How to Get Started with the Model
To get started with the YOLOv8s object Detection model, follow these steps:
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('foduucom/pan-card-detection')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = '/path/to/your/document/images'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
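To pass the detected regions to a downstream OCR/extraction step, the predicted boxes can be cropped from the input image. This is only an illustrative sketch (not part of the original release); it assumes the ultralytics 8.x box API and whatever class names the checkpoint defines:
```python
from PIL import Image

# assumes `model`, `image` and `results` from the snippet above
img = Image.open(image)
for i, box in enumerate(results[0].boxes):
    label = model.names[int(box.cls[0])]      # class name defined by the checkpoint
    x1, y1, x2, y2 = box.xyxy[0].tolist()     # absolute pixel coordinates
    crop = img.crop((x1, y1, x2, y2))
    crop.save(f"crop_{i}_{label}.png")        # crops can then be fed to an OCR engine
```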
## Training Data
The model was trained on a diverse dataset containing images of PAN cards from different sources, resolutions, and lighting conditions. The dataset was annotated with bounding box coordinates to indicate the location of the PAN card within the image.
- Total Number of Images: 1,100
- Annotation Format: Bounding box coordinates (xmin, ymin, xmax, ymax)
## Fine-tuning Process
- Pretrained Model: The model was initialized with a pretrained object detection backbone (e.g. YOLO).
- Loss Function: Mean Average Precision (mAP) loss was used for optimization during training.
- Optimizer: Adam optimizer with a learning rate of 1e-4.
- Batch Size: -1
- Training Time: 1 hour on a single NVIDIA GeForce RTX 3090 GPU.
## Model Limitations
- The model's performance is subject to variations in image quality, lighting conditions, and image resolutions.
- The model may struggle with detecting PAN cards in cases of extreme occlusion or overlapping objects.
- The model may not generalize well to non-standard PAN card formats or variations.
#### Software
The model was trained and fine-tuned using a Jupyter Notebook environment.
## Model Card Contact
For inquiries and contributions, please contact us at [email protected].
```bibtex
@ModelCard{
author = {Nehul Agrawal and
Rahul parihar},
title = {YOLOv8s pan-card Detection},
year = {2023}
}
```
--- |
dhanushreddy29/neverendingdream | dhanushreddy29 | 2023-09-09T06:21:53Z | 30 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-09T06:18:24Z | ---
license: creativeml-openrail-m
---
|
dsmsb/esg-tweet-bert_0909_testing_v1 | dsmsb | 2023-09-09T05:44:15Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-09T02:38:31Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: esg-tweet-bert_0909_testing_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esg-tweet-bert_0909_testing_v1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 246 | 0.0440 | 0.9887 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bedus-creation/eng-limbu-model-003 | bedus-creation | 2023-09-09T05:42:38Z | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-30T19:14:09Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/eng-limbu-model-003
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/eng-limbu-model-003
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.0945
- Validation Loss: 7.8306
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.3053 | 7.9749 | 0 |
| 8.0945 | 7.8306 | 1 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bkowshik/swag-multiple-choice | bkowshik | 2023-09-09T05:32:12Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2023-09-08T12:48:11Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: swag-multiple-choice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swag-multiple-choice
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0120
- Accuracy: 0.7052
## Model description
More information needed
## Intended uses & limitations
More information needed
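No usage details were provided with this card; a minimal multiple-choice inference sketch (assuming the standard 🤗 Transformers API; the context and candidate endings below are made-up examples) could look like:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "bkowshik/swag-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She opens the oven."  # hypothetical example
endings = [
    "She takes out a tray of cookies.",
    "She drives away in a car.",
    "She dives into the pool.",
    "She closes her laptop.",
]

# encode the context once per candidate ending, then add a batch dimension
inputs = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print("Predicted ending:", endings[logits.argmax(-1).item()])
```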
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 157 | 0.8148 | 0.6848 |
| No log | 2.0 | 314 | 0.8738 | 0.702 |
| No log | 3.0 | 471 | 1.0120 | 0.7052 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
fnlp/bart-base-chinese | fnlp | 2023-09-09T05:16:01Z | 4,805 | 95 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"Chinese",
"seq2seq",
"BART",
"zh",
"arxiv:2109.05729",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- text2text-generation
- Chinese
- seq2seq
- BART
language: zh
---
# Chinese BART-Base
### News
**12/30/2022**
An updated version of CPT & Chinese BART has been released. In the new version, we changed the following parts:
- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
- **Position Embeddings** We extend the max_position_embeddings from 512 to 1024.
We initialize the new version of the models from the old checkpoints with vocabulary alignment: token embeddings found in the old checkpoints are copied, and other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART for 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The results compared to the previous checkpoints are as follows:
| | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG |
| :--------- | :---: | :-----: | :-----: | :---: | :---: |
| Previous | | | | | |
| bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 |
| cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 |
| bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 |
| cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 |
| Updated | | | | | |
| bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 |
| cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 |
| bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 |
| cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 |
The results show that the updated models maintain comparable performance to the previous checkpoints. There are still some cases in which the updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to a significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
- Note that to use the updated models, please update `modeling_cpt.py` (download the new version [here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
## Model description
This is an implementation of Chinese BART-Base.
[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
**Github Link:** https://github.com/fastnlp/CPT
## Usage
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/bart-base-chinese")
>>> model = BartForConditionalGeneration.from_pretrained("fnlp/bart-base-chinese")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("北京是[MASK]的首都", max_length=50, do_sample=False)
[{'generated_text': '北 京 是 中 国 的 首 都'}]
```
**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**
## Citation
```bibtex
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}
```
|
polejowska/detr-r101-cd45rb-8ah-1l | polejowska | 2023-09-09T05:03:46Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb_nan_xywh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-08-21T08:13:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb_nan_xywh
model-index:
- name: detr-r101-cd45rb-8ah-1l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r101-cd45rb-8ah-1l
This model is a fine-tuned version of [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101) on the cd45rb_nan_xywh dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.9740
- eval_runtime: 214.8855
- eval_samples_per_second: 8.288
- eval_steps_per_second: 1.038
- epoch: 3.0
- step: 13818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
polejowska/detr-r101-cd45rb-8ah-12l | polejowska | 2023-09-09T05:03:02Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb_nan_xywh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-08-20T19:54:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb_nan_xywh
model-index:
- name: detr-r101-cd45rb-8ah-12l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r101-cd45rb-8ah-12l
This model is a fine-tuned version of [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101) on the cd45rb_nan_xywh dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.0401 | 1.0 | 4606 | 2.1637 |
| 2.7151 | 2.0 | 9212 | 2.0666 |
| 2.6021 | 3.0 | 13818 | 1.9868 |
| 2.5221 | 4.0 | 18424 | 1.8958 |
| 2.4541 | 5.0 | 23030 | 1.8810 |
| 2.4155 | 6.0 | 27636 | 1.8369 |
| 2.3531 | 7.0 | 32242 | 1.8040 |
| 2.31 | 8.0 | 36848 | 1.7979 |
| 2.2841 | 9.0 | 41454 | 1.7521 |
| 2.2555 | 10.0 | 46060 | 1.7243 |
| 2.3388 | 11.0 | 50666 | 1.8520 |
| 2.3523 | 12.0 | 55272 | 1.8499 |
| 2.3515 | 13.0 | 59878 | 1.7635 |
| 2.3236 | 14.0 | 64484 | 1.7787 |
| 2.2676 | 15.0 | 69090 | 1.7518 |
| 2.2787 | 16.0 | 73696 | 1.7879 |
| 2.2523 | 17.0 | 78302 | 1.7303 |
| 2.2357 | 18.0 | 82908 | 1.7361 |
| 2.2068 | 19.0 | 87514 | 1.6916 |
| 2.1972 | 20.0 | 92120 | 1.6941 |
| 2.1856 | 21.0 | 96726 | 1.6824 |
| 2.1611 | 22.0 | 101332 | 1.6711 |
| 2.1419 | 23.0 | 105938 | 1.6535 |
| 2.1412 | 24.0 | 110544 | 1.6602 |
| 2.1285 | 25.0 | 115150 | 1.6447 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Onutoa/1_6e-3_10_0.5 | Onutoa | 2023-09-09T04:29:22Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-09T01:30:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_6e-3_10_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_6e-3_10_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9536
- Accuracy: 0.7596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.006
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.948 | 1.0 | 590 | 2.2396 | 0.6214 |
| 2.5635 | 2.0 | 1180 | 2.2693 | 0.6275 |
| 2.5246 | 3.0 | 1770 | 1.9556 | 0.6141 |
| 2.329 | 4.0 | 2360 | 2.3951 | 0.4801 |
| 2.1726 | 5.0 | 2950 | 1.7234 | 0.6618 |
| 2.0265 | 6.0 | 3540 | 1.5347 | 0.6679 |
| 2.0227 | 7.0 | 4130 | 1.8508 | 0.6064 |
| 1.8725 | 8.0 | 4720 | 2.0863 | 0.6584 |
| 1.8575 | 9.0 | 5310 | 4.0052 | 0.4639 |
| 1.8071 | 10.0 | 5900 | 3.1552 | 0.6468 |
| 1.6655 | 11.0 | 6490 | 1.3147 | 0.7104 |
| 1.501 | 12.0 | 7080 | 1.3005 | 0.6844 |
| 1.538 | 13.0 | 7670 | 1.7051 | 0.6948 |
| 1.4114 | 14.0 | 8260 | 1.4922 | 0.7028 |
| 1.3916 | 15.0 | 8850 | 1.6514 | 0.7034 |
| 1.3373 | 16.0 | 9440 | 1.9420 | 0.5896 |
| 1.271 | 17.0 | 10030 | 2.9731 | 0.6624 |
| 1.3123 | 18.0 | 10620 | 1.4756 | 0.6609 |
| 1.2775 | 19.0 | 11210 | 1.4888 | 0.6612 |
| 1.2341 | 20.0 | 11800 | 1.4493 | 0.7159 |
| 1.1907 | 21.0 | 12390 | 1.7638 | 0.7110 |
| 1.2035 | 22.0 | 12980 | 1.0716 | 0.7291 |
| 1.0365 | 23.0 | 13570 | 1.2975 | 0.6853 |
| 1.1041 | 24.0 | 14160 | 1.0275 | 0.7220 |
| 1.1326 | 25.0 | 14750 | 1.0228 | 0.7385 |
| 1.0261 | 26.0 | 15340 | 1.1473 | 0.7076 |
| 1.0168 | 27.0 | 15930 | 1.0435 | 0.7205 |
| 1.0653 | 28.0 | 16520 | 1.0105 | 0.7358 |
| 0.9418 | 29.0 | 17110 | 1.0397 | 0.7232 |
| 1.0591 | 30.0 | 17700 | 1.3640 | 0.6917 |
| 0.9186 | 31.0 | 18290 | 0.9679 | 0.7459 |
| 0.8665 | 32.0 | 18880 | 1.0310 | 0.7303 |
| 0.9005 | 33.0 | 19470 | 1.0498 | 0.7235 |
| 0.8494 | 34.0 | 20060 | 0.9766 | 0.7358 |
| 0.8474 | 35.0 | 20650 | 1.0077 | 0.7465 |
| 0.7973 | 36.0 | 21240 | 1.0674 | 0.7428 |
| 0.8049 | 37.0 | 21830 | 1.0074 | 0.7398 |
| 0.8241 | 38.0 | 22420 | 0.9613 | 0.7453 |
| 0.7793 | 39.0 | 23010 | 0.9864 | 0.7398 |
| 0.7781 | 40.0 | 23600 | 1.0741 | 0.7456 |
| 0.7539 | 41.0 | 24190 | 0.9809 | 0.7550 |
| 0.7403 | 42.0 | 24780 | 0.9993 | 0.7339 |
| 0.7494 | 43.0 | 25370 | 0.9887 | 0.7477 |
| 0.7091 | 44.0 | 25960 | 1.1792 | 0.7125 |
| 0.7236 | 45.0 | 26550 | 0.9549 | 0.7443 |
| 0.6947 | 46.0 | 27140 | 1.3568 | 0.7440 |
| 0.6928 | 47.0 | 27730 | 1.0682 | 0.7517 |
| 0.6578 | 48.0 | 28320 | 1.0993 | 0.7486 |
| 0.7723 | 49.0 | 28910 | 1.0381 | 0.7260 |
| 0.7169 | 50.0 | 29500 | 0.9510 | 0.7486 |
| 0.6424 | 51.0 | 30090 | 1.0781 | 0.7281 |
| 0.6652 | 52.0 | 30680 | 0.9623 | 0.7541 |
| 0.6274 | 53.0 | 31270 | 0.9476 | 0.7498 |
| 0.6295 | 54.0 | 31860 | 0.9461 | 0.7474 |
| 0.6252 | 55.0 | 32450 | 1.0873 | 0.7278 |
| 0.632 | 56.0 | 33040 | 0.9470 | 0.7492 |
| 0.5865 | 57.0 | 33630 | 1.4737 | 0.7355 |
| 0.6029 | 58.0 | 34220 | 1.0871 | 0.7477 |
| 0.5935 | 59.0 | 34810 | 1.0781 | 0.7514 |
| 0.6023 | 60.0 | 35400 | 0.9968 | 0.7581 |
| 0.5849 | 61.0 | 35990 | 1.0700 | 0.7547 |
| 0.5813 | 62.0 | 36580 | 1.2525 | 0.7425 |
| 0.5557 | 63.0 | 37170 | 0.9643 | 0.7541 |
| 0.541 | 64.0 | 37760 | 1.0179 | 0.7547 |
| 0.5693 | 65.0 | 38350 | 1.0064 | 0.7401 |
| 0.5562 | 66.0 | 38940 | 1.2333 | 0.7367 |
| 0.5677 | 67.0 | 39530 | 0.9976 | 0.7388 |
| 0.5357 | 68.0 | 40120 | 0.9795 | 0.7413 |
| 0.5372 | 69.0 | 40710 | 1.1113 | 0.7462 |
| 0.5563 | 70.0 | 41300 | 1.1366 | 0.7492 |
| 0.5377 | 71.0 | 41890 | 0.9343 | 0.7502 |
| 0.5442 | 72.0 | 42480 | 1.1735 | 0.7465 |
| 0.5124 | 73.0 | 43070 | 0.9499 | 0.7514 |
| 0.5007 | 74.0 | 43660 | 1.2104 | 0.7456 |
| 0.5094 | 75.0 | 44250 | 0.9865 | 0.7474 |
| 0.5118 | 76.0 | 44840 | 1.0542 | 0.7474 |
| 0.5166 | 77.0 | 45430 | 0.9762 | 0.7615 |
| 0.5071 | 78.0 | 46020 | 0.9333 | 0.7581 |
| 0.4961 | 79.0 | 46610 | 1.0310 | 0.7535 |
| 0.4863 | 80.0 | 47200 | 1.0242 | 0.7492 |
| 0.4801 | 81.0 | 47790 | 1.0528 | 0.7535 |
| 0.4975 | 82.0 | 48380 | 1.0188 | 0.7554 |
| 0.4868 | 83.0 | 48970 | 0.9455 | 0.7596 |
| 0.4661 | 84.0 | 49560 | 0.9841 | 0.7557 |
| 0.4765 | 85.0 | 50150 | 0.9570 | 0.7538 |
| 0.4732 | 86.0 | 50740 | 1.0383 | 0.7535 |
| 0.4846 | 87.0 | 51330 | 0.9560 | 0.7587 |
| 0.4641 | 88.0 | 51920 | 0.9716 | 0.7578 |
| 0.477 | 89.0 | 52510 | 0.9581 | 0.7606 |
| 0.4567 | 90.0 | 53100 | 0.9674 | 0.7569 |
| 0.4567 | 91.0 | 53690 | 0.9718 | 0.7587 |
| 0.4676 | 92.0 | 54280 | 0.9535 | 0.7520 |
| 0.4532 | 93.0 | 54870 | 0.9593 | 0.7563 |
| 0.4727 | 94.0 | 55460 | 0.9611 | 0.7584 |
| 0.4535 | 95.0 | 56050 | 0.9539 | 0.7602 |
| 0.4569 | 96.0 | 56640 | 0.9506 | 0.7587 |
| 0.4417 | 97.0 | 57230 | 0.9616 | 0.7584 |
| 0.4314 | 98.0 | 57820 | 0.9488 | 0.7593 |
| 0.4318 | 99.0 | 58410 | 0.9439 | 0.7587 |
| 0.4415 | 100.0 | 59000 | 0.9536 | 0.7596 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
xiaol/RWKV-claude-4-World-7B-65k | xiaol | 2023-09-09T04:26:25Z | 0 | 52 | null | [
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:OpenLeecher/Teatime",
"license:apache-2.0",
"region:us"
]
| null | 2023-08-05T08:07:49Z | ---
license: apache-2.0
datasets:
- Norquinal/claude_multiround_chat_30k
- OpenLeecher/Teatime
---
# RWKV role play model
## According to our community users, this model is better than Claude 2.
This model is trained from the RWKV World 7B model with a 65336 context length, and it can handle Claude-like tasks.
It is good at novel writing, role play, and multi-turn chat.
You can test this model in this (buggy) UI: https://rwkv.ai-creator.net/risu or https://rwkv.ai-creator.net/st (API hosted by RWKV Runner). Keep in mind that the frequency penalty is sensitive and fixes a lot of the repetition.
Using temperature 0.1 and top-p 0.7 can give better results.
# Other
If you use RWKV Runner as the API:
https://github.com/josStorer/RWKV-Runner/blob/a057bb6c5bebc346a50ae746f2b10000627552b0/backend-python/routes/completion.py#L52C29-L52C29
Change `user_name`/`assistant_name` to `User`/`Assistant` to replace the default `Question`/`Answer`, matching the fine-tuning format (a rough example is sketched below).
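A rough sketch of calling a local RWKV Runner instance with the recommended sampling settings. The port, endpoint path, `user_name`/`assistant_name` fields, and the frequency-penalty value are assumptions based on the linked `completion.py` and may differ between Runner versions; alternatively, edit the defaults in that file as described above:
```python
import requests

# assumes RWKV Runner is serving its OpenAI-style API locally with this model loaded
resp = requests.post(
    "http://127.0.0.1:8000/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Write the opening scene of a mystery novel."}],
        "temperature": 0.1,
        "top_p": 0.7,
        "frequency_penalty": 0.4,      # sensitive; tune to control repetition
        "user_name": "User",           # replace the default Question/Answer roles
        "assistant_name": "Assistant",
        "stream": False,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```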


also you can do multi-lang with RWKV Runner





 |
minfeng-ai/ppo-Huggy | minfeng-ai | 2023-09-09T04:22:54Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-09T04:22:48Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: minfeng-ai/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lllyasviel/sd_control_collection | lllyasviel | 2023-09-09T04:08:17Z | 0 | 1,871 | null | [
"region:us"
]
| null | 2023-08-29T06:43:22Z | Collection of community SD control models for users to download flexibly.
All files are already float16 and in safetensor format.
The files are mirrored with the script below:
```python
files = {
'diffusers_xl_canny_small.safetensors': 'https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0-small/resolve/main/diffusion_pytorch_model.bin',
'diffusers_xl_canny_mid.safetensors': 'https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0-mid/resolve/main/diffusion_pytorch_model.bin',
'diffusers_xl_canny_full.safetensors': 'https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0/resolve/main/diffusion_pytorch_model.bin',
'diffusers_xl_depth_small.safetensors': 'https://huggingface.co/diffusers/controlnet-depth-sdxl-1.0-small/resolve/main/diffusion_pytorch_model.bin',
'diffusers_xl_depth_mid.safetensors': 'https://huggingface.co/diffusers/controlnet-depth-sdxl-1.0-mid/resolve/main/diffusion_pytorch_model.bin',
'diffusers_xl_depth_full.safetensors': 'https://huggingface.co/diffusers/controlnet-depth-sdxl-1.0/resolve/main/diffusion_pytorch_model.bin',
'thibaud_xl_openpose.safetensors': 'https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/resolve/main/OpenPoseXL2.safetensors',
'thibaud_xl_openpose_256lora.safetensors': 'https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/resolve/main/control-lora-openposeXL2-rank256.safetensors',
'sargezt_xl_depth_faid_vidit.safetensors': 'https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-depth-faid-vidit/resolve/main/diffusion_pytorch_model.bin',
'sargezt_xl_depth_zeed.safetensors': 'https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-depth-zeed/resolve/main/diffusion_pytorch_model.bin',
'sargezt_xl_depth.safetensors': 'https://huggingface.co/SargeZT/controlnet-v1e-sdxl-depth/resolve/main/diffusion_pytorch_model.bin',
'sargezt_xl_softedge.safetensors': 'https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-softedge-dexined/resolve/main/controlnet-sd-xl-1.0-softedge-dexined.safetensors',
'sai_xl_canny_128lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-canny-rank128.safetensors',
'sai_xl_canny_256lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors',
'sai_xl_depth_128lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-depth-rank128.safetensors',
'sai_xl_depth_256lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors',
'sai_xl_sketch_128lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-sketch-rank128-metadata.safetensors',
'sai_xl_sketch_256lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-sketch-rank256.safetensors',
'sai_xl_recolor_128lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-recolor-rank128.safetensors',
'sai_xl_recolor_256lora.safetensors': 'https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-recolor-rank256.safetensors',
'ioclab_sd15_recolor.safetensors': 'https://huggingface.co/ioclab/control_v1p_sd15_brightness/resolve/main/diffusion_pytorch_model.safetensors',
't2i-adapter_xl_canny.safetensors': 'https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models_XL/adapter-xl-canny.pth',
't2i-adapter_xl_openpose.safetensors': 'https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models_XL/adapter-xl-openpose.pth',
't2i-adapter_xl_sketch.safetensors': 'https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models_XL/adapter-xl-sketch.pth',
'ip-adapter_sd15_plus.safetensors': 'https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus_sd15.bin',
'ip-adapter_sd15.safetensors': 'https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15.bin',
'ip-adapter_xl.safetensors': 'https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl.bin',
'kohya_controllllite_xl_depth_anime.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01008016e_sdxl_depth_anime.safetensors',
'kohya_controllllite_xl_canny_anime.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_canny_anime.safetensors',
'kohya_controllllite_xl_scribble_anime.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_fake_scribble_anime.safetensors',
'kohya_controllllite_xl_openpose_anime.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_pose_anime.safetensors',
'kohya_controllllite_xl_openpose_anime_v2.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_pose_anime_v2_500-1000.safetensors',
'kohya_controllllite_xl_blur_anime_beta.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01016032e_sdxl_blur_anime_beta.safetensors',
'kohya_controllllite_xl_blur.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_blur-500-1000.safetensors',
'kohya_controllllite_xl_blur_anime.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_blur-anime_500-1000.safetensors',
'kohya_controllllite_xl_canny.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_canny.safetensors',
'kohya_controllllite_xl_depth.safetensors': 'https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_depth_500-1000.safetensors',
't2i-adapter_diffusers_xl_canny.safetensors': 'https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors',
't2i-adapter_diffusers_xl_lineart.safetensors': 'https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors',
't2i-adapter_diffusers_xl_depth_midas.safetensors': 'https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors',
't2i-adapter_diffusers_xl_openpose.safetensors': 'https://huggingface.co/TencentARC/t2i-adapter-openpose-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors',
't2i-adapter_diffusers_xl_depth_zoe.safetensors': 'https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors',
't2i-adapter_diffusers_xl_sketch.safetensors': 'https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors',
}
```
If you download the files from the raw URLs, you may need to rename them.
However, the files in https://huggingface.co/lllyasviel/sd_control_collection/tree/main are already renamed and can be downloaded directly.
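If you prefer to mirror the files from their original sources yourself, a minimal download-and-rename sketch is shown below. This is not the exact script used for this repository, and it does not perform the float16/safetensors conversion mentioned above:
```python
import os
import requests

# `files` is the mapping defined above
for new_name, url in files.items():
    if os.path.exists(new_name):
        continue
    print(f"Downloading {url} -> {new_name}")
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(new_name, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)
```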
Feel free to contact us if you are the author of any listed model and want some models removed or added (by opening an issue on this Hugging Face page). |
BauyrjanQ/wav2vec2-large-mms-1b-kazakh-speech2ner-kscsyn-8b-4ep | BauyrjanQ | 2023-09-09T03:52:59Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-06T13:38:50Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-kazakh-speech2ner-kscsyn-8b-4ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-kazakh-speech2ner-kscsyn-8b-4ep
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 6.6358 | 0.07 | 2000 | 6.5080 | 1.0000 |
| 6.6338 | 0.15 | 4000 | 6.5080 | 1.0000 |
| 0.0 | 0.22 | 6000 | nan | 1.0 |
| 0.0 | 0.3 | 8000 | nan | 1.0 |
| 0.0 | 0.37 | 10000 | nan | 1.0 |
| 0.0 | 0.44 | 12000 | nan | 1.0 |
| 0.0 | 0.52 | 14000 | nan | 1.0 |
| 0.0 | 0.59 | 16000 | nan | 1.0 |
| 0.0 | 0.66 | 18000 | nan | 1.0 |
| 0.0 | 0.74 | 20000 | nan | 1.0 |
| 0.0 | 0.81 | 22000 | nan | 1.0 |
| 0.0 | 0.89 | 24000 | nan | 1.0 |
| 0.0 | 0.96 | 26000 | nan | 1.0 |
| 0.0 | 1.03 | 28000 | nan | 1.0 |
| 0.0 | 1.11 | 30000 | nan | 1.0 |
| 0.0 | 1.18 | 32000 | nan | 1.0 |
| 0.0 | 1.25 | 34000 | nan | 1.0 |
| 0.0 | 1.33 | 36000 | nan | 1.0 |
| 0.0 | 1.4 | 38000 | nan | 1.0 |
| 0.0 | 1.48 | 40000 | nan | 1.0 |
| 0.0 | 1.55 | 42000 | nan | 1.0 |
| 0.0 | 1.62 | 44000 | nan | 1.0 |
| 0.0 | 1.7 | 46000 | nan | 1.0 |
| 0.0 | 1.77 | 48000 | nan | 1.0 |
| 0.0 | 1.84 | 50000 | nan | 1.0 |
| 0.0 | 1.92 | 52000 | nan | 1.0 |
| 0.0 | 1.99 | 54000 | nan | 1.0 |
| 0.0 | 2.07 | 56000 | nan | 1.0 |
| 0.0 | 2.14 | 58000 | nan | 1.0 |
| 0.0 | 2.21 | 60000 | nan | 1.0 |
| 0.0 | 2.29 | 62000 | nan | 1.0 |
| 0.0 | 2.36 | 64000 | nan | 1.0 |
| 0.0 | 2.43 | 66000 | nan | 1.0 |
| 0.0 | 2.51 | 68000 | nan | 1.0 |
| 0.0 | 2.58 | 70000 | nan | 1.0 |
| 0.0 | 2.66 | 72000 | nan | 1.0 |
| 0.0 | 2.73 | 74000 | nan | 1.0 |
| 0.0 | 2.8 | 76000 | nan | 1.0 |
| 0.0 | 2.88 | 78000 | nan | 1.0 |
| 0.0 | 2.95 | 80000 | nan | 1.0 |
| 0.0 | 3.02 | 82000 | nan | 1.0 |
| 0.0 | 3.1 | 84000 | nan | 1.0 |
| 0.0 | 3.17 | 86000 | nan | 1.0 |
| 0.0 | 3.25 | 88000 | nan | 1.0 |
| 0.0 | 3.32 | 90000 | nan | 1.0 |
| 0.0 | 3.39 | 92000 | nan | 1.0 |
| 0.0 | 3.47 | 94000 | nan | 1.0 |
| 0.0 | 3.54 | 96000 | nan | 1.0 |
| 0.0 | 3.61 | 98000 | nan | 1.0 |
| 0.0 | 3.69 | 100000 | nan | 1.0 |
| 0.0 | 3.76 | 102000 | nan | 1.0 |
| 0.0 | 3.84 | 104000 | nan | 1.0 |
| 0.0 | 3.91 | 106000 | nan | 1.0 |
| 0.0 | 3.98 | 108000 | nan | 1.0 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HeshamHaroon/falcon-rw-1b-4bit | HeshamHaroon | 2023-09-09T02:36:27Z | 115 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"falcon",
"text-generation",
"text-generation-inference",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-08-24T03:24:43Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# GPTQ Algorithm with `auto-gptq` Integration
## Model Description
The GPTQ algorithm, developed by Frantar et al., is designed to compress transformer-based language models into fewer bits with minimal performance degradation. The `auto-gptq` library, based on the GPTQ algorithm, has been integrated into 🤗 Transformers, enabling users to load and work with models quantized using the GPTQ algorithm.
## Features
- **Quantization**: Compress transformer-based language models with minimal performance loss.
- **Integration with 🤗 transformers**: Directly load models quantized with the GPTQ algorithm.
- **Flexibility**: Offers two scenarios for users:
1. Quantize a language model from scratch.
2. Load a pre-quantized model from the 🤗 Hub.
- **Calibration**: Uses model inference to calibrate the quantized weights, ensuring optimal performance.
- **Custom Dataset Support**: Users can quantize models using either a supported dataset or a custom dataset.
## Intended Use
This integration is intended for users who want to compress their transformer-based language models without significant performance loss. It's especially useful for deployment scenarios where model size is a constraint.
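For example, to load this pre-quantized checkpoint from the Hub (scenario 2 above), a minimal sketch with 🤗 Transformers could look like the following. This assumes a recent Transformers release with the GPTQ integration plus `auto-gptq`/`optimum` installed; `trust_remote_code=True` is included because this Falcon variant ships custom modeling code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HeshamHaroon/falcon-rw-1b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",      # GPTQ weights are dequantized on the fly on GPU
    trust_remote_code=True,
)

inputs = tokenizer("The GPTQ algorithm compresses language models by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```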
## Limitations and Considerations
- The quality of quantization may vary based on the dataset used for calibration. It's recommended to use a dataset closely related to the model's domain for best results.
- While the GPTQ algorithm minimizes performance degradation, some loss in performance is expected, especially at lower bit quantizations.
## Training Data
The GPTQ algorithm requires calibration data for optimal quantization. Users can either use supported datasets like "c4", "wikitext2", etc., or provide a custom dataset for calibration.
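To quantize a model from scratch with a calibration dataset (scenario 1), the integration exposes `GPTQConfig`. Below is a minimal sketch; the base checkpoint name and target repository are only examples, and the exact arguments assume a recent 🤗 Transformers release:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "tiiuae/falcon-rw-1b"   # example base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)

quant_config = GPTQConfig(
    bits=4,
    dataset="c4",        # or a list of custom calibration texts
    tokenizer=tokenizer,
)

# quantization runs calibration forward passes, so a GPU is strongly recommended
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=quant_config,
    device_map="auto",
)
model.push_to_hub("my-username/falcon-rw-1b-4bit")  # hypothetical target repo
```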
## Evaluation Results
Performance after quantization may vary based on the dataset used for calibration and the bit precision chosen for quantization. It's recommended to evaluate the quantized model on relevant tasks to ensure it meets the desired performance criteria.
## References
- Frantar et al., "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers"
- [AutoGPTQ GitHub Repository](https://github.com/PanQiWei/AutoGPTQ)
|
mason-suh/segformer-b0-scene-parse-150 | mason-suh | 2023-09-09T02:26:50Z | 39 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-09T02:22:29Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3973
- Mean Iou: 0.0680
- Mean Accuracy: 0.1186
- Overall Accuracy: 0.4426
- Per Category Iou: [0.2999132601380852, 0.4630614571324311, 0.8494943128957715, 0.14233739316477417, 0.39192816320128615, 0.1455819287922609, 0.44671787744534813, 0.0, 0.0, nan, nan, 0.0032424974129010003, 0.43662592045927145, 0.0045309713818001114, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2720583194314313, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.010599892464859052, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan]
- Per Category Accuracy: [0.502569255269595, 0.8807004182609924, 0.9985712521828092, 0.7670389719570048, 0.5324006381318397, 0.30651723142339693, 0.47333836617082586, 0.0, 0.0, nan, nan, 0.16327543424317617, 0.8831689374195889, 0.01110156138089099, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2820956352779489, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.011447532144338449, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan]
## Model description
More information needed
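In the absence of further details, here is a minimal semantic-segmentation inference sketch. It assumes the standard SegFormer API in 🤗 Transformers; taking the image processor from the `nvidia/mit-b0` base checkpoint is an assumption:
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained("mason-suh/segformer-b0-scene-parse-150")

image = Image.open("scene.jpg")                      # any RGB scene image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                  # (1, num_labels, H/4, W/4)

# upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]            # (H, W) label map
```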
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.5925 | 4.0 | 20 | 4.7720 | 0.0212 | 0.0672 | 0.3206 | [0.11976851134191081, 0.38334117506478765, 0.6040252306121705, 0.09018609898295743, 0.48270596399084187, 0.0041055656732601345, 0.4418798753005824, 0.0, 9.62741888899586e-06, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.04351231577862517, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.00794912559618442, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0016854755972446139, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0] | [0.14132001434002942, 0.630511555918608, 0.9740532340853368, 0.3144328349246382, 0.6187881354352267, 0.010236489146696764, 0.5207072908802858, 0.0, 0.003861003861003861, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.05002132600350694, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.02336448598130841, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0030690195816792873, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.2481 | 8.0 | 40 | 4.3452 | 0.0347 | 0.0848 | 0.4225 | [0.31878797999758784, 0.3811269539171945, 0.6125681433787502, 0.08831500428589667, 0.5583576813577318, 0.007361977551235384, 0.3466478934916281, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.18248530126873258, 0.0, 0.0019590256797583083, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan] | [0.4189971246777761, 0.7502594937875038, 0.9923388542971208, 0.36347805200264216, 0.7955181625212067, 0.03301923935011418, 0.6060192027560102, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.19563053883702194, nan, 0.0019590256797583083, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.9512 | 12.0 | 60 | 3.7972 | 0.0477 | 0.0939 | 0.4628 | [0.32868958391339964, 0.4267475938107196, 0.7696097319315242, 0.10632746554744663, 0.5287313644955446, 0.016014532738962497, 0.4332043043627807, 0.0, 0.0, nan, nan, 0.0, 0.0619188921859545, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.18699580509841884, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.003303600925008259, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4980648276635751, 0.81206200826695, 0.9864709928915386, 0.3407644268300006, 0.7926710718004852, 0.06779705504081472, 0.8436772320811492, 0.0, 0.0, nan, nan, 0.0, 0.06195361725992148, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.19224207383536326, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0033181252592285357, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3322 | 16.0 | 80 | 3.5456 | 0.0613 | 0.1018 | 0.4809 | [0.3083296322955113, 0.4604564042023073, 0.8584488957370313, 0.12119443363536976, 0.4930589109160795, 0.00017254443019077412, 0.6028061720507764, 0.0, 0.0, nan, nan, 0.0, 0.24008937120970317, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.25236198592071135, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.03713915017839767, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5400996485717282, 0.8779672884281126, 0.9827197949188319, 0.39275205668648294, 0.7283758266284214, 0.0006036903855744245, 0.9229922699869217, 0.0, 0.0, nan, nan, 0.0, 0.24814436050539374, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2582342069096251, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.037992534218166736, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3947 | 20.0 | 100 | 3.5717 | 0.0694 | 0.1101 | 0.4864 | [0.334452369892911, 0.5539576222863141, 0.7892230541235303, 0.11641931562092278, 0.4948702047566203, 0.012397172981114587, 0.48957001052619664, 0.0, 0.0, nan, nan, 0.0002263540824575586, 0.47844428154520685, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.38967309741827433, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.016493337666558337, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6409383451898713, 0.8234121744043926, 0.9910218192722205, 0.47152164775115596, 0.7268550700320398, 0.03370167195989396, 0.6898637944050442, 0.0, 0.0, nan, nan, 0.006947890818858561, 0.57058027908818, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.4245296431448746, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.01683948569058482, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3618 | 24.0 | 120 | 3.3814 | 0.0714 | 0.1169 | 0.4911 | [0.310456832329434, 0.46519434656754766, 0.8779938790645205, 0.1255508255583059, 0.48867630450309085, 0.07450525176937295, 0.6471261092566962, 0.0, 0.0, nan, nan, 0.0005998018046210817, 0.45734176315976416, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.19894345620719478, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.06418342361059583, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4883414097614629, 0.9085814134887634, 0.9884406658082421, 0.5543295502311896, 0.7398440891573517, 0.17352161474054437, 0.8451605014407383, 0.0, 0.0, nan, nan, 0.01141439205955335, 0.7433444396793455, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2016729064973224, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0717544587308171, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.1457 | 28.0 | 140 | 3.3476 | 0.0713 | 0.1179 | 0.4895 | [0.3011950023290366, 0.5049268996148568, 0.888496347413394, 0.12930476713666605, 0.47902222346286844, 0.07675787703120776, 0.6198238220315364, 0.0, 0.0, nan, nan, 0.001500513333508832, 0.47996957143004, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.26146154909459524, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.03674716756112105, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5156581902785331, 0.8881412874576979, 0.9818496110630942, 0.6360415540743409, 0.7312548772075607, 0.13415050263786452, 0.7803272762070835, 0.0, 0.0, nan, nan, 0.028287841191066997, 0.7701316266948174, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2689209042225487, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0408958938199917, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.6055 | 32.0 | 160 | 3.3460 | 0.0689 | 0.1179 | 0.4890 | [0.32649590262271666, 0.5056804060509994, 0.8700009750440053, 0.12267555893392665, 0.468803602657295, 0.07192848854076007, 0.5994797019769066, 0.0, 0.0, nan, nan, 0.003425355788443502, 0.4597796580128175, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.22139820927918966, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0003056935422239205, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5641460244219481, 0.8435144978718438, 0.996777967885512, 0.6638443523689426, 0.6704059701015525, 0.10412346780755401, 0.8465374432476688, 0.0, 0.0, nan, nan, 0.04168734491315137, 0.8129680335169729, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.23202691815553766, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.00033181252592285357, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.9229 | 36.0 | 180 | 3.2913 | 0.0656 | 0.1176 | 0.4780 | [0.3270110877216905, 0.518042814625525, 0.8234783662586354, 0.13474353989957344, 0.47765052073873904, 0.08675908221797324, 0.4741205157746879, 0.0, 0.0, nan, nan, 0.002650574291096404, 0.4613683710390702, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.23309365613907346, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0010821674267604544, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5614633658585361, 0.831728259334099, 0.9987300019402748, 0.7388158289797634, 0.6934303847700621, 0.1429171369327279, 0.6388318855064913, 0.0, 0.0, nan, nan, 0.07890818858560794, 0.8328769834724375, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.24273731102791338, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0011613438407299876, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.8213 | 40.0 | 200 | 3.3428 | 0.0665 | 0.1213 | 0.4634 | [0.3223505876166847, 0.4981345007728367, 0.8518527818091426, 0.13781940529235245, 0.45572528779428917, 0.14172438691733746, 0.46409354040751793, 0.0, 0.0, nan, nan, 0.0031949295045919025, 0.4365871703006675, 0.004470363521833662, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2320277752564219, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.04314363945829942, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.520289434471187, 0.844205457661055, 0.9973718095708465, 0.7963430012610341, 0.630711825946611, 0.2781962781175359, 0.5805005901179172, 0.0, 0.0, nan, nan, 0.12258064516129032, 0.8663114835219213, 0.015757054863200115, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.24228709539832236, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.04836167565325591, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.934 | 44.0 | 220 | 3.1534 | 0.0663 | 0.1190 | 0.4639 | [0.3073402436124139, 0.46043491585157187, 0.884316458140858, 0.1158555133079848, 0.4147496045795597, 0.1871375072667541, 0.5523209455179557, 0.0, 0.0, nan, nan, 0.002945334590009425, 0.45775478239445294, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.07021946542295035, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.05956785555719994, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.4632975726817562, 0.8858657265518957, 0.9961253299937088, 0.6861376328589444, 0.588732551914795, 0.36331662248352975, 0.7880626056630055, 0.0, 0.0, nan, nan, 0.062034739454094295, 0.8580806914525121, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.07028102933510261, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.06677727084197428, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.8191 | 48.0 | 240 | 3.3973 | 0.0680 | 0.1186 | 0.4426 | [0.2999132601380852, 0.4630614571324311, 0.8494943128957715, 0.14233739316477417, 0.39192816320128615, 0.1455819287922609, 0.44671787744534813, 0.0, 0.0, nan, nan, 0.0032424974129010003, 0.43662592045927145, 0.0045309713818001114, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2720583194314313, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.010599892464859052, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] | [0.502569255269595, 0.8807004182609924, 0.9985712521828092, 0.7670389719570048, 0.5324006381318397, 0.30651723142339693, 0.47333836617082586, 0.0, 0.0, nan, nan, 0.16327543424317617, 0.8831689374195889, 0.01110156138089099, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.2820956352779489, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.011447532144338449, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
OttoYu/Tree-Inspection | OttoYu | 2023-09-09T02:13:13Z | 180 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:OttoYu/autotrain-data-tree-inspection",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-09T02:07:18Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- OttoYu/autotrain-data-tree-inspection
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 2.1481896644746374
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 87833143598
- CO2 Emissions (in grams): 2.1482
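For quick experimentation, a minimal inference sketch is shown below. It assumes the standard 🤗 Transformers image-classification pipeline works with this checkpoint and that `tree.jpg` is a local photo supplied by the user; neither detail is stated in the card.

```python
from transformers import pipeline

# Hypothetical local image of a tree to inspect (not part of this card).
image_path = "tree.jpg"

# Load the fine-tuned Swin classifier from the Hub and run a single prediction.
classifier = pipeline("image-classification", model="OttoYu/Tree-Inspection")
for pred in classifier(image_path):
    print(f"{pred['label']}: {pred['score']:.3f}")
```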
## Validation Metrics
- Loss: 1.251
- Accuracy: 0.652
- Macro F1: 0.594
- Micro F1: 0.652
- Weighted F1: 0.620
- Macro Precision: 0.629
- Micro Precision: 0.652
- Weighted Precision: 0.642
- Macro Recall: 0.617
- Micro Recall: 0.652
- Weighted Recall: 0.652 |
Onutoa/1_8e-3_5_0.5 | Onutoa | 2023-09-09T01:48:23Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-08T22:48:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_8e-3_5_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_8e-3_5_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9097
- Accuracy: 0.7502
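The card does not say which SuperGLUE task was used for fine-tuning, so the sketch below only illustrates generic sequence-classification inference with this checkpoint; the input format (single sentence versus sentence pair) and the meaning of the predicted labels are assumptions.

```python
from transformers import pipeline

# Generic sequence-classification inference; the exact input format expected by
# this fine-tune is not documented in the card, so treat the output as illustrative.
classifier = pipeline("text-classification", model="Onutoa/1_8e-3_5_0.5")
print(classifier("The aurora borealis is visible from northern latitudes."))
```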
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.008
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7895 | 1.0 | 590 | 1.8785 | 0.6150 |
| 2.562 | 2.0 | 1180 | 2.8327 | 0.4046 |
| 2.4023 | 3.0 | 1770 | 2.0853 | 0.5217 |
| 2.3167 | 4.0 | 2360 | 1.5879 | 0.6505 |
| 2.161 | 5.0 | 2950 | 1.9917 | 0.4914 |
| 1.794 | 6.0 | 3540 | 2.5834 | 0.5110 |
| 1.9698 | 7.0 | 4130 | 3.1462 | 0.4927 |
| 1.5971 | 8.0 | 4720 | 1.6865 | 0.5966 |
| 1.5201 | 9.0 | 5310 | 3.4553 | 0.6413 |
| 1.5841 | 10.0 | 5900 | 3.1799 | 0.6327 |
| 1.5231 | 11.0 | 6490 | 1.1451 | 0.6933 |
| 1.3941 | 12.0 | 7080 | 1.1390 | 0.6884 |
| 1.3679 | 13.0 | 7670 | 1.4767 | 0.6902 |
| 1.2653 | 14.0 | 8260 | 1.5274 | 0.7028 |
| 1.2451 | 15.0 | 8850 | 1.6725 | 0.7073 |
| 1.255 | 16.0 | 9440 | 1.5284 | 0.7012 |
| 1.184 | 17.0 | 10030 | 1.0831 | 0.6979 |
| 1.1215 | 18.0 | 10620 | 2.0515 | 0.5755 |
| 1.0766 | 19.0 | 11210 | 1.1808 | 0.7263 |
| 1.1108 | 20.0 | 11800 | 1.0647 | 0.7190 |
| 1.0272 | 21.0 | 12390 | 1.2527 | 0.6654 |
| 1.036 | 22.0 | 12980 | 1.1910 | 0.6783 |
| 0.9735 | 23.0 | 13570 | 1.0311 | 0.7037 |
| 0.9167 | 24.0 | 14160 | 0.9997 | 0.7021 |
| 0.8494 | 25.0 | 14750 | 1.0338 | 0.7284 |
| 0.8461 | 26.0 | 15340 | 1.4642 | 0.6495 |
| 0.8466 | 27.0 | 15930 | 0.9877 | 0.7370 |
| 0.8498 | 28.0 | 16520 | 0.9401 | 0.7287 |
| 0.7851 | 29.0 | 17110 | 1.0208 | 0.7336 |
| 0.7796 | 30.0 | 17700 | 0.9350 | 0.7232 |
| 0.7725 | 31.0 | 18290 | 1.4097 | 0.7162 |
| 0.7599 | 32.0 | 18880 | 1.1313 | 0.7333 |
| 0.768 | 33.0 | 19470 | 1.0272 | 0.7379 |
| 0.7007 | 34.0 | 20060 | 0.9294 | 0.7364 |
| 0.6718 | 35.0 | 20650 | 0.9347 | 0.7330 |
| 0.6786 | 36.0 | 21240 | 1.0231 | 0.7416 |
| 0.6822 | 37.0 | 21830 | 0.9767 | 0.7413 |
| 0.6667 | 38.0 | 22420 | 0.9351 | 0.7272 |
| 0.6497 | 39.0 | 23010 | 0.9574 | 0.7355 |
| 0.638 | 40.0 | 23600 | 1.0610 | 0.7437 |
| 0.6468 | 41.0 | 24190 | 1.1462 | 0.7434 |
| 0.6046 | 42.0 | 24780 | 0.9750 | 0.7211 |
| 0.6079 | 43.0 | 25370 | 1.2040 | 0.7419 |
| 0.5806 | 44.0 | 25960 | 1.1603 | 0.7018 |
| 0.5753 | 45.0 | 26550 | 1.0639 | 0.7110 |
| 0.5693 | 46.0 | 27140 | 1.0966 | 0.7422 |
| 0.5757 | 47.0 | 27730 | 1.0137 | 0.7468 |
| 0.5692 | 48.0 | 28320 | 0.9476 | 0.7382 |
| 0.5732 | 49.0 | 28910 | 1.0004 | 0.7291 |
| 0.5563 | 50.0 | 29500 | 0.9870 | 0.7394 |
| 0.5217 | 51.0 | 30090 | 0.9681 | 0.7312 |
| 0.5239 | 52.0 | 30680 | 0.9812 | 0.7456 |
| 0.525 | 53.0 | 31270 | 1.0355 | 0.7196 |
| 0.5136 | 54.0 | 31860 | 0.9161 | 0.7385 |
| 0.5249 | 55.0 | 32450 | 1.0093 | 0.7382 |
| 0.5092 | 56.0 | 33040 | 1.0072 | 0.7428 |
| 0.4754 | 57.0 | 33630 | 1.0560 | 0.7425 |
| 0.4716 | 58.0 | 34220 | 0.9922 | 0.7425 |
| 0.4913 | 59.0 | 34810 | 1.0014 | 0.7480 |
| 0.4773 | 60.0 | 35400 | 0.9148 | 0.7352 |
| 0.4725 | 61.0 | 35990 | 0.9691 | 0.7474 |
| 0.4656 | 62.0 | 36580 | 0.9459 | 0.7453 |
| 0.4565 | 63.0 | 37170 | 0.9521 | 0.7388 |
| 0.4502 | 64.0 | 37760 | 1.0172 | 0.7474 |
| 0.4765 | 65.0 | 38350 | 0.9504 | 0.7327 |
| 0.4439 | 66.0 | 38940 | 0.9998 | 0.7443 |
| 0.4424 | 67.0 | 39530 | 1.0985 | 0.7498 |
| 0.4541 | 68.0 | 40120 | 0.9088 | 0.7446 |
| 0.4321 | 69.0 | 40710 | 0.9322 | 0.7379 |
| 0.4346 | 70.0 | 41300 | 1.0028 | 0.7495 |
| 0.4329 | 71.0 | 41890 | 0.8949 | 0.7385 |
| 0.4344 | 72.0 | 42480 | 0.9631 | 0.7544 |
| 0.4111 | 73.0 | 43070 | 0.9800 | 0.7272 |
| 0.4183 | 74.0 | 43660 | 1.1350 | 0.7541 |
| 0.4234 | 75.0 | 44250 | 0.9444 | 0.7511 |
| 0.4297 | 76.0 | 44840 | 0.9584 | 0.7526 |
| 0.4172 | 77.0 | 45430 | 0.9165 | 0.7413 |
| 0.4083 | 78.0 | 46020 | 0.9103 | 0.7401 |
| 0.4078 | 79.0 | 46610 | 0.9100 | 0.7468 |
| 0.3977 | 80.0 | 47200 | 0.9172 | 0.7480 |
| 0.3885 | 81.0 | 47790 | 0.9714 | 0.7523 |
| 0.4012 | 82.0 | 48380 | 1.0683 | 0.7547 |
| 0.3831 | 83.0 | 48970 | 0.9867 | 0.7575 |
| 0.3878 | 84.0 | 49560 | 0.9245 | 0.7541 |
| 0.3841 | 85.0 | 50150 | 0.9662 | 0.7327 |
| 0.3835 | 86.0 | 50740 | 0.9532 | 0.7505 |
| 0.3755 | 87.0 | 51330 | 0.9645 | 0.7492 |
| 0.379 | 88.0 | 51920 | 0.9183 | 0.7483 |
| 0.38 | 89.0 | 52510 | 0.9787 | 0.7523 |
| 0.37 | 90.0 | 53100 | 0.9205 | 0.7443 |
| 0.368 | 91.0 | 53690 | 0.9236 | 0.7446 |
| 0.3737 | 92.0 | 54280 | 0.9023 | 0.7419 |
| 0.3663 | 93.0 | 54870 | 0.9200 | 0.7514 |
| 0.3763 | 94.0 | 55460 | 0.9496 | 0.7517 |
| 0.3635 | 95.0 | 56050 | 0.9487 | 0.7508 |
| 0.3656 | 96.0 | 56640 | 0.9122 | 0.7502 |
| 0.3604 | 97.0 | 57230 | 0.9036 | 0.7498 |
| 0.3475 | 98.0 | 57820 | 0.9054 | 0.7474 |
| 0.3552 | 99.0 | 58410 | 0.9078 | 0.7471 |
| 0.3564 | 100.0 | 59000 | 0.9097 | 0.7502 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
trieudemo11/llama_7b_attrb_cate_big_l280_17 | trieudemo11 | 2023-09-09T01:46:47Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-09T01:46:28Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
kaungmyat/translation | kaungmyat | 2023-09-09T01:33:31Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-08T16:35:20Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: translation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 5.6441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6122
- Bleu: 5.6441
- Gen Len: 17.5838
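A minimal inference sketch follows. It assumes the model keeps T5's text-to-text convention of prefixing the source sentence with an instruction such as `translate English to French:`, which matches the usual opus_books fine-tuning recipe but is not spelled out in this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kaungmyat/translation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5-style task prefix; adjust if the fine-tune used a different prompt format.
text = "translate English to French: The night train left the station at midnight."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```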
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8593 | 1.0 | 6355 | 1.6362 | 5.4979 | 17.59 |
| 1.8198 | 2.0 | 12710 | 1.6122 | 5.6441 | 17.5838 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DunnBC22/bert-base-cased-finetuned-Stromberg_NLP_Twitter-PoS_v2 | DunnBC22 | 2023-09-09T01:31:12Z | 109 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:twitter_pos_vcb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-07T02:05:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- twitter_pos_vcb
metrics:
- accuracy
- poseval
- f1
- recall
- precision
model-index:
- name: bert-base-cased-finetuned-Stromberg_NLP_Twitter-PoS_v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: twitter_pos_vcb
type: twitter_pos_vcb
config: twitter-pos-vcb
split: train
args: twitter-pos-vcb
metrics:
- name: Accuracy
type: accuracy
value: 0.9853480683735223
language:
- en
pipeline_tag: token-classification
---
# bert-base-cased-finetuned-Stromberg_NLP_Twitter-PoS_v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the twitter_pos_vcb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0502
| Token | Precision | Recall | F1-Score | Support |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| $ | 0.0 | 0.0 | 0.0 | 3 |
| '' | 0.9312320916905444 | 0.9530791788856305 | 0.9420289855072465 | 341 |
| ( | 0.9791666666666666 | 0.9591836734693877 | 0.9690721649484536 | 196 |
| ) | 0.960167714884696 | 0.9703389830508474 | 0.9652265542676501 | 472 |
| , | 0.9988979501873485 | 0.9993384785005512 | 0.9991181657848325 | 4535 |
| . | 0.9839189708141322 | 0.9894762249577601 | 0.9866897730281368 | 20715 |
| : | 0.9926405887528997 | 0.9971072719967858 | 0.9948689168604183 | 12445 |
| Cc | 0.9991067440821796 | 0.9986607142857142 | 0.9988836793927215 | 4480 |
| Cd | 0.9903884661593912 | 0.9899919935948759 | 0.9901901901901902 | 2498 |
| Dt | 0.9981148589510537 | 0.9976446837146703 | 0.9978797159492478 | 14860 |
| Ex | 0.9142857142857143 | 0.9846153846153847 | 0.9481481481481482 | 65 |
| Fw | 1.0 | 0.1 | 0.18181818181818182 | 10 |
| Ht | 0.999877541023757 | 0.9997551120362435 | 0.9998163227820978 | 8167 |
| In | 0.9960399353003514 | 0.9954846981437092 | 0.9957622393219583 | 17939 |
| Jj | 0.9812470698546648 | 0.9834756049808129 | 0.9823600735322877 | 12769 |
| Jjr | 0.9304511278195489 | 0.9686888454011742 | 0.9491850431447747 | 511 |
| Jjs | 0.9578414839797639 | 0.9726027397260274 | 0.9651656754460493 | 584 |
| Md | 0.9901398761751892 | 0.9908214777420835 | 0.990480559697213 | 4358 |
| Nn | 0.9810285563194078 | 0.9819697621331922 | 0.9814989335846437 | 30227 |
| Nnp | 0.9609722697706266 | 0.9467116357504216 | 0.9537886510363575 | 8895 |
| Nnps | 1.0 | 0.037037037037037035 | 0.07142857142857142 | 27 |
| Nns | 0.9697771061579146 | 0.9776564681985528 | 0.9737008471361739 | 7877 |
| Pos | 0.9977272727272727 | 0.984304932735426 | 0.9909706546275394 | 446 |
| Prp | 0.9983503349829983 | 0.9985184187487373 | 0.9984343697917544 | 29698 |
| Prp$ | 0.9974262182566919 | 0.9974262182566919 | 0.9974262182566919 | 5828 |
| Rb | 0.9939770374552983 | 0.9929802569727358 | 0.9934783971906942 | 15955 |
| Rbr | 0.9058823529411765 | 0.8191489361702128 | 0.8603351955307263 | 94 |
| Rbs | 0.92 | 1.0 | 0.9583333333333334 | 69 |
| Rp | 0.9802197802197802 | 0.9903774981495189 | 0.9852724594992636 | 1351 |
| Rt | 0.9995065383666419 | 0.9996298581122763 | 0.9995681944358769 | 8105 |
| Sym | 0.0 | 0.0 | 0.0 | 9 |
| To | 0.9984649496844619 | 0.9989761092150171 | 0.9987204640450398 | 5860 |
| Uh | 0.9614460148062687 | 0.9507510933637574 | 0.9560686457287633 | 10518 |
| Url | 1.0 | 0.9997242900468707 | 0.9998621260168207 | 3627 |
| Usr | 0.9999025388626285 | 1.0 | 0.9999512670565303 | 20519 |
| Vb | 0.9619302598929085 | 0.9570556133056133 | 0.9594867452615125 | 15392 |
| Vbd | 0.9592894152479645 | 0.9548719837907533 | 0.9570756023262255 | 5429 |
| Vbg | 0.9848831077518018 | 0.984191111891797 | 0.9845369882270251 | 5693 |
| Vbn | 0.9053408597481546 | 0.9164835164835164 | 0.910878112712975 | 2275 |
| Vbp | 0.963605718209626 | 0.9666228317364894 | 0.9651119169688633 | 15969 |
| Vbz | 0.9881780250347705 | 0.9861207494795281 | 0.9871483153872872 | 5764 |
| Wdt | 0.8666666666666667 | 0.9285714285714286 | 0.896551724137931 | 14 |
| Wp | 0.99125 | 0.993734335839599 | 0.9924906132665832 | 1596 |
| Wrb | 0.9963488843813387 | 0.9979683055668428 | 0.9971579374746244 | 2461 |
| `` | 0.9481865284974094 | 0.9786096256684492 | 0.963157894736842 | 187 |
Overall
- Accuracy: 0.9853
- Macro avg:
  - Precision: 0.9296417163691048
  - Recall: 0.8931046018294694
  - F1-score: 0.8930917459781836
  - Support: 308833
- Weighted avg:
  - Precision: 0.985306457604231
  - Recall: 0.9853480683735223
  - F1-Score: 0.9852689858931941
  - Support: 308833
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Token%20Classification/Monolingual/StrombergNLP-Twitter_pos_vcb/NER%20Project%20Using%20StrombergNLP%20Twitter_pos_vcb%20Dataset%20with%20PosEval.ipynb.
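For readers who want to try the tagger directly, a minimal sketch using the token-classification pipeline is given below; the tweet text is an invented example and `aggregation_strategy="simple"` is just one reasonable choice for grouping word pieces.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="DunnBC22/bert-base-cased-finetuned-Stromberg_NLP_Twitter-PoS_v2",
    aggregation_strategy="simple",  # merge word-piece predictions into whole words
)

# Invented tweet-like input for illustration.
for token in tagger("@user just landed in NYC, the skyline looks amazing!"):
    print(token["word"], token["entity_group"], round(token["score"], 3))
```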
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://huggingface.co/datasets/strombergnlp/twitter_pos_vcb
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3 |
FunkEngine/SchweinZwei-13b | FunkEngine | 2023-09-09T01:20:57Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"dataset:SchweinZwei/PIPPA",
"dataset:Open-Orca/OpenOrca",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:databricks/databricks-dolly-15k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-08T09:56:32Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
license: llama2
datasets:
- SchweinZwei/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
---
<h1 style="text-align: center">SchweinZwei/SchweinZwei-13b</h1>
<h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here. SchweinZwei-13b (formerly known as Metharme) is based on
[Llama-2 13B](https://huggingface.co/meta-llama/llama-2-13b-hf) released by Meta AI.
The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting,
but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion
that the Metharme prompting format is superior (and easier to use) compared to the classic Schweinen.
This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories
and conversations with synthetically generated instructions attached.
This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
## Prompting
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
form a conversation history.
### Prompting example
The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
```
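Below is a rough sketch of how the role tokens above might be assembled into a prompt and passed to the model with 🤗 Transformers. The persona text, sampling settings, and the use of `AutoModelForCausalLM` with this repository id are illustrative assumptions rather than an official inference recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FunkEngine/SchweinZwei-13b"  # repo id as listed for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a conversation by chaining the role tokens described above.
prompt = (
    "<|system|>Enter RP mode. Pretend to be Captain Ora whose persona follows:\n"
    "Captain Ora is a stern but fair starship captain.\n"
    "You shall reply to the user while staying in character, and generate long responses.\n"
    "<|user|>Captain, the engines are failing. What are your orders?\n"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
# Decode only the newly generated continuation.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```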
## Dataset
The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/SchweinZwei/PIPPA), along with several other instruction
datasets, and datasets acquired from various RP forums.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
## Acknowledgements
We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model. |
nitikaverma26/Reinforce-Pixelcopter-PLE-v0 | nitikaverma26 | 2023-09-09T01:01:50Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-09T01:01:45Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 81.80 +/- 50.82
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
celinelee/codellama13B_risctoarm | celinelee | 2023-09-09T00:38:42Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-09T00:38:35Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Miladrmz/dqn-SpaceInvadersNoFrameskip-v4 | Miladrmz | 2023-09-09T00:20:08Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-09T00:19:26Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 726.50 +/- 253.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Miladrmz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Miladrmz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Miladrmz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nanom/bert_adaptation_referencias_de_vinos | nanom | 2023-09-08T23:55:58Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-08T23:37:54Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_referencias_de_vinos
results: []
widget:
- text: "Este [MASK] argentino de altura es una verdadera"
example_title: Example 1
- text: "Los sabores de [MASK] persisten"
example_title: Example 2
- text: "Con un color profundo e [MASK]"
example_title: Example 3
- text: "Hecho 100% de [MASK]"
example_title: Example 4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_referencias_de_vinos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2653
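As a quick check of the adapted language model, the widget prompts listed above can be run through the fill-mask pipeline; the sketch below simply reuses one of them.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="nanom/bert_adaptation_referencias_de_vinos")

# One of this card's widget examples ("The flavours of [MASK] linger").
for pred in fill("Los sabores de [MASK] persisten"):
    print(pred["token_str"], round(pred["score"], 3))
```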
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3123 | 1.0 | 375 | 2.7183 |
| 2.6604 | 2.0 | 750 | 2.4759 |
| 2.448 | 3.0 | 1125 | 2.4108 |
| 2.3606 | 4.0 | 1500 | 2.3783 |
| 2.2859 | 5.0 | 1875 | 2.2942 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AlienKevin/whisper-small-jyutping-without-tones | AlienKevin | 2023-09-08T23:54:58Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-08T23:53:39Z | ---
language:
- yue
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: Whisper Small Jyutping without Tones
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Jyutping without Tones
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 14.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0701
- eval_wer: 9.8213
- eval_runtime: 1761.3114
- eval_samples_per_second: 1.453
- eval_steps_per_second: 0.182
- epoch: 0.78
- step: 1000
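A minimal transcription sketch is given below; it assumes the checkpoint can be used through the standard automatic-speech-recognition pipeline and that `cantonese_clip.wav` is a local 16 kHz recording supplied by the user.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AlienKevin/whisper-small-jyutping-without-tones",
)

# Hypothetical local Cantonese audio file; the model outputs Jyutping without tone marks.
result = asr("cantonese_clip.wav", chunk_length_s=30)
print(result["text"])
```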
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
TheDarkLord69696969/nllb-200-distilled-600M-finetuned_srimadbhagavatam_sns | TheDarkLord69696969 | 2023-09-08T23:31:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-08T21:17:13Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: nllb-200-distilled-600M-finetuned_srimadbhagavatam_sns
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-distilled-600M-finetuned_srimadbhagavatam_sns
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9632
- Rouge1: 39.9844
- Rouge2: 15.8187
- Rougel: 24.7601
- Rougelsum: 37.8611
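The card does not state the language pair, so the sketch below assumes a Sanskrit-to-English direction (plausible for Śrīmad-Bhāgavatam verses) purely for illustration; swap the FLORES-200 language codes for the pair the model was actually trained on.

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="TheDarkLord69696969/nllb-200-distilled-600M-finetuned_srimadbhagavatam_sns",
    src_lang="san_Deva",   # assumed source: Sanskrit in Devanagari (FLORES-200 code)
    tgt_lang="eng_Latn",   # assumed target: English
)

# Illustrative verse fragment; replace with your own input text.
print(translator("धर्मः प्रोज्झितकैतवोऽत्र परमो निर्मत्सराणां सताम्", max_length=128))
```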
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 4.2029 | 1.0 | 193 | 3.5530 | 17.4525 | 1.8199 | 14.417 | 15.7939 |
| 3.6789 | 2.0 | 386 | 3.2385 | 18.4399 | 2.3063 | 14.4777 | 16.8663 |
| 3.4121 | 3.0 | 579 | 2.9913 | 18.6292 | 2.1671 | 14.0775 | 17.4039 |
| 3.1958 | 4.0 | 772 | 2.7935 | 20.9044 | 3.0869 | 15.7866 | 19.4597 |
| 3.0238 | 5.0 | 965 | 2.6154 | 22.9863 | 3.1733 | 15.4087 | 21.6705 |
| 2.8546 | 6.0 | 1158 | 2.4343 | 24.7063 | 4.0564 | 16.1424 | 23.2821 |
| 2.7 | 7.0 | 1351 | 2.2810 | 26.2011 | 4.6714 | 16.7887 | 24.6723 |
| 2.5532 | 8.0 | 1544 | 2.1071 | 30.7319 | 6.3718 | 17.4858 | 28.8254 |
| 2.42 | 9.0 | 1737 | 1.9742 | 28.5217 | 5.2919 | 16.9577 | 26.5686 |
| 2.2991 | 10.0 | 1930 | 1.8234 | 29.8937 | 6.3088 | 17.2141 | 28.0302 |
| 2.1851 | 11.0 | 2123 | 1.7177 | 29.8642 | 6.9874 | 18.2935 | 28.0493 |
| 2.0829 | 12.0 | 2316 | 1.5891 | 30.7551 | 6.7111 | 18.1772 | 28.8555 |
| 1.9954 | 13.0 | 2509 | 1.4965 | 32.6313 | 8.0662 | 18.4981 | 30.8014 |
| 1.9055 | 14.0 | 2702 | 1.3996 | 33.0299 | 9.6554 | 19.2763 | 31.2127 |
| 1.8372 | 15.0 | 2895 | 1.3271 | 35.4767 | 10.7234 | 20.2759 | 33.1856 |
| 1.7635 | 16.0 | 3088 | 1.2533 | 35.5164 | 11.5198 | 21.3301 | 33.2617 |
| 1.7052 | 17.0 | 3281 | 1.1865 | 37.5692 | 13.6047 | 22.9496 | 35.2626 |
| 1.6495 | 18.0 | 3474 | 1.1414 | 37.7493 | 13.6471 | 22.6947 | 35.6368 |
| 1.6009 | 19.0 | 3667 | 1.0859 | 40.251 | 15.2568 | 24.4602 | 37.955 |
| 1.5589 | 20.0 | 3860 | 1.0536 | 37.8875 | 14.5794 | 23.4696 | 35.8989 |
| 1.5209 | 21.0 | 4053 | 1.0268 | 38.4126 | 14.9535 | 24.3597 | 36.435 |
| 1.4963 | 22.0 | 4246 | 0.9982 | 40.9518 | 16.6418 | 25.284 | 38.5787 |
| 1.4651 | 23.0 | 4439 | 0.9771 | 39.4774 | 16.4189 | 24.7979 | 37.3614 |
| 1.451 | 24.0 | 4632 | 0.9662 | 40.4131 | 16.5895 | 25.0073 | 38.3018 |
| 1.4351 | 25.0 | 4825 | 0.9632 | 39.9844 | 15.8187 | 24.7601 | 37.8611 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
speechlessai/speechless-baichuan2-dolphin-orca-platypus-13b | speechlessai | 2023-09-08T23:29:00Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"dataset:ehartford/dolphin",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-08T11:27:45Z | ---
language:
- en
- zh
license: apache-2.0
tasks:
- text-generation
datasets:
- ehartford/dolphin
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
---
<p><h1> speechless-baichuan2-dolphin-orca-platypus-13b </h1></p>
Fine-tuned from baichuan-inc/Baichuan2-13B-Base on the Dolphin, Orca, and Platypus datasets.
| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:[email protected]).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-13B-Base", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-13B-Base", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
Baichuan 2 模型的社区使用需遵循[《Baichuan 2 模型社区许可协议》]。Baichuan 2 支持商用。如果将 Baichuan 2 模型或其衍生品用作商业用途,请您按照如下方式联系许可方,以进行登记并向许可方申请书面授权:联系邮箱 [[email protected]]。
The use of the source code in this repository follows the open-source license Apache 2.0. Community use of the Baichuan 2 model must adhere to the [Community License for Baichuan 2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). Baichuan 2 supports commercial use. If you are using the Baichuan 2 models or their derivatives for commercial purposes, please contact the licensor in the following manner for registration and to apply for written authorization: Email [email protected].
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:[email protected]
[Email]: mailto:[email protected]
[[email protected]]: mailto:[email protected]
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
Robo0890/roboxl | Robo0890 | 2023-09-08T22:56:33Z | 217 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-09-08T22:56:26Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: scifi
widget:
- text: scifi
---
# RoboXL

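The YAML metadata identifies this as a LoRA for SDXL base 1.0 with the instance prompt `scifi`, so a plausible way to use it with 🧨 Diffusers is sketched below; the exact trigger phrasing, sampler settings, and GPU placement are assumptions.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then apply this repo's LoRA weights on top of it.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA-capable GPU is available
pipe.load_lora_weights("Robo0890/roboxl")

image = pipe("scifi robot standing in a neon-lit hangar", num_inference_steps=30).images[0]
image.save("roboxl_sample.png")
```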
## Image examples for the model:








|
Brouz/Slerpeno | Brouz | 2023-09-08T22:51:29Z | 1,534 | 4 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-08T00:33:20Z | ---
license: cc-by-4.0
---
Uses the same models Stheno does, but merges them using the SLERP method instead; a brief sketch of SLERP weight interpolation follows below.
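The sketch below is a toy illustration of the SLERP formula applied to arbitrary tensors, not the actual script used for this merge: the two weight tensors are blended along the arc between them rather than along a straight line.

```python
import torch

def slerp(w1: torch.Tensor, w2: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    v1, v2 = w1.flatten().float(), w2.flatten().float()
    v1n, v2n = v1 / (v1.norm() + eps), v2 / (v2.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(v1n, v2n), -1.0, 1.0))  # angle between weights
    if omega.abs() < 1e-6:                     # nearly parallel: fall back to plain lerp
        return (1 - t) * w1 + t * w2
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * v1 + (torch.sin(t * omega) / so) * v2
    return out.reshape(w1.shape).to(w1.dtype)

# Toy example on random tensors standing in for two models' weights.
a, b = torch.randn(4, 4), torch.randn(4, 4)
print(slerp(a, b, t=0.5))
```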
13B model |
Onutoa/1_8e-3_1_0.5 | Onutoa | 2023-09-08T22:47:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-08T19:46:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_8e-3_1_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_8e-3_1_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5223
- Accuracy: 0.7101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.008
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.047 | 1.0 | 590 | 0.5930 | 0.6147 |
| 1.1566 | 2.0 | 1180 | 0.8138 | 0.3786 |
| 0.8071 | 3.0 | 1770 | 1.1906 | 0.6217 |
| 0.8515 | 4.0 | 2360 | 0.5963 | 0.5232 |
| 0.7727 | 5.0 | 2950 | 0.5584 | 0.6043 |
| 0.864 | 6.0 | 3540 | 1.9242 | 0.3783 |
| 0.7792 | 7.0 | 4130 | 0.7053 | 0.5116 |
| 0.768 | 8.0 | 4720 | 2.9011 | 0.3783 |
| 0.7931 | 9.0 | 5310 | 0.6747 | 0.3783 |
| 0.726 | 10.0 | 5900 | 5.3441 | 0.3783 |
| 0.7177 | 11.0 | 6490 | 0.7048 | 0.3783 |
| 0.6681 | 12.0 | 7080 | 0.6229 | 0.3783 |
| 0.6889 | 13.0 | 7670 | 1.0114 | 0.6205 |
| 0.6618 | 14.0 | 8260 | 2.8718 | 0.6217 |
| 0.6566 | 15.0 | 8850 | 1.5485 | 0.6217 |
| 0.6227 | 16.0 | 9440 | 0.7295 | 0.6220 |
| 0.6016 | 17.0 | 10030 | 0.6356 | 0.6217 |
| 0.5891 | 18.0 | 10620 | 0.9814 | 0.6266 |
| 0.5534 | 19.0 | 11210 | 1.4086 | 0.6205 |
| 0.5574 | 20.0 | 11800 | 1.9522 | 0.6211 |
| 0.5349 | 21.0 | 12390 | 0.5543 | 0.6355 |
| 0.5171 | 22.0 | 12980 | 0.5258 | 0.6780 |
| 0.5043 | 23.0 | 13570 | 0.7235 | 0.4746 |
| 0.4775 | 24.0 | 14160 | 0.5588 | 0.6428 |
| 0.4721 | 25.0 | 14750 | 0.5342 | 0.6731 |
| 0.461 | 26.0 | 15340 | 0.7023 | 0.5560 |
| 0.461 | 27.0 | 15930 | 1.0768 | 0.4144 |
| 0.4312 | 28.0 | 16520 | 0.5149 | 0.6798 |
| 0.4378 | 29.0 | 17110 | 0.8702 | 0.5226 |
| 0.4214 | 30.0 | 17700 | 0.8323 | 0.6514 |
| 0.4205 | 31.0 | 18290 | 0.4795 | 0.6869 |
| 0.3944 | 32.0 | 18880 | 0.4763 | 0.6969 |
| 0.3874 | 33.0 | 19470 | 1.5854 | 0.6248 |
| 0.3779 | 34.0 | 20060 | 0.5091 | 0.6914 |
| 0.3723 | 35.0 | 20650 | 0.7588 | 0.6541 |
| 0.3693 | 36.0 | 21240 | 0.7886 | 0.5128 |
| 0.3602 | 37.0 | 21830 | 1.4420 | 0.4719 |
| 0.3522 | 38.0 | 22420 | 0.9082 | 0.5073 |
| 0.3488 | 39.0 | 23010 | 0.6001 | 0.6853 |
| 0.3348 | 40.0 | 23600 | 0.6879 | 0.6492 |
| 0.3482 | 41.0 | 24190 | 1.7803 | 0.6315 |
| 0.3324 | 42.0 | 24780 | 0.5648 | 0.6997 |
| 0.3318 | 43.0 | 25370 | 0.9623 | 0.6618 |
| 0.336 | 44.0 | 25960 | 0.6179 | 0.6459 |
| 0.3167 | 45.0 | 26550 | 0.5041 | 0.6997 |
| 0.3069 | 46.0 | 27140 | 0.4954 | 0.7003 |
| 0.3078 | 47.0 | 27730 | 0.5356 | 0.7028 |
| 0.2981 | 48.0 | 28320 | 1.3955 | 0.6450 |
| 0.3037 | 49.0 | 28910 | 0.5689 | 0.6878 |
| 0.2887 | 50.0 | 29500 | 0.8592 | 0.5517 |
| 0.28 | 51.0 | 30090 | 0.5939 | 0.6838 |
| 0.2786 | 52.0 | 30680 | 0.6514 | 0.6765 |
| 0.2778 | 53.0 | 31270 | 1.8380 | 0.6339 |
| 0.2797 | 54.0 | 31860 | 1.1076 | 0.6440 |
| 0.2773 | 55.0 | 32450 | 0.4983 | 0.6972 |
| 0.2746 | 56.0 | 33040 | 1.5742 | 0.4483 |
| 0.2691 | 57.0 | 33630 | 0.8767 | 0.6498 |
| 0.2555 | 58.0 | 34220 | 0.6028 | 0.6113 |
| 0.2675 | 59.0 | 34810 | 0.7268 | 0.6664 |
| 0.2567 | 60.0 | 35400 | 0.5953 | 0.6593 |
| 0.2555 | 61.0 | 35990 | 0.5564 | 0.6795 |
| 0.2525 | 62.0 | 36580 | 0.7419 | 0.6009 |
| 0.2451 | 63.0 | 37170 | 0.5019 | 0.7043 |
| 0.2431 | 64.0 | 37760 | 0.5603 | 0.6997 |
| 0.2373 | 65.0 | 38350 | 0.5755 | 0.6612 |
| 0.2387 | 66.0 | 38940 | 0.6158 | 0.6254 |
| 0.2433 | 67.0 | 39530 | 0.5994 | 0.6150 |
| 0.2354 | 68.0 | 40120 | 0.5195 | 0.7101 |
| 0.2361 | 69.0 | 40710 | 0.5164 | 0.7076 |
| 0.234 | 70.0 | 41300 | 0.5001 | 0.6997 |
| 0.2341 | 71.0 | 41890 | 1.0352 | 0.4728 |
| 0.2245 | 72.0 | 42480 | 0.5045 | 0.7073 |
| 0.2219 | 73.0 | 43070 | 0.5208 | 0.7080 |
| 0.216 | 74.0 | 43660 | 0.5116 | 0.7061 |
| 0.2227 | 75.0 | 44250 | 0.5224 | 0.7089 |
| 0.2163 | 76.0 | 44840 | 0.6881 | 0.5960 |
| 0.217 | 77.0 | 45430 | 0.5131 | 0.7 |
| 0.2209 | 78.0 | 46020 | 0.5344 | 0.7086 |
| 0.2094 | 79.0 | 46610 | 0.6909 | 0.6098 |
| 0.21 | 80.0 | 47200 | 0.7910 | 0.5829 |
| 0.2069 | 81.0 | 47790 | 0.7681 | 0.6575 |
| 0.2021 | 82.0 | 48380 | 0.5345 | 0.7083 |
| 0.2077 | 83.0 | 48970 | 0.5224 | 0.7043 |
| 0.2002 | 84.0 | 49560 | 0.5126 | 0.7015 |
| 0.2033 | 85.0 | 50150 | 0.5920 | 0.7003 |
| 0.2021 | 86.0 | 50740 | 0.5589 | 0.7040 |
| 0.1873 | 87.0 | 51330 | 0.5470 | 0.7101 |
| 0.1972 | 88.0 | 51920 | 0.5276 | 0.7040 |
| 0.1855 | 89.0 | 52510 | 0.5280 | 0.7049 |
| 0.1916 | 90.0 | 53100 | 0.5261 | 0.7046 |
| 0.1912 | 91.0 | 53690 | 0.5950 | 0.6569 |
| 0.1917 | 92.0 | 54280 | 0.5402 | 0.6850 |
| 0.1879 | 93.0 | 54870 | 0.5765 | 0.7037 |
| 0.1923 | 94.0 | 55460 | 0.5297 | 0.6991 |
| 0.1894 | 95.0 | 56050 | 0.5150 | 0.7083 |
| 0.1853 | 96.0 | 56640 | 0.5276 | 0.6976 |
| 0.1848 | 97.0 | 57230 | 0.5356 | 0.7113 |
| 0.1796 | 98.0 | 57820 | 0.5585 | 0.7086 |
| 0.1848 | 99.0 | 58410 | 0.5230 | 0.7101 |
| 0.1849 | 100.0 | 59000 | 0.5223 | 0.7101 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
actionpace/Llama-2-13b-hf | actionpace | 2023-09-08T22:36:02Z | 0 | 0 | null | [
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-08T20:42:00Z | ---
license: other
language:
- en
---
**Some of my own quants:**
* Llama-2-13b-hf_Q5_1.gguf
**Source:** [meta-llama](https://huggingface.co/meta-llama)
**Source Model:** [Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
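If you prefer to run the quant from Python rather than the llama.cpp CLI, a minimal sketch with `llama-cpp-python` could look like the following; the model path, context size, and prompt are placeholders and assumptions, not part of the original card:

```python
from llama_cpp import Llama

# Assumption: the quant file has already been downloaded to the working directory.
llm = Llama(model_path="Llama-2-13b-hf_Q5_1.gguf", n_ctx=4096)

# Llama-2-13b-hf is a base model (not chat-tuned), so plain text completion is the natural use.
output = llm("The capital of France is", max_tokens=32)
print(output["choices"][0]["text"])
```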
**Source models for meta-llama/Llama-2-13b-hf**
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) ([Ref](https://huggingface.co/actionpace/Llama-2-13b-hf))
**Models utilizing meta-llama/Llama-2-13b-hf**
- [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b) ([Ref](https://huggingface.co/actionpace/Huginn-v3-13b)) (Finetune, kaiokendev/SuperCOT-dataset)
- [Fredithefish/Guanaco-13B-Uncensored](https://huggingface.co/Fredithefish/Guanaco-13B-Uncensored) ([Ref](https://huggingface.co/actionpace/Guanaco-13B-Uncensored)) (Finetune, Fredithefish/openassistant-guanaco-unfiltered)
- [PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B) ([Ref](https://huggingface.co/actionpace/LLaMa-2-PeanutButter_v19_R8-7B)) (Finetune, Custom-V19)
- [jondurbin/spicyboros-7b-2.2](https://huggingface.co/jondurbin/spicyboros-7b-2.2) ([Ref](https://huggingface.co/actionpace/spicyboros-7b-2.2)) (Finetune, jondurbin/airoboros-2.2)
- [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) ([Ref](https://huggingface.co/actionpace/Llama-2-13b-hf))
|
Onutoa/1_6e-3_1_0.5 | Onutoa | 2023-09-08T22:30:55Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-08T19:31:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_6e-3_1_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_6e-3_1_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4885
- Accuracy: 0.7401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.006
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9248 | 1.0 | 590 | 0.7400 | 0.3786 |
| 0.8836 | 2.0 | 1180 | 0.7971 | 0.3914 |
| 0.8513 | 3.0 | 1770 | 0.6664 | 0.6217 |
| 0.7488 | 4.0 | 2360 | 0.7384 | 0.6217 |
| 0.729 | 5.0 | 2950 | 1.0125 | 0.6217 |
| 0.7097 | 6.0 | 3540 | 0.7106 | 0.5046 |
| 0.6521 | 7.0 | 4130 | 0.5533 | 0.6098 |
| 0.6704 | 8.0 | 4720 | 0.4852 | 0.6587 |
| 0.6271 | 9.0 | 5310 | 0.5153 | 0.6850 |
| 0.6134 | 10.0 | 5900 | 0.4555 | 0.6948 |
| 0.5702 | 11.0 | 6490 | 0.4732 | 0.6716 |
| 0.5428 | 12.0 | 7080 | 0.4548 | 0.6963 |
| 0.5681 | 13.0 | 7670 | 0.4534 | 0.6859 |
| 0.5238 | 14.0 | 8260 | 0.6556 | 0.6725 |
| 0.5103 | 15.0 | 8850 | 0.5050 | 0.7110 |
| 0.5004 | 16.0 | 9440 | 0.4638 | 0.6813 |
| 0.4614 | 17.0 | 10030 | 0.4935 | 0.7113 |
| 0.4702 | 18.0 | 10620 | 0.4570 | 0.7040 |
| 0.4305 | 19.0 | 11210 | 0.4871 | 0.7190 |
| 0.4402 | 20.0 | 11800 | 0.5026 | 0.6722 |
| 0.4035 | 21.0 | 12390 | 0.4476 | 0.7208 |
| 0.3907 | 22.0 | 12980 | 0.6030 | 0.6367 |
| 0.3686 | 23.0 | 13570 | 0.4396 | 0.7131 |
| 0.3765 | 24.0 | 14160 | 0.4589 | 0.7180 |
| 0.3709 | 25.0 | 14750 | 0.4440 | 0.7107 |
| 0.3446 | 26.0 | 15340 | 1.0145 | 0.5728 |
| 0.3433 | 27.0 | 15930 | 0.6213 | 0.6627 |
| 0.331 | 28.0 | 16520 | 0.4566 | 0.7144 |
| 0.3373 | 29.0 | 17110 | 0.5484 | 0.7284 |
| 0.3117 | 30.0 | 17700 | 0.6371 | 0.6648 |
| 0.2988 | 31.0 | 18290 | 0.7013 | 0.7089 |
| 0.2928 | 32.0 | 18880 | 0.4553 | 0.7281 |
| 0.297 | 33.0 | 19470 | 0.5225 | 0.6976 |
| 0.2808 | 34.0 | 20060 | 0.4951 | 0.7343 |
| 0.2735 | 35.0 | 20650 | 0.5188 | 0.7095 |
| 0.2624 | 36.0 | 21240 | 0.4961 | 0.7367 |
| 0.2642 | 37.0 | 21830 | 0.4731 | 0.7254 |
| 0.2548 | 38.0 | 22420 | 0.4635 | 0.7260 |
| 0.2575 | 39.0 | 23010 | 0.4896 | 0.7073 |
| 0.244 | 40.0 | 23600 | 0.5605 | 0.7358 |
| 0.2472 | 41.0 | 24190 | 0.6450 | 0.7266 |
| 0.2433 | 42.0 | 24780 | 0.4922 | 0.7367 |
| 0.2312 | 43.0 | 25370 | 0.5115 | 0.7269 |
| 0.2355 | 44.0 | 25960 | 0.4879 | 0.7388 |
| 0.2204 | 45.0 | 26550 | 0.5023 | 0.7355 |
| 0.2223 | 46.0 | 27140 | 0.4976 | 0.7355 |
| 0.22 | 47.0 | 27730 | 0.5051 | 0.7364 |
| 0.2056 | 48.0 | 28320 | 0.4973 | 0.7205 |
| 0.2166 | 49.0 | 28910 | 0.5008 | 0.7180 |
| 0.2129 | 50.0 | 29500 | 0.5323 | 0.7382 |
| 0.1973 | 51.0 | 30090 | 0.5689 | 0.6908 |
| 0.2025 | 52.0 | 30680 | 0.4855 | 0.7367 |
| 0.1977 | 53.0 | 31270 | 0.5230 | 0.7211 |
| 0.1946 | 54.0 | 31860 | 0.5969 | 0.7333 |
| 0.2063 | 55.0 | 32450 | 0.5340 | 0.7098 |
| 0.1967 | 56.0 | 33040 | 0.5589 | 0.7361 |
| 0.1793 | 57.0 | 33630 | 0.5207 | 0.7358 |
| 0.1872 | 58.0 | 34220 | 0.4926 | 0.7394 |
| 0.1831 | 59.0 | 34810 | 0.5265 | 0.7434 |
| 0.1808 | 60.0 | 35400 | 0.5113 | 0.7407 |
| 0.1892 | 61.0 | 35990 | 0.4972 | 0.7416 |
| 0.1795 | 62.0 | 36580 | 0.5121 | 0.7391 |
| 0.172 | 63.0 | 37170 | 0.4857 | 0.7321 |
| 0.176 | 64.0 | 37760 | 0.5014 | 0.7232 |
| 0.1763 | 65.0 | 38350 | 0.5061 | 0.7370 |
| 0.1753 | 66.0 | 38940 | 0.4840 | 0.7358 |
| 0.1716 | 67.0 | 39530 | 0.5262 | 0.7361 |
| 0.1675 | 68.0 | 40120 | 0.4844 | 0.7324 |
| 0.1647 | 69.0 | 40710 | 0.5357 | 0.7440 |
| 0.1702 | 70.0 | 41300 | 0.4852 | 0.7394 |
| 0.1666 | 71.0 | 41890 | 0.4749 | 0.7391 |
| 0.162 | 72.0 | 42480 | 0.5616 | 0.7385 |
| 0.1546 | 73.0 | 43070 | 0.5089 | 0.7352 |
| 0.1525 | 74.0 | 43660 | 0.5315 | 0.7382 |
| 0.1595 | 75.0 | 44250 | 0.5300 | 0.7419 |
| 0.1555 | 76.0 | 44840 | 0.5664 | 0.7407 |
| 0.1604 | 77.0 | 45430 | 0.5057 | 0.7416 |
| 0.1584 | 78.0 | 46020 | 0.5008 | 0.7355 |
| 0.1574 | 79.0 | 46610 | 0.5206 | 0.7398 |
| 0.1552 | 80.0 | 47200 | 0.5176 | 0.7361 |
| 0.1501 | 81.0 | 47790 | 0.4955 | 0.7376 |
| 0.1492 | 82.0 | 48380 | 0.5001 | 0.7391 |
| 0.1508 | 83.0 | 48970 | 0.4963 | 0.7379 |
| 0.1463 | 84.0 | 49560 | 0.5148 | 0.7413 |
| 0.1449 | 85.0 | 50150 | 0.4868 | 0.7349 |
| 0.1489 | 86.0 | 50740 | 0.5012 | 0.7419 |
| 0.1415 | 87.0 | 51330 | 0.4963 | 0.7321 |
| 0.145 | 88.0 | 51920 | 0.5046 | 0.7291 |
| 0.1375 | 89.0 | 52510 | 0.5011 | 0.7416 |
| 0.1387 | 90.0 | 53100 | 0.5041 | 0.7440 |
| 0.1428 | 91.0 | 53690 | 0.4940 | 0.7425 |
| 0.1442 | 92.0 | 54280 | 0.4912 | 0.7401 |
| 0.139 | 93.0 | 54870 | 0.5014 | 0.7428 |
| 0.1406 | 94.0 | 55460 | 0.4919 | 0.7391 |
| 0.1387 | 95.0 | 56050 | 0.5063 | 0.7446 |
| 0.1368 | 96.0 | 56640 | 0.4902 | 0.7410 |
| 0.1391 | 97.0 | 57230 | 0.4947 | 0.7407 |
| 0.136 | 98.0 | 57820 | 0.4922 | 0.7413 |
| 0.133 | 99.0 | 58410 | 0.4926 | 0.7394 |
| 0.1379 | 100.0 | 59000 | 0.4885 | 0.7401 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mgmeskill/downstrike-320m | mgmeskill | 2023-09-08T22:04:56Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-08T22:02:18Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mgmeskill/downstrike-320m
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MattStammers/a2c-PandaPickAndPlace-v3 | MattStammers | 2023-09-08T22:00:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-08T21:55:15Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -40.00 +/- 20.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
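Until the TODO above is filled in, a minimal loading sketch might look like the following; the checkpoint filename is an assumption (check the repository's file list), and `panda-gym` must be installed to register the environment:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the PandaPickAndPlace-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename; verify it against the files actually uploaded to the repo.
checkpoint = load_from_hub("MattStammers/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```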
|
mmnga/stockmark-gpt-neox-japanese-1.4b-gguf | mmnga | 2023-09-08T22:00:37Z | 727 | 1 | null | [
"gguf",
"gpt-neox",
"ja",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2023-08-22T12:45:18Z | ---
license: mit
language:
- ja
tags:
- gpt-neox
---
# stockmark-gpt-neox-japanese-1.4b-gguf
This is a gguf-format conversion of [gpt-neox-japanese-1.4b published by stockmark](https://huggingface.co/stockmark/gpt-neox-japanese-1.4b).
Note: this is a trial build that runs on a fork branch. Once gpt-neox support lands in mainline llama.cpp, this gguf file may no longer be usable.
***[The GitHub repository readme is here](https://github.com/mmnga/llama.cpp/tree/mmnga-dev)***
## Usage (trial)
```
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j
./main -m 'stockmark-gpt-neox-japanese-1.4b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
```
**CUBLAS**
```
LLAMA_CUBLAS=1 make -j
./main -m 'stockmark-gpt-neox-japanese-1.4b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' -ngl 24
```
|
PHL99/Reinforce-Cartpole-v1 | PHL99 | 2023-09-08T21:39:42Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-08T21:39:31Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
quantumaikr/falcon-180B-WizardLM_Orca | quantumaikr | 2023-09-08T21:28:26Z | 1,512 | 1 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"dataset:pankajmathur/WizardLM_Orca",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-08T03:47:54Z | ---
datasets:
- tiiuae/falcon-refinedweb
- pankajmathur/WizardLM_Orca
language:
- en
- de
- es
- fr
inference: false
---
# 🇰🇷 quantumaikr/falcon-180B-WizardLM_Orca
**quantumaikr/falcon-180B-WizardLM_Orca is a 180B-parameter causal decoder-only model built by [quantumaikr](https://www.quantumai.kr) based on [Falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat)**
## How to Get Started with the Model
To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "quantumaikr/falcon-180B-WizardLM_Orca"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Contact
🇰🇷 www.quantumai.kr
🇰🇷 [email protected] [Inquiries about adopting large language model technology are welcome] |
Dischordo/Anime | Dischordo | 2023-09-08T21:20:48Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-09-08T21:12:00Z | ---
license: openrail
---
Nekezuga: a Clip Skip 1 capable manga-style model, tuned away from bhili styles and more towards retro Western tastes.
Preview images are mostly raw at 1024 with no upscaling; metadata is left on the images. |
rebolforces/a2c-PandaReachDense-v2g | rebolforces | 2023-09-08T21:17:39Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-19T05:58:22Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.03 +/- 0.78
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
rebolforces/a2c-PandaReachDense-v2f | rebolforces | 2023-09-08T21:17:25Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-19T05:42:53Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.23 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
rebolforces/a2c-PandaReachDense-v2 | rebolforces | 2023-09-08T21:16:37Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-19T02:41:36Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.54 +/- 1.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
rnkVikcdkam/q-FrozenLake-v1-4x4-noSlippery | rnkVikcdkam | 2023-09-08T21:08:06Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-08T21:08:04Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="rnkVikcdkam/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
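Note that `load_from_hub` above is not a library import; in the Deep RL course it is a small helper defined roughly as follows. This is a sketch, assuming the pickle holds a dict with the Q-table and an `env_id` key, as the snippet above implies:

```python
import pickle

import gymnasium as gym  # provides gym.make used in the snippet above
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hub and deserialize it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```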
|
MattStammers/a2c-PandaReachDense-v3 | MattStammers | 2023-09-08T20:52:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-08T20:28:58Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
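In the meantime, a minimal load-and-rollout sketch; the checkpoint filename is an assumption, and `panda-gym` is required for the environment:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub("MattStammers/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")  # assumed filename
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```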
Having some issues with the video, but this is a much better robotic reacher. I will try to sort it out later on. |
Onutoa/1_8e-3_10_0.1 | Onutoa | 2023-09-08T19:46:25Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-08T16:45:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_8e-3_10_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_8e-3_10_0.1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0109
- Accuracy: 0.7272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.008
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.8619 | 1.0 | 590 | 1.0251 | 0.4685 |
| 1.3275 | 2.0 | 1180 | 1.3329 | 0.3795 |
| 1.2711 | 3.0 | 1770 | 1.3427 | 0.3817 |
| 1.2563 | 4.0 | 2360 | 0.9486 | 0.6352 |
| 1.3677 | 5.0 | 2950 | 1.5968 | 0.4266 |
| 1.2101 | 6.0 | 3540 | 2.8999 | 0.6217 |
| 1.2131 | 7.0 | 4130 | 1.7592 | 0.4410 |
| 1.0951 | 8.0 | 4720 | 1.0889 | 0.6535 |
| 1.1265 | 9.0 | 5310 | 1.6306 | 0.4963 |
| 1.0834 | 10.0 | 5900 | 0.8228 | 0.6789 |
| 0.9934 | 11.0 | 6490 | 0.9519 | 0.6789 |
| 0.9867 | 12.0 | 7080 | 1.2001 | 0.6471 |
| 0.9321 | 13.0 | 7670 | 0.7980 | 0.6850 |
| 0.914 | 14.0 | 8260 | 0.7659 | 0.7092 |
| 0.9005 | 15.0 | 8850 | 0.8234 | 0.7104 |
| 0.8728 | 16.0 | 9440 | 0.9553 | 0.6948 |
| 0.7346 | 17.0 | 10030 | 2.0394 | 0.5012 |
| 0.8001 | 18.0 | 10620 | 1.2116 | 0.6180 |
| 0.8778 | 19.0 | 11210 | 0.8516 | 0.6823 |
| 0.7117 | 20.0 | 11800 | 1.1178 | 0.6251 |
| 0.6709 | 21.0 | 12390 | 0.8929 | 0.7125 |
| 0.7554 | 22.0 | 12980 | 0.9317 | 0.6801 |
| 0.7167 | 23.0 | 13570 | 1.3876 | 0.6061 |
| 0.6239 | 24.0 | 14160 | 0.9124 | 0.6737 |
| 0.6273 | 25.0 | 14750 | 0.8818 | 0.7242 |
| 0.5882 | 26.0 | 15340 | 1.0614 | 0.6728 |
| 0.5567 | 27.0 | 15930 | 1.0177 | 0.7306 |
| 0.5606 | 28.0 | 16520 | 1.3018 | 0.6459 |
| 0.5559 | 29.0 | 17110 | 1.4926 | 0.6914 |
| 0.4879 | 30.0 | 17700 | 0.9648 | 0.6924 |
| 0.4945 | 31.0 | 18290 | 0.9028 | 0.7150 |
| 0.4876 | 32.0 | 18880 | 0.8188 | 0.7257 |
| 0.455 | 33.0 | 19470 | 1.0325 | 0.7312 |
| 0.468 | 34.0 | 20060 | 0.9495 | 0.7330 |
| 0.4324 | 35.0 | 20650 | 0.8765 | 0.7202 |
| 0.4098 | 36.0 | 21240 | 1.5105 | 0.6963 |
| 0.4002 | 37.0 | 21830 | 0.9019 | 0.7309 |
| 0.4077 | 38.0 | 22420 | 0.8470 | 0.7223 |
| 0.378 | 39.0 | 23010 | 0.9477 | 0.7196 |
| 0.3697 | 40.0 | 23600 | 0.9213 | 0.7226 |
| 0.3957 | 41.0 | 24190 | 0.9321 | 0.7260 |
| 0.338 | 42.0 | 24780 | 0.8633 | 0.7284 |
| 0.343 | 43.0 | 25370 | 0.9502 | 0.7355 |
| 0.3454 | 44.0 | 25960 | 1.1264 | 0.6930 |
| 0.3288 | 45.0 | 26550 | 1.5310 | 0.6440 |
| 0.3075 | 46.0 | 27140 | 1.0321 | 0.7067 |
| 0.326 | 47.0 | 27730 | 1.0041 | 0.7257 |
| 0.3035 | 48.0 | 28320 | 0.9984 | 0.7168 |
| 0.3318 | 49.0 | 28910 | 0.9336 | 0.7294 |
| 0.2923 | 50.0 | 29500 | 1.2029 | 0.6758 |
| 0.2813 | 51.0 | 30090 | 0.9525 | 0.7217 |
| 0.2844 | 52.0 | 30680 | 1.0021 | 0.7242 |
| 0.2706 | 53.0 | 31270 | 0.9836 | 0.7187 |
| 0.2748 | 54.0 | 31860 | 0.9966 | 0.7113 |
| 0.2585 | 55.0 | 32450 | 1.0029 | 0.7211 |
| 0.2603 | 56.0 | 33040 | 0.9700 | 0.7235 |
| 0.2442 | 57.0 | 33630 | 0.9675 | 0.7330 |
| 0.2503 | 58.0 | 34220 | 1.0088 | 0.7373 |
| 0.2473 | 59.0 | 34810 | 0.9043 | 0.7306 |
| 0.2503 | 60.0 | 35400 | 1.0069 | 0.7211 |
| 0.233 | 61.0 | 35990 | 1.0046 | 0.7245 |
| 0.2248 | 62.0 | 36580 | 1.0468 | 0.7217 |
| 0.2343 | 63.0 | 37170 | 0.9263 | 0.7202 |
| 0.2312 | 64.0 | 37760 | 1.1075 | 0.7101 |
| 0.2173 | 65.0 | 38350 | 1.0439 | 0.7205 |
| 0.2138 | 66.0 | 38940 | 1.1012 | 0.7364 |
| 0.2037 | 67.0 | 39530 | 1.0094 | 0.7336 |
| 0.2129 | 68.0 | 40120 | 0.9811 | 0.7275 |
| 0.1937 | 69.0 | 40710 | 1.0312 | 0.7419 |
| 0.2102 | 70.0 | 41300 | 1.0208 | 0.7318 |
| 0.2078 | 71.0 | 41890 | 1.0093 | 0.7174 |
| 0.2037 | 72.0 | 42480 | 1.1041 | 0.7404 |
| 0.1903 | 73.0 | 43070 | 0.9927 | 0.7318 |
| 0.1898 | 74.0 | 43660 | 1.0875 | 0.7431 |
| 0.1966 | 75.0 | 44250 | 0.9659 | 0.7257 |
| 0.1967 | 76.0 | 44840 | 1.0025 | 0.7254 |
| 0.191 | 77.0 | 45430 | 0.9488 | 0.7306 |
| 0.1916 | 78.0 | 46020 | 1.0042 | 0.7327 |
| 0.1819 | 79.0 | 46610 | 1.0258 | 0.7355 |
| 0.1794 | 80.0 | 47200 | 1.0124 | 0.7309 |
| 0.1773 | 81.0 | 47790 | 0.9920 | 0.7324 |
| 0.1852 | 82.0 | 48380 | 1.0088 | 0.7367 |
| 0.1809 | 83.0 | 48970 | 1.0702 | 0.7352 |
| 0.1695 | 84.0 | 49560 | 1.0249 | 0.7260 |
| 0.1704 | 85.0 | 50150 | 1.0086 | 0.7294 |
| 0.1698 | 86.0 | 50740 | 1.0465 | 0.7318 |
| 0.1609 | 87.0 | 51330 | 1.0387 | 0.7291 |
| 0.1654 | 88.0 | 51920 | 1.0260 | 0.7297 |
| 0.1589 | 89.0 | 52510 | 1.0342 | 0.7257 |
| 0.1624 | 90.0 | 53100 | 1.0773 | 0.7297 |
| 0.1633 | 91.0 | 53690 | 1.0567 | 0.7309 |
| 0.1593 | 92.0 | 54280 | 1.0176 | 0.7196 |
| 0.1558 | 93.0 | 54870 | 1.0428 | 0.7257 |
| 0.1536 | 94.0 | 55460 | 1.0158 | 0.7294 |
| 0.1559 | 95.0 | 56050 | 1.0159 | 0.7315 |
| 0.1577 | 96.0 | 56640 | 1.0299 | 0.7306 |
| 0.1518 | 97.0 | 57230 | 1.0132 | 0.7281 |
| 0.1477 | 98.0 | 57820 | 0.9931 | 0.7266 |
| 0.1529 | 99.0 | 58410 | 1.0248 | 0.7272 |
| 0.1445 | 100.0 | 59000 | 1.0109 | 0.7272 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Onutoa/1_6e-3_10_0.1 | Onutoa | 2023-09-08T19:31:14Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-08T16:31:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_6e-3_10_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_6e-3_10_0.1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9853
- Accuracy: 0.7416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.006
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4161 | 1.0 | 590 | 1.9327 | 0.6217 |
| 1.4964 | 2.0 | 1180 | 1.4733 | 0.6217 |
| 1.4294 | 3.0 | 1770 | 1.3770 | 0.6217 |
| 1.3196 | 4.0 | 2360 | 1.1956 | 0.4070 |
| 1.1661 | 5.0 | 2950 | 0.9866 | 0.6333 |
| 1.1565 | 6.0 | 3540 | 0.9164 | 0.6453 |
| 1.0435 | 7.0 | 4130 | 1.0146 | 0.5786 |
| 1.0861 | 8.0 | 4720 | 0.8707 | 0.6541 |
| 1.0246 | 9.0 | 5310 | 0.9747 | 0.6728 |
| 0.9761 | 10.0 | 5900 | 1.0055 | 0.6560 |
| 0.9672 | 11.0 | 6490 | 0.7808 | 0.6869 |
| 0.8746 | 12.0 | 7080 | 0.8158 | 0.6768 |
| 0.8883 | 13.0 | 7670 | 0.7982 | 0.6917 |
| 0.8257 | 14.0 | 8260 | 0.9875 | 0.6869 |
| 0.8053 | 15.0 | 8850 | 0.9210 | 0.7171 |
| 0.7995 | 16.0 | 9440 | 0.7910 | 0.7168 |
| 0.7376 | 17.0 | 10030 | 0.8382 | 0.7122 |
| 0.6743 | 18.0 | 10620 | 1.0620 | 0.6141 |
| 0.6343 | 19.0 | 11210 | 0.7421 | 0.7245 |
| 0.6499 | 20.0 | 11800 | 0.7841 | 0.7187 |
| 0.5897 | 21.0 | 12390 | 0.9551 | 0.6713 |
| 0.6163 | 22.0 | 12980 | 1.0281 | 0.7135 |
| 0.5617 | 23.0 | 13570 | 0.9252 | 0.7245 |
| 0.5282 | 24.0 | 14160 | 0.8599 | 0.7080 |
| 0.5402 | 25.0 | 14750 | 0.8381 | 0.7254 |
| 0.493 | 26.0 | 15340 | 1.0387 | 0.6657 |
| 0.474 | 27.0 | 15930 | 0.7978 | 0.7266 |
| 0.4658 | 28.0 | 16520 | 0.8697 | 0.7306 |
| 0.4624 | 29.0 | 17110 | 0.8746 | 0.7287 |
| 0.4333 | 30.0 | 17700 | 0.9256 | 0.7254 |
| 0.4324 | 31.0 | 18290 | 0.8635 | 0.7336 |
| 0.4352 | 32.0 | 18880 | 1.0482 | 0.7232 |
| 0.4144 | 33.0 | 19470 | 1.2383 | 0.6872 |
| 0.3822 | 34.0 | 20060 | 0.9361 | 0.7324 |
| 0.3549 | 35.0 | 20650 | 0.9758 | 0.7180 |
| 0.3597 | 36.0 | 21240 | 1.1784 | 0.7239 |
| 0.3598 | 37.0 | 21830 | 0.9757 | 0.7336 |
| 0.3421 | 38.0 | 22420 | 1.3951 | 0.7245 |
| 0.3309 | 39.0 | 23010 | 1.1202 | 0.7401 |
| 0.3209 | 40.0 | 23600 | 0.9882 | 0.7358 |
| 0.3214 | 41.0 | 24190 | 0.9997 | 0.7343 |
| 0.3101 | 42.0 | 24780 | 0.8871 | 0.7376 |
| 0.2913 | 43.0 | 25370 | 1.0116 | 0.7401 |
| 0.2884 | 44.0 | 25960 | 1.1248 | 0.7291 |
| 0.2761 | 45.0 | 26550 | 0.8363 | 0.7291 |
| 0.2761 | 46.0 | 27140 | 1.0666 | 0.7202 |
| 0.2674 | 47.0 | 27730 | 1.0285 | 0.7416 |
| 0.2647 | 48.0 | 28320 | 0.9575 | 0.7300 |
| 0.2662 | 49.0 | 28910 | 0.9258 | 0.7373 |
| 0.2726 | 50.0 | 29500 | 1.0936 | 0.7346 |
| 0.2461 | 51.0 | 30090 | 1.0192 | 0.7196 |
| 0.2485 | 52.0 | 30680 | 1.0543 | 0.7382 |
| 0.245 | 53.0 | 31270 | 0.9507 | 0.7336 |
| 0.2377 | 54.0 | 31860 | 0.8907 | 0.7361 |
| 0.2379 | 55.0 | 32450 | 0.9788 | 0.7327 |
| 0.2335 | 56.0 | 33040 | 1.0168 | 0.7413 |
| 0.2251 | 57.0 | 33630 | 1.0117 | 0.7346 |
| 0.2293 | 58.0 | 34220 | 0.9280 | 0.7336 |
| 0.2211 | 59.0 | 34810 | 0.9735 | 0.7401 |
| 0.2236 | 60.0 | 35400 | 0.9822 | 0.7404 |
| 0.2123 | 61.0 | 35990 | 1.0189 | 0.7346 |
| 0.207 | 62.0 | 36580 | 1.0436 | 0.7401 |
| 0.2059 | 63.0 | 37170 | 0.9571 | 0.7410 |
| 0.2052 | 64.0 | 37760 | 1.0027 | 0.7419 |
| 0.193 | 65.0 | 38350 | 0.9395 | 0.7413 |
| 0.2099 | 66.0 | 38940 | 1.0325 | 0.7358 |
| 0.1968 | 67.0 | 39530 | 1.0441 | 0.7398 |
| 0.1887 | 68.0 | 40120 | 1.1337 | 0.7413 |
| 0.1911 | 69.0 | 40710 | 1.0438 | 0.7382 |
| 0.1955 | 70.0 | 41300 | 1.0361 | 0.7394 |
| 0.1998 | 71.0 | 41890 | 1.0202 | 0.7349 |
| 0.1944 | 72.0 | 42480 | 1.0261 | 0.7407 |
| 0.1755 | 73.0 | 43070 | 1.0091 | 0.7422 |
| 0.1836 | 74.0 | 43660 | 0.9986 | 0.7425 |
| 0.1856 | 75.0 | 44250 | 0.9461 | 0.7404 |
| 0.187 | 76.0 | 44840 | 0.9383 | 0.7385 |
| 0.1873 | 77.0 | 45430 | 1.0445 | 0.7416 |
| 0.1763 | 78.0 | 46020 | 1.0263 | 0.7410 |
| 0.1749 | 79.0 | 46610 | 0.9650 | 0.7370 |
| 0.1728 | 80.0 | 47200 | 0.9903 | 0.7343 |
| 0.1668 | 81.0 | 47790 | 1.0391 | 0.7382 |
| 0.1693 | 82.0 | 48380 | 0.9794 | 0.7346 |
| 0.1665 | 83.0 | 48970 | 1.0463 | 0.7355 |
| 0.1609 | 84.0 | 49560 | 0.9976 | 0.7373 |
| 0.165 | 85.0 | 50150 | 1.0040 | 0.7404 |
| 0.1622 | 86.0 | 50740 | 1.0184 | 0.7419 |
| 0.1615 | 87.0 | 51330 | 0.9825 | 0.7336 |
| 0.1624 | 88.0 | 51920 | 0.9889 | 0.7394 |
| 0.1557 | 89.0 | 52510 | 0.9938 | 0.7370 |
| 0.1515 | 90.0 | 53100 | 1.0207 | 0.7385 |
| 0.1565 | 91.0 | 53690 | 1.0081 | 0.7401 |
| 0.1582 | 92.0 | 54280 | 0.9308 | 0.7364 |
| 0.1513 | 93.0 | 54870 | 0.9795 | 0.7398 |
| 0.1572 | 94.0 | 55460 | 0.9688 | 0.7382 |
| 0.1514 | 95.0 | 56050 | 1.0002 | 0.7410 |
| 0.1546 | 96.0 | 56640 | 0.9869 | 0.7401 |
| 0.1534 | 97.0 | 57230 | 0.9694 | 0.7370 |
| 0.1405 | 98.0 | 57820 | 0.9705 | 0.7404 |
| 0.149 | 99.0 | 58410 | 0.9859 | 0.7413 |
| 0.1456 | 100.0 | 59000 | 0.9853 | 0.7416 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
voyzan/ppo-Pyramids-Training | voyzan | 2023-09-08T18:56:17Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-09-08T18:56:11Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: voyzan/ppo-Pyramids-Training
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Weni/ZeroShot-Llama2-13B-nenhuma | Weni | 2023-09-08T18:55:27Z | 0 | 0 | peft | [
"peft",
"pytorch",
"llama",
"pt",
"region:us"
]
| null | 2023-08-03T18:08:36Z | ---
language:
- pt
library_name: peft
---
This model was trained on 20k Portuguese examples in a prompt format.
It was trained to receive an input dictionary containing the phrase to be classified and the class options (including the 'none' class).
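A minimal inference sketch for attaching the adapter to its base model is shown below; the base model id is an assumption inferred from the adapter's name, and the quantization settings mirror the training config listed further down:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model_id = "meta-llama/Llama-2-13b-hf"  # assumption: the Llama 2 13B base this adapter was trained on
adapter_id = "Weni/ZeroShot-Llama2-13B-nenhuma"

# Mirrors the bitsandbytes config reported in the training procedure below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```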
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
actionpace/LLaMa-2-PeanutButter_v19_R8-7B | actionpace | 2023-09-08T18:53:56Z | 0 | 0 | null | [
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-08T16:03:43Z | ---
license: other
language:
- en
---
**Some of my own quants:**
* LLaMa-2-PeanutButter_v19_R8-7B_Q5_1.gguf
**Source:** [PeanutJar](https://huggingface.co/PeanutJar)
**Source Model:** [LLaMa-2-PeanutButter_v19_R8-7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v19_R8-7B)
|
LarryAIDraw/Momo-V1 | LarryAIDraw | 2023-09-08T18:47:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-08T18:36:54Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/119296/momo-belia-deviluke-to-love-ru |
VanGraham/LolaZieta | VanGraham | 2023-09-08T18:38:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-07T17:11:01Z | ---
license: creativeml-openrail-m
---
|
audreyt/Taiwan-LLaMa-v1.0-GGML | audreyt | 2023-09-08T18:36:08Z | 0 | 38 | null | [
"text-generation",
"zh",
"dataset:yentinglin/zh_TW_c4",
"dataset:yentinglin/traditional_chinese_instructions",
"arxiv:2305.13711",
"arxiv:2104.09864",
"license:llama2",
"region:us"
]
| text-generation | 2023-08-11T13:06:52Z | ---
datasets:
- yentinglin/zh_TW_c4
- yentinglin/traditional_chinese_instructions
inference: false
license: llama2
language:
- zh
model_creator: Yen-Ting Lin
model_link: https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0
model_name: Language Models for Taiwanese Culture 1.0
model_type: llama
quantized_by: Audrey Tang
pipeline_tag: text-generation
---
<!-- header start -->
<!-- header end -->
# Taiwan-LLaMa-v1.0 - GGML
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Language Models for Taiwanese Culture v1.0](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0)
## Description
This repo contains GGML format model files for [Yen-Ting Lin's Language Models for Taiwanese Culture v1.0](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0).
### Important note regarding GGML files.
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
Please use the GGUF models instead.
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/audreyt/Taiwan-LLaMa-v1.0-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/audreyt/Taiwan-LLaMa-v1.0-GGML)
* [Yen-Ting Lin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0)
<!-- footer start -->
<!-- footer end -->
# Original model card: Yen-Ting Lin's Language Models for Taiwanese Culture v1.0
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full-parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** is pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, better training strategies) is under development, aiming to enhance model performance and better reflect Taiwanese culture.
- [ ] **Extend max length**: Utilizing the Rope mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's length will be extended from 4k to 8k.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from common crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into Traditional Mandarin to evaluate general capability.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
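Once the server is up, a minimal client sketch that applies this template might look like the following; the port and generation parameters are assumptions, so adjust them to the values you passed to the launch script:

```python
import requests

PROMPT_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {user} ASSISTANT:"
)


def ask(user_message: str, endpoint: str = "http://localhost:8080") -> str:
    # Text Generation Inference exposes a /generate endpoint that accepts an
    # "inputs" string plus generation "parameters" and returns "generated_text".
    payload = {
        "inputs": PROMPT_TEMPLATE.format(user=user_message),
        "parameters": {"max_new_tokens": 256, "temperature": 0.7, "top_p": 0.9},
    }
    response = requests.post(f"{endpoint}/generate", json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["generated_text"]


print(ask("台北有什麼好吃的小吃?"))  # "What are some good street foods in Taipei?"
```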
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Language Models for Taiwanese Culture},
year={2023},
url={https://github.com/MiuLab/Taiwan-LLaMa},
note={Code and models available at https://github.com/MiuLab/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
|
daochf/Lora-MetaLlama2-7b-hf-PuceDs04x10-v01 | daochf | 2023-09-08T18:27:33Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-08T17:34:28Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
rossevine/Model_G_S_P_Wav2Vec2 | rossevine | 2023-09-08T18:22:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-08-14T07:53:49Z | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Model_G_S_P_Wav2Vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_G_S_P_Wav2Vec2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5098
- Wer: 0.5366
- Cer: 0.2277
## Model description
More information needed
## Intended uses & limitations
More information needed
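In the absence of author-provided usage notes, a minimal transcription sketch with the 🤗 `pipeline` API might look like this; the audio path is a placeholder, and most wav2vec2 checkpoints expect 16 kHz audio, which the pipeline resamples automatically when given a file path:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rossevine/Model_G_S_P_Wav2Vec2")

# "speech.wav" is a placeholder path; replace it with your own recording.
result = asr("speech.wav")
print(result["text"])
```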
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.0449 | 3.49 | 400 | 1.7576 | 0.6270 | 0.2607 |
| 0.484 | 6.99 | 800 | 1.8072 | 0.6043 | 0.2536 |
| 0.3335 | 10.48 | 1200 | 2.0222 | 0.5892 | 0.2500 |
| 0.2559 | 13.97 | 1600 | 2.4174 | 0.5719 | 0.2448 |
| 0.1999 | 17.47 | 2000 | 2.2888 | 0.5566 | 0.2376 |
| 0.1546 | 20.96 | 2400 | 2.5271 | 0.5753 | 0.2400 |
| 0.1225 | 24.45 | 2800 | 2.4489 | 0.5427 | 0.2327 |
| 0.0983 | 27.95 | 3200 | 2.5098 | 0.5366 | 0.2277 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
LarryAIDraw/Succub_LoRA | LarryAIDraw | 2023-09-08T18:22:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-08T18:03:55Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/142014/lora-oror-succubus-konosuba-oror |
LarryAIDraw/izumi_hashima_v1 | LarryAIDraw | 2023-09-08T18:21:40Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-08T17:59:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/141751/izumi-hashima-or-saekano-how-to-raise-a-boring-girlfriend |
LarryAIDraw/Kikyo-10 | LarryAIDraw | 2023-09-08T18:20:57Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-08T17:57:12Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/141792/kikyo-inuyashaanime-version |
LarryAIDraw/MikotoV2-02 | LarryAIDraw | 2023-09-08T18:20:42Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-08T17:56:42Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/95904/mikoto-aketa-idolmaster |
voyzan/ppo-SnowballTarget | voyzan | 2023-09-08T18:01:58Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-09-08T17:51:14Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: voyzan/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
newronai/clma2-13b-Chat-Adapter-Unvalidated | newronai | 2023-09-08T17:48:51Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-08T17:48:29Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
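For inference, a minimal sketch for loading this adapter on top of its base model follows; the base model id is only an assumption inferred from the adapter name, and the 8-bit setting mirrors the training config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Llama-2-13b-chat-hf"  # assumption: Llama 2 13B Chat
adapter_id = "newronai/clma2-13b-Chat-Adapter-Unvalidated"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# load_in_8bit matches the bitsandbytes setting reported for training.
model = AutoModelForCausalLM.from_pretrained(base_model_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```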
|
dribar/DanModel1 | dribar | 2023-09-08T17:44:26Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-09-08T15:13:11Z | ---
license: openrail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Just something to try out
- **Developed by:** Dan Ribar
|
vladjr/mt5-small-finetuned-americanas-pt | vladjr | 2023-09-08T17:31:00Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-07T18:33:19Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_keras_callback
model-index:
- name: vladjr/mt5-small-finetuned-americanas-pt
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vladjr/mt5-small-finetuned-americanas-pt
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3639
- Validation Loss: 2.2243
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
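Pending details from the author, a minimal generation sketch might look like this; the exact downstream task is not documented in the card, so the input text is only a placeholder:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "vladjr/mt5-small-finetuned-americanas-pt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Portuguese input; replace with text matching the model's actual task.
inputs = tokenizer("Exemplo de texto em português para o modelo.", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```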
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 39624, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.6980 | 2.8237 | 0 |
| 3.3824 | 2.5473 | 1 |
| 2.8673 | 2.3947 | 2 |
| 2.6298 | 2.3175 | 3 |
| 2.5025 | 2.2665 | 4 |
| 2.4292 | 2.2444 | 5 |
| 2.3823 | 2.2295 | 6 |
| 2.3639 | 2.2243 | 7 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Subsets and Splits